Fields: paper_id (string), summaries (sequence), abstractText (string), authors (list), references (list), sections (list), year (int64), title (string)
SP:feed1c549e9d8bc680bfb92dbd0979b3fb103904
[ "This paper proposes a novel method for Unsupervised Domain Adaptation (UDA) when the source domain's privacy should be preserved. The authors propose EMTL, which is a generative method using multivariate densities using RNADE (Uria et al., 2013) and a mediator joint density function bridging both source and target domains. EMTL achieves comparable performances to those of DANN (Ganin et al., 2016) on a single dataset." ]
We propose an unsupervised domain adaptation approach based on generative models. We show that when the source probability density function can be learned, a one-step Expectation–Maximization iteration plus an additional marginal density function constraint will produce a proper mediator probability density function to bridge the gap between the source and target domains. The breakthrough is based on modern generative models (autoregressive mixture density nets) that are competitive with discriminative models on moderate-dimensional classification problems. By decoupling the source density estimation from the adaptation steps, we can design a domain adaptation approach where the source data is locked away after being processed only once, opening the door to transfer when data security or privacy concerns impede the use of traditional domain adaptation. We demonstrate that our approach can achieve state-of-the-art performance on synthetic and real data sets, without accessing the source data at the adaptation phase.
[]
[ { "authors": [ "Kamyar Azizzadenesheli", "Anqi Liu", "Fanny Yang", "Animashree Anandkumar" ], "title": "Regularized learning for domain adaptation under label shifts", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Marouan Belhaj", "Pavlos Protopapas", "Weiwei Pan" ], "title": "Deep variational transfer: Transfer learning through semi-supervised deep generative models", "venue": "arXiv preprint arXiv:1812.03123,", "year": 2018 }, { "authors": [ "Shai Ben-David", "John Blitzer", "Koby Crammer", "Alex Kulesza", "Fernando Pereira", "Jennifer Wortman Vaughan" ], "title": "A theory of learning from different domains", "venue": "Machine learning,", "year": 2010 }, { "authors": [ "Albert Bifet", "Ricard Gavaldà" ], "title": "Adaptive learning from evolving data streams", "venue": "In International Symposium on Intelligent Data Analysis,", "year": 2009 }, { "authors": [ "Christopher M. Bishop" ], "title": "Mixture density networks", "venue": "Technical report,", "year": 1994 }, { "authors": [ "N. Courty", "R. Flamary", "D. Tuia", "A. Rakotomamonjy" ], "title": "Optimal transport for domain adaptation", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2017 }, { "authors": [ "Dheeru Dua", "Casey Graff" ], "title": "UCI machine learning repository, 2017", "venue": "URL http://archive. ics.uci.edu/ml", "year": 2017 }, { "authors": [ "Basura Fernando", "Amaury Habrard", "Marc Sebban", "Tinne Tuytelaars" ], "title": "Unsupervised visual domain adaptation using subspace alignment", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2013 }, { "authors": [ "Yaroslav Ganin", "Evgeniya Ustinova", "Hana Ajakan", "Pascal Germain", "Hugo Larochelle", "François Laviolette", "Mario Marchand", "Victor Lempitsky" ], "title": "Domain-adversarial training of neural networks", "venue": "The Journal of Machine Learning Research,", "year": 2016 }, { "authors": [ "Boqing Gong", "Yuan Shi", "Fei Sha", "Kristen Grauman" ], "title": "Geodesic flow kernel for unsupervised domain adaptation", "venue": "In 2012 IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2012 }, { "authors": [ "Mingming Gong", "Kun Zhang", "Tongliang Liu", "Dacheng Tao", "Clark Glymour", "Bernhard Schölkopf" ], "title": "Domain adaptation with conditional transferable components", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "Arthur Gretton", "Karsten Borgwardt", "Malte Rasch", "Bernhard Schölkopf", "Alex J Smola" ], "title": "A kernel method for the two-sample-problem", "venue": "In Advances in neural information processing systems,", "year": 2007 }, { "authors": [ "Arthur Gretton", "Alex Smola", "Jiayuan Huang", "Marcel Schmittfull", "Karsten Borgwardt", "Bernhard Schölkopf" ], "title": "Covariate shift by kernel mean matching", "venue": "Dataset shift in machine learning,", "year": 2009 }, { "authors": [ "Jiayuan Huang", "Arthur Gretton", "Karsten Borgwardt", "Bernhard Schölkopf", "Alex J Smola" ], "title": "Correcting sample selection bias by unlabeled data", "venue": "In Advances in neural information processing systems,", "year": 2007 }, { "authors": [ "Alireza Karbalayghareh", "Xiaoning Qian", "Edward R Dougherty" ], "title": "Optimal bayesian transfer learning", "venue": "IEEE Transactions on Signal Processing,", "year": 2018 }, { "authors": [ "Durk P Kingma", "Shakir Mohamed", "Danilo Jimenez Rezende", "Max Welling" ], "title": "Semi-supervised 
learning with deep generative models", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Wouter Marco Kouw", "Marco Loog" ], "title": "A review of domain adaptation without target labels", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2019 }, { "authors": [ "Zachary C Lipton", "Yu-Xiang Wang", "Alex Smola" ], "title": "Detecting and correcting for label shift with black box predictors", "venue": "arXiv preprint arXiv:1802.03916,", "year": 2018 }, { "authors": [ "Mingsheng Long", "Yue Cao", "Jianmin Wang", "Michael Jordan" ], "title": "Learning transferable features with deep adaptation networks", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "Sérgio Moro", "Paulo Cortez", "Paulo Rita" ], "title": "A data-driven approach to predict the success of bank telemarketing", "venue": "Decision Support Systems,", "year": 2014 }, { "authors": [ "Sinno Jialin Pan", "Qiang Yang" ], "title": "A survey on transfer learning", "venue": "IEEE Transactions on knowledge and data engineering,", "year": 2009 }, { "authors": [ "Sinno Jialin Pan", "Ivor W Tsang", "James T Kwok", "Qiang Yang" ], "title": "Domain adaptation via transfer component analysis", "venue": "IEEE Transactions on Neural Networks,", "year": 2010 }, { "authors": [ "Swami Sankaranarayanan", "Yogesh Balaji", "Carlos D Castillo", "Rama Chellappa" ], "title": "Generate to adapt: Aligning domains using generative adversarial networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Masashi Sugiyama", "Matthias Krauledat", "Klaus-Robert Müller" ], "title": "Covariate shift adaptation by importance weighted cross validation", "venue": "Journal of Machine Learning Research,", "year": 2007 }, { "authors": [ "Alexandre B Tsybakov" ], "title": "Introduction to nonparametric estimation", "venue": "Springer Science & Business Media,", "year": 2008 }, { "authors": [ "Eric Tzeng", "Judy Hoffman", "Kate Saenko", "Trevor Darrell" ], "title": "Adversarial discriminative domain adaptation", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Benigno Uria", "Iain Murray", "Hugo Larochelle" ], "title": "RNADE: The real-valued neural autoregressive density-estimator", "venue": "In Advances in Neural Information Processing Systems,", "year": 2013 }, { "authors": [ "Kun Zhang", "Bernhard Schölkopf", "Krikamol Muandet", "Zhikun Wang" ], "title": "Domain adaptation under target and conditional shift", "venue": "In International Conference on Machine Learning,", "year": 2013 }, { "authors": [ "Fuzhen Zhuang", "Zhiyuan Qi", "Keyu Duan", "Dongbo Xi", "Yongchun Zhu", "Hengshu Zhu", "Hui Xiong", "Qing He" ], "title": "A comprehensive survey on transfer learning", "venue": "Proceedings of the IEEE,", "year": 2020 } ]
[ { "heading": null, "text": "We propose an unsupervised domain adaptation approach based on generative models. We show that when the source probability density function can be learned, one-step Expectation–Maximization iteration plus an additional marginal density function constraint will produce a proper mediator probability density function to bridge the gap between the source and target domains. The breakthrough is based on modern generative models (autoregressive mixture density nets) that are competitive to discriminative models on moderate-dimensional classification problems. By decoupling the source density estimation from the adaption steps, we can design a domain adaptation approach where the source data is locked away after being processed only once, opening the door to transfer when data security or privacy concerns impede the use of traditional domain adaptation. We demonstrate that our approach can achieve state-of-the-art performance on synthetic and real data sets, without accessing the source data at the adaptation phase." }, { "heading": "1 INTRODUCTION", "text": "In the classical supervised learning paradigm, we assume that the training and test data come from the same distribution. In practice, this assumption often does not hold. When the pipeline includes massive data labeling, models are routinely retrained after each data collecion campaign. However, data labeling costs often make retraining impractical. Without labeled data, it is still possible to train the model by using a training set which is relevant but not identically distributed to the test set. Due to the distribution shift between the training and test sets, the performance usually cannot be guaranteed.\nDomain adaptation (DA) is a machine learning subdomain that aims at learning a model from biased training data. It explores the relationship between source (labeled training data) and target (test data) domains to find the mapping function and fix the bias, so that the model learned on the source data can be applied in target domain. Usually some target data is needed during the training phase to calibrate the model. In unsupervised domain adaptation (UDA) only unlabeled target data is needed during training phase. UDA is an appealing learning paradigm since obtaining unlabeled data is usually easy in a lot of applications. UDA allows the model to be deployed in various target domains with different shifts using a single labeled source data set.\nDue to these appealing operational features, UDA has became a prominent research field with various approaches. Kouw & Loog (2019) and Zhuang et al. (2020) surveyed the latest progress on UDA and found that most of the approaches are based on discriminative models, either by reweighting the source instances to approximate the target distribution or learning a feature mapping function to reduce the statistical distance between the source and target domains. After calibrating, a discriminative model is trained on the adjusted source data and used in target domain. In this workflow, the adaptation algorithm usually have to access the source and target data simultaneously. However, accessing the source data during the adaptation phase is not possible when the source data is sensitive (for example because of security or privacy issues). In particular, in our application workflow an industrial company is selling devices to various service companies which cannot share their customer data with each other. 
The industrial company may contract with one of the service companies to access their data during an R&D phase, but this data will not be available when the industrial company sells the device (and the predictive model) to other service companies.
In this paper we propose EMTL, a generative UDA algorithm for binary classification that does not have to access the source data during the adaptation phase. We use density estimation to estimate the joint source probability function $p^s(x, y)$ and the marginal target probability function $p^t(x)$ and use them for domain adaptation. To solve the data security issue, EMTL decouples source density estimation from the adaptation steps. In this way, after the source preprocessing we can put away or delete the source data. Our approach is motivated by the theory on domain adaptation (Ben-David et al., 2010), which claims that the error of a hypothesis $h$ on the target domain can be bounded by three terms: the error on the source domain, the distance between the source and target distributions, and the expected difference in labeling functions. This theorem motivated us to define a mediator density function $p^m(x, y)$ i) whose conditional probability of $y$ given $x$ is equal to the conditional probability of the source, and ii) whose marginal density on $x$ is equal to the marginal density of the target. We can then construct a Bayes optimal classifier on the target domain under the assumption of covariate shift (the distribution of $y$ given $x$ is the same in the source and target domains). Our approach became practical with the recent advances in (autoregressive) neural density estimation (Uria et al., 2013). We learn $p^m(x, y)$ from $p^s(x, y)$ and $p^t(x)$ to bridge the gap between the source and target domains. We regard the label on the target data as a latent variable and show that if $p^s(x|y = i)$ can be learned perfectly for $i \in \{0, 1\}$, then a one-step Expectation–Maximization iteration (this is why our algorithm is named EMTL) will produce a density function $p^m(x, y)$ with the following properties on the target data: i) minimizing the Kullback–Leibler divergence between $p^m(y_i|x_i)$ and $p^s(y_i|x_i)$; ii) maximizing the log-likelihood $\sum_i \log p^m(x_i)$. Then, by adding an additional marginal constraint on $p^m(x_i)$ to explicitly make it close to $p^t(x_i)$ on the target data, we obtain the final objective function for EMTL. Although this analysis assumes a simple covariate shift, we will experimentally show that EMTL can go beyond this assumption and work well under other distribution shifts.
We conduct experiments on synthetic and real data to demonstrate the effectiveness of EMTL. First, we construct a simple two-dimensional data set to visualize the performance of EMTL. Second, we use UCI benchmark data sets and the Amazon reviews data set to show that EMTL is competitive with state-of-the-art UDA algorithms, without accessing the source data at the adaptation phase. To the best of our knowledge, EMTL is the first work using density estimation for unsupervised domain adaptation. Unlike other existing generative approaches (Kingma et al., 2014; Karbalayghareh et al., 2018; Sankaranarayanan et al., 2018), EMTL can decouple the source density estimation process from the adaptation phase, and thus it can be used in situations where the source data is not available at the adaptation phase due to security or privacy reasons." }, { "heading": "2 RELATED WORK", "text": "Zhuang et al. (2020), Kouw & Loog (2019) and Pan & Yang (2009) categorize DA approaches into instance-based and feature-based techniques.
Instance-based approaches reweight labeled source samples according to the ratio between the source and target densities. Importance weighting methods reweight source samples to reduce the divergence between the source and target densities (Huang et al., 2007; Gretton et al., 2007; Sugiyama et al., 2007). In contrast, class importance weighting methods reweight source samples to make the source and target label distributions the same (Azizzadenesheli et al., 2019; Lipton et al., 2018; Zhang et al., 2013). Feature-based approaches learn a new representation for the source and the target by minimizing the divergence between the source and target distributions. Subspace mapping methods assume that there is a common subspace between the source and target (Fernando et al., 2013; Gong et al., 2012). Courty et al. (2017) proposed to use optimal transport to constrain the learning process of the transformation function. Other methods aim at learning a representation that is domain-invariant across domains (Gong et al., 2016; Pan et al., 2010).
Besides these shallow models, deep learning has also been widely applied to domain adaptation (Tzeng et al., 2017; Ganin et al., 2016; Long et al., 2015). DANN (Ganin et al., 2016) uses a neural network to learn a representation that is discriminative for the source task while being unable to distinguish the source and target domains from each other. Kingma et al. (2014) and Belhaj et al. (2018) proposed variational-inference-based semi-supervised learning approaches that regard the missing label as a latent variable and then perform posterior inference." }, { "heading": "3 NOTATION AND PROBLEM DEFINITION", "text": "We consider the unsupervised domain adaptation problem in a binary classification setting (the setup is trivial to extend to multi-class classification). Let $p(x, y)$ be a joint density function defined on $\mathcal{X} \times \mathcal{Y}$, where $x \in \mathbb{R}^p$ is the feature vector and $y \in \{0, 1\}$ is the label. We denote the conditional probability $p(y = 1|x)$ by $q(x)$. A hypothesis or model is a function $h : \mathcal{X} \mapsto [0, 1]$. We define the error of $h$ as the expected disagreement between $h(x)$ and $q(x)$, i.e.,
$$\epsilon(h) = \mathbb{E}_{x \sim p}\, |h(x) - q(x)|. \quad (1)$$
We use superscripts $s$ and $t$ to distinguish the source and target domains; that is, $p^s(x, y)$ and $p^t(x, y)$ are the joint density functions in the source and target domains, respectively. In general, we assume that $p^s(x, y) \neq p^t(x, y)$.
Let $D^s = \{(x_i^s, y_i^s)\}_{i=1}^{n^s}$ and $U^t = \{x_i^t\}_{i=1}^{n^t}$ be i.i.d. data sets generated from the source distribution $p^s(x, y)$ and the marginal target distribution $p^t(x)$, respectively, where $n^s$ and $n^t$ are the source and target sample sizes. The objective of unsupervised domain adaptation is to learn a model $\hat{h}$, by using the labeled $D^s$ and the unlabeled $U^t$, that achieves the lowest error in the target domain." }, { "heading": "4 GENERATIVE APPROACH", "text": "Ben-David et al. (2010) proved that the error of a hypothesis $h$ in the target domain, $\epsilon^t(h)$, can be bounded by the sum of the error in the source domain $\epsilon^s(h)$, the distribution distance between the two domains, and the expected $L_1$ distance between the two conditional probabilities.
Theorem 1 (Ben-David et al.
(2010), Theorem 1) For a hypothesis $h$,
$$\epsilon^t(h) \leq \epsilon^s(h) + d_1(p^s(x), p^t(x)) + \min\left\{\mathbb{E}_{x \sim p^s} |q^s(x) - q^t(x)|,\; \mathbb{E}_{x \sim p^t} |q^s(x) - q^t(x)|\right\}, \quad (2)$$
where $d_1(p^s(x), p^t(x)) = 2 \sup_{B \in \mathcal{B}} |\Pr^s(B) - \Pr^t(B)|$ is twice the total variation distance between the two domain distributions, and $q^s(x)$ and $q^t(x)$ are the source and target probabilities of $y = 1$ given $x$, respectively.
In the covariate shift setting, we assume that the conditional probability $p(y|x)$ is invariant between the source and the target domains. Thus, on the right-hand side of Eq. (2), the third term will be zero, which means that the target error is bounded by the source error plus the distance between the two domains. Many current unsupervised domain adaptation solutions work on reducing the distance between the two domain densities. Importance-sampling-based approaches manage to resample the source data to mimic the target data distribution, and feature-mapping-based approaches do so by learning a transformation function $\phi(x)$ for the source data. However, both kinds of approaches need to access the source and target data simultaneously.
In this paper, we propose a domain adaptation approach based on generative models. First, we learn all multivariate densities using RNADE (Uria et al., 2013), an autoregressive version of Bishop (1994)'s mixture density nets. We found RNADE excellent at learning medium-dimensional densities, and in a certain sense it is RNADE that made our approach feasible. Second, we introduce a mediator joint density function $p^m(x, y)$ that bridges the gap between $p^s(x, y)$ and $p^t(x, y)$. Since the source distribution information is stored in the learned generative model after training, we do not need to access the source data in the adaptation phase." }, { "heading": "4.1 DENSITY FUNCTION", "text": "Due to recent developments in neural density estimation, we can estimate moderate-dimensional densities efficiently. In this paper, we use the real-valued neural autoregressive density estimator (RNADE) of Uria et al. (2013). RNADE is an autoregressive version of the mixture density nets of Bishop (1994); it fights the curse of dimensionality by estimating conditional densities, and it provides an explicit likelihood by using mixtures of Gaussians.
To estimate $p(x)$, let $x = [x_1, x_2, \cdots, x_p]$ be a $p$-dimensional random vector. RNADE decomposes the joint density function using the chain rule and models each $p(x_i|x_{<i})$ with a mixture of Gaussians whose parameters depend on the observed $x_{<i}$. Formally,
$$p(x) = \prod_{i=1}^{p} p(x_i | x_{<i}) = \prod_{i=1}^{p} \left( \sum_{j=1}^{d} \alpha_j(x_{<i})\, \mathcal{N}\big(x_i; \mu_j(x_{<i}), \sigma_j^2(x_{<i})\big) \right), \quad (3)$$
where $x_{<i} = [x_1, \cdots, x_{i-1}]$ and $d$ is the number of Gaussian components. The weights $\alpha_j$, means $\mu_j$, and variances $\sigma_j^2$ are modeled by a single neural net whose architecture makes sure that each parameter $\cdot_j(x_{<i})$ depends only on $x_{<i}$. The neural net is trained to maximize the likelihood of the training data. We denote the RNADE model by the function $f(x; \omega)$, where $\omega$ represents all the parameters (neural net weights) in RNADE, and we use it to approximate $p(x)$. The conditional density $p(x|y)$ can be estimated in the same way by simply selecting the $x$ with the given $y$ as the training data. In the following sections, we denote the maximum likelihood parameters of $p^s(x|y = 0)$, $p^s(x|y = 1)$, and $p^t(x)$ by $\omega_{s0}$, $\omega_{s1}$, and $\omega_t$, respectively. We further denote the proportion of class 0 in the source domain by $\tau_{s0} = \#\{y^s = 0\} / n^s$. The full parameter vector $[\omega_{s0}, \omega_{s1}, \tau_{s0}]$ of $p^s(x, y)$ and $p^s(x)$ is denoted by $\theta_s$.
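To make the RNADE density of Eq. (3) concrete, here is a minimal PyTorch sketch of an autoregressive mixture density model. For readability, each conditional $p(x_i|x_{<i})$ gets its own small network; the actual RNADE of Uria et al. (2013) shares weights across dimensions, so this should be read as an illustrative simplification under that assumption, not the exact architecture used in the paper.

```python
import torch
import torch.nn as nn

class SimpleRNADE(nn.Module):
    """Autoregressive mixture-of-Gaussians density p(x) = prod_i p(x_i | x_{<i})."""

    def __init__(self, p, d, hidden=30):
        super().__init__()
        self.p, self.d = p, d
        # one small net per dimension; net i maps x_{<i} to 3*d mixture
        # parameters (component logits, means, log standard deviations)
        self.nets = nn.ModuleList(
            nn.Sequential(nn.Linear(max(i, 1), hidden), nn.ReLU(),
                          nn.Linear(hidden, 3 * d))
            for i in range(p)
        )

    def log_prob(self, x):
        """Log density log p(x) for a batch x of shape (batch, p)."""
        batch = x.shape[0]
        log_p = x.new_zeros(batch)
        for i in range(self.p):
            # condition on x_{<i}; for i = 0 feed a constant dummy input
            ctx = x[:, :i] if i > 0 else x.new_ones(batch, 1)
            logits, mu, log_sigma = self.nets[i](ctx).chunk(3, dim=-1)
            log_alpha = torch.log_softmax(logits, dim=-1)  # mixture weights
            comp = torch.distributions.Normal(mu, log_sigma.exp())
            # log sum_j alpha_j(x_{<i}) N(x_i; mu_j, sigma_j^2), as in Eq. (3)
            log_p = log_p + torch.logsumexp(
                log_alpha + comp.log_prob(x[:, i:i + 1]), dim=-1)
        return log_p

# Training maximizes the likelihood of the training data, e.g.:
# model = SimpleRNADE(p=5, d=10); loss = -model.log_prob(x_batch).mean()
```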
" }, { "heading": "4.2 THE MEDIATOR DISTRIBUTION", "text": "By Eq. (2), the target error can be bounded by the source error plus the distance between the two marginal distributions plus the expected difference in $p(y = 1|x)$ between the two domains. This motivated us to construct a mediator distribution $p^m(x, y)$ (Figure 1) with two properties: i) its conditional distribution matches the source, $p^m(y|x) = p^s(y|x)$; and ii) its marginal density on $x$ matches the target, $p^m(x) = p^t(x)$.
In the covariate shift setting, we can then solve the unsupervised domain adaptation problem perfectly: i) the first property forces $p(y|x)$ to be the same in the source and mediator distributions, and since in the covariate shift setting we have $p^s(y|x) = p^t(y|x)$, this property makes $p^m(y|x) = p^t(y|x)$; ii) the second property makes the marginal distributions of the mediator and the target the same, which leads to $d_1(p^m(x), p^t(x)) = 0$. Under these two conditions, for any model $h$, we will have $\epsilon^t(h) \leq \epsilon^m(h)$, since the last two terms of Eq. (2) will be zero. Furthermore, given the mediator distribution $p^m(x, y)$, it is easy to learn the best model (Bayes classifier)
$$\hat{h}(x) = \frac{p^m(x|y = 1)\, p^m(y = 1)}{p^m(x)}, \quad (4)$$
which achieves the tightest bound for the target error. In summary, by introducing the mediator distribution $p^m(x, y)$, we can bound the target error by the mediator error. In the following sections, we introduce how to learn $p^m(x, y)$ from $p^s(x, y)$ and $p^t(x)$ using the expectation–maximization (EM) algorithm combined with a marginal constraint term." }, { "heading": "5 EMTL", "text": "If we regard the missing label $y$ as a latent variable that generates the observed $x$ in the target domain, we can use the EM algorithm to infer $y$. We consider the target density $p(x; \theta)$ to be a mixture with two components $p(x|y = i; \theta)$, where $i \in \{0, 1\}$. When $\theta$ converges to its limit $\theta^*$ in EM, we can recover the joint density function $p(x, y; \theta^*)$. We denote this joint density function by $p^m(x, y)$. However, this $p^m(x, y)$ may be far away from the ground truth $p^t(x, y)$. The mismatch comes from two facts: i) EM can easily converge to a bad local minimum because of a bad initialization, and ii) EM tends to find the inner structure (e.g., clusters) of the data, but this structure may be irrelevant to the true labels. The local minimum problem is due to parameter initialization, and the structure–label mismatch problem comes from not having a priori information about the label. When we have a fully known source distribution $p^s(x, y)$, these two issues can be solved by selecting a proper initialization plus a constraint on the marginal distribution.
The first observation is that in many cases we can directly use the source model in the target domain, and it is better than a random guess. We use this intuition to make the source model $p^s(x, y)$ the initial guess of $p^m(x, y)$. Following Section 4.1, we use RNADE to model $p^m(x|y)$ and denote the parameters of $p^m(x, y)$ by $\theta_m = [\omega_{m0}, \omega_{m1}, \tau_{m0}]$. Initializing $p^m(x, y)$ by $p^s(x, y)$ means we set $\theta_m^{(0)}$, the initial state of $\theta_m$ in the EM algorithm, to $\theta_s$. The subsequent EM iterations can be seen as a way to fine-tune $\theta_m$ using the target data. In the next sections we formally analyze this intuitive algorithm.
5.1 ANALYSIS OF $\theta_m^{(1)}$
First we link the EM algorithm with initialization $\theta_m^{(0)} = \theta_s$ to Theorem 1. In each iteration, EM alternates between two steps: the E step defines a Q function as $Q(\theta|\theta^{(t)}) = \mathbb{E}_{y|x, \theta^{(t)}} \log p(x, y; \theta)$ and the M step does the maximization $\theta^{(t+1)} = \arg\max_\theta Q(\theta|\theta^{(t)})$. After the first EM iteration, we have
$$\theta_m^{(1)} = \arg\max_\theta Q(\theta|\theta_m^{(0)}) = \arg\max_\theta \frac{1}{n^t} \sum_{i=1}^{n^t} \mathbb{E}_{y_i | x_i^t, \theta_s} \log p(x_i^t, y_i; \theta). \quad (5)$$
Suppose $\theta_s$ is learned perfectly from the source data, which means that we can replace $p(x, y; \theta_m^{(0)})$ by $p^s(x, y)$.
Thus the expectation operation in Eq. (5) can be written as
$$\mathbb{E}_{y_i | x_i^t, \theta_s}[\xi] = \sum_{j \in \{0,1\}} p(y_i = j | x_i^t; \theta_s)\, \xi = \sum_{j \in \{0,1\}} p^s(y_i = j | x_i^t)\, \xi \quad (6)$$
for any random variable $\xi$. This expectation links the source distribution with the target. We rewrite the full expectation expression of Eq. (5) as
$$\mathbb{E}_{y_i | x_i^t, \theta_s} \log p(x_i^t, y_i; \theta) = \sum_{j \in \{0,1\}} p^s(y_i = j | x_i^t) \log p(x_i^t, y_i = j; \theta) = -D_{\mathrm{KL}}\big(p^s(y_i|x_i^t) \,\|\, p(y_i|x_i^t; \theta)\big) + \log p(x_i^t; \theta) - H_{p^s}(y_i|x_i^t), \quad (7)$$
where $H_{p^s}(y_i|x_i^t)$ is the conditional entropy under the probability $p^s$. This equation shows that the expected log-likelihood can be decomposed into the sum of three terms. The first term is the negative KL-divergence between the two conditional distributions $p^s(y_i|x_i^t)$ and $p(y_i|x_i^t; \theta)$; the second term is the target log-likelihood $\log p(x_i^t; \theta)$; the last term is the negative entropy of the source conditional distribution, which is irrelevant to the parameter $\theta$ and so can be ignored during the optimization.
Therefore, by setting $\theta_m^{(0)}$ to $\theta_s$ and maximizing the Q function in the first EM iteration, we will get a $p^m(x, y)$ which minimizes the KL-divergence between $p^m(y|x)$ and $p^s(y|x)$ and maximizes $\log p^m(x)$. Minimizing the KL-divergence reduces the third term of Eq. (2), and maximizing the log-likelihood forces $p^m(x)$ to move towards $p^t(x)$ implicitly, which reduces the second term of Eq. (2). This suggests that the Bayes classifier $p^m(y|x)$ can be a proper classifier for the target domain." }, { "heading": "5.2 MARGINAL CONSTRAINT", "text": "In the previous section, we implicitly reduced the distance between $p^m(x)$ and $p^t(x)$ by maximizing the log-likelihood of $p(x; \theta)$ on the target data. To further control the target error bound of Eq. (2), we explicitly add a marginal constraint for $p^m(x, y)$ by minimizing the distance between the two marginal distributions. Rather than calculating $d_1(p^m(x), p^t(x))$ directly, we use the KL-divergence to measure the distance between the two distributions, since we can explicitly calculate $p^m(x_i^t)$ and $p^t(x_i^t)$ using our density estimators. Furthermore, according to Pinsker's inequality (Tsybakov, 2008), we have
$$d_1(p^m(x), p^t(x)) \leq \sqrt{2 D_{\mathrm{KL}}(p^m(x) \,\|\, p^t(x))}, \quad (8)$$
thus minimizing the KL-divergence also controls $d_1(p^m(x), p^t(x))$. Since we only have samples $x_i^t$ from the target domain, we use an empirical version of the KL-divergence. The marginal constraint is defined as
$$M(\theta) = \sqrt{2} \times \left( \sum_{i=1}^{n^t} \dot{p}^t(x_i^t) \log \frac{\dot{p}^t(x_i^t)}{\dot{p}^m(x_i^t)} \right)^{1/2} = \sqrt{2} \times \left( \sum_{i=1}^{n^t} \dot{f}(x_i^t; \omega_t) \log \frac{\dot{f}(x_i^t; \omega_t)}{\dot{p}(x_i^t; \theta)} \right)^{1/2}, \quad (9)$$
where $\dot{p} = p / \sum p$ and $\dot{f} = f / \sum f$ are normalized discrete distributions on the target samples." }, { "heading": "5.3 OBJECTIVE FUNCTION OF EMTL", "text": "By putting the Q and M functions together, we get the objective function
$$\theta^* = \arg\min_\theta\; -Q(\theta|\theta_m^{(0)}) + \eta M(\theta) \quad (10)$$
of our generative domain adaptation approach, where $\theta_m^{(0)} = \theta_s$ and $\eta$ is a non-negative hyperparameter that controls the trade-off between the two terms.
In real-life scenarios, both $p(x)$ and $p(y|x)$ can differ between the source and target domains, so the covariate shift assumption may be violated. To go beyond this assumption, we need to relax the constraint $p^s(y|x) = p^t(y|x)$, which is used in justifying $Q(\theta|\theta_m^{(0)})$. As we will show in Section 6, by setting a large $\eta$ and doing more iterations, EMTL will reduce the weight on the Q function and allow us to escape from the covariate shift constraint.
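As an illustration, the following sketch evaluates the EMTL objective of Eq. (10) on a batch of target samples. The helper names are our assumptions: `log_px_y0` and `log_px_y1` evaluate the current class-conditional log densities $\log p(x|y; \theta)$ (e.g., RNADE models), `log_tau0` is the current log class-0 proportion, `src_post0` holds the fixed source posteriors $p^s(y_i = 0|x_i^t)$ from the frozen source model, and `log_pt` holds the fixed values $\log f(x_i^t; \omega_t)$ of the target marginal estimator.

```python
import torch

def emtl_loss(xt, log_px_y0, log_px_y1, log_tau0, src_post0, log_pt, eta=1.0):
    """-Q(theta | theta_m^(0)) + eta * M(theta), evaluated on target samples xt."""
    # log joint densities log p(x, y = j; theta) on the target samples
    log_joint0 = log_tau0 + log_px_y0(xt)                      # y = 0
    log_joint1 = torch.log1p(-log_tau0.exp()) + log_px_y1(xt)  # y = 1
    # Q function (Eq. 5): expectation of log p(x, y; theta) under p^s(y | x)
    Q = (src_post0 * log_joint0 + (1.0 - src_post0) * log_joint1).mean()
    # model marginal log p(x; theta) = log sum_j p(x, y = j; theta)
    log_pm = torch.logsumexp(torch.stack([log_joint0, log_joint1]), dim=0)
    # marginal constraint (Eq. 9): empirical KL between the normalized
    # discrete distributions of p^t and p(.; theta) on the target samples
    pt_dot = torch.softmax(log_pt, dim=0)  # \dot{p}^t = p^t / sum p^t
    pm_dot = torch.softmax(log_pm, dim=0)  # \dot{p}^m = p^m / sum p^m
    kl = torch.sum(pt_dot * (pt_dot.log() - pm_dot.log()))
    M = torch.sqrt(2.0 * kl.clamp_min(0.0))
    return -Q + eta * M
```

Minimizing this loss with a standard optimizer over the parameters behind `log_px_y0`, `log_px_y1`, and `log_tau0` corresponds to one iteration of Algorithm 1 below.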
We summarize the process of EMTL in Algorithm 1.
Algorithm 1: EMTL Algorithm
Result: EMTL classifier $p^m(y = 1|x)$
Initialize $\theta_s = [\omega_{s0}, \omega_{s1}, \tau_{s0}]$ and $\omega_t$ using $D^s$ and $U^t$, respectively;
Initialize $\theta_m^{(0)}$ by $\theta_s$ and set $t = 1$;
while $t \leq n_{itr}$ do
  $\theta_m^{(t)} = \arg\min_\theta -Q(\theta|\theta_m^{(t-1)}) + \eta M(\theta)$;
  $t = t + 1$;
end
$p^m(x, y) = p(x, y; \theta_m^{(t)})$;
$p^m(y = 1|x) = \dfrac{p^m(x|y=1)\, p^m(y=1)}{p^m(x)} = \dfrac{(1 - \tau_{m0}^{(t)})\, f(x; \omega_{m1}^{(t)})}{(1 - \tau_{m0}^{(t)})\, f(x; \omega_{m1}^{(t)}) + \tau_{m0}^{(t)}\, f(x; \omega_{m0}^{(t)})}$;" }, { "heading": "6 EXPERIMENTS", "text": "In this section, we present experiments on both synthetic (Section 6.1) and real-life data (Section 6.2) to validate the effectiveness of EMTL." }, { "heading": "6.1 EXPERIMENTS ON SYNTHETIC DATA SET", "text": "We study the performance of EMTL under conditional shift, where $p^s(x|y) \neq p^t(x|y)$, using a variant of the inter-twinning moons example (Ganin et al., 2016). In the source domain we generate an upper moon (class 0) and a lower moon (class 1), with 1000 points in each class. In the target domain, we first generate 2000 samples as in the source and then rotate the data by 40° to make the target distribution of $x|y$ different from the source (a sketch of this data generation is given at the end of this subsection). Figure 2 (left) shows the source and target distributions. In these experiments, we set the number of Gaussian components to 10 and the hidden layer dimension to 30 in the RNADE model.
We set $\eta$ to 1 and 200 to illustrate how a large $\eta$ helps the model escape from the covariate shift constraint. Figure 2 (upper right) shows the prediction results on the target data using $\eta = 1$. When $n_{itr} = 0$, the EMTL classifier is the source Bayes classifier. In the upper moon, the model misclassifies the middle and tail parts as class 1. This is because, according to the source distribution, these areas are closer to class 1. The same misclassification occurs in the lower moon. As $n_{itr}$ increases, the misclassification reduces only slightly, because the objective function focuses more on optimizing the Q function, thus keeping $p(y|x)$ stable in each iteration. In contrast, in Figure 2 (bottom right), when setting $\eta$ to 200, the first iteration reduces the misclassification significantly, and the error finally converges to zero. With a large $\eta$, the conclusion of this example is twofold: i) the $p^s(y|x) = p^t(y|x)$ constraint is relaxed, resulting in a better adaptation result, and ii) a one-step iteration already increases the performance significantly, suggesting that we do not need many iterations. Based on ii), $n_{itr}$ is fixed at 1 in the following experiments. We show more experimental results using different values of $\eta$ in Appendix A.1 and Figure 3.
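For reference, here is a minimal sketch of the rotated inter-twinning moons setup described above, using scikit-learn's `make_moons` as the moon generator; the exact noise level and random seeds are our assumptions, since the paper does not report them.

```python
import numpy as np
from sklearn.datasets import make_moons

def make_rotated_moons(n=2000, angle_deg=40.0, noise=0.1, seed=0):
    """Source: standard two moons; target: the same generator rotated by angle_deg."""
    Xs, ys = make_moons(n_samples=n, noise=noise, random_state=seed)
    Xt, yt = make_moons(n_samples=n, noise=noise, random_state=seed + 1)
    a = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(a), -np.sin(a)],
                    [np.sin(a),  np.cos(a)]])
    Xt = Xt @ rot.T  # rotate the target domain to induce conditional shift
    return Xs, ys, Xt, yt  # yt stays hidden from the learner in UDA

Xs, ys, Xt, yt = make_rotated_moons()
```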
" }, { "heading": "6.2 EXPERIMENTS ON REAL-LIFE DATA SETS", "text": "In this section, we validate EMTL on real-life data sets by comparing its performance with two standard supervised learning algorithms and three domain adaptation algorithms. The validation is conducted on three UCI data sets and the Amazon reviews data set. First, we create two benchmarks: the source RF/SVM is the model trained only on source data (as a baseline), and the target RF/SVM is the model trained only on labeled target data (as an upper bound). A random forest (RF) classifier is used on the UCI data sets, and a support vector machine (SVM) is used on the Amazon reviews data set. The three DA algorithms are kernel mean matching (KMM, Huang et al. (2007)), subspace alignment (SA, Fernando et al. (2013)), and domain adversarial neural network (DANN, Ganin et al. (2016)). For the UCI data sets, both KMM and SA are based on RF, and for the Amazon reviews data set SVM is used. In KMM, we use an RBF kernel with the kernel width set to the median distance among the data. In DANN, $\lambda$ is fixed at 0.1. In EMTL, we set the number of components to 5 and the hidden layer size to 10 for the RNADE model, and $\eta$ to 1. For each transfer task, five-fold cross validation (CV) is conducted. In each CV fold, we randomly select 90% of the source samples and 90% of the target samples to train the model. We average the output of the five models and calculate the 95% confidence interval of the mean. For the UCI tasks, the ROC AUC score is the metric used, since we are dealing with imbalanced classification tasks. For the Amazon reviews tasks, accuracy is the metric used. Tables 1 and 2 summarize the experimental results. Numbers marked in bold indicate the top-performing DA algorithms (more than one bold number means they are not significantly different).
UCI data sets. Three UCI data sets (Abalone, Adult, and Bank Marketing) are used in our experiments (Dua & Graff, 2017; Moro et al., 2014). We preprocess the data first: i) we only select numerical features; ii) we add uniform noise to smooth the data from integer to real values for the Adult and Bank data sets. Since the original goal in these data sets is not transfer learning, we use a variant of the biased sampling approach proposed by Gretton et al. (2009) and Bifet & Gavaldà (2009) to create different domains for each data set. More precisely, for each data set we train an RF classifier to find the most important feature, then sort the data along this feature and split the data in the middle. We regard the first 50% (denoted by A) and the second 50% (denoted by B) as the two domains (a sketch of this split procedure is given at the end of this subsection). When doing domain adaptation, we use 75% of the target domain samples to train the model and the other 25% of the target domain samples as test data. Finally, we use a normal quantile transformation to normalize the source and target data sets, respectively. Table 3 in Appendix A.2 summarizes the features of the data sets we created for the experiments. Table 1 shows the results on the test data for the UCI data sets. We find that the performance of EMTL is not significantly different from DANN on all tasks (remember that our goal was not to beat the state of the art but to match it, without accessing the source data at the adaptation phase). On the two Adult tasks and Bank B→A, although the average score of EMTL is less than that of the target RF, the differences are small.
Amazon reviews. This data set (Ganin et al., 2016) includes reviews of four products, books (B), DVDs (D), electronics (E), and kitchen appliances (K), from the Amazon website. Each product (or domain) has 2000 labeled reviews and about 4000 unlabeled reviews. Each review is encoded by a 5000-dimensional feature vector and a binary label (if it is labeled): 0 if its ranking is lower than three stars, and 1 otherwise. We create twelve transfer learning tasks using these four domains. As RNADE is not designed for ultra-high-dimensional cases, we overcome this constraint by reducing the number of features from 5000 to 5 using a feed-forward neural network (FNN). More precisely, for each task we train a 2-hidden-layer FNN on the source data. Then, we cut the last layer and use the trained network to encode both source and target data into 5 dimensions. Table 2 shows the results on the test data for the Amazon reviews data set. We notice that EMTL is slightly better than DANN in most of the tasks and still comparable with both KMM and SA.
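As promised above, here is a minimal sketch of the biased domain split used for the UCI tasks, assuming NumPy arrays `X` (numerical features) and `y` (labels) and scikit-learn; hyperparameters other than the 100 trees are our assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import QuantileTransformer

def biased_domain_split(X, y, seed=0):
    """Split (X, y) into domains A/B along the most important RF feature."""
    rf = RandomForestClassifier(n_estimators=100, random_state=seed).fit(X, y)
    top = int(np.argmax(rf.feature_importances_))  # most important feature
    order = np.argsort(X[:, top])                  # sort along that feature
    half = len(y) // 2
    idx_a, idx_b = order[:half], order[half:]      # first/second 50%
    # normal quantile transformation, applied to each domain separately
    qt = lambda Z: QuantileTransformer(
        output_distribution="normal", random_state=seed).fit_transform(Z)
    return (qt(X[idx_a]), y[idx_a]), (qt(X[idx_b]), y[idx_b])
```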
}, { "heading": "7 CONCLUSIONS AND FUTURE WORK", "text": "In this paper, we have presented a density-estimation-based unsupervised domain adaptation approach EMTL. Thanks to the excellent performance of autoregressive mixture density models (e.g., RNADE) on medium-dimensional problems, EMTL is competitive to state-of-the-art solutions. The advantage of EMTL is to decouple the source density estimation phase from the model adaptation phase: we do not need to access the source data when adapting the model to the target domain. This property allows our solution to be deployed in applications where the source data is not available after preprocessing. In our future work, we aim to extend EMTL to more general cases, including high-dimensional as well as more complex data (e.g., time series)." }, { "heading": "A APPENDIX", "text": "A.1 INTER-TWINNING MOONS EXAMPLE\nWe test three η settings and compare the corresponded AUC and accuracy in Appendix Figure 3. We find that as n itr increase, the AUC and accuracy will increase too. In each fixed n itr, a larger η always has higher AUC and accuracy.\nA.2 UCI EXPERIMENTS\nWe summarize the size and class ratio information of UCI data sets in Appendix Table 3.\nParameter settings in UCI data sets. We enumerate the parameter settings on UCI experiment here.\n• Random forest models with 100 trees are used as the classifier. • For DANN, we set the feature extractor, the label predictor, and the domain classifier as\ntwo-layer neural networks with hidden layer dimension 20. The learning rate is fixed as 0.001.\n• For EMTL, we fix the learning rate as 0.1 except for the task Abalone B→ A (where we set it to 0.001) as it did not converge. As mentioned in section 6.1, we only do one EM iteration.\nParameter settings in Amazon reviews dataset. We enumerate the parameter settings choice of Amazon reviews experiment here.\n• SVM has been chosen over RF because it showed better results in the case of Amazon reviews experimentation\n• We run a grid search to find the best C parameter for SVM over one task (from books to dvd) the best result C = 4.64E − 04 is then used for all tasks and for source svm, target svm, KMM and SA solutions.\n• For DANN, we set the feature extractor, the label predictor, and the domain classifier as one-layer neural networks with hidden layer dimension 50. The learning rate is fixed as 0.001.\n• FNN is composed of 2 hidden layers of dimensions 10 and 5 (the encoding dimension). we added a Gaussian Noise, Dropout, Activity Regularization layers in order to generalize better and guarantee better encoding on target data.\n• For EMTL, we fix the learning rate as 0.001 and only do one EM iteration.\nNote that the presented result of Amazon reviews data set in Table 2 have been rounded to one digit. This explains why the 95% confidence interval of the mean is sometimes equal to 0.0 and why some values are not in bold." } ]
2020
EMTL: A GENERATIVE DOMAIN ADAPTATION APPROACH
SP:4ebb53f9acc9e99dc57bb71b548aabde7dccbef7
[ "The paper presents a new model based on the Graphical Neural Network (GNN). The proposed model adopts probability distributions called copulas and is called the Copula Graphical Neural Network (CopulaGNN). Two parametrizations of the CopulaGNN are given, and the learning of the proposed model is discussed. Experiments suggest that the CopulaGNN outperforms existing GNNs and MLP in almost all setups." ]
Graph-structured data are ubiquitous. However, graphs encode diverse types of information and thus play different roles in data representation. In this paper, we distinguish the representational and the correlational roles played by the graphs in node-level prediction tasks, and we investigate how Graph Neural Network (GNN) models can effectively leverage both types of information. Conceptually, the representational information provides guidance for the model to construct better node features, while the correlational information indicates the correlation between node outcomes conditional on node features. Through a simulation study, we find that many popular GNN models are incapable of effectively utilizing the correlational information. By leveraging the idea of the copula, a principled way to describe the dependence among multivariate random variables, we offer a general solution. The proposed Copula Graph Neural Network (CopulaGNN) can take a wide range of GNN models as base models and utilize both representational and correlational information stored in the graphs. Experimental results on two types of regression tasks verify the effectiveness of the proposed method.
[ { "affiliations": [], "name": "Jiaqi Ma" }, { "affiliations": [], "name": "Bo Chang" }, { "affiliations": [], "name": "Xuefei Zhang" } ]
[ { "authors": [ "Alexander Bauer", "Claudia Czado", "Thomas Klein" ], "title": "Pair-copula constructions for non-gaussian dag models", "venue": "Canadian Journal of Statistics,", "year": 2012 }, { "authors": [ "Ronald S Burt" ], "title": "Structural holes: The social structure of competition", "venue": "Harvard university press,", "year": 2009 }, { "authors": [ "A Colin Cameron", "Frank AG Windmeijer" ], "title": "R-squared measures for count data regression models with applications to health-care utilization", "venue": "Journal of Business & Economic Statistics,", "year": 1996 }, { "authors": [ "Ines Chami", "Sami Abu-El-Haija", "Bryan Perozzi", "Christopher Ré", "Kevin Murphy" ], "title": "Machine learning on graphs: A model and comprehensive taxonomy", "venue": "arXiv preprint arXiv:2005.03675,", "year": 2020 }, { "authors": [ "Claudia Czado" ], "title": "Analyzing dependent data with vine copulas", "venue": "Lecture Notes in Statistics,", "year": 2019 }, { "authors": [ "Adrian Dobra", "Alex Lenkoski" ], "title": "Copula Gaussian graphical models and their application to modeling functional disability data", "venue": "The Annals of Applied Statistics,", "year": 2011 }, { "authors": [ "Gal Elidan" ], "title": "Copula bayesian networks. In Advances in neural information processing systems, pp", "venue": null, "year": 2010 }, { "authors": [ "Aditya Grover", "Jure Leskovec" ], "title": "node2vec: Scalable feature learning for networks", "venue": "In Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining,", "year": 2016 }, { "authors": [ "Will Hamilton", "Zhitao Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Peter D Hoff", "Adrian E Raftery", "Mark S Handcock" ], "title": "Latent space approaches to social network analysis", "venue": "Journal of the american Statistical association,", "year": 2002 }, { "authors": [ "Junteng Jia", "Austion R Benson" ], "title": "Residual correlation in graph neural network regression", "venue": "In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2020 }, { "authors": [ "Harry Joe" ], "title": "Dependence Modeling with Copulas", "venue": null, "year": 2014 }, { "authors": [ "Michael I Jordan" ], "title": "Graphical models", "venue": "Statistical science,", "year": 2004 }, { "authors": [ "Hannes Kazianka", "Jürgen Pilz" ], "title": "Copula-based geostatistical modeling of continuous and discrete data including covariates", "venue": "Stochastic environmental research and risk assessment,", "year": 2010 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Thomas N Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "arXiv preprint arXiv:1609.02907,", "year": 2016 }, { "authors": [ "Johannes Klicpera", "Aleksandar Bojchevski", "Stephan Günnemann" ], "title": "Predict then propagate: Graph neural networks meet personalized pagerank", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Steffen L Lauritzen" ], "title": "Graphical models, volume 17", "venue": null, "year": 1996 }, { "authors": [ "Cheng Li", "Jiaqi Ma", "Xiaoxiao Guo", "Qiaozhu Mei" ], "title": "Deepcas: 
An end-to-end predictor of information cascades", "venue": "In Proceedings of the 26th international conference on World Wide Web,", "year": 2017 }, { "authors": [ "Tianxi Li", "Elizaveta Levina", "Ji Zhu" ], "title": "Prediction models for network-linked data", "venue": "The Annals of Applied Statistics,", "year": 2019 }, { "authors": [ "Han Liu", "Fang Han", "Ming Yuan", "John Lafferty", "Larry Wasserman" ], "title": "High-dimensional semiparametric Gaussian copula graphical models", "venue": "The Annals of Statistics,", "year": 2012 }, { "authors": [ "Tiancheng Lou", "Jie Tang" ], "title": "Mining structural hole spanners through information diffusion in social networks", "venue": "In Proceedings of the 22nd international conference on World Wide Web,", "year": 2013 }, { "authors": [ "Jiaqi Ma", "Weijing Tang", "Ji Zhu", "Qiaozhu Mei" ], "title": "A flexible generative framework for graph-based semi-supervised learning", "venue": "arXiv preprint arXiv:1905.10769,", "year": 2019 }, { "authors": [ "Miller McPherson", "Lynn Smith-Lovin", "James M Cook" ], "title": "Birds of a feather: Homophily in social networks", "venue": "Annual review of sociology,", "year": 2001 }, { "authors": [ "Qiaozhu Mei", "Duo Zhang", "ChengXiang Zhai" ], "title": "A general optimization framework for smoothing language models on graph structures", "venue": "In Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval,", "year": 2008 }, { "authors": [ "Bryan Perozzi", "Rami Al-Rfou", "Steven Skiena" ], "title": "Deepwalk: Online learning of social representations", "venue": "In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining,", "year": 2014 }, { "authors": [ "Meng Qu", "Yoshua Bengio", "Jian Tang" ], "title": "Gmnn: Graph markov neural networks", "venue": "arXiv preprint arXiv:1905.06214,", "year": 2019 }, { "authors": [ "Benedek Rozemberczki", "Carl Allen", "Rik Sarkar" ], "title": "Multi-scale attributed node embedding", "venue": "arXiv preprint arXiv:1909.13021,", "year": 2019 }, { "authors": [ "A. 
Sklar" ], "title": "Fonctions de répartition à n dimensions et leurs marges", "venue": "Publications de l’Institut de Statistique de l’Université de Paris,", "year": 1959 }, { "authors": [ "Jian Tang", "Meng Qu", "Mingzhe Wang", "Ming Zhang", "Jun Yan", "Qiaozhu Mei" ], "title": "Line: Large-scale information network embedding", "venue": "In Proceedings of the 24th international conference on world wide web,", "year": 2015 }, { "authors": [ "Jie Tang", "Jing Zhang", "Limin Yao", "Juanzi Li", "Li Zhang", "Zhong Su" ], "title": "Arnetminer: Extraction and mining of academic social networks", "venue": "In KDD’08,", "year": 2008 }, { "authors": [ "Johan Ugander", "Lars Backstrom", "Cameron Marlow", "Jon Kleinberg" ], "title": "Structural diversity in social contagion", "venue": "Proceedings of the National Academy of Sciences,", "year": 2012 }, { "authors": [ "Petar Veličković", "Guillem Cucurull", "Arantxa Casanova", "Adriana Romero", "Pietro Liò", "Yoshua Bengio" ], "title": "Graph attention networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Keyulu Xu", "Weihua Hu", "Jure Leskovec", "Stefanie Jegelka" ], "title": "How powerful are graph neural networks", "venue": "arXiv preprint arXiv:1810.00826,", "year": 2018 }, { "authors": [ "Rex Ying", "Ruining He", "Kaifeng Chen", "Pong Eksombatchai", "William L Hamilton", "Jure Leskovec" ], "title": "Graph convolutional neural networks for web-scale recommender systems", "venue": "In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2018 }, { "authors": [ "Bing Yu", "Haoteng Yin", "Zhanxing Zhu" ], "title": "Spatio-temporal graph convolutional networks: A deep learning framework for traffic forecasting", "venue": "arXiv preprint arXiv:1709.04875,", "year": 2017 }, { "authors": [ "Xiaojin Zhu", "Zoubin Ghahramani", "John D Lafferty" ], "title": "Semi-supervised learning using gaussian fields and harmonic functions", "venue": "In Proceedings of the 20th International conference on Machine learning", "year": 2003 } ]
[ { "heading": "1 INTRODUCTION", "text": "Graphs, as flexible data representations that store rich relational information, have been commonly used in data science tasks. Machine learning methods on graphs (Chami et al., 2020), especially Graph Neural Networks (GNNs), have attracted increasing interest in the research community. They are widely applied to real-world problems such as recommender systems (Ying et al., 2018), social network analysis (Li et al., 2017), and transportation forecasting (Yu et al., 2017). Among the heterogeneous types of graph-structured data, it is worth noting that graphs usually play diverse roles in different contexts, different datasets, and different tasks. Some of the roles are relational, as a graph may indicate certain statistical relationships of connected observations; some are representational, as the topological structure of a graph may encode important features/patterns of the data; some are even causal, as a graph may reflect causal relationships specified by domain experts.\nIt is crucial to recognize the distinct roles of a graph in order to correctly utilize the signals in the graph-structured data. In this paper, we distinguish the representational role and the correlational role of graphs in the context of node-level (semi-)supervised learning, and we investigate how to design better GNNs that take advantage of both roles.\nIn a node-level prediction task, the observed graph in the data may relate to the outcomes of interest (e.g., node labels) in multiple ways. Conceptually, we call that the graph plays a representational\n1The code is available at https://github.com/jiaqima/CopulaGNN.\nrole if one can leverage it to construct better feature representations. For example, in social network analysis, aggregating user features from one’s friends is usually helpful (thanks to the well-known homophily phenomenon (McPherson et al., 2001)). In addition, the structural properties of a user’s local network, e.g., structural diversity (Ugander et al., 2012) and structural holes (Burt, 2009; Lou & Tang, 2013), often provide useful information for making predictions about certain outcomes of that user. On the other hand, sometimes a graph directly encodes correlations between the outcomes of connected nodes, and we call it playing a correlational role. For example, hyper-linked Webpages are likely to be visited together even if they have dissimilar content. In spatiotemporal predictions, the outcome of nearby locations, conditional on all the features, may still be correlated. We note that the graph structure may provide useful predictive information through both roles but in distinct ways.\nWhile both the representational and the correlational roles are common in graph-structured data, we find that, through a simulation study, many existing GNN models are incapable of utilizing the correlational information encoded in a graph. Specifically, we design a synthetic dataset for the node-level regression. The node-level outcomes are drawn from a multivariate normal distribution, with the mean and the covariance as functions of the graph to reflect the representational and correlation roles respectively. 
We find that when the graph only provides correlational information about the node outcomes, many popular GNN models underperform a multi-layer perceptron that does not consider the graph at all.
To mitigate this deficiency of GNNs, we propose a principled solution, the Copula Graph Neural Network (CopulaGNN), which can take a wide range of GNNs as the base model and improve their capability of modeling the correlational graph information. The key insight of the proposed method is that, by decomposing the joint distribution of node outcomes into the product of marginal densities and a copula density, the representational information and correlational information can be modeled separately. The former is modeled by the marginal densities through a base GNN, while the latter is modeled by a Gaussian copula. The proposed method also enjoys the benefit of easy extension to various types of node outcome variables, including continuous variables, discrete count variables, or even mixed-type variables. We instantiate CopulaGNN with normal and Poisson marginal distributions for continuous and count regression tasks, respectively. We also implement two types of copula parameterizations combined with two types of base GNNs.
We evaluate the proposed method on both synthetic and real-world data with both continuous and count regression tasks. The experimental results show that CopulaGNNs significantly outperform their base GNN counterparts when the graph in the data exhibits both correlational and representational roles. We summarize our main contributions as follows:
1. We raise the question of distinguishing the two roles played by the graph and demonstrate that many existing GNNs are incapable of utilizing the graph information when it plays a pure correlational role.
2. We propose a principled solution, the CopulaGNN, to integrate the representational and correlational roles of the graph.
3. We empirically demonstrate the effectiveness of CopulaGNN compared to base GNNs on semi-supervised regression tasks." }, { "heading": "2 RELATED WORK", "text": "There is extensive existing work that models either the representational role or the correlational role of the graph in node-level (semi-)supervised learning tasks. However, there are fewer methods that try to model both sides simultaneously, especially with a GNN.
Methods focusing on the representational role. As we mentioned in Section 1, the graph can help construct better node feature representations both by providing extra topological information and by guiding node feature aggregation. There is a vast body of existing work in both directions, of which we can only list a few examples. Various methods have been proposed to leverage the topological information of graph-structured data in machine learning tasks, such as graph kernels (Vishwanathan et al., 2010), node embeddings (Perozzi et al., 2014; Tang et al., 2015; Grover & Leskovec, 2016), and GNNs (Xu et al., 2018). Aggregating node features on an attributed graph has also been widely studied, e.g., through feature smoothing (Mei et al., 2008) or GNNs (Kipf & Welling, 2016; Hamilton et al., 2017). In this work, we restrict our focus to GNN models, which have been the state-of-the-art graph representation learning methods on various tasks.
Methods focusing on the correlational role. On the other hand, there has also been extensive literature on modeling the dependence of variables on connected nodes in a graph.
One group of methods is called graph-based regularization (Zhu et al., 2003; Li et al., 2019), where it is assumed that the variables associated with linked objects change smoothly, and an explicit similarity regularization is imposed among them. The correlational role of the graph is also closely related to undirected graphical models (Lauritzen, 1996; Jordan et al., 2004; Wainwright & Jordan, 2008). In graphical models, the edges in a graph provide a representation of the conditional (in)dependence structure among a set of random variables, which are represented by the node set of the graph. Finally, there has been a line of research that combines graphical models with copulas and leads to more flexible model families (Elidan, 2010; Dobra et al., 2011; Liu et al., 2012; Bauer et al., 2012). Our proposed method integrates the benefits of copulas and GNNs to capture both the representational and correlational roles.
Methods improving GNNs by leveraging the correlational graph information. A few methods explicitly leverage the correlational graph information to improve GNN training, but most of them focus on a classification setting (Qu et al., 2019; Ma et al., 2019). A recent study (Jia & Benson, 2020), which we became aware of only lately, shares a similar motivation to ours, yet our methodology differs significantly. In particular, Jia & Benson (2020) apply a multivariate normal distribution to model the correlation of node outcomes, which can be viewed as a special case of our proposed CopulaGNN when a Gaussian copula with normal marginals is used. Our method not only generalizes to other marginals (we show the effectiveness of some of them), but also has a more flexible parameterization of the correlation matrix of the copula distribution. In addition, we differ from these previous works by explicitly distinguishing the two roles of the graph in the data." }, { "heading": "3 SIMULATING THE TWO ROLES OF THE GRAPH", "text": "In this section, we investigate, through a simulation study, the representational and correlational roles of the graph in the context of node-level semi-supervised learning." }, { "heading": "3.1 NODE-LEVEL SEMI-SUPERVISED LEARNING", "text": "We start by formally introducing the problem of node-level semi-supervised learning. A graph is a tuple $G = (V, E)$, where $V = \{1, 2, \dots, n\}$ is the set of $n$ nodes and $E \subseteq V \times V$ is the set of edges; let $s = |E|$ be the number of edges. The graph is also associated with $X \in \mathbb{R}^{n \times d}$ and $y \in \mathbb{R}^n$, which are the node features and outcome labels. In the semi-supervised learning setting, we only observe the labels of $0 < m < n$ nodes. Without loss of generality, we assume the labels of nodes $\{1, 2, \dots, m\}$ are observed and those of $\{m+1, \dots, n\}$ are missing. Therefore, the label vector $y$ can be partitioned as $y = (y_{\mathrm{obs}}^\top, y_{\mathrm{miss}}^\top)^\top$. The goal of a node-level semi-supervised learning task is to infer $y_{\mathrm{miss}}$ based on $(y_{\mathrm{obs}}, X, G)$." }, { "heading": "3.2 SYNTHETIC DATA", "text": "To simulate the representational and correlational roles of the graph, we first design a synthetic dataset by specifying the joint distribution of $y$ conditional on $X$ and $G$. In particular, we let the joint distribution of the node outcomes take the form $y | X, G \sim \mathcal{N}(\mu(X, G), \Sigma(G))$, for some $\mu, \Sigma$ to be specified. In this way, the graph $G$ plays a representational role through $\mu(X, G)$ and a correlational role through $\Sigma(G)$. Specifically, we generate synthetic node-level regression data on a graph with $n$ nodes and $s$ edges (see Appendix A.1 for the whole procedure).
We first randomly generate a feature matrix X ∈ Rn×d0 . Assume A is the adjacency matrix of the graph, D is the degree matrix, and L = D−A is the graph Laplacian. Let à = A + I and D̃ = D + I . Given parameters wy ∈ Rd0 , we generate the node label vector y ∼ N (µ,Σ), where, for some γ > 0, τ > 0, and σ2 > 0,\n(a) µ = D̃−1ÃXwy , Σ = σ2I;\n(b) µ = Xwy , Σ = τ(L+ γI)−1;\n(c) µ = D̃−1ÃXwy , Σ = τ(L+ γI)−1.\nDepending on how (µ,Σ) are configured, we get three types of synthetic data settings: (a), (b), and (c). Intuitively, the graph plays a pure representational role in setting (a) since the label of a node depends on the aggregated features of its local neighborhood and the node labels are independent conditional on the node features. In setting (b), the graph plays a pure correlational role; while the means of node labels only depend on their own node features, the node labels are still correlated conditional on the features, and the correlation is determined by the graph structure. Finally, setting (c) is a combination of (a) and (b) where the graph plays both representational and correlational roles.\nIn the rest of this section, we test the performance of a few widely used GNNs under settings (a) and (b) to examine their capabilities of utilizing the representational and correlational information. We defer the experimental results under setting (c) to Section 5.2 for ease of reading." }, { "heading": "3.3 SIMULATION STUDY", "text": "Simulation Setup. We set the number of nodes n = 300, the number of edges s = 5000, and the feature dimension d0 = 10. Elements of both Wg and wy are generated i.i.d. from a standard normal distribution. For setting (a), we vary σ2 ∈ {2.5, 5, 10, 20}. For settings (b) and (c), we set γ = 0.1 and vary τ ∈ {0.5, 1, 2, 5}. We test 4 common GNN models, GCN (Kipf & Welling, 2016), GraphSAGE (Hamilton et al., 2017) (denoted as SAGE), GAT (Veličković et al., 2018), and APPNP (Klicpera et al., 2018), as well as the multi-layer perceptron (MLP).\nSimulation Results. First, we observe that all 4 types of GNNs outperform MLP under setting (a) (Figure 1a), where the graph plays a pure representational role. This is not surprising as the architectures of the GNNs encode a similar feature aggregation structure as the data. However, under setting (b) (Figure 1b) where the graph plays a pure correlational role, all 4 types of GNNs underperform MLP. This suggests that a majority of popular GNN models might be incapable of fully utilizing the correlational graph information.\nMotivated by our findings in the simulation study, in the following section, we seek methods that augment existing GNN models in order to better utilize both representational and correlational information in the graph." }, { "heading": "4 COPULA GRAPH NEURAL NETWORK", "text": "In this section, we propose a principled solution called the Copula Graph Neural Network (CopulaGNN). At the core of our method is the application of copulas, which are widely used for modeling multivariate dependence. In the rest of this section, we first provide a brief introduction to copulas (more detailed expositions can be found in the monographs by Joe (2014) and Czado (2019)), then present the proposed CopulaGNN and its parameterization, learning, and inference." 
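As a concrete entry point to the copula machinery developed next, the following minimal sketch (our own illustration, assuming NumPy and SciPy; all variable names are ours) evaluates the factorization f(y) = c(u1, . . . , un) ∏_{i=1}^{n} fi(yi) for a two-dimensional Gaussian copula with standard normal marginals and checks it against the directly evaluated joint density.

```python
# A minimal numerical sketch (not the paper's released code) of the decomposition
# f(y) = c(u_1, ..., u_n) * prod_i f_i(y_i) for a 2D Gaussian copula with
# standard normal marginals.
import numpy as np
from scipy.stats import norm, multivariate_normal

def gaussian_copula_density(u, R):
    """Density of the Gaussian copula with correlation matrix R at u in (0, 1)^n."""
    z = norm.ppf(u)                        # quantile (inverse CDF) transform
    n = len(u)
    quad = z @ (np.linalg.inv(R) - np.eye(n)) @ z
    return np.exp(-0.5 * quad) / np.sqrt(np.linalg.det(R))

# Joint density via Sklar's decomposition vs. direct evaluation.
R = np.array([[1.0, 0.6], [0.6, 1.0]])
y = np.array([0.3, -1.2])
u = norm.cdf(y)                            # probability integral transform
f_sklar = gaussian_copula_density(u, R) * np.prod(norm.pdf(y))
f_direct = multivariate_normal(mean=np.zeros(2), cov=R).pdf(y)
assert np.isclose(f_sklar, f_direct)       # the two factorizations agree
```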
}, { "heading": "4.1 INTRODUCTION TO COPULAS", "text": "In order to decompose the joint distribution of the labels y into representational and correlational components, we make use of copulas, which are widely used in multivariate statistics to model the joint distribution of a random vector.\nGeneral formulation. Sklar’s theorem (Sklar, 1959) states that any multivariate joint distribution F of a random vector Y = (Y1, . . . , Yn) can be written in terms of one-dimensional marginal distributions Fi(y) = P(Yi ≤ y) and a copula C : [0, 1]n → [0, 1] that describes the dependence structures among variables:\nF (y1, . . . , yn) = C(F1(y1), . . . , Fp(yn)).\nIn other words, one can decompose a joint distribution into two components: the marginals and the copula. Such decomposition allows a two-step approach to modeling a joint distribution: (1) learning the marginals Fi; (2) learning the copula C, where various parametric copula families are available. Furthermore, a copula C can also be regarded as the Cumulative Distribution Function (CDF) of a corresponding distribution on the unit hypercube [0, 1]n. Its copula density is denoted by c(u1, . . . , un) := ∂nC(u1, u2, . . . , un)/∂u1 · · · ∂un. The Probability Density Function (PDF) of a random vector can be represented by its corresponding copula density. If the random vector Y is continuous, its PDF can be written as\nf(y) = c(u1, . . . , un) n∏ i=1 fi(yi), (1)\nwhere fi is the PDF of Yi, ui = Fi(yi), and c is the copula density. For discrete random vectors, the form of the Probability Mass Function (PMF) is more complex. See Appendix B.2 for details.\nGaussian copula. One of the most popular copula family is the Gaussian copula. When the joint distribution F is multivariate normal with a mean of 0 and a covariance matrix of Σ, the corresponding copula is the Gaussian copula:\nC(u1, u2, · · · , un; Σ) = Φn(Φ−1(u1), · · · ,Φ−1(un); 0,R),\nwhere Φn(·; 0,R) is the multivariate normal CDF, R is the correlation matrix of Σ, and Φ−1(·) is the quantile function of the univariate standard normal distribution. Its copula density is\nc(u1, u2, . . . , un; Σ) = (detR) −1/2 exp ( −1\n2 Φ−1(u)T (R−1 − In)Φ−1(u)\n) ,\nwhere In is the identity matrix of size n and u = (u1, u2, . . . , un)." }, { "heading": "4.2 THE PROPOSED MODEL", "text": "Recall that our goal is to model both representational and correlational graph information in the conditional joint distribution of the node outcomes,\nf(y; X,G) = c(u1, . . . , un; X,G) n∏\ni=1\nfi(yi; X,G), (2)\nwhich can be decomposed into the copula density and marginal densities. In this formulation, the representational information and correlational information are naturally separated into the marginal densities fi, for i = 1, . . . , n, and the copula density c, respectively. Note that both the marginal densities and the copula density are conditional on the node features X and the graph G. Next, we need to choose a proper distribution family for each of these densities and parameterize the distribution parameters as functions of X and G.\nChoice of distribution family and parameterization for the copula density. For the distribution family, we choose the Gaussian copula as the copula family, c(u1, . . . , un; Σ(X,G;θ)), where the form of Σ(·;θ) and the learnable parameters θ remain to be specified. To leverage the correlational graph information, we draw a connection between the graph structure G and the covariance matrix Σ in the Gaussian copula density. 
Let K = Σ−1 be the precision matrix; if two nodes i and j are not linked in the graph, we constrain the corresponding (i, j)-th entry in K to be 0. In other words, the absence of an edge between nodes i and j leads to their outcome variables yi and yj being conditionally independent given all other variables. The motivation of parameterizing the precision matrix K instead of the covariance matrix Σ is closely related to undirected graphical models (Lauritzen, 1996; Jordan et al., 2004; Wainwright & Jordan, 2008), where the conditional dependence structure among a set of variables is fully represented by edges in an underlying graph. In our use case, we could view our assumption on K as a graphical model among random variables (y1, . . . , yn), where the underlying graph structure is known.\nThe conditional independence assumption has significantly reduced the number of non-zero entries in K to be estimated. However, without any further constraints, there are still |E| free parameters growing with the graph size, which can hardly be estimated accurately given only one observation of (y1, . . . , ym). In practice, we consider two ways of parametrizing K with fewer parameters.\nTwo-parameter parametrization. A rather strong but simple constraint is to assume the non-zero off-diagonal entries of K have the same value or are proportional to the corresponding entries in the normalized adjacency matrix, and introduce two global parameters controlling the overall strength of correlation. For example, we could have K = τ−1(L + γI) as we did in the simulation study in Section 3, or K = β(In − αD−1/2AD−1/2) as used in Jia & Benson (2020), where (τ, γ) or (α, β) are learnable parameters.\nRegression-based parametrization. We further propose a more flexible parameterization that allows the non-zero entries in K to be estimated by a regressor taking node features as inputs. In particular, for any (i, j)-pair corresponding to a non-zero entry of K, we set Âi,j = softplus(h(xi,xj ;θ)), where h is a two-layer MLP that takes the concatenation of xi and xj as input and outputs a scalar. Let D̂ be the degree matrix if we treat Â as a weighted adjacency matrix, and we set the precision matrix K = In + D̂ − Â. This parameterization improves the flexibility of estimating K while keeping the number of learnable parameters θ independent of the graph size n. It also ensures that K is positive-definite and thus invertible (see the code sketch below).\nChoice of distribution families and parameterization for the marginal densities. One benefit of the copula framework is the flexibility in the choice of distribution families for the marginal densities. In this work, we choose the marginal densities to be normal distributions if the labels y are continuous variables, and we choose them to be Poisson distributions if they are discrete counts. We denote the i-th marginal density function by fi(yi;ηi(X,G;θ)) and the corresponding CDF by Fi(yi;ηi(X,G;θ)), where ηi(·;θ) denotes the distribution parameters to be specified. If the i-th marginal distribution takes the form of a normal distribution N (µi, σ2i ), then ηi(X,G;θ) = (µi(X,G;θ), σ2i (X,G;θ)). We define µi(X,G;θ) as the output of a base GNN model for node i, and σ2i (X,G;θ) as the i-th diagonal element of the covariance matrix Σ(X,G;θ) as we specified in the Gaussian copula. If the i-th marginal distribution takes the form of a Poisson distribution Pois(λi), then ηi(X,G;θ) = λi(X,G;θ), and we define λi(X,G;θ) as the output of a base GNN model for node i. 
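To illustrate the pieces just described, the following PyTorch sketch (our own illustration under stated assumptions, not the authors' released code; `edge_index`, `base_gnn`, and all shapes are hypothetical) builds the regression-based precision matrix K = In + D̂ − Â from a pairwise MLP and indicates how the marginal location parameters would come from a base GNN.

```python
# Illustrative sketch (assumed shapes and names) of the regression-based
# parameterization K = I + D_hat - A_hat and the marginal heads.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PairwisePrecision(nn.Module):
    """Predicts A_hat[i, j] = softplus(h(x_i, x_j)) with a two-layer MLP h."""
    def __init__(self, d, hidden=16):
        super().__init__()
        self.h = nn.Sequential(nn.Linear(2 * d, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x, edge_index):
        # edge_index: (2, E) tensor of linked node pairs (each undirected edge
        # assumed to be listed once); entries off the edge set stay zero, which
        # encodes the conditional independence constraint.
        n = x.size(0)
        src, dst = edge_index
        w = F.softplus(self.h(torch.cat([x[src], x[dst]], dim=-1))).squeeze(-1)
        A_hat = torch.zeros(n, n, device=x.device)
        A_hat[src, dst] = w
        A_hat = A_hat + A_hat.T                  # symmetrize
        D_hat = torch.diag(A_hat.sum(dim=1))
        # I + (D_hat - A_hat) is identity plus a graph Laplacian with
        # nonnegative weights, hence positive-definite.
        return torch.eye(n, device=x.device) + D_hat - A_hat

# The marginal location parameters come from a base GNN, e.g. (assuming a
# `base_gnn` module): mu = base_gnn(x, edge_index) for normal marginals, or a
# positivity-constrained output such as softplus(base_gnn(x, edge_index)) for
# Poisson rates.
```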
Either way, the representational role of the graph is reflected in the location parameters (µi or λi) computed by a base GNN model. In practice, we can also choose other distribution families such as the log-normal or negative binomial, depending on our belief about the true distributions of the node outcomes. One can even choose different distribution families for different nodes simultaneously if necessary." }, { "heading": "4.3 MODEL LEARNING AND INFERENCE", "text": "For simplicity of notation, we write ηi(X,G;θ) and Σ(X,G;θ) as ηi and Σ throughout this section.\nModel learning. The model parameters θ are learned by maximizing the log-likelihood on the observed node labels. Given the partition of y, we can further partition the covariance matrix Σ accordingly:\ny = (yobs^T, ymiss^T)^T and Σ = [Σ00, Σ01; Σ10, Σ11],\nwhere yobs = (y1, . . . , ym) and ymiss = (ym+1, . . . , yn). In other words, Σ00 and Σ11 are the covariance matrices of the observed and missing nodes. We further denote ui = Fi(yi;ηi) for i = 1, . . . , n, uobs = (u1, . . . , um), and umiss = (um+1, . . . , un); that is, ui is the probability integral transform of the i-th label yi. According to Equation (1), the joint density can be written as the product of the copula density and the marginal densities. Therefore the loss function, i.e., the negative log-likelihood, is\nL(θ) = − log f(yobs; X,G) = − log c(uobs; Σ00) − ∑_{i=1}^{m} log fi(yi; ηi). (3)\nThe parameters θ are learned end-to-end using standard optimization algorithms such as Adam (Kingma & Ba, 2015).\nModel inference. At inference time, we are interested in the conditional distribution f(ymiss|yobs; X,G). The inference of the conditional distribution can be done via sampling. Since f(y; X,G) is modeled by the Gaussian copula, we have\n(Φ−1(uobs)^T, Φ−1(umiss)^T)^T ∼ N (0, [R00, R01; R10, R11]),\nwhere R is the correlation matrix corresponding to the covariance matrix Σ. By the property of the multivariate normal distribution, the conditional distribution of Φ−1(umiss)|Φ−1(uobs) is also multivariate normal:\nΦ−1(umiss) | Φ−1(uobs) ∼ N (R10 R00^−1 Φ−1(uobs), R11 − R10 R00^−1 R01). (4)\nThis provides a way to draw samples from f(ymiss|yobs; X,G), which we describe in Algorithm 1.\nAlgorithm 1: Model inference by sampling.\nInput: The node features X, the graph G, the observed node labels y1, . . . , ym, the learned parameters θ, the marginal CDF functions Fi(·;ηi(X,G;θ)) and their inverses Fi^−1(·;ηi(X,G;θ)) for i = 1, . . . , n, and the number of samples L.\nOutput: Predicted missing node labels ŷi, i = m+1, . . . , n.\n1: for i = m+1, . . . , n do ŷi ← 0\n2: for i = 1, . . . , m do ui ← Fi(yi;ηi(X,G;θ)); zi ← Φ−1(ui)\n3: zobs ← [z1, . . . , zm]^T\n4: R ← the correlation matrix corresponding to Σ(X,G;θ)\n5: µcond ← R10 R00^−1 zobs; Σcond ← R11 − R10 R00^−1 R01\n6: for ℓ = 1, . . . , L do\n7:   [z^ℓ_{m+1}, . . . , z^ℓ_n]^T ∼ N (µcond, Σcond)\n8:   for i = m+1, . . . , n do y^ℓ_i ← Fi^−1(Φ(z^ℓ_i);ηi(X,G;θ)); ŷi ← ŷi + y^ℓ_i / L\n9: return ŷi, i = m+1, . . . , n\nScalability. Finally, we make a brief remark on the scalability of the proposed method. During model learning, the calculation of log c(uobs; Σ00) in Eq. (3) involves the log-determinant of the precision matrix K and its submatrix. During model inference, both the conditional mean and variance involve the evaluation of R00^−1, but they can be transformed (via Schur complement) into a form that only requires solving a linear system with a submatrix of K as the coefficients. 
Thanks to the sparse structure of K, both (one-step) learning and inference can be made efficient with a computation cost linear in the graph size if K is well-conditioned. Jia & Benson (2020) provide an introduction to numerical techniques that can efficiently calculate the log-determinant (and its gradients) of a sparse matrix, as well as the solution of a linear system with sparse coefficients, which can be directly applied to accelerate our proposed method. In addition, as real-world graphs often exhibit community structures, we can further improve the scalability of the proposed method by enforcing a block structure for the precision matrix." }, { "heading": "5 EXPERIMENTS", "text": "" }, { "heading": "5.1 GENERAL SETUP", "text": "We instantiate CopulaGNN with either GCN or GraphSAGE as the base GNN models, and implement both the two-parameter parameterization (the (α, β) parameterization, denoted by “αβ-C-”, where “C” stands for copula) and the regression-based parameterization (denoted by “R-C-”). In combination, we have four variants of CopulaGNN: αβ-C-GCN, R-C-GCN, αβ-C-SAGE, and R-C-SAGE. When the outcome is a continuous variable, the normal marginal is used; when the outcome is a count variable, the Poisson marginal is used. In particular, in the former case, the αβ-C-GNN degenerates to the Correlation GNN proposed by Jia & Benson (2020). We compare different variants of CopulaGNN with their base GNN counterparts, as well as an MLP model, on two types of regression tasks: continuous outcome variables and count outcome variables. More experiment details can be found in Appendix A.3." }, { "heading": "5.2 REGRESSION WITH CONTINUOUS OUTCOME VARIABLES", "text": "We use two groups of datasets with continuous outcome variables. The first group is the synthetic data of setting (c) as described in Sections 3.2 and 3.3, where a graph provides both representational and correlational information. The second group includes four regression tasks constructed from the U.S. Election data (Jia & Benson, 2020). We use the coefficient of determination R2 to measure the model performance.\nResults. For the synthetic datasets (Table 1), we vary the value of τ , which controls the overall magnitude of the label covariance. Unsurprisingly, as τ increases, the labels become noisier and the test R2 of all models decreases. In all configurations, R-C-GCN and R-C-SAGE respectively outperform their base model counterparts, GCN and SAGE, by significant margins. This verifies the effectiveness of the proposed method when the graph provides both representational and correlational information. Another interesting observation is that GCN outperforms MLP when τ is small (0.5 and 1.0), but underperforms MLP when τ becomes large (2.0 and 5.0), whereas R-C-GCN consistently outperforms MLP. Note that τ can also be viewed as the tradeoff between the representational role and the correlational role served by the graph. The correlational role of the graph will have more influence on the outcome variables when τ becomes larger. This explains the intriguing observation: GCN fails to utilize the correlational information and its advantage on the representational information diminishes as τ increases.\nFor the U.S. Election dataset (Table 2), we observe that all variants of CopulaGNN significantly outperform their base GNN counterparts. It is interesting that the simpler two-parameter parameterization outperforms the regression-based parameterization in most setups on this dataset. 
One possible explanation is that the outcome variables of nodes that are connected in the graph tend to have strong correlations, since adjacent counties usually have similar statistics. This is indeed suggested by Jia & Benson (2020). The Unemployment task in particular, where the two-parameter parameterization appears to have the largest advantage, is shown to have the strongest correlation." }, { "heading": "5.3 REGRESSION WITH COUNT OUTCOME VARIABLES", "text": "We use two groups of datasets with count outcome variables. The first group consists of two Wikipedia datasets: Wiki-Chameleon and Wiki-Squirrel (Rozemberczki et al., 2019); both are page-page networks of Wikipedia pages with the visiting traffic as node labels. The second group is a co-citation network of papers at the EMNLP conferences. The goal is to predict the overall number of citations of each paper (including citations from outside EMNLP). We use the R2-deviance, an R2 measure for count data (Cameron & Windmeijer, 1996), to measure the model performance.\nResults. The results of the count regression tasks are shown in Table 3. Intuitively, hyper-linked web pages or co-cited papers are more likely to be visited or cited together, therefore leading to correlated outcome variables captured by the graph. Indeed, we observe that the different variants of CopulaGNN outperform their base model counterparts in almost all setups. However, as the correlation may not be as strong as in the U.S. Election dataset, we observe that the regression-based parameterization (R-C-GCN and R-C-SAGE) has a greater advantage." }, { "heading": "6 CONCLUSION", "text": "In this work, we explicitly distinguish the representational and correlational roles of the graph representation of data. We demonstrate through a simulation study that many popular GNN models are incapable of fully utilizing the correlational graph information. Furthermore, we propose CopulaGNN, a principled method that improves upon a wide range of GNNs to achieve better prediction performance when the graph plays both representational and correlational roles. Compared with the corresponding base GNN models, multiple variants of CopulaGNN yield consistently superior results on both synthetic and real-world datasets for continuous and count regression tasks." }, { "heading": "ACKNOWLEDGEMENT", "text": "Jiaqi Ma and Qiaozhu Mei were in part supported by the National Science Foundation under grant numbers 1633370 and 1620319." }, { "heading": "A EXPERIMENT DETAILS", "text": "A.1 DETAILS OF SYNTHETIC DATA GENERATION\nWe generate the synthetic data with the following procedure.\n1. Sample a node feature matrix X ∼ N (0, Id0), where X ∈ Rn×d0 and d0 is the feature dimension. 2. Generate the graph in a way similar to a latent space model (Hoff et al., 2002). First compute the latent variables Z = XWg , where Wg ∈ Rd0×d1 is a given weight matrix. Then calculate the latent distance ‖zi − zj‖2 for each node pair (i, j), 1 ≤ i < j ≤ n. Finally, assign edges between the pairs of nodes with the shortest latent distances to form a graph with n nodes and s edges. 3. Assume A is the adjacency matrix of the graph, D is the degree matrix, and L = D −A is the graph Laplacian. Let à = A + I and D̃ = D + I . 
Given parameters wy ∈ Rd0 , generate the node label vector y ∼ N (µ,Σ), where, for some γ > 0, τ > 0, and σ2 > 0,\n(a) µ = D̃−1ÃXwy , Σ = σ2I;\n(b) µ = Xwy , Σ = τ(L+ γI)−1;\n(c) µ = D̃−1ÃXwy , Σ = τ(L+ γI)−1.\nA.2 SIMULATION DETAILS FOR SECTION 3\nFor each configuration we randomly generate 100 datasets with different seeds. For each dataset, we randomly split the nodes into training, validation, and test sets equally to form a semi-supervised learning task.\nWe set the number of layers to 2 and the total number of hidden units to 16 for all models. We use the Adam optimizer (Kingma & Ba, 2015) with an initial learning rate of 0.01 to train all models by minimizing the MSE loss, with early stopping on the validation set. Finally, we report the R2 score on the test set.\nA.3 EXPERIMENT DETAILS FOR SECTION 5\nTraining details. For all neural networks involved in the experiments, we set the number of layers to 2 and the number of hidden units to 16. We use the Adam optimizer to train all the models and apply early stopping on a validation set. For the real-world datasets, the initial learning rate is chosen from {0.01, 0.001} on the validation set.\nThe U.S. Election dataset. The nodes in the election data are U.S. counties and edges connect adjacent counties on the map. Each county is associated with demographic and election statistics. In each of the regression tasks, one statistic is selected as the node outcome and the remaining statistics are used as the node features. The four regression tasks are named by the outcome statistics: Education, Election, Income, Unemployment. We randomly split the data into training, validation, and test sets with ratio 6:2:2 following Jia & Benson (2020), and we refer to their work for more details of the datasets.\nThe Wikipedia datasets. Each of the two Wikipedia datasets consists of a graph on Wikipedia, where each node is a Wikipedia page related to the animal of the page title and edges reflect mutual hyper-links between the pages. The node features are principal components of binary indicators for the presence of certain nouns. The count outcome variable of each node is the monthly traffic in units of thousands.\nThe EMNLP dataset. This dataset is constructed from the DBLP citation data provided by AMiner (Tang et al., 2008). We first extract a set of papers published at the EMNLP conference and treat each paper as a node. Then we construct a graph where two papers have an edge if they are cited simultaneously by at least two EMNLP papers. The node features are principal components of the bag-of-words of paper titles and abstracts as well as the year of publication. The node label is the number of citations of each paper from outside EMNLP. For both types of datasets, we randomly split the data into training, validation, and test sets with ratio 1:1:1." }, { "heading": "B MORE DETAILS ABOUT COPULAS", "text": "B.1 TWO-DIMENSIONAL EXAMPLES\nFigure 2 shows PDFs of two-dimensional distributions constructed using different parametric copulas; the marginal distributions are all standard normal.\nB.2 APPROXIMATING THE COPULAS FOR DISCRETE RANDOM VARIABLES\nIf the random vector Y is discrete, the copula representation of its PMF is more complex. Taking n = 2 as an example, the PMF of Y = (Y1, Y2) is\nf(y) = P(Y1 = y1, Y2 = y2) = P(Y1 ≤ y1, Y2 ≤ y2) − P(Y1 < y1, Y2 ≤ y2) − P(Y1 ≤ y1, Y2 < y2) + P(Y1 < y1, Y2 < y2) = C(u12, u22) − C(u11, u22) − C(u12, u21) + C(u11, u21),\nwhere ui1 = lim_{x→yi^−} Fi(x) = Fi(yi^−) and ui2 = Fi(yi). 
In general, the PMF has the following form:\nf(y) = ∑_{j1=1}^{2} · · · ∑_{jn=1}^{2} (−1)^{j1+···+jn} C(u_{1j1}, . . . , u_{njn}), (5)\nwhich is computationally intractable because there are 2^n summands. Kazianka & Pilz (2010) propose an approximation of the above PMF based on the generalized quantile transform. It smooths the CDF of ordinal discrete variables from a step function into a piece-wise linear one:\nf(y) ≈ c(v1, . . . , vn) ∏_{i=1}^{n} fi(yi), (6)\nwhere vi = (ui1 + ui2)/2. It has been shown that the approximation works well as long as the marginal variance is not too small. We apply this method to approximate the PMF when handling discrete random variables." } ]
2021
COPULAGNN: TOWARDS INTEGRATING REPRESENTATIONAL AND CORRELATIONAL ROLES OF GRAPHS IN GRAPH NEURAL NETWORKS
SP:78d44eef96138ddcb2b86cd1de3d9c6a63e33e32
[ "Paper proposes a way to adapt an autoregressive model (RNN in examples) to the incoming noisy signal to generate noise-free data output. The approach is interesting due to applying updates to the hidden state of the past observation. The proposed approached is named Active Tuning and evaluated on 3 toy tasks. The idea sounds interesting, however the lack of comparisons with other approaches and theoretical justification of why this approach is superior makes it hard to convince reader. " ]
We introduce Active Tuning, a novel paradigm for optimizing the internal dynamics of recurrent neural networks (RNNs) on the fly. In contrast to the conventional sequence-to-sequence mapping scheme, Active Tuning decouples the RNN’s recurrent neural activities from the input stream, using the unfolding temporal gradient signal to tune the internal dynamics into the data stream. As a consequence, the model output depends only on its internal hidden dynamics and the closed-loop feedback of its own predictions; its hidden state is continuously adapted by means of the temporal gradient resulting from backpropagating the discrepancy between the signal observations and the model outputs through time. In this way, Active Tuning infers the signal actively but indirectly based on the originally learned temporal patterns, fitting the most plausible hidden state sequence into the observations. We demonstrate the effectiveness of Active Tuning on several time series prediction benchmarks, including multiple superimposed sine waves, a chaotic double pendulum, and spatiotemporal wave dynamics. Active Tuning consistently improves the robustness, accuracy, and generalization abilities of all evaluated models. Moreover, networks trained for signal prediction and denoising can be successfully applied to a much larger range of noise conditions with the help of Active Tuning. Thus, given a capable time series predictor, Active Tuning enhances its online signal filtering, denoising, and reconstruction abilities without the need for additional training.
[]
[ { "authors": [ "Shaojie Bai", "J. Zico Kolter", "Vladlen Koltun" ], "title": "An empirical evaluation of generic convolutional and recurrent networks for sequence modeling", "venue": null, "year": 2018 }, { "authors": [ "Martin V. Butz", "David Bilkey", "Dania Humaidan", "Alistair Knott", "Sebastian Otte" ], "title": "Learning, planning, and control in a monolithic neural event inference architecture", "venue": "Neural Networks,", "year": 2019 }, { "authors": [ "Junyoung Chung", "Caglar Gulcehre", "KyungHyun Cho", "Yoshua Bengio" ], "title": "Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling", "venue": "[cs],", "year": 2014 }, { "authors": [ "Robert Geirhos", "Carlos R.M. Temme", "Jonas Rauber", "Heiko H. Schütt", "Matthias Bethge", "Felix A. Wichmann" ], "title": "Generalisation in humans and deep neural networks", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Ian Goodfellow", "Yoshua Bengio", "Aaron Courville" ], "title": "Deep Learning", "venue": null, "year": 2016 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long Short-Term memory", "venue": "Neural Computation,", "year": 1997 }, { "authors": [ "Nal Kalchbrenner", "Lasse Espeholt", "Karen Simonyan", "Aaron van den Oord", "Alex Graves", "Koray Kavukcuoglu" ], "title": "Neural machine translation in linear time", "venue": null, "year": 2016 }, { "authors": [ "Matthias Karlbauer", "Sebastian Otte", "Hendrik P.A. Lensch", "Thomas Scholten", "Volker Wulfmeyer", "Martin V. Butz" ], "title": "Inferring, predicting, and denoising causal wave dynamics", "venue": null, "year": 2020 }, { "authors": [ "Diederik P. Kingma", "Jimmy L. Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "3rd International Conference for Learning Representations,", "year": 2015 }, { "authors": [ "Hans Jürgen Korsch", "Hans-Jörg Jodl", "Timo Hartmann" ], "title": "Chaos: a program collection for the PC", "venue": "rev. and enlarged ed edition,", "year": 2008 }, { "authors": [ "Danil Koryakin", "Johannes Lohmann", "Martin V. Butz" ], "title": "Balanced echo state networks", "venue": "Neural Networks,", "year": 2012 }, { "authors": [ "Xugang Lu", "Yu Tsao", "Shigeki Matsuda", "Chiori Hori" ], "title": "Speech enhancement based on deep denoising autoencoder", "venue": "In Interspeech,", "year": 2013 }, { "authors": [ "Sebastian Otte", "Marcus Liwicki", "Andreas Zell" ], "title": "Dynamic Cortex Memory: Enhancing Recurrent Neural Networks for Gradient-Based Sequence Learning", "venue": "In Artificial Neural Networks and Machine Learning – ICANN 2014,", "year": 2014 }, { "authors": [ "Sebastian Otte", "Marcus Liwicki", "Andreas Zell" ], "title": "An Analysis of Dynamic Cortex Memory Networks", "venue": "In International Joint Conference on Neural Networks (IJCNN),", "year": 2015 }, { "authors": [ "Sebastian Otte", "Martin V. Butz", "Danil Koryakin", "Fabian Becker", "Marcus Liwicki", "Andreas Zell" ], "title": "Optimizing recurrent reservoirs with neuro-evolution", "venue": null, "year": 2016 }, { "authors": [ "Sebastian Otte", "Patricia Rubisch", "Martin V. 
Butz" ], "title": "Gradient-based learning of compositional dynamics with modular rnns", "venue": "ISBN 978-3-030-30487-4", "year": 2019 }, { "authors": [ "Jaideep Pathak", "Alexander Wikner", "Rebeckah Fussell", "Sarthak Chandra", "Brian R Hunt", "Michelle Girvan", "Edward Ott" ], "title": "Hybrid forecasting of chaotic processes: Using machine learning in conjunction with a knowledge-based model", "venue": "Chaos: An Interdisciplinary Journal of Nonlinear Science,", "year": 2018 }, { "authors": [ "William Press" ], "title": "Numerical recipes : the art of scientific computing", "venue": "UK ;;New York, 3rd ed. edition,", "year": 2007 }, { "authors": [ "Jürgen Schmidhuber", "Daan Wierstra", "Matteo Gagliolo", "Faustino Gomez" ], "title": "Training recurrent neural networks by evolino", "venue": "Neural Computation,", "year": 2007 }, { "authors": [ "Yuuya Sugita", "Jun Tani", "Martin V Butz" ], "title": "Simultaneously emerging braitenberg codes and compositionality", "venue": "Adaptive Behavior,", "year": 2011 }, { "authors": [ "Paul J. Werbos" ], "title": "Backpropagation through time: what it does and how to do it", "venue": "Proceedings of the IEEE,", "year": 1990 } ]
[ { "heading": "1 INTRODUCTION", "text": "Recurrent neural networks (RNNs) are inherently only robust against noise to a limited extent and they often generate unsuitable predictions when confronted with corrupted or missing data (cf., e.g., Otte et al., 2015). To tackle noise, an explicit noise-aware training procedure can be employed, yielding denoising networks, which are targeted to handle particular noise types and levels. Recurrent oscillators, such as echo state networks (ESNs) (Jaeger, 2001; Koryakin et al., 2012; Otte et al., 2016), when initialized with teacher forcing, however, are highly dependent on a clean and accurate target signal. Given an overly noisy signal, the system is often not able to tune its neural activities into the desired target dynamics at all. Here, we present a method that can be seen as an alternative to regular teacher forcing and, moreover, as a general tool for more robustly tuning and thus synchronizing the dynamics of a generative differentiable temporal forward model—such as a standard RNN, ESN, or LSTM-like RNN (Hochreiter & Schmidhuber, 1997; Otte et al., 2014; Chung et al., 2014; Otte et al., 2016)—into the observed data stream.\nThe proposed method, which we call Active Tuning, uses gradient back-propagation through time (BPTT) (Werbos, 1990), where the back-propagated gradient signal is used to tune the hidden activities of a neural network instead of adapting its weights. The way we utilize the temporal gradient signal is related to learning parametric biases (Sugita et al., 2011) and applying dynamic context inference (Butz et al., 2019). With Active Tuning, two essential aspects apply: First, during signal inference, the model is not driven by the observations directly, but indirectly via prediction errorinducted temporal gradient information, which is used to infer the hidden state activation sequence that best explains the observed signal. Second, the general stabilization ability of propagating signal hypotheses through the network is exploited, effectively washing out activity components (such as noise) that cannot be modeled with the learned temporal structures within the network. As a result, the vulnerable internal dynamics are kept within a system-consistent activity milieu, effectively decoupling it from noise or other unknown distortions that are present in the defective actual signal.\nIn this work we show that Active Tuning elicits enhanced signal filtering abilities, without the need for explicitly training distinct models for exactly such purposes. Excitingly, this method allows for instance the successful application of an entirely noise-unaware RNN (trained on clean, ideal data) under highly noisy and unknown conditions.\nIn the following, we first detail the Active Tuning algorithm. We then evaluate the RNN on three time series benchmarks—multiple superimposed sine waves, a chaotic pendulum, and spatiotemporal wave dynamics. The results confirm that Active Tuning enhances noise robustness in all cases. The mechanism mostly even beats the performance of networks that were explicitly trained to handle a particular noise level. It can also cope with missing data when tuning the predictor’s state into the observations. In conclusion, we recommend to employ Active Tuning in all time series prediction cases, when the data is known to be noisy, corrupted, or to contain missing values and the generative differentiable temporal forward model—typically a particular RNN architecture—knows about the potential underlying system dynamics. 
The resulting data processing system will be able to handle a larger range of noise and corrupted data, filtering the signal, generating more accurate predictions, and thus identifying the underlying data patterns more accurately and reliably." }, { "heading": "2 ACTIVE TUNING", "text": "The starting point for the application of Active Tuning is a trained temporal forward model. This may be, as mentioned earlier, an RNN, but could also be another type of temporal model. The prerequisite is, however, a differentiable model that implements dependencies over time, such that BPTT can be used to route gradient information backward through the computational forward chain. Without loss of generality, we assume that the model of interest, whose forward function may be referred to as fM , fulfills the following structure:\nfM : (σt, xt) ↦ (σt+1, x̃t+1), (1)\nwhere σt is the current latent hidden state of the model (e.g. the hidden outputs of LSTM units, their cell states, or any other latent variable of interest) and xt is the current signal observation. Based on this information fM generates a prediction for the next input x̃t+1 and updates its latent state to σt+1 accordingly.\nFollowing the conventional inference scheme, we feed a given sequence time step by time step into the network and receive a one-step-ahead prediction after each particular step. Over time, this effectively synchronizes the network with the observed signal. Once the network dynamics are initialized, which is typically realized by teacher forcing, the network can generate predictions and its dynamics can be driven further into the future in a closed-loop manner, whereby the network feeds itself with its own predictions. To realize both next-step and closed-loop predictions, direct contact with the signal is required to drive the teacher forcing process. In contrast, Active Tuning decouples the network from the direct influence of the signal. Instead, the model is permanently kept in closed-loop mode, which initially prevents the network from generating meaningful predictions. Over a certain time frame, Active Tuning keeps track of the recent signal history, the recent hidden states of the model, as well as its recent predictions. We call this time frame the (retrospective) tuning horizon or tuning length (denoted by R).\nThe principle of Active Tuning can best be explained with the help of Figure 1 and Algorithm 1. The latter gives a more formal perspective onto the principle. Note that for every invocation of the procedure a previously unrolled forward chain (from the previous invocation or an initial unrolling) is assumed. L refers to the prediction error between the entire unrolled prediction sequence and the respective observations, whereas Lt′ refers to the local prediction error just for a time step t′. With every newly perceived and potentially noise-affected signal observation xt, one or multiple tuning cycles are performed. Every tuning cycle hereby consists of the following stages: First, from the currently believed sequence of signal predictions (which is in turn based on a sequence of hidden states) and the actual observed recent inputs, a prediction error is calculated and propagated back into the past reversely along the unfolded forward computation sequence. The temporal gradient travels to the very left of the tuning horizon and is finally projected onto the seed hidden state σt−R, which is then adapted by applying the gradient signal in order to minimize the encountered prediction error. 
This adaptation can be done using any gradient-based optimizer. Note that in this paper, we exclusively use Adam (Kingma & Ba, 2015), but other optimizers are possible as well. Second, after the adaptation of this seed state (and maybe the seed input as well), the prediction sequence is rolled out from the past into the present again, effectively refining the output sequence towards a better explanation of the recently observed signal. Each tuning cycle thus updates the current prediction x̃t and the current hidden state σt, from which a closed-loop future prediction can be rolled out, if desired.\nAlgorithm 1: Active Tuning procedure\nInput: Current observation xt\nOutput: Prediction x̃t (filtered output), predictive hidden state σt\nx̃t, σt ← fM (x̃t−1, σt−1) /* Generate current prediction based on previous forward chain */\nfor c ← 1 to C do /* Perform multiple tuning cycles */\nfor t′ ← t down to t−R do /* Back-propagate prediction error */\ngt′ ← ∂L/∂σt′ = ∂Lt′/∂σt′ + gt′+1 · ∂σt′+1/∂σt′ (the second term is omitted for t′ = t)\nend for\nσt−R ← update(σt−R, gt−R) /* Perform tuning step (e.g. with the Adam update rule) */\nfor t′ ← t−R+1 to t do /* Roll out forward chain again based on adapted hidden state */\nx̃t′, σt′ ← fM (x̃t′−1, σt′−1)\nend for\nend for\nreturn x̃t, σt\nTo transition into the next world time step, one forward step has to be computed. The formerly leftmost seed states can be discarded and the recorded history is shifted by one time step, making σt−R+1 the new seed state that will be tuned within the next world time step. From then on, the procedure is repeated, yielding the continuous adaptive tuning process. As a result, the model is predominantly driven by its own imagination, that is, its own top-down predictions. Meanwhile, the predictions themselves are adapted by means of the temporal gradients based on the accumulated prediction error, but not by the signal directly. In a nutshell, Active Tuning realizes a gradient-based mini-optimization procedure on any of the model’s latent variables within one world time step. While it needs to be acknowledged that this process draws on additional computational resources, in this paper we investigate the resulting gain in signal processing robustness.\nIntuitively speaking, Active Tuning tries to fit known temporal patterns, as memorized within the forward model, to the concurrently observed data. Due to the strong pressure towards consistency maintenance, which is naturally enforced by means of the temporal gradient information in combination with the repeatedly performed forward passes of the hidden state activities, the network will generate adaptations and potential recombinations of patterns that it has learned during training. Occurrences that cannot be generated from the repertoire of neural dynamics will therefore not appear (or only in significantly suppressed form) in the model’s output. As a consequence, there is a much smaller need to strive for noise robustness during training. Our results below indeed confirm that the model may be trained on clean, idealized target signals. However, imprinting a slight denoising tendency during training proves to be useful when facing more noisy data. Enhanced with our Active Tuning scheme, the model will be able to robustly produce high-quality outputs even under extremely adverse conditions—as long as (some of) the assumed target signals are actually present. Our scheme is thus a tool that can be highly useful in various application scenarios for signal reconstruction and flexible denoising. 
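To make the procedure concrete, the following condensed PyTorch sketch implements one world time step of Algorithm 1 under our own simplifying assumptions (a `model(x, state)` callable playing the role of fM, mean squared error as the loss, and illustrative buffer handling):

```python
# A condensed sketch of one Active Tuning world time step; not the authors' code.
import torch

def active_tuning_step(model, x_obs_hist, seed_state, seed_x, cycles=10, lr=1e-2):
    """x_obs_hist: list of the last R+1 observations; seed_state: sigma_{t-R}."""
    seed_state = seed_state.detach().requires_grad_(True)
    optim = torch.optim.Adam([seed_state], lr=lr)   # tune the hidden state, not the weights
    for _ in range(cycles):
        optim.zero_grad()
        x, state = seed_x, seed_state
        loss = 0.0
        for x_obs in x_obs_hist:                    # closed-loop roll-out over the horizon
            x, state = model(x, state)
            loss = loss + ((x - x_obs) ** 2).mean() # prediction error vs. observations
        loss.backward()                             # temporal gradient onto the seed state
        optim.step()
    with torch.no_grad():                           # final roll-out with the tuned seed
        x, state = seed_x, seed_state
        for _ in x_obs_hist:
            x, state = model(x, state)
    return x, state                                 # filtered prediction, current state
```

Note that only the seed hidden state is registered with the optimizer; the network weights remain untouched, which is the defining property of the approach.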
Nevertheless, it should be mentioned that with Active Tuning the computational overhead for inference scales with the number of tuning cycles and the tuning length." }, { "heading": "3 EXPERIMENTS", "text": "In order to investigate the abilities of Active Tuning we studied its behavior on three different types of time series data, namely, one-dimensional linear dynamics, two-dimensional nonlinear dynamics, and distributed spatiotemporal dynamics. For all three problem domains we used a comparable setup except for the particular recurrent neural network architectures applied. We trained the networks as one-step-ahead predictors whose task is to predict the next input given both the current input and the history of inputs aggregated in the latent hidden state of the models. The target sequences were generated directly from the clean input sequences by realizing a shift of one time step. Moreover, we trained networks under six different denoising conditions (normally distributed) per experiment, where we fed a potentially noisy signal into the network and provided the true signal (one time step ahead) as the target value (Lu et al., 2013; Otte et al., 2015; Goodfellow et al., 2016). These conditions are determined by their relative noise ratios: 0.0 (no noise), 0.05, 0.1, 0.2, 0.5, and 1.0, where the ratios depend on the respective base signal statistics. For instance, a noise ratio of 0.1 means that the noise added to the input has a standard deviation of 0.1 times the standard deviation of the base signal. As a result we obtained predictive denoising experts for each of these conditions. All models were trained with Adam (Kingma & Ba, 2015) using its default parameters (learning rate η = 0.001, β1 = 0.9 and β2 = 0.999) over 100 (first two experiments) or 200 (third experiment) epochs, respectively." }, { "heading": "3.1 MULTI-SUPERIMPOSED OSCILLATOR", "text": "The first experiment is a variant of the multiple superimposed oscillator (MSO) benchmark (Schmidhuber et al., 2007; Koryakin et al., 2012; Otte et al., 2016). Multiple sine waves with different frequencies, phase-shifts, and amplitudes are superimposed into one signal (cf. Eq. 2 in Section A.1), where n gives the number of superimposed waves, fi the frequency, ai the amplitude, and ϕi the phase-shift of each particular wave, respectively. Typically, the task of this benchmark is to predict the further progression of the signal given some initial data points (e.g. the first 100 time steps) of the sequence. The resulting dynamics are comparatively simple as they can, in principle, be learned with a linear model. It is, however, surprisingly difficult for BPTT-based models, namely LSTM-like RNNs, to precisely continue a given sequence for more than a few time steps (Otte et al., 2019). For this experiment we considered the MSO5 dynamics with the default frequencies f1 = 0.2, f2 = 0.311, f3 = 0.42, f4 = 0.51, and f5 = 0.63. An illustration of an exemplary ground truth signal can be found in Figure 2.\nFor training, we generated 10 000 examples with 400 time steps each, using random amplitudes ai ∼ [0, 1] and random phase-shifts ϕi ∼ [0, 2π]. For testing, another 1 000 examples were generated. As base model, we used an LSTM network with one input, 32 hidden units, one linear output neuron, and no biases, resulting in 4 256 parameters. Additionally, to contrast our results with another state-of-the-art sequence-to-sequence model, temporal convolution networks (TCNs) (Kalchbrenner et al., 2016) were trained. 
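For reference, a short sketch (our own code; array shapes and names are illustrative) of the MSO5 training data generation described above, following Eq. 2 in Section A.1:

```python
# Generates MSO sequences with random amplitudes and phase shifts (Eq. 2).
import numpy as np

def generate_mso(n_samples=10_000, T=400, freqs=(0.2, 0.311, 0.42, 0.51, 0.63), rng=None):
    rng = rng or np.random.default_rng()
    t = np.arange(T)                                             # time steps
    a = rng.uniform(0.0, 1.0, size=(n_samples, len(freqs)))      # random amplitudes
    phi = rng.uniform(0.0, 2 * np.pi, size=(n_samples, len(freqs)))  # random phases
    # signal[s, t] = sum_i a_i * sin(f_i * t + phi_i)
    signal = (a[:, :, None] * np.sin(np.multiply.outer(np.asarray(freqs), t)[None]
                                     + phi[:, :, None])).sum(axis=1)
    x = signal[:, :-1]                                           # network input
    y = signal[:, 1:]                                            # one-step-ahead target
    return x, y
```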
Preliminary experiments showed that seven layers with 1, 8, 12, 16, 12, 8, and 1 feature maps, a kernel size of 3, and standard temporal dilation rate—yielding a temporal horizon of 64 time steps—tended to generate the best performance with a comparable number of parameters (i.e. 4 682). Code was taken from Bai et al. (2018)." }, { "heading": "3.2 CHAOTIC PENDULUM", "text": "The second experiment is based on the simulation of a chaotic double pendulum. As illustrated in Figure 5 (cf. Section A.2), the double pendulum consists of two joints whose angles are specified by θ1 and θ2 and two rods of length l1 and l2. Besides the lengths of the rods, the masses m1 and m2 affect the behavior of the pendulum. The pendulum’s end-effector (where m2 is attached) generates smooth, but highly non-linear trajectories. More precisely, it exhibits chaotic behavior, meaning that even slight changes of the current system state can quickly cause major changes of the pendulum’s state over time (Korsch et al., 2008; Pathak et al., 2018). It is thus typically difficult to precisely predict the dynamics of such a system for more than a few time steps into the future, making it a challenging benchmark problem for our purposes.\nIn the literature, the double pendulum’s dynamics are typically described using the equations of motion, given by Eq. 3 and Eq. 4 (cf. Section A.2), respectively, which are derived from the Lagrangian of the system and the Euler-Lagrange equations; see Korsch et al. (2008) for details. For simulating the double pendulum, we applied the fourth-order Runge-Kutta (RK4) (Press, 2007) method to numerically integrate the equations of motion. All four parameters l1, l2, m1, and m2 were set to 1.0. A temporal step size of h = 0.01 was chosen for numerical integration. The initial state of the pendulum is described by its two angles, which were selected randomly for each sample to be within θ1 ∼ [90◦, 270◦] and θ2 ∼ [θ1 ± 30◦] to ensure sufficient energy in the system. One out of ten sequences was initiated with zero angle momenta, that is θ̇1, θ̇2 = 0.0. The number of train and test samples, as well as the sequence lengths, were chosen analogously to experiment one. As base model we used an LSTM network with two inputs, 32 hidden units, two linear output neurons, and again no biases. Again, we trained TCNs on this data by changing the number of input and output feature maps to two. Otherwise, the settings were identical to the ones used in experiment one." }, { "heading": "3.3 SPATIOTEMPORAL WAVE DYNAMICS", "text": "In the third experiment we considered a more complex spatiotemporal wave propagation process, based on the wave dynamics formalized by Eq. 5 (cf. Section A.3). Here, x and y correspond to a spatial position in the simulated field, while t denotes the time step and c = 3.0 the propagation speed factor of the waves. The temporal and spatial approximation step sizes were set to ht = 0.1 and hx = hy = 1.0, respectively. No explicit boundary condition was applied, resulting in the waves being reflected at the borders and the overall energy staying constant over time.\nWe generated sequences for a regular grid of 16 × 16 pixels. See Figure 4 or Figure 6 for illustrations of the two-dimensional wave. In contrast to the previous two experiments, 200 samples with a sequence length of 80 were generated for training, whereas 20 samples over 400 time steps were used for evaluation. 
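A minimal sketch of this wave simulation (our own illustration of Eqs. 5 to 8 in Section A.3; the single-point initial excitation is an assumption, as the initialization is not specified here):

```python
# 2D wave propagation via second-order central differences (Eq. 6), with
# hx = hy = 1 and no explicit boundary handling beyond the zero-padded border.
import numpy as np

def simulate_wave(u0, steps, c=3.0, ht=0.1):
    """u0: initial field of shape (H, W); returns an array of shape (steps, H, W)."""
    u_prev = u0.copy()
    u = u0.copy()                       # zero initial velocity: u(t - ht) = u(t)
    frames = []
    for _ in range(steps):
        lap = np.zeros_like(u)          # discrete Laplacian (Eqs. 7 and 8)
        lap[1:-1, 1:-1] = (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
                           - 4.0 * u[1:-1, 1:-1])
        u_next = (c * ht) ** 2 * lap + 2.0 * u - u_prev   # Eq. 6
        u_prev, u = u, u_next
        frames.append(u)
    return np.stack(frames)

# e.g., a single-point excitation in the center of a 16 x 16 field:
u0 = np.zeros((16, 16)); u0[8, 8] = 1.0
seq = simulate_wave(u0, steps=80)
```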
As base network we used a distributed graph-RNN called DISTANA (Karlbauer et al., 2020), which is essentially a mesh of copies of the same RNN module (here an LSTM consisting of four units only), distributed over the spatial dimensions of the problem space (here a two-dimensional grid), where neighboring modules are laterally connected. We chose this wave benchmark, and this recurrent graph network in particular, to demonstrate the effectiveness of Active Tuning in a setup of higher complexity. Moreover, we again trained TCNs, here with three layers, having 1, 8, and 1 feature maps, respectively, using 3 × 3 spatial kernel sizes and standard dilation rates for the temporal dimension. Notably, while the applied DISTANA model has 200 parameters, the applied TCN has an order of magnitude more parameters (2 306). Fewer parameters yielded a significant drop in performance." }, { "heading": "4 RESULTS AND DISCUSSION", "text": "The quantitative evaluations are based on the root mean square error (RMSE) between the network outputs and the ground truth. All reported values are averages over ten independently randomly initialized and trained models. In order to elaborate on the applicability of each denoising expert to unseen noise conditions, we evaluated all models using the noise ratios 0.0, 0.1, 0.2, 0.5, and 1.0, resulting in 25 baseline scores for each experiment. These baselines were compared on all noise ratios against eight Active Tuning setups, which were based on models trained without any noise (0.0) or with only a small portion of input noise (0.05). The individual parameters of Active Tuning used to produce the results are reported in Section A.5 of the appendix. Note that in all experiments, the latent hidden outputs of the LSTM units (not the cell states) were chosen as the optimization target for the Active Tuning algorithm. Furthermore, these hidden states were initialized normally distributed with standard deviation 0.1 in all cases, whereas the cell states were initialized with zero." }, { "heading": "4.1 MSO RESULTS", "text": "The results of the MSO experiments are summarized in Table 1. Active Tuning improves the results for the weakest model (0.0) in all cases (column 3 vs. 7), partially almost by an order of magnitude. Notably, for the inference noise ratio 0.1, the noise-unaware model driven with Active Tuning becomes better than the actual expert. Recall that the model was not retrained; only the paradigm by which the model is applied was changed. On the other hand, there is no advantage for Active Tuning when the base network encountered minimal noise (0.05) during training in this experiment. For comparison, a noise-uninformed TCN (0.0) performs better than the respective RNN (cf. column 2 vs. column 3 in Table 1). Active Tuning reverses this disparity. On this benchmark, however, denoising expert TCNs clearly outperform the expert RNNs (cf. Table 6 in Section A.4).\nTo get an impression of the actual improvement of the output quality, consider Figure 2. The noise-unaware model (0.0) produces poor predictions when confronted with strong signal noise (1.0). When driven with Active Tuning instead of regular inference (teacher forcing), the output of the same model becomes smooth and approximates the ground truth reasonably well. Active Tuning thus helps to catch most of the trend information while mostly ignoring noise.\nAs an additional evaluation, Table 2 demonstrates the ability of Active Tuning to cope with missing data. The results are based on the noise-unaware model. 
While tuning into the signal, particular observations are missing (dropped out) with a certain probability ranging from 0.1 to 0.9. In case of a missing observation, the prediction of the model is used instead. Already with a dropout chance of 20% the RNN struggles to tune its hidden neural states into the signal, thus generating an error larger than 1. In contrast, exactly the same RNN model remains significantly more stable when driven with Active Tuning. Even with a dropout chance of 50 – 60% the RNN still produces errors clearly below the reference error of approximately 0.9, which is the RMSE error generated when always predicting zero. Note that with regular inference, the error decreases slightly with the highest dropout rate. This is the case because here the network receives so few inputs that it starts to produce zero outputs for some sequences.\nIt seems that during regular inference the network dynamics are overly driven by the input data. When parts of the input are missing, the internal dynamics do not synchronize with the true dynamics because multiple consecutive time steps of consistent signal observations appear necessary. Active Tuning enables the network to reflect on the recent past including its own prediction history. While it attempts to maintain consistency with the learned dynamics, it infers a hidden state sequence that best explains the (even sparsely) encountered observations and thus effectively fills the input gaps retrospectively with predictions that seem maximally plausible." }, { "heading": "4.2 PENDULUM RESULTS", "text": "For the pendulum experiment, the potential of Active Tuning becomes even more evident. The results presented in Table 3 indicate that for all noise ratios Active Tuning outperforms the respective expert RNNs, particularly when applied to the model that was trained on small noise (0.05). With increasing noise level, the problem becomes increasingly hard, and eventually impossible, to learn. For example, the 1.0-expert-model does not seem to provide any reasonable function at all, indicated by the worse RMSE score compared to other models (1.0 inference noise row). In contrast, Active Tuning can still handle these extremely unfavorable conditions surprisingly well. Figure 3 shows an exemplary case. The unknown ground truth is plotted against the noisy observations (shown in the left image). The center image shows the prediction of the reference LSTM (trained with 0.05 noise) when regular inference is applied. It is clearly difficult to recognize a coherent trajectory reflecting the dynamics of the double pendulum. With Active Tuning, on the other hand, the same network produces a mostly clean prediction that is relatively close to the ground truth sequence.\nAnalogously to MSO, we evaluated the robustness against missing data on the pendulum. The results are reported in Table 4. In contrast to the previous scenario (cf. Table 2), it is noticeable that the base model is intrinsically more stable in this experiment. Still, Active Tuning yields a significant improvement in all cases. For the mid-range dropout rates, it decreases the prediction error by approximately an order of magnitude. Even with a dropout rate of 80 %, still somewhat accurate predictions are generated. Please note again that the comparison here uses exactly the same RNN (the same structure as well as the same weights) for both regular inference and Active Tuning."
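The missing-data protocol sketched above can be summarized in a few lines (our own illustrative code; `model` is assumed to return the next prediction and hidden state):

```python
# Each observation is dropped with probability p; the model's own last
# prediction is substituted for dropped observations.
import torch

def step_with_dropout(model, x_obs, x_pred, state, p):
    """One time step of the missing-data evaluation protocol."""
    drop = torch.rand(()) < p            # per-time-step dropout decision
    x_in = x_pred if drop else x_obs     # fall back to the own prediction
    return model(x_in, state)            # -> next prediction and hidden state
```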
}, { "heading": "4.3 WAVE RESULTS", "text": "The results of the wave experiment (Table 5) consistently support the findings from the pendulum experiments. When driven with Active Tuning, the considered models produce better results than the explicitly trained denoising experts on all noise levels. Figure 4 shows in accordance with the previous experiments that the noisy signal observations (1.0) is filtered effectively and latency-free exclusively when using Active Tuning, yielding a smooth signal prediction across the entire spatiotemporal sequence. While the two-dimensional output of the network operating in conventional inference mode is hardly recognizable as a wave, the network output of the same model combined with Active Tuning clearly reveals the two-dimensional wave structure with hardly perceivable deviations from the ground truth. More qualitative results can be found in Figure 6 (Section A.4).\nWe furthermore compared performance with a common, non-recurrent, sequence-to-sequence learning architecture. Here we considered a standard TCN architecture (Bai et al., 2018), which we also trained to focus on all considered denoising levels. The full performance table is shown in Table 8 (Section A.4). The first result column in Table 5 shows that even the best TCN performance is always outperformed by Active Tuning. Importantly, DISTANA with Active Tuning outperforms the best TCN results on all noise levels even when the DISTANA model was not trained for denoising.\nWe also performed experiments with other noise distributions (e.g. salt-and-pepper noise). Somewhat surprisingly this manipulation affected the quality of the output only marginally. Thus, in contrast to deep convolutional networks (Geirhos et al., 2018), the denoising RNNs applied here did not overfit to the noise type." }, { "heading": "5 CONCLUSION", "text": "In this work we augmented RNN architectures with Active Tuning, which decouples the internal dynamics of an RNN from the data stream. Instead of relying on the input signal to set the internal network states, Active Tuning retrospectively projects the dynamic loss signal onto its internal latent states, effectively tuning them. We have shown that RNNs driven with Active Tuning can reliably denoise various types of time series dynamics, mostly yielding higher accuracy than specifically trained denoising expert RNNs. In all cases, however, the augmentation with Active Tuning has beaten the reference RNN with teacher forcing. Moreover, we have shown that Active Tuning increases the tolerance against missing data by a large extent allowing the models to generate accurate prediction even if more than 50 % of the input are missing. Comparisons with TCN have shown that Active Tuning yields superior performance. In the wave experiments, TCNs are consistently outperformed by the similarly trained recurrent graph neural network DISTANA (Karlbauer et al., 2020). When adding Active Tuning, even the noise uniformed DISTANA version outperformed the best TCN networks. Note that even though we used Active Tuning exclusively for RNNs in this paper, it is in general not restricted to such models. We are particularly interested in adapting the principle to other sequence learning models such as TCNs.\nWhile the presented results are all very encouraging, it should be noted that in our experience Active Tuning is slightly slower to tune the network into a clean signal. 
Seeing that Active Tuning can in principle be mixed with traditional teacher forcing, we are currently exploring switching teacher forcing on and off in an adaptive manner depending on the present signal conditions.\nAnother concern lies in the applied tuning length and number of tuning cycles. In the presented experiments, we used tuning lengths of up to 16 time steps with, in some cases, up to 30 tuning cycles. Additional ongoing research aims at reducing the resulting computational overhead. Ideally, Active Tuning will work reliably with a single update cycle over a tuning length of only a few time steps, which would allow Active Tuning to be performed along with the regular forward pass of the model in a fused computation step. Additionally, we aim at applying Active Tuning to real-world denoising and forecasting challenges, including speech recognition and weather forecasting." }, { "heading": "A APPENDIX", "text": "A.1 MSO EXPERIMENT\nThe MSO experiments were based on the following equation:\n$$\\mathrm{MSO}_n(t) = \\sum_{i=1}^{n} a_i \\sin(f_i t + \\phi_i) \\quad (2)$$\nA.2 CHAOTIC PENDULUM EXPERIMENT\nThe pendulum experiments were based on the following equations:\n$$\\ddot{\\theta}_1 = \\frac{\\mu g_1 \\sin(\\theta_2)\\cos(\\theta_2 - \\theta_1) + \\mu \\dot{\\theta}_1^2 \\sin(\\theta_2 - \\theta_1)\\cos(\\theta_2 - \\theta_1) - g_1 \\sin(\\theta_1) + \\frac{\\mu}{\\lambda}\\dot{\\theta}_2^2 \\sin(\\theta_2 - \\theta_1)}{1 - \\mu \\cos^2(\\theta_2 - \\theta_1)} \\quad (3)$$\n$$\\ddot{\\theta}_2 = \\frac{g_2 \\sin(\\theta_1)\\cos(\\theta_2 - \\theta_1) - \\mu \\dot{\\theta}_2^2 \\sin(\\theta_2 - \\theta_1)\\cos(\\theta_2 - \\theta_1) - g_2 \\sin(\\theta_2) - \\lambda \\dot{\\theta}_1^2 \\sin(\\theta_2 - \\theta_1)}{1 - \\mu \\cos^2(\\theta_2 - \\theta_1)} \\quad (4)$$\nwhere $\\lambda = l_1/l_2$, $g_1 = g/l_1$, $g_2 = g/l_2$, $\\mu = m_2/(m_1 + m_2)$, and $g = 9.81$ is the gravitational acceleration.\nA.3 WAVE DYNAMICS EXPERIMENT\nThe wave experiments were based on the following equation:\n$$\\frac{\\partial^2 u}{\\partial t^2} = c^2 \\left( \\frac{\\partial^2 u}{\\partial x^2} + \\frac{\\partial^2 u}{\\partial y^2} \\right) \\quad (5)$$\nThis equation was solved numerically using the method of second-order central differences, yielding\n$$u(x, y, t + h_t) \\approx c^2 h_t^2 \\left( \\frac{\\partial^2 u}{\\partial x^2} + \\frac{\\partial^2 u}{\\partial y^2} \\right) + 2u(x, y, t) - u(x, y, t - h_t) \\quad (6)$$\nwith, after approximating $\\partial^2 u / \\partial x^2$ (and analogously $\\partial^2 u / \\partial y^2$) via the same method,\n$$\\frac{\\partial^2 u}{\\partial x^2} \\approx \\frac{u(x + h_x, y, t) - 2u(x, y, t) + u(x - h_x, y, t)}{h_x^2}, \\quad (7)$$\n$$\\frac{\\partial^2 u}{\\partial y^2} \\approx \\frac{u(x, y + h_y, t) - 2u(x, y, t) + u(x, y - h_y, t)}{h_y^2}. \\quad (8)$$\nA.4 FURTHER RESULTS\nThe performances of the temporal convolution networks (TCNs) at predicting and denoising the MSO, pendulum, and spatiotemporal wave dynamics are reported in the following tables (Table 6, Table 7, and Table 8). An additional qualitative evaluation of the wave benchmark on a larger grid is shown in Figure 6.\nA.5 ACTIVE TUNING PARAMETERS\nThe tables below report all parameters of Active Tuning, including the parameters for state adaptation with Adam, for all experiments." } ]
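As a companion to the appendix equations, here is a minimal NumPy sketch of the explicit wave update in Equations 6-8. It assumes a uniform grid and zero (Dirichlet) boundary values; the boundary handling is our own choice, since the paper does not specify it.

```python
import numpy as np

def wave_step(u, u_prev, c, ht, hx, hy):
    """One explicit step of the 2D wave equation (Eq. 5) using second-order
    central differences in space and time (Eqs. 6-8).

    u, u_prev: wave field at times t and t - ht, shape (Nx, Ny)
    """
    lap = np.zeros_like(u)
    lap[1:-1, 1:-1] = (
        (u[2:, 1:-1] - 2.0 * u[1:-1, 1:-1] + u[:-2, 1:-1]) / hx**2    # d2u/dx2, Eq. 7
        + (u[1:-1, 2:] - 2.0 * u[1:-1, 1:-1] + u[1:-1, :-2]) / hy**2  # d2u/dy2, Eq. 8
    )
    u_next = (c * ht) ** 2 * lap + 2.0 * u - u_prev                   # Eq. 6
    u_next[0, :] = u_next[-1, :] = u_next[:, 0] = u_next[:, -1] = 0.0 # assumed boundary
    return u_next
```

Note that this explicit scheme is only stable when the time step satisfies the usual CFL condition, i.e., roughly c * ht * sqrt(1/hx**2 + 1/hy**2) <= 1.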
2020
null
SP:2de60266ac8f4832460bd1da6451a74f63fd8f28
[ "This paper try to leverage the benefit of Hebb learning to reduce CNN training time cost. In order to achieve this, a learning mode selection algorithm is proposed to progressively increase number of layers using Hebb learning. The writing of this paper is good and the idea is also interesting, however, the experimental part should be improved:", "This paper proposes a combination of SGD with selective application of a non-backprop learning rule (Hebbian). The two learning rules are not applied together, but rather a boundary is determined where layers prior use SGD, and the ones after use the Hebbian approach. A selection algorithm dynamically adjusts the boundary over training. For accuracy reasons, they include weak supervision by using the overall classification loss to control the sign of the update. " ]
Training Deep Neural Networks (DNNs) places immense compute requirements on the underlying hardware platforms, expending large amounts of time and energy. We propose LoCal+SGD, a new algorithmic approach to accelerate DNN training by selectively combining localized or Hebbian learning within a Stochastic Gradient Descent (SGD) based training framework. Back-propagation is a computationally expensive process that requires 2 Generalized Matrix Multiply (GEMM) operations to compute the error and weight gradients for each layer. We alleviate this by selectively updating some layers’ weights using localized learning rules that require only 1 GEMM operation per layer. Further, since the weight update is performed during the forward pass itself, the layer activations for the mini-batch do not need to be stored until the backward pass, resulting in a reduced memory footprint. Localized updates can substantially boost training speed, but need to be used selectively and judiciously in order to preserve accuracy and convergence. We address this challenge through the design of a Learning Mode Selection Algorithm, where all layers start with SGD, and as epochs progress, layers gradually transition to localized learning. Specifically, for each epoch, the algorithm identifies a Localized→SGD transition layer, which delineates the network into two regions. Layers before the transition layer use localized updates, while the transition layer and later layers use gradient-based updates. The trend in the weight updates made to the transition layer across epochs is used to determine how the boundary between SGD and localized updates is shifted in future epochs. We also propose a low-cost weak supervision mechanism by controlling the learning rate of localized updates based on the overall training loss. We applied LoCal+SGD to 8 image recognition CNNs (including ResNet50 and MobileNetV2) across 3 datasets (Cifar10, Cifar100 and ImageNet). Our measurements on an Nvidia GTX 1080Ti GPU demonstrate up to 1.5× improvement in end-to-end training time with ∼0.5% loss in Top-1 classification accuracy.
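To make the delineation described in the abstract concrete, the following is a hypothetical skeleton of one LoCal+SGD training step. The layer interface and helper names (forward, backward, local_update, sgd_update, loss_grad) are ours, not an API from the paper; the sketch only illustrates where localized updates happen and where back-propagation stops.

```python
def train_step(layers, transition, x, target, local_update, sgd_update, loss_grad):
    """One LoCal+SGD step. layers[:transition] receive localized updates during
    the forward pass; back-propagation runs only down to `transition`."""
    acts = [x]
    for i, layer in enumerate(layers):
        y = layer.forward(acts[-1])
        if i < transition:
            local_update(layer, acts[-1], y)  # one GEMM, applied during the forward pass
            acts[-1] = None                   # input activation can be freed immediately
        acts.append(y)
    err = loss_grad(acts[-1], target)
    for i in range(len(layers) - 1, transition - 1, -1):
        sgd_update(layers[i], acts[i], err)   # weight-gradient GEMM
        if i > transition:
            err = layers[i].backward(err)     # error-propagation GEMM; stops at `transition`
```

Freeing each input activation as soon as the localized update consumes it is what yields the reduced memory footprint claimed above.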
[]
[ { "authors": [ "Pulkit Agrawal", "Ross B. Girshick", "Jitendra Malik" ], "title": "Analyzing the performance of multilayer neural networks for object recognition", "venue": "CoRR, abs/1407.1610,", "year": 2014 }, { "authors": [ "Takuya Akiba", "Shuji Suzuki", "Keisuke Fukuda" ], "title": "Extremely large minibatch SGD: training resnet-50 on imagenet", "venue": "minutes. CoRR,", "year": 2017 }, { "authors": [ "Léon Bottou" ], "title": "Large-scale machine learning with stochastic gradient descent", "venue": "In in COMPSTAT,", "year": 2010 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations, 2020", "venue": null, "year": 2020 }, { "authors": [ "Jeffrey Dean", "Greg S. Corrado", "Rajat Monga", "Kai Chen", "Matthieu Devin", "Quoc V. Le", "Mark Z. Mao", "Marc’Aurelio Ranzato", "Andrew Senior", "Paul Tucker", "Ke Yang", "Andrew Y. Ng" ], "title": "Large scale distributed deep networks", "venue": "In Proceedings of the 25th International Conference on Neural Information Processing Systems - Volume 1,", "year": 2012 }, { "authors": [ "Jeffrey Dean", "Greg S. Corrado", "Rajat Monga", "Kai Chen", "Matthieu Devin", "Quoc V. Le", "Mark Z. Mao", "Marc’Aurelio Ranzato", "Andrew Senior", "Paul Tucker", "Ke Yang", "Andrew Y. Ng" ], "title": "Large scale distributed deep networks", "venue": "In Proceedings of the 25th International Conference on Neural Information Processing Systems - Volume 1,", "year": 2012 }, { "authors": [ "J. Deng", "W. Dong", "R. Socher", "L.-J. Li", "K. Li", "L. Fei-Fei" ], "title": "ImageNet: A Large-Scale Hierarchical Image Database", "venue": "In CVPR09,", "year": 2009 }, { "authors": [ "Yoav Goldberg", "Graeme Hirst" ], "title": "Neural Network Methods in Natural Language Processing", "venue": "Morgan Claypool Publishers,", "year": 2017 }, { "authors": [ "Priya Goyal", "Piotr Dollár", "Ross B. Girshick", "Pieter Noordhuis", "Lukasz Wesolowski", "Aapo Kyrola", "Andrew Tulloch", "Yangqing Jia", "Kaiming He" ], "title": "Accurate, large minibatch SGD: training imagenet in 1 hour", "venue": "CoRR, abs/1706.02677,", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "CoRR, abs/1512.03385,", "year": 2015 }, { "authors": [ "Gao Huang", "Yu Sun", "Zhuang Liu", "Daniel Sedra", "Kilian Q. Weinberger" ], "title": "Deep networks with stochastic depth", "venue": "CoRR, abs/1603.09382,", "year": 2016 }, { "authors": [ "Olivier J. Hénaff", "Aravind Srinivas", "Jeffrey De Fauw", "Ali Razavi", "Carl Doersch", "S.M. Ali Eslami", "Aaron van den Oord" ], "title": "Data-efficient image recognition with contrastive predictive coding, 2019", "venue": null, "year": 2019 }, { "authors": [ "Angela H. Jiang", "Daniel L.K. Wong", "Giulio Zhou", "David G. Andersen", "Jeffrey Dean", "Gregory R. Ganger", "Gauri Joshi", "Michael Kaminksy", "Michael Kozuch", "Zachary C. Lipton", "Padmanabhan Pillai" ], "title": "Accelerating deep learning by focusing on the biggest losers, 2019", "venue": null, "year": 2019 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In Yoshua Bengio and Yann LeCun (eds.), 3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E. 
Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "In Proceedings of the 25th International Conference on Neural Information Processing Systems - Volume 1,", "year": 2012 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E. Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "Commun. ACM,", "year": 2017 }, { "authors": [ "Yann LeCun", "Corinna Cortes. MNIST handwritten digit database." ], "title": "URL http://yann", "venue": "lecun.com/exdb/mnist/.", "year": 2010 }, { "authors": [ "Dong-Hyun Lee", "Saizheng Zhang", "Asja Fischer", "Yoshua Bengio" ], "title": "Difference target propagation", "venue": "In Proceedings of the 2015th European Conference on Machine Learning and Knowledge Discovery in Databases - Volume Part I,", "year": 2015 }, { "authors": [ "Duo Li", "Aojun Zhou", "Anbang Yao" ], "title": "Hbonet: Harmonious bottleneck on two orthogonal dimensions", "venue": "In The IEEE International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Qianli Liao", "Joel Z. Leibo", "Tomaso A. Poggio" ], "title": "How important is weight symmetry in backpropagation", "venue": "CoRR, abs/1510.05067,", "year": 2015 }, { "authors": [ "Sangkug Lym", "Esha Choukse", "Siavash Zangeneh", "Wei Wen", "Mattan Erez", "Sujay Shanghavi" ], "title": "Prunetrain: Gradual structured pruning from scratch for faster neural network training", "venue": "URL http://arxiv.org/abs/1901.09290", "year": 1901 }, { "authors": [ "Joe Yue-Hei Ng", "Matthew J. Hausknecht", "Sudheendra Vijayanarasimhan", "Oriol Vinyals", "Rajat Monga", "George Toderici" ], "title": "Beyond short snippets: Deep networks for video classification", "venue": "CoRR, abs/1503.08909,", "year": 2015 }, { "authors": [ "Arild Nøkland" ], "title": "Direct feedback alignment provides learning in deep neural networks", "venue": "In Proceedings of the 30th International Conference on Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Erkki Oja" ], "title": "Simplified neuron model as a principal component analyzer", "venue": "Journal of Mathematical Biology,", "year": 1982 }, { "authors": [ "Mark Sandler", "Andrew G. Howard", "Menglong Zhu", "Andrey Zhmoginov", "Liang-Chieh Chen" ], "title": "Inverted residuals and linear bottlenecks: Mobile networks for classification, detection and segmentation", "venue": "CoRR, abs/1801.04381,", "year": 2018 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Xiao Sun", "Jungwook Choi", "Chia-Yu Chen", "Naigang Wang", "Swagath Venkataramani", "Vijayalakshmi Srinivasan", "Xiaodong Cui", "Wei Zhang", "Kailash Gopalakrishnan" ], "title": "Hybrid 8-bit floating point (hfp8) training and inference for deep neural networks", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Aaron van den Oord", "Yazhe Li", "Oriol Vinyals" ], "title": "Representation learning with contrastive predictive coding, 2018", "venue": null, "year": 2018 }, { "authors": [ "Yang You", "Igor Gitman", "Boris Ginsburg" ], "title": "Scaling SGD batch size to 32k for imagenet training", "venue": "CoRR, abs/1708.03888,", "year": 2017 }, { "authors": [ "Yang You", "Zhao Zhang", "Cho-Jui Hsieh", "James Demmel" ], "title": "100-epoch imagenet training with alexnet in 24", "venue": "minutes. 
CoRR,", "year": 2017 }, { "authors": [ "Jiong Zhang", "Hsiang-Fu Yu", "Inderjit S. Dhillon" ], "title": "Autoassist: A framework to accelerate training of deep neural networks. CoRR, abs/1905.03381, 2019", "venue": null, "year": 1905 }, { "authors": [ "Shi Hui Zhong" ], "title": "Efficient online spherical k-means clustering", "venue": "IEEE International Joint Conference on Neural Networks,", "year": 2005 }, { "authors": [ "Chunting Zhou", "Chonglin Sun", "Zhiyuan Liu", "Francis C.M. Lau" ], "title": "A C-LSTM neural network for text classification", "venue": "CoRR, abs/1511.08630,", "year": 2015 } ]
[ { "heading": null, "text": "Training Deep Neural Networks (DNNs) places immense compute requirements on the underlying hardware platforms, expending large amounts of time and energy. We propose LoCal+SGD, a new algorithmic approach to accelerate DNN training by selectively combining localized or Hebbian learning within a Stochastic Gradient Descent (SGD) based training framework. Back-propagation is a computationally expensive process that requires 2 Generalized Matrix Multiply (GEMM) operations to compute the error and weight gradients for each layer. We alleviate this by selectively updating some layers’ weights using localized learning rules that require only 1 GEMM operation per layer. Further, since the weight update is performed during the forward pass itself, the layer activations for the mini-batch do not need to be stored until the backward pass, resulting in a reduced memory footprint. Localized updates can substantially boost training speed, but need to be used selectively and judiciously in order to preserve accuracy and convergence. We address this challenge through the design of a Learning Mode Selection Algorithm, where all layers start with SGD, and as epochs progress, layers gradually transition to localized learning. Specifically, for each epoch, the algorithm identifies a Localized→SGD transition layer, which delineates the network into two regions. Layers before the transition layer use localized updates, while the transition layer and later layers use gradient-based updates. The trend in the weight updates made to the transition layer across epochs is used to determine how the boundary between SGD and localized updates is shifted in future epochs. We also propose a low-cost weak supervision mechanism by controlling the learning rate of localized updates based on the overall training loss. We applied LoCal+SGD to 8 image recognition CNNs (including ResNet50 and MobileNetV2) across 3 datasets (Cifar10, Cifar100 and ImageNet). Our measurements on a Nvidia GTX 1080Ti GPU demonstrate upto 1.5× improvement in end-to-end training time with ∼0.5% loss in Top-1 classification accuracy." }, { "heading": "1 INTRODUCTION", "text": "Deep Neural Networks (DNNs) have achieved continued success in many application domains involving images (Krizhevsky et al., 2017), videos (Ng et al., 2015), text (Zhou et al., 2015) and natural language (Goldberg & Hirst, 2017). However training state-of-the-art DNN models is computationally quite challenging, often requiring exa-FLOPs of compute as the models are quite complex and need to be trained using large datasets. Despite rapid improvements in the capabilities of GPUs and the advent of specialized accelerators, training large models using current platforms is still quite expensive and often takes days to even weeks. In this work, we aim to reduce the computational complexity of DNN training through a new algorithmic approach called LoCal+SGD1, which alleviates the key performance bottlenecks in Stochastic Gradient Descent (SGD) through selective use of localized or Hebbian learning.\nComputational Bottlenecks in DNN Training. DNNs are trained in a supervised manner using gradient-descent based cost minimization techniques such as SGD (Bottou, 2010) or Adam (Kingma & Ba, 2015). 
The training inputs (typically grouped into minibatches) are iteratively forward propagated (FP) and back propagated (BP) through the DNN layers to compute weight updates that push the network parameters in the direction that decreases the overall classification loss.\n(Footnote: In addition to combining localized and SGD-based learning, LoCal+SGD is Low-Calorie SGD, i.e., SGD with reduced computational requirements.)\nBack-propagation is computationally expensive, accounting for 65-75% of the total training time on GPUs. This is attributed to two key factors: (i) BP involves 2 Generalized Matrix Multiply (GEMM) operations, one to propagate the error across layers and the other to compute the weight gradients, and (ii) when training on distributed systems using data/model parallelism (Dean et al., 2012b; Krizhevsky et al., 2012), aggregation of weight gradients/errors across devices incurs significant communication overhead. Further, BP through auxiliary ops such as batch normalization is also more expensive than FP.\nPrior Efforts on Efficient DNN Training. Prior research efforts to improve DNN training time can be grouped into a few directions. One group of efforts enables larger scales of parallelism in DNN training through learning rate tuning (You et al., 2017a; Goyal et al., 2017; You et al., 2017b) and asynchronous weight updates (Dean et al., 2012a). Another class of efforts employs importance-based sample selection during training, wherein ‘easier’ training samples are selectively discarded to improve runtime (Jiang et al., 2019; Zhang et al., 2019). Finally, model quantization (Sun et al., 2019) and pruning (Lym et al., 2019) can lead to significant runtime benefits during training by enabling the use of reduced-bitwidth processing elements.\nLoCal+SGD: Combining SGD with Localized Learning. Complementary to the aforementioned efforts, we propose a new approach, LoCal+SGD, to alleviate the performance bottlenecks in DNN training, while preserving model accuracy. Our hybrid approach combines Hebbian or localized learning (Hebb) with SGD by selectively applying it in specific layers and epochs. Localized learning rules (Hebb; Oja, 1982; Zhong, 2005) utilize a single feed-forward weight update to learn the feature representations, eschewing BP. Careful formulation of the localized learning rule can result in ∼2× computation savings compared to SGD and also significantly reduces the memory footprint, as activations from FP need not be retained until BP. The reduction in memory footprint can in turn allow increasing the batch size during training, which leads to further runtime savings due to better compute utilization and reduced communication costs. It is worth noting that localized learning has been actively explored in the context of unsupervised learning (Chen et al., 2020; van den Oord et al., 2018; Hénaff et al., 2019). Further, there have been active research efforts on neuroscientific learning rules (Lee et al., 2015; Nøkland, 2016). Our work is orthogonal to such efforts and represents a new application of localized learning in a fully supervised context, wherein we selectively combine it within an SGD framework to achieve computational savings.\nPreserving model accuracy and convergence with LoCal+SGD requires localized updates to be applied judiciously, i.e., only to selected layers in certain epochs. We address this challenge through the design of a learning mode selection algorithm. 
At the start of training, the selection algorithm initializes the learning mode of all layers to SGD, and as training progresses it determines the layers that transition to localized learning. Specifically, for each epoch, the algorithm identifies a Localized→SGD transition layer, which delineates the network into two regions. Layers before the transition layer use localized updates, while subsequent layers use gradient-based updates. This allows BP to stop at the transition layer, as layers before it have no use for the back-propagated errors. The algorithm takes advantage of the magnitude of the weight updates of the Localized→SGD transition layer in deciding the new position of the boundary every epoch. Further, we provide weak supervision by tweaking the learning rate of locally updated layers based on the overall training loss.\nContributions: To the best of our knowledge, LoCal+SGD is the first effort that combines localized learning (an unsupervised learning technique) within a supervised SGD context to reduce computational costs while maintaining classification accuracy. This favorable tradeoff is achieved by LoCal+SGD through a Learning Mode Selection Algorithm that applies localized learning to selected layers and epochs. Further improvement is achieved through the use of weak supervision, which modulates the learning rate of locally updated layers based on the overall training loss. Across 8 image recognition CNNs (including ResNet50 and MobileNet) and 3 datasets (Cifar10, Cifar100 and ImageNet), we demonstrate that LoCal+SGD achieves up to 1.5× improvement in training time with ∼0.5% Top-1 accuracy loss on an Nvidia GTX 1080Ti GPU.\n2 LoCal+SGD: COMBINING SGD WITH SELECTIVE LOCALIZED LEARNING The key idea in LoCal+SGD is to apply localized learning to selected layers and epochs during DNN training to improve the overall execution time, without incurring loss in accuracy. The following components are critical to the effectiveness of LoCal+SGD:\n• Localized Learning Rule Formulation. We formulate a computationally efficient localized learning rule and highlight its clear runtime benefits when compared to SGD.\n• Learning Mode Selection Algorithm. We propose a learning mode selection algorithm that chooses between localized learning and SGD-based learning for each layer in every epoch, based on the potential impact on accuracy and computational benefits.\n• Weak Supervision. We propose a weak supervision technique, which comprises a low-cost supervision signal communicated to the localized learning layers in each epoch. The signal modulates the learning rates of these layers based on the rate of change of the overall classification loss.\nIn the following sub-sections, we describe the salient aspects of these components in greater detail." }, { "heading": "2.1 EFFICIENT LOCALIZED LEARNING", "text": "Localized learning has been extensively explored in the context of unsupervised learning, demonstrating success on small (≤3 layer) networks using relatively simple datasets (e.g., MNIST (LeCun & Cortes, 2010) and Cifar-10 (Krizhevsky et al., a)), with an accuracy gap that is yet to be bridged on larger datasets (e.g., ResNet50 or MobileNetV2 on ImageNet (Deng et al., 2009)). First proposed in (Hebb), the key intuition behind localized learning rules is to encourage correlations between neurons that have similar activation patterns. 
Equation 1 depicts the Hebbian weight update proposed in (Hebb) for a synapse with weight W, connecting a pair of input and output neurons whose activation values are represented by x and y respectively, with η as the learning rate.\n∆W = η · x · y (1)\nConsiderable research has gone into evolving this equation over the years to improve the performance of localized learning (Oja, 1982; Zhong, 2005). However, many of the proposed rules are computationally complex, or are difficult to parallelize on modern hardware platforms such as GPUs and TPUs. Since our primary goal is improving DNN training time, we adopt the computationally simple localized learning rule presented in Equation 1.\nThe learning rule in Equation 1 assumes a distinct synapse between each input and output neuron pair. While its application to fully-connected (fc) layers is straightforward, we need to consider the sharing of weights between neuron pairs in convolutional (conv) layers. For updating a shared weight of a conv layer, we calculate the individual updates due to each pair of pre- and post-synaptic neurons sharing the weight, and sum all such updates. This essentially reduces to a convolution operation between the input and output activations of the layer and can be expressed by Equation 3 in Figure 1. For further computational efficiency, unlike Equation 1, we consider the pre-activation-function values of the outputs, i.e., z_l, instead of their post-activation values a_l. Further, we normalize the localized update values as shown in Equation 4 of Figure 1, as this was observed to achieve better convergence in practice.\nOverall, we utilize Equations 3 and 4 from Figure 1 to perform the weight updates in all layers that are earlier than the Localized→SGD transition layer during a certain epoch. All other layers continue to be updated using SGD-based BP, expressed by Equations 5-7 in Figure 1. SGD updates are applied to batch-normalization layers present after the Localized→SGD transition layer, and are otherwise skipped. Clearly, Equation 3 has the same computational complexity as Equation 6 of SGD-based BP for conv and fc layers. Thus, from Figure 1, we can directly infer that our localized learning rule will be considerably faster than SGD-based BP. In practice, we measured this improvement to be more than 2× on an NVIDIA GTX 1080Ti GPU for the ImageNet-ResNet50 benchmark, across all conv and fc layers. In addition to the computational complexity, the memory footprint of SGD-based BP is also higher. This is because DNN software frameworks commonly store all activation values computed during FP to avoid recomputing a_{l−1}, the input activations to the layers, used in Equation 6 of SGD-based BP. In contrast, the localized update for a layer can be performed as soon as the FP through the layer is complete. The activation tensor a_l of layer l can be discarded or overwritten as soon as FP proceeds to the next layer in the network, thereby freeing up a significant portion of on-device memory during training. In turn, this can allow larger minibatch sizes to be accommodated on a given hardware platform, when the localized updates are applied to a sufficient number of layers." }, { "heading": "2.2 LEARNING MODE SELECTION ALGORITHM", "text": "The compute benefits of localized learning come at the cost of a potential loss in classification accuracy with respect to SGD training. Thus, we utilize a learning mode selection algorithm to judiciously choose when and where to apply localized learning. 
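Before detailing the selection algorithm, the localized update of Equations 3 and 4 can be made concrete. Below is a minimal PyTorch sketch for a stride-1, unpadded conv layer (an assumption of ours; padded or strided cases follow the same pattern, and the function name is ours). It performs a single GEMM-like contraction, versus the two GEMMs per layer of SGD-based BP.

```python
import torch
import torch.nn.functional as F

def localized_conv_update(w, a_prev, z, lr):
    """Localized update for a conv layer (Eqs. 3 and 4): correlate the layer's
    input activations with its pre-activation outputs and take a normalized step.

    w:      (Cout, Cin, kh, kw) weights
    a_prev: (B, Cin, H, W) input activations
    z:      (B, Cout, Ho, Wo) pre-activation outputs (stride 1, no padding)
    """
    B, Cout, Ho, Wo = z.shape
    kh, kw = w.shape[-2:]
    patches = F.unfold(a_prev, (kh, kw))           # (B, Cin*kh*kw, Ho*Wo)
    zf = z.reshape(B, Cout, Ho * Wo)
    dw = torch.einsum('bol,bkl->ok', zf, patches)  # Eq. 3: sum over batch and positions
    dw = dw.reshape_as(w)
    return w + lr * dw / (dw.norm() + 1e-8)        # Eq. 4: normalized update
```

Because the update needs only a_{l−1} and z_l, it can be applied as soon as the layer's forward pass completes, which is what allows the activations to be freed early.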
The proposed algorithm identifies the learning mode of each layer at every epoch so as to maximize the runtime benefits, while incurring minimal loss in classification accuracy.\nTo design an efficient learning mode selection algorithm, we first study the effects of different spatio-temporal patterns of localized learning on the computational efficiency and classification accuracy of a network. We specifically investigate whether localized learning is more suitable for specific layers in the network and specific phases in the training process.\nImpact on runtime efficiency: We first analyze the spatial trends, i.e., whether locally updating specific layers in the network results in better runtime efficiency. In a particular epoch, if a convolutional layer L, updated with SGD, precedes a convolutional layer K that is updated locally, calculating the SGD-based error gradients of layer L, i.e., δ_L, requires error propagation through the locally updated layer K. From a compute efficiency perspective, the benefits of using localized updates in layer K completely vanish. Thus, it makes sense to partition the network into two regions: a prefix (a set of initial layers) that is updated using localized learning, followed by layers that are updated with SGD. SGD-based BP is stopped at the junction of the two regions. Naturally, the compute benefits increase when the number of locally updated layers is higher and the boundary, i.e., the Localized→SGD transition layer, is thus moved deeper into the network. The impact of different temporal patterns on runtime efficiency is quite straightforward, with a higher number of locally updated epochs leading to higher benefits. Further, as the compute complexity of localized updates is constant across different epochs, these benefits are agnostic of which particular epochs involve localized learning.\nImpact on accuracy: To analyze the impact on accuracy, we first examine the nature of the features learnt by different layers trained with SGD. It is commonly accepted that the initial layers of a network perform feature extraction (Agrawal et al., 2014), while later layers aid in the classification process. As localized learning demonstrates better performance for feature extraction, applying it more aggressively, i.e., for a higher number of epochs, in the initial layers has a much smaller impact on accuracy. However, for later layers in the network, the number of localized learning epochs should be progressively reduced to preserve accuracy.\nOverall, based on the impact of localized learning on both runtime and accuracy, we find that a good learning mode selection algorithm should favor application of localized learning to a contiguous group of initial layers, while ensuring fewer or no localized learning epochs in later layers. We further impose an additional constraint on top of this spatio-temporal pattern. Specifically, we allow each layer to transition from one learning mode to another at most once during the entire training process. We empirically observe that utilizing SGD as the initial learning mode allows the network to achieve a higher accuracy than utilizing localized learning as the initial mode. 
SGD essentially provides a better initialization point for all layers, and the subsequent use of localized updates enables the training to converge with good accuracy.\nAlgorithm 1 Learning Mode Selection Algorithm\nInput: TE (index of the transition layer at epoch E), ek (epochs since the last transition), ||∆WE|| (L2 norm of the weight update of the transition layer at epoch E), K (minimum interval between transitions), tshift (number of layers by which to shift the boundary)\nOutput: TE+1 (index of the transition layer at epoch E+1)\n1: Wavg = (1/K) · Σ_{e=E−K}^{E−1} ||∆We||\n2: if ||∆WE|| <= α · Wavg and ek >= K\n3: TE+1 = TE + tshift\n4: ek = 0\n5: else\n6: TE+1 = TE\n7: ek = ek + 1\nIn accordance with the above considerations, we propose a learning mode selection algorithm, described in Algorithm 1, that identifies the position of the boundary, i.e., the Localized→SGD transition layer, every epoch. To that end, the algorithm analyzes the L2 norm of the SGD weight updates made to the Localized→SGD transition layer across epochs and determines whether the boundary can be shifted deeper into the network for the next epoch. In order to ensure stability in the training process, the algorithm moves the boundary at most once every K epochs. It calculates the running average of the norm of the updates, Wavg, over the last K epochs (line 1). The boundary is shifted to the right only if the weight update in epoch E is within a fraction α of Wavg, and K epochs have transpired since the last transition (line 2). The rationale for this criterion is that sustained high magnitudes of weight updates in the transition layer indicate that they are potentially critical to accuracy, in which case the transition layer must continue being updated with SGD. If the criterion is not satisfied, the boundary remains stationary (lines 5-7).\nThe value of α is set by analyzing the trends in the weight update magnitudes across the training process for different networks. The hyper-parameter tshift is set to the size of a recurring block, such as the residual blocks in ResNets and MobileNetV2. The hyper-parameter K is selected in a manner that ensures that localized updates are never applied beyond some fraction of the initial network layers. We denote this fraction as Lmax, which is set to 0.75 in all our experiments. Equation 2 is used to compute K for a network of L layers and a total training period of Emax epochs.\nK = Emax / (Lmax · L / tshift) (2)\nIn Figure 3, we plot the progression of the transition layer across the ResNet-34 and -50 benchmarks trained on the ImageNet dataset using LoCal+SGD. Interestingly, the weight update norm metric automatically modulates the rate at which the boundary progresses, as the boundary traverses the deeper layers at a slower rate." }, { "heading": "2.3 WEAK SUPERVISION", "text": "To further bridge the accuracy gap between our approach and end-to-end SGD training, we introduce weak supervision in the locally updated layers. Unlike the SGD-updated layers, the locally updated layers in our approach cannot take advantage of the information provided by supervision, i.e., the classification error evaluated at the output. We utilize this supervised information through a low-cost weak supervision scheme that consists of a single signal sent to all layers updated locally in a particular epoch, derived from the classification loss observed over the past few epochs. 
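Before detailing the weak supervision scheme, Algorithm 1 and Equation 2 translate almost line-for-line into code; a sketch with our own variable names (the paper defines no implementation) is shown below.

```python
def transition_interval(e_max, l_max_frac, num_layers, t_shift):
    """Eq. 2: K = E_max / (L_max * L / t_shift)."""
    return max(1, round(e_max / (l_max_frac * num_layers / t_shift)))

def select_transition_layer(t_cur, e_k, dw_norms, alpha, k_min, t_shift):
    """Algorithm 1: position of the Localized->SGD transition layer at epoch E+1.

    t_cur:    transition layer index at epoch E
    e_k:      epochs since the last transition
    dw_norms: per-epoch ||dW|| of the transition layer; dw_norms[-1] is epoch E
              (requires at least k_min + 1 recorded epochs)
    Returns (t_next, e_k_next).
    """
    w_avg = sum(dw_norms[-k_min - 1:-1]) / k_min        # line 1: mean over epochs E-K..E-1
    if dw_norms[-1] <= alpha * w_avg and e_k >= k_min:  # line 2
        return t_cur + t_shift, 0                       # lines 3-4: shift boundary deeper
    return t_cur, e_k + 1                               # lines 5-7
```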
The weak supervision scheme is described in Algorithm 2.\nThe key principle behind the weak supervision scheme is to control the learning rates of the locally updated layers based on the rate at which the overall classification loss changes. For example, if the overall classification loss has increased across consecutive epochs, we reverse the direction of the updates (line 3) in the next epoch. In contrast, the update direction is maintained if the overall loss is decreasing (line 5). We find that this weak supervision provides better accuracy than other learning-rate modulation techniques for the locally updated layers, such as Adam or momentum-based updates.\nAlgorithm 2 Weak Supervision Scheme\nInput: Li (overall classification loss at epoch i), lrL (original learning rate of layer L)\nOutput: WL (weight update of layer L)\n1: ∆WL = conv(a_{l−1}, z_l)\n2: if L_{i−1} < L_i\n3: WL = WL − lrL · ∆WL / ||∆WL||\n4: else\n5: WL = WL + lrL · ∆WL / ||∆WL||\nWe would like to highlight that traditional SGD provides fine-grained supervision and involves evaluating the error gradients for every neuron in the network. In contrast, the proposed weak supervision scheme provides coarse-grained supervision by forcing all weights to re-use the same loss information. Overall, our weak supervision scheme is not developed with the intent to compete with SGD updates; rather, it is a simple, approximate and low-cost technique that brings the final accuracy of LoCal+SGD on par with end-to-end SGD training performance." }, { "heading": "3 EXPERIMENTAL RESULTS", "text": "In this section, we present the results of our experiments highlighting the compute benefits achieved by LoCal+SGD. We evaluate the benefits across a suite of 8 image-recognition DNNs on 3 datasets. We consider the ResNet18 (He et al., 2015) and VGG13 (Simonyan & Zisserman, 2015) networks for the Cifar10 (Krizhevsky et al., a) and Cifar100 (Krizhevsky et al., b) datasets; and the ResNet34, ResNet50 (He et al., 2015) and MobileNetV2 (Sandler et al., 2018) networks for the ImageNet dataset (Deng et al., 2009). All experiments are conducted on Nvidia GTX 1080Ti GPUs with the batch size set to 64 per GPU, unless otherwise mentioned. Further experimental methodology details for the baseline and proposed approach are provided in the Appendix." }, { "heading": "3.1 SINGLE GPU EXECUTION TIME BENEFITS", "text": "ImageNet: Table 1 presents the performance of the baseline (end-to-end SGD training) and the proposed LoCal+SGD algorithm on the ImageNet benchmarks in terms of the Top-1 classification error and the runtime observed on a single GPU. For all benchmarks listed here, LoCal+SGD applies localized updates to nearly 50-60% of the layers. As can be seen, LoCal+SGD achieves up to ∼1.4× reduction in runtime compared to the baseline, while incurring <0.5% loss in Top-1 accuracy.\nTable 1 also compares the performance of LoCal+SGD against existing research efforts designed to improve training efficiency. We perform this analysis against two efforts, namely (i) training with stochastic depth (Huang et al., 2016) and (ii) structured pruning during training (Lym et al., 2019). Training with stochastic depth, as the name suggests, stochastically bypasses residual blocks by propagating input activations/error gradients via identity or downsampling transformations, resulting in improved training time. 
However, the approach is targeted towards extremely deep networks and, as seen in Table 1, it incurs a noticeable accuracy loss on networks such as ResNet34, ResNet50 and MobileNetV2. Compared to training with stochastic depth, our proposal clearly achieves better accuracy as well as training runtime benefits. The key principle behind the pruning-during-training approach is to reduce the size of the weight and activation tensors in a structured manner during training, thereby providing speed-ups on GPU/TPU platforms. However, on complex benchmarks such as ResNet50, such techniques achieve speed-ups at the cost of a significant drop in accuracy (∼1.5%). To further demonstrate the utility of localized updates in our approach, we consider a third technique, wherein the layers selected to be updated locally for a given epoch are instead frozen, i.e., their parameters are held fixed during that epoch. While this achieves better runtime savings, it incurs a considerably higher loss (∼1%) in accuracy, further underscoring the benefits of LoCal+SGD.\nCIFAR-10 and CIFAR-100: Table 2 presents the accuracy and corresponding compute benefits of the baseline and the proposed technique, as well as training with stochastic depth and layer freezing, for the CIFAR-10 and CIFAR-100 datasets. Stochastic depth is applicable only to residual blocks and is hence not considered for the VGG-13 network. Across benchmarks, we observe up to a 1.51× improvement in training runtime. Compared to the ImageNet benchmarks, LoCal+SGD applies localized updates more aggressively in the CIFAR-10 and CIFAR-100 benchmarks, i.e., more layers are updated locally and for a higher number of epochs. This leads to the superior compute benefits of the proposed scheme on these benchmarks.\n3.2 EXECUTION TIME BENEFITS FOR MULTI-GPU TRAINING\nWe analyze the memory footprint of the ResNet50 network when trained with LoCal+SGD on the ImageNet dataset. Training commences with all layers updated with SGD, resulting in a high memory footprint. Due to the 10 GB capacity of the chosen GPU, the mini-batch size is set to 64 per GPU. As the Localized→SGD transition layer progresses across the network, the required memory footprint gradually reduces across epochs. We take advantage of this reduction in memory footprint in the context of distributed training using 4 GPUs with data parallelism. Specifically, we extract additional runtime benefits by increasing the batch size on each GPU, which reduces the frequency of gradient aggregation between devices and alleviates the communication overhead. At epoch 33, the memory footprint per GPU reduces to less than 5 GB, allowing training with an increased mini-batch size of 128 per GPU from epoch 33 onwards. The doubling of the batch size provides an additional 6% runtime improvement, when measured across the entire training period. We note that other training techniques such as training with stochastic depth cannot exploit this feature, due to a minimal reduction in memory footprint." }, { "heading": "3.3 ABLATION ANALYSIS", "text": "As mentioned in Section 2, the hyper-parameters α, tshift and Lmax control the progression of the boundary across the network. Different values of these parameters result in different learning mode configurations during training, corresponding to different points in the computational efficiency vs. accuracy trade-off space. 
To understand the trade-off space between accuracy and runtime benefits, we now individually study the impact of each parameter.\n[Figure 5: Compute efficiency vs. accuracy trade-off on the ImageNet dataset for (a) ResNet50 and (b) MobileNetV2; both panels plot speed-up against loss in accuracy (%).]\nImpact of α: Figure 5 depicts the best compute benefits achieved for different α, for accuracy losses ranging from 0.1%-1.5%, for the ResNet50 and MobileNetV2 benchmarks on ImageNet. On the ResNet50 benchmark, even while limiting the loss in accuracy to 0.1%, LoCal+SGD achieves 1.1× speedup over traditional SGD. The speedups increase to 1.38×-1.47× when around 1.5% loss in accuracy is tolerable.\n[Figure 6: Impact of (a) tshift and (b) Lmax on Top-1 error, speed-up, and runtime savings for the ResNet50 benchmark.]\nImpact of tshift: Figure 6(a) depicts the impact of tshift, denoted as a percentage of the total network depth, on accuracy for the ResNet50 network. Accuracy degrades slightly for very small tshift values, is largely stable in the regime of tshift between 5-12%, and begins to experience small degradations again when tshift exceeds 12%. These trends can be explained by analyzing the rate at which the transition layer progresses, and the number of layers transitioning to localized updates in an epoch, for different tshift values. Smaller values of tshift (<3%) give rise to low values of K (∼1-2 epochs), the minimum number of epochs that must elapse before the transition layer can shift again. This results in fast progression of the transition layer across the network, leading to rapid changes in the learning mode at the boundary, thereby negatively impacting accuracy. In contrast, while larger tshift values (>12%) encourage slow progression of the boundary, a larger number of layers transition from SGD to localized updates in a single epoch, thereby impacting performance. We note here that in both cases, while α and Lmax can be tuned to control the progression and mitigate the loss in accuracy, the runtime savings are vastly reduced (<10%). Furthermore, for fixed values of Lmax and α, the runtime benefits are largely insensitive to tshift, as the average number of layers updated with localized updates remains similar. Hence, for best accuracy and runtime benefits, we set tshift in the range of 5-10% for all networks.\nImpact of Lmax: Figure 6(b) depicts the impact of Lmax on accuracy for the ResNet50 network. For each Lmax, we identify the α and tshift that provide the best runtime benefits with minimal loss in accuracy (less than 0.5%). As with tshift, we denote Lmax as a percentage of the total network depth. As seen in the figure, the degradation in accuracy increases slowly for Lmax in the initial layers: it is merely 0.1% at around Lmax = 30%, and increases to 0.4-0.5% for Lmax = 60-70%. However, the accuracy degradation sharply increases beyond 2% once Lmax exceeds 90% of the network depth. Further, runtime benefits generally increase with higher values of Lmax, for fixed tshift and α. Hence, for achieving a good accuracy versus runtime trade-off, we usually set Lmax to 75% for all networks." }, { "heading": "4 RELATED WORK", "text": "This section discusses research efforts related to the proposed LoCal+SGD training technique. These efforts can be broadly categorized into two classes. The first class of efforts focuses on compute-efficient DNN training. 
All efforts belonging to this class utilize gradient-descent algorithms to train the DNN model. These techniques are largely complementary to LoCal+SGD, as they can potentially be applied to the parts of the DNN model updated with SGD. In Section 3, we demonstrated that LoCal+SGD achieves a superior accuracy versus computational efficiency trade-off compared to some of these efforts. The second class of efforts involves neuroscientifically faithful learning rules, such as feedback-alignment-based approaches (Nøkland, 2016). Our work is orthogonal to such efforts, as we selectively combine localized learning rules with SGD for better computational efficiency.\nWe elaborate on the different research efforts in both directions below.\nHyper-parameter tuning: Many notable algorithmic efforts are directed towards achieving training efficiency by controlling the hyper-parameters involved in gradient descent, notably the learning rate. (You et al., 2017a; Akiba et al., 2017; Goyal et al., 2017; You et al., 2017b) propose learning rate tuning algorithms that achieve training in less than an hour with no loss in accuracy, when distributed over hundreds of CPU/GPU cores.\nModel size reduction during training: Model size reduction via pruning and quantization is a popular technique to reduce compute costs during inference. In many of these efforts, a dense or full-precision model is re-trained or fine-tuned to obtain a pruned or quantized model. Recently, several efforts have also investigated dynamically pruning (Lym et al., 2019) or quantizing (Sun et al., 2019) a model during training itself. The reduction in model size results in training speed-ups. Taking a slightly different approach, (Huang et al., 2016) proposes stochastically dropping residual blocks in extremely deep networks such as ResNet-1202, not only for training runtime benefits but also for better accuracies due to improved gradient strength.\nInstance importance based training: Recent research efforts have discovered that not all training samples are required for improving loss minimization during SGD training (Jiang et al., 2019; Zhang et al., 2019). That is, a sizable fraction of the samples can be skipped during several epochs, depending on their impact on the classification loss evaluated during FP. This translates to a reduction in the number of mini-batches processed, providing considerable runtime benefits.\nNeuro-scientific learning rules: Back-propagation algorithms utilized in DNN training are not biologically plausible, and do not explain how learning actually happens in the brain. To this end, there have been several efforts that develop biologically faithful learning algorithms and demonstrate considerable success on complex benchmarks including Cifar10 and ImageNet. For example, unlike conventional DNN training, feedback alignment algorithms (Nøkland, 2016) tackle the weight transport problem (Liao et al., 2015) by allowing for asymmetry in the weight values during forward and back propagation. Likewise, Target-Propagation (Lee et al., 2015) encourages neural activity to reach desired target activations evaluated during forward propagation itself, instead of utilizing loss gradients." }, { "heading": "5 CONCLUSION", "text": "In this paper, we introduce a new approach to improve the training efficiency of state-of-the-art DNNs. Specifically, we take advantage of the computationally efficient nature of localized learning rules and selectively update some layers with these rules instead of SGD. 
We design an intelligent learning mode selection algorithm that determines the update method for the convolutional layers of the network in every epoch, maintaining the accuracy level while extracting maximum runtime benefits. Further, we also implement a low-cost weak supervision scheme that brings the accuracy of the proposed scheme closer to traditional SGD training. Across a benchmark suite of 8 DNNs, we achieve up to 1.5× reduction in training time, as measured on a modern GPU platform." }, { "heading": "6 APPENDIX", "text": "" }, { "heading": "6.1 EXPERIMENTAL SETUP", "text": "This subsection describes the experimental setup used for realizing the baseline and proposed LoCal+SGD training schemes on the benchmarks specified in Section 3 of the main paper. We conduct our experiments on the complete training and test datasets of each benchmark, using the PyTorch (Paszke et al., 2019) framework.\nBaseline: We consider end-to-end SGD training as the baseline in our experiments. The hyper-parameters used in SGD training of each of the benchmarks are described below.\nImageNet: For the experiments in Section 3.1, we utilize a batch size of 64 per GPU for all benchmarks. For the ResNet50 and ResNet34 benchmarks, the initial learning rate is set to 0.025. The learning rate is decayed by 0.1 every 30 epochs, for a total training duration of 90 epochs, and the weight decay is 4e−5. The MobileNetV2 benchmark utilizes an initial learning rate of 0.0125. We use a cosine learning rate decay schedule, as in (Li et al., 2019), for 150 epochs. The weight decay is set to 4e−5. All ImageNet benchmarks use an input size of 224×224×3.\nFor the experiments in Section 3.2, the total batch size at epoch 1 is 256 (64×4), with the initial learning rate set to 0.1 for the ResNet benchmarks and 0.05 for the MobileNetV2 benchmark. All other parameters remain the same.\nCifar10 and Cifar100: All Cifar10 and Cifar100 experiments utilize a batch size of 64. The Cifar10 benchmarks are trained with an initial learning rate of 0.05 that is decayed by 0.1 every 10 epochs, across 90 epochs. The initial learning rate of the Cifar100 benchmarks is 0.025, decayed by 0.5 every 20 epochs, for 150 epochs in total. The weight decay is set to 5e−4. Both benchmark suites utilize an input size of 32×32×3.\nLoCal+SGD: In the proposed LoCal+SGD training scheme, the layers updated with SGD are trained with the same hyper-parameters used in the baseline implementation. Further, LoCal+SGD training is conducted using the same number of epochs as baseline SGD training. When a layer is updated locally, the initial learning rate is 0.01 and is decayed by a factor of 2 and 10 every 30 epochs, for the Cifar and the ImageNet benchmarks respectively. In all experiments, the α parameter is set to 0.95. We measure the accuracy and runtime of the proposed scheme for the same number of training epochs as the baseline implementations." }, { "heading": "6.2 HYPER-PARAMETER TUNING", "text": "To realize LoCal+SGD, we introduce three hyper-parameters: α, tshift and Lmax. 
tshift controls the number of layers that switch from SGD to localized updates at each boundary transition, Lmax is the maximum fraction of layers that can be updated with localized learning rules, and α determines the position of the transition layer every epoch by analyzing the gradient information at the boundary between the localized and SGD updates.\nTo obtain optimized values for these hyper-parameters, we first perform a simple grid search using a single network for a particular dataset (for example, we choose the ResNet50 network for ImageNet). We then transfer the same hyper-parameter values to other networks for the same dataset. We justify our use of common hyper-parameter values by the following experiment. In Table 4 below, we depict the results on the other ImageNet benchmarks (ResNet34 and MobileNetV2) when hyper-parameter tuning is performed for each benchmark individually. As can be seen, the accuracy and runtime benefits are only marginally better than those obtained using a common set of hyper-parameters obtained by tuning on the ResNet50 benchmark. We thus utilize common values for a dataset, effectively rendering them constants. The time taken to obtain these constants is thus a one-time cost, and does not impact the speedups obtained by LoCal+SGD." }, { "heading": "6.3 IMPACT OF WEAK SUPERVISION", "text": "In Table 5, we highlight the impact of the weak supervision technique on final classification accuracy. As can be seen, across all our benchmarks, the weak supervision technique clearly improves accuracy by nearly 0.06%-0.17%, bringing the final accuracy of LoCal+SGD closer to baseline SGD." }, { "heading": "6.4 ADDITIONAL COMPARATIVE ANALYSIS", "text": "In addition to the experiments performed in Section 3 to compare the performance of LoCal+SGD against existing techniques such as pruning during training (Lym et al., 2019) and training with stochastic depth (Huang et al., 2016), we conduct additional experiments to further solidify the superiority of our approach. We elaborate on these comparisons as follows.\n6.4.1 COMPARING LoCal+SGD AGAINST SGD AT ISO-ACCURACY We compare the proposed LoCal+SGD training strategy against an SGD baseline that is trained with fewer epochs, i.e., the number of epochs required to reach the highest accuracy obtained by LoCal+SGD across the total training periods listed in Section 6.1. For the ImageNet benchmarks, the runtime improvements are listed in Table 6 below. Clearly, LoCal+SGD continues to achieve significant speed-ups (around 1.25×) compared to the SGD baseline, even for complex benchmarks such as ResNet50 and MobileNetV2.\n6.4.2 COMPARING LoCal+SGD AGAINST FREEZING LAYERS DURING TRAINING In Section 3, we compare LoCal+SGD against a technique, freezing layers during training, wherein instead of updating the layers using localized learning, the weights are held fixed. In this section, we perform a more thorough comparison of LoCal+SGD against freezing layers during training. Specifically, we perform this comparison at iso-runtime, and analyze the resulting accuracy of either approach. To elaborate, we first identify the LoCal+SGD configuration that can reach the best accuracy within 0.05%, 0.1%, 0.25%, 0.5% and 1% of the baseline SGD accuracy. Then, for the same runtimes taken by each LoCal+SGD configuration, we identify the configuration that provides the best accuracy for the freezing layers approach. Our results for the Cifar10 ResNet18 benchmark can be found in Table 7. 
LoCal+SGD outperforms freezing layers during training on 3 out of the 5 configurations studied, i.e., it is the superior technique when the accuracy loss compared to SGD is allowed to exceed 0.1%." }, { "heading": "6.5 ANALYSIS OF STATIC SCHEDULES FOR LEARNING MODE SELECTION", "text": "The current LoCal+SGD framework is realized with the help of an automatic learning mode selection algorithm, which determines the position of the transition layer every epoch. Instead of a dynamic, data-dependent algorithm, we investigate the benefits of using a static schedule, that is, determining the position of the transition layer using a pre-defined scheduling function. To this end, we have implemented a simple static schedule that favors aggressive application of the localized learning rule in the initial layers, and gradually decreases the number of epochs for which localized learning is applied in the deeper layers. As shown in Equation 3, we opt for a quadratic scheduling function, as we empirically observe that it performs better than the linear functions studied. Here, N determines the position of the transition layer every epoch, Emax is the maximum number of training epochs, and c1 and c2 are constants obtained using grid search.\nN = ⌊max(0, c1 − c2 · (E − Emax)²)⌋ (3)\nWe report the results using this static schedule in Table 8 for the ImageNet-ResNet50 and MobileNetV2 benchmarks. Compared to the results reported in Table 1, we find that the static schedule achieves slightly higher runtime benefits, at marginally lower accuracies. However, static schedules suffer from some drawbacks: several static scheduling functions are feasible, e.g., exponential, quadratic, etc., and identifying the best scheduling function for each network requires extensive empirical analysis. The learning mode selection algorithm utilized in this paper helps alleviate this by automatically identifying the position of the transition layer every epoch, leveraging the gradient information at the boundary between localized updates and SGD." } ]
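The quadratic static schedule of Equation 3 is a one-liner in code; the constants below are illustrative placeholders of ours, not the grid-searched values (which are not reported).

```python
import math

def static_transition_layer(epoch, e_max, c1, c2):
    """Eq. 3: N = floor(max(0, c1 - c2 * (E - E_max)^2))."""
    return math.floor(max(0.0, c1 - c2 * (epoch - e_max) ** 2))

# With hypothetical c1 = 40, c2 = 0.005 and e_max = 90, the transition layer
# stays at 0 early in training and rises toward layer 40 as training nears its end:
for e in (0, 30, 60, 90):
    print(e, static_transition_layer(e, 90, 40, 0.005))  # 0, 22, 35, 40
```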
2020
null
SP:3533f4976f70e2fdac0934dbb782d7b8af64c9fd
[ "The authors proposed to use the joint KL divergence between the generative joint distribution and the target distribution (containing latent variables which could correspond to latent parts we wanted to model (e.g. beliefs). It was illustrative to discuss decomposing the joint KL into different ways and thus forming information bounds in different scenarios. The decomposition of past and future in Eq.6 also provided a unified perspective for looking at the most currently used objectives.", "The authors formulate a general framework that unifies inference, action/perception, control, and several other tasks. The framework is based on minimizing the KL divergence between a parameterized \"actual\" distribution and a \"target\" distribution. The authors argue that this formulation unifies a wide range of previously proposed objectives. They also argue that it has some advantages when compared to Friston's \"free energy principle\" framework, with which it shares many similarities, in particular that probability matching is preferred to surprise minimization." ]
We introduce a unified objective for action and perception of intelligent agents. Extending representation learning and control, we minimize the joint divergence between the combined system of agent and environment and a target distribution. Intuitively, such agents use perception to align their beliefs with the world, and use actions to align the world with their beliefs. Minimizing the joint divergence to an expressive target maximizes the mutual information between the agent’s representations and inputs, thus inferring representations that are informative of past inputs and exploring future inputs that are informative of the representations. This lets us explain intrinsic objectives, such as representation learning, information gain, empowerment, and skill discovery, from minimal assumptions. Moreover, interpreting the target distribution as a latent variable model suggests powerful world models as a path toward highly adaptive agents that seek large niches in their environments, rendering task rewards optional. The framework provides a common language for comparing a wide range of objectives, advances the understanding of latent variables for decision making, and offers a recipe for designing novel objectives. We recommend deriving future agent objectives from the joint divergence to facilitate comparison, to point out the agent’s target distribution, and to identify the intrinsic objective terms needed to reach that distribution.
[]
[ { "authors": [ "J. Achiam", "H. Edwards", "D. Amodei", "P. Abbeel" ], "title": "Variational option discovery algorithms", "venue": "arXiv preprint arXiv:1807.10299,", "year": 2018 }, { "authors": [ "A.A. Alemi", "I. Fischer" ], "title": "TherML: Thermodynamics of machine learning", "venue": "arXiv preprint arXiv:1807.04162,", "year": 2018 }, { "authors": [ "S. Amari" ], "title": "A theory of adaptive pattern classifiers", "venue": "IEEE Transactions on Electronic Computers,", "year": 1967 }, { "authors": [ "P. Ao", "C. Tian-Qi", "S. Jiang-Hong" ], "title": "Dynamical decomposition of markov processes without detailed balance", "venue": "Chinese Physics Letters,", "year": 2013 }, { "authors": [ "W.R. Ashby" ], "title": "An introduction to cybernetics", "venue": "Chapman & Hall Ltd,", "year": 1961 }, { "authors": [ "M.G. Azar", "B. Piot", "B.A. Pires", "J.-B. Grill", "F. Altché", "R. Munos" ], "title": "World discovery models", "venue": null, "year": 1902 }, { "authors": [ "K. Azizzadenesheli", "E. Brunskill", "A. Anandkumar" ], "title": "Efficient exploration through bayesian deep Q-Networks", "venue": "Information Theory and Applications Workshop (ITA),", "year": 2018 }, { "authors": [ "D. Barber", "F.V. Agakov" ], "title": "The IM algorithm: a variational approach to information maximization", "venue": "In Advances in neural information processing systems,", "year": 2003 }, { "authors": [ "M. Bellemare", "S. Srinivasan", "G. Ostrovski", "T. Schaul", "D. Saxton", "R. Munos" ], "title": "Unifying count-based exploration and intrinsic motivation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "G. Berseth", "D. Geng", "C. Devin", "C. Finn", "D. Jayaraman", "S. Levine" ], "title": "Smirl: Surprise minimizing rl in dynamic environments", "venue": null, "year": 1912 }, { "authors": [ "C.M. Bishop" ], "title": "Pattern recognition and machine learning", "venue": "springer,", "year": 2006 }, { "authors": [ "C. Blundell", "J. Cornebise", "K. Kavukcuoglu", "D. Wierstra" ], "title": "Weight uncertainty in neural networks", "venue": "arXiv preprint arXiv:1505.05424,", "year": 2015 }, { "authors": [ "S.R. Bowman", "L. Vilnis", "O. Vinyals", "A.M. Dai", "R. Jozefowicz", "S. Bengio" ], "title": "Generating sentences from a continuous space", "venue": "arXiv preprint arXiv:1511.06349,", "year": 2015 }, { "authors": [ "L.D. Brown" ], "title": "A complete class theorem for statistical problems with finite sample spaces", "venue": "The Annals of Statistics,", "year": 1981 }, { "authors": [ "Y. Burda", "H. Edwards", "A. Storkey", "O. Klimov" ], "title": "Exploration by random network distillation", "venue": "arXiv preprint arXiv:1810.12894,", "year": 2018 }, { "authors": [ "O. Chang", "Y. Yao", "D. Williams-King", "H. Lipson" ], "title": "Ensemble model patching: A parameter-efficient variational bayesian neural network", "venue": null, "year": 1905 }, { "authors": [ "T. Chen", "S. Kornblith", "M. Norouzi", "G. Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "arXiv preprint arXiv:2002.05709,", "year": 2020 }, { "authors": [ "I. Csiszár", "F. Matus" ], "title": "Information projections revisited", "venue": "IEEE Transactions on Information Theory,", "year": 2003 }, { "authors": [ "L. Da Costa", "T. Parr", "N. Sajid", "S. Veselic", "V. Neacsu", "K. Friston" ], "title": "Active inference on discrete state-spaces: a synthesis", "venue": "arXiv preprint arXiv:2001.07203,", "year": 2020 }, { "authors": [ "P. 
Dayan", "G.E. Hinton", "R.M. Neal", "R.S. Zemel" ], "title": "The Helmholtz machine", "venue": "Neural computation,", "year": 1995 }, { "authors": [ "I.M. de Abril", "R. Kanai" ], "title": "A unified strategy for implementing curiosity and empowerment driven reinforcement learning", "venue": "arXiv preprint arXiv:1806.06505,", "year": 2018 }, { "authors": [ "J. Denker", "D. Schwartz", "B. Wittner", "S. Solla", "R. Howard", "L. Jackel", "J. Hopfield" ], "title": "Large automatic learning, rule extraction, and generalization", "venue": "Complex Systems,", "year": 1987 }, { "authors": [ "P.A.M. Dirac" ], "title": "The principles of quantum mechanics", "venue": "Oxford university press,", "year": 1958 }, { "authors": [ "M.W. Dusenberry", "G. Jerfel", "Y. Wen", "Y.-a. Ma", "J. Snoek", "K. Heller", "B. Lakshminarayanan", "D. Tran" ], "title": "Efficient and scalable bayesian neural nets with rank-1 factors", "venue": null, "year": 2005 }, { "authors": [ "F. Ebert", "C. Finn", "A.X. Lee", "S. Levine" ], "title": "Self-supervised visual planning with temporal skip connections", "venue": "arXiv preprint arXiv:1710.05268,", "year": 2017 }, { "authors": [ "F. Ebert", "C. Finn", "S. Dasari", "A. Xie", "A. Lee", "S. Levine" ], "title": "Visual foresight: Model-based deep reinforcement learning for vision-based robotic control", "venue": "arXiv preprint arXiv:1812.00568,", "year": 2018 }, { "authors": [ "S.A. Eslami", "D.J. Rezende", "F. Besse", "F. Viola", "A.S. Morcos", "M. Garnelo", "A. Ruderman", "A.A. Rusu", "I. Danihelka", "K. Gregor" ], "title": "Neural scene representation and rendering", "venue": null, "year": 2018 }, { "authors": [ "B. Eysenbach", "A. Gupta", "J. Ibarz", "S. Levine" ], "title": "Diversity is all you need: learning skills without a reward function", "venue": "arXiv preprint arXiv:1802.06070,", "year": 2018 }, { "authors": [ "I. Fischer" ], "title": "The conditional entropy bottleneck", "venue": "arXiv preprint arXiv:2002.05379,", "year": 2020 }, { "authors": [ "C. Florensa", "Y. Duan", "P. Abbeel" ], "title": "Stochastic neural networks for hierarchical reinforcement learning", "venue": "arXiv preprint arXiv:1704.03012,", "year": 2017 }, { "authors": [ "R. Fox", "A. Pakman", "N. Tishby" ], "title": "Taming the noise in reinforcement learning via soft updates", "venue": "arXiv preprint arXiv:1512.08562,", "year": 2015 }, { "authors": [ "K. Friston" ], "title": "The free-energy principle: a unified brain theory", "venue": "Nature reviews neuroscience,", "year": 2010 }, { "authors": [ "K. Friston" ], "title": "Life as we know it", "venue": "Journal of the Royal Society Interface,", "year": 2013 }, { "authors": [ "K. Friston" ], "title": "A free energy principle for a particular physics", "venue": "arXiv preprint arXiv:1906.10184,", "year": 2019 }, { "authors": [ "K. Friston", "R. Adams", "R. Montague" ], "title": "What is value—accumulated reward or evidence", "venue": "Frontiers in neurorobotics,", "year": 2012 }, { "authors": [ "K. Friston", "F. Rigoli", "D. Ognibene", "C. Mathys", "T. Fitzgerald", "G. Pezzulo" ], "title": "Active inference and epistemic value", "venue": "Cognitive neuroscience,", "year": 2015 }, { "authors": [ "K. Friston", "T. FitzGerald", "F. Rigoli", "P. Schwartenbeck", "G. Pezzulo" ], "title": "Active inference: a process theory", "venue": "Neural computation,", "year": 2017 }, { "authors": [ "A. Galashov", "S.M. Jayakumar", "L. Hasenclever", "D. Tirumala", "J. Schwarz", "G. Desjardins", "W.M. Czarnecki", "Y.W. Teh", "R. Pascanu", "N. 
Heess" ], "title": "Information asymmetry in kl-regularized rl", "venue": null, "year": 1905 }, { "authors": [ "S.K.S. Ghasemipour", "R. Zemel", "S. Gu" ], "title": "A divergence minimization perspective on imitation learning methods", "venue": "arXiv preprint arXiv:1911.02256,", "year": 2019 }, { "authors": [ "S. Ghosh", "F. Doshi-Velez" ], "title": "Model selection in bayesian neural networks via horseshoe priors", "venue": "arXiv preprint arXiv:1705.10388,", "year": 2017 }, { "authors": [ "K. Gregor", "D.J. Rezende", "D. Wierstra" ], "title": "Variational intrinsic control", "venue": "arXiv preprint arXiv:1611.07507,", "year": 2016 }, { "authors": [ "K. Gregor", "D.J. Rezende", "F. Besse", "Y. Wu", "H. Merzic", "A. v. d. Oord" ], "title": "Shaping belief states with generative environment models for rl", "venue": null, "year": 1906 }, { "authors": [ "R.L. Gregory" ], "title": "Perceptions as hypotheses", "venue": "Philosophical Transactions of the Royal Society of London. B, Biological Sciences,", "year": 1980 }, { "authors": [ "Z.D. Guo", "M.G. Azar", "B. Piot", "B.A. Pires", "T. Pohlen", "R. Munos" ], "title": "Neural predictive belief representations", "venue": "arXiv preprint arXiv:1811.06407,", "year": 2018 }, { "authors": [ "M. Gutmann", "A. Hyvärinen" ], "title": "Noise-contrastive estimation: a new estimation principle for unnormalized statistical models", "venue": "In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics,", "year": 2010 }, { "authors": [ "T. Haarnoja", "H. Tang", "P. Abbeel", "S. Levine" ], "title": "Reinforcement learning with deep energy-based policies", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "T. Haarnoja", "A. Zhou", "P. Abbeel", "S. Levine" ], "title": "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "arXiv preprint arXiv:1801.01290,", "year": 2018 }, { "authors": [ "D. Hafner", "T. Lillicrap", "I. Fischer", "R. Villegas", "D. Ha", "H. Lee", "J. Davidson" ], "title": "Learning latent dynamics for planning from pixels", "venue": "arXiv preprint arXiv:1811.04551,", "year": 2018 }, { "authors": [ "D. Hafner", "T. Lillicrap", "J. Ba", "M. Norouzi" ], "title": "Dream to control: Learning behaviors by latent imagination", "venue": "arXiv preprint arXiv:1912.01603,", "year": 2019 }, { "authors": [ "D. Hafner", "D. Tran", "A. Irpan", "T. Lillicrap", "J. Davidson" ], "title": "Reliable uncertainty estimates in deep neural networks using noise contrastive priors", "venue": "In Conference on Uncertainty in Artificial Intelligence,", "year": 2019 }, { "authors": [ "H. Haken" ], "title": "The science of structure: Synergetics", "venue": "Van Nostrand Reinhold,", "year": 1981 }, { "authors": [ "K. Hausman", "J.T. Springenberg", "Z. Wang", "N. Heess", "M. Riedmiller" ], "title": "Learning an embedding space for transferable robot skills", "venue": "International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "G. Hinton", "O. Vinyals", "J. Dean" ], "title": "Distilling the knowledge in a neural network", "venue": "arXiv preprint arXiv:1503.02531,", "year": 2015 }, { "authors": [ "G.E. Hinton", "D. Van Camp" ], "title": "Keeping the neural networks simple by minimizing the description length of the weights", "venue": "In Proceedings of the sixth annual conference on Computational learning theory,", "year": 1993 }, { "authors": [ "G.E. Hinton", "S. Osindero", "Y.-W. 
Teh" ], "title": "A fast learning algorithm for deep belief nets", "venue": "Neural computation,", "year": 2006 }, { "authors": [ "R. Houthooft", "X. Chen", "Y. Duan", "J. Schulman", "F. De Turck", "P. Abbeel" ], "title": "VIME: Variational information maximizing exploration", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "R.A. Howard", "J.E. Matheson" ], "title": "Risk-sensitive markov decision processes", "venue": "Management science,", "year": 1972 }, { "authors": [ "D.A. Huffman" ], "title": "A method for the construction of minimum-redundancy codes", "venue": "Proceedings of the IRE,", "year": 1952 }, { "authors": [ "A. Immer", "M. Korzepa", "M. Bauer" ], "title": "Improving predictions of bayesian neural networks via local linearization", "venue": "arXiv preprint arXiv:2008.08400,", "year": 2020 }, { "authors": [ "P. Izmailov", "D. Podoprikhin", "T. Garipov", "D. Vetrov", "A.G. Wilson" ], "title": "Averaging weights leads to wider optima and better generalization", "venue": "arXiv preprint arXiv:1803.05407,", "year": 2018 }, { "authors": [ "M. Jaderberg", "V. Mnih", "W.M. Czarnecki", "T. Schaul", "J.Z. Leibo", "D. Silver", "K. Kavukcuoglu" ], "title": "Reinforcement learning with unsupervised auxiliary tasks", "venue": "arXiv preprint arXiv:1611.05397,", "year": 2016 }, { "authors": [ "E.T. Jaynes" ], "title": "Information theory and statistical mechanics", "venue": "Physical review,", "year": 1957 }, { "authors": [ "W.H. Jefferys", "J.O. Berger" ], "title": "Ockham’s razor and bayesian analysis", "venue": "American Scientist,", "year": 1992 }, { "authors": [ "H. Jeffreys" ], "title": "Some tests of significance, treated by the theory of probability", "venue": "In Mathematical Proceedings of the Cambridge Philosophical Society,", "year": 1935 }, { "authors": [ "M.I. Jordan", "Z. Ghahramani", "T.S. Jaakkola", "L.K. Saul" ], "title": "An introduction to variational methods for graphical models", "venue": "Machine learning,", "year": 1999 }, { "authors": [ "R.E. Kalman" ], "title": "A new approach to linear filtering and prediction problems", "venue": "Journal of basic Engineering,", "year": 1960 }, { "authors": [ "H.J. Kappen", "V. Gómez", "M. Opper" ], "title": "Optimal control as a graphical model inference problem", "venue": "Machine learning,", "year": 2009 }, { "authors": [ "M. Karl", "M. Soelch", "J. Bayer", "P. van der Smagt" ], "title": "Deep variational bayes filters: Unsupervised learning of state space models from raw data", "venue": "arXiv preprint arXiv:1605.06432,", "year": 2016 }, { "authors": [ "M. Karl", "M. Soelch", "P. Becker-Ehmck", "D. Benbouzid", "P. van der Smagt", "J. Bayer" ], "title": "Unsupervised real-time control through variational empowerment", "venue": "arXiv preprint arXiv:1710.05101,", "year": 2017 }, { "authors": [ "R.E. Kass", "A.E. Raftery" ], "title": "Bayes factors", "venue": "Journal of the american statistical association,", "year": 1995 }, { "authors": [ "Y. Kim", "S. Wiseman", "A.C. Miller", "D. Sontag", "A.M. Rush" ], "title": "Semi-amortized variational autoencoders", "venue": "arXiv preprint arXiv:1802.02550,", "year": 2018 }, { "authors": [ "D.P. Kingma", "M. Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "A. Kirsch", "C. Lyle", "Y. 
Gal" ], "title": "Unpacking information bottlenecks: Unifying information-theoretic objectives in deep learning", "venue": "arXiv preprint arXiv:2003.12537,", "year": 2020 }, { "authors": [ "A.S. Klyubin", "D. Polani", "C.L. Nehaniv" ], "title": "Empowerment: A universal agent-centric measure of control", "venue": "IEEE Congress on Evolutionary Computation,", "year": 2005 }, { "authors": [ "R.G. Krishnan", "U. Shalit", "D. Sontag" ], "title": "Deep kalman filters", "venue": "arXiv preprint arXiv:1511.05121,", "year": 2015 }, { "authors": [ "S. Kullback", "R.A. Leibler" ], "title": "On information and sufficiency", "venue": "The annals of mathematical statistics,", "year": 1951 }, { "authors": [ "S. Lange", "M. Riedmiller" ], "title": "Deep auto-encoder neural networks in reinforcement learning", "venue": "In The 2010 International Joint Conference on Neural Networks (IJCNN),", "year": 2010 }, { "authors": [ "Y. LeCun", "S. Chopra", "R. Hadsell", "M. Ranzato", "F. Huang" ], "title": "A tutorial on energy-based learning", "venue": "Predicting structured data,", "year": 2006 }, { "authors": [ "A.X. Lee", "A. Nagabandi", "P. Abbeel", "S. Levine" ], "title": "Stochastic latent actor-critic: Deep reinforcement learning with a latent variable model", "venue": "arXiv preprint arXiv:1907.00953,", "year": 2019 }, { "authors": [ "L. Lee", "B. Eysenbach", "E. Parisotto", "E. Xing", "S. Levine", "R. Salakhutdinov" ], "title": "Efficient exploration via state marginal matching", "venue": "arXiv preprint arXiv:1906.05274,", "year": 2019 }, { "authors": [ "F. Leibfried", "S. Pascual-Diaz", "J. Grau-Moya" ], "title": "A unified bellman optimality principle combining reward maximization and empowerment", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "S. Levine" ], "title": "Reinforcement learning and control as probabilistic inference: Tutorial and review", "venue": "arXiv preprint arXiv:1805.00909,", "year": 2018 }, { "authors": [ "C. Li", "H. Liu", "C. Chen", "Y. Pu", "L. Chen", "R. Henao", "L. Carin" ], "title": "Alice: Towards understanding adversarial learning for joint distribution matching", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "D.V. Lindley" ], "title": "On a measure of the information provided by an experiment", "venue": "The Annals of Mathematical Statistics,", "year": 1956 }, { "authors": [ "C. Louizos", "M. Welling" ], "title": "Structured and efficient variational deep learning with matrix gaussian posteriors", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "C. Louizos", "M. Welling" ], "title": "Multiplicative normalizing flows for variational bayesian neural networks", "venue": "arXiv preprint arXiv:1703.01961,", "year": 2017 }, { "authors": [ "Y.-A. Ma", "T. Chen", "E. Fox" ], "title": "A complete recipe for stochastic gradient mcmc", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "D.J. MacKay" ], "title": "A practical bayesian framework for backpropagation networks", "venue": "Neural computation,", "year": 1992 }, { "authors": [ "D.J. MacKay" ], "title": "Information-based objective functions for active data selection", "venue": "Neural computation,", "year": 1992 }, { "authors": [ "D.J. MacKay" ], "title": "Information theory, inference and learning algorithms", "venue": "Cambridge university press,", "year": 2003 }, { "authors": [ "A. Mirchev", "B. Kayalibay", "M. Soelch", "P. 
van der Smagt", "J. Bayer" ], "title": "Approximate bayesian inference in spatial environments", "venue": "arXiv preprint arXiv:1805.07206,", "year": 2018 }, { "authors": [ "S. Mohamed", "D.J. Rezende" ], "title": "Variational information maximisation for intrinsically motivated reinforcement learning", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "O. Morgenstern", "J. Von Neumann" ], "title": "Theory of games and economic behavior", "venue": "Princeton university press,", "year": 1953 }, { "authors": [ "P. Moutarlier", "R. Chatila" ], "title": "Stochastic multisensory data fusion for mobile robot location and environment modelling", "venue": "5th int. In Symposium on Robotics Research,", "year": 1989 }, { "authors": [ "K.P. Murphy" ], "title": "Machine learning: a probabilistic perspective", "venue": "MIT press,", "year": 2012 }, { "authors": [ "B. O’Donoghue", "I. Osband", "C. Ionescu" ], "title": "Making sense of reinforcement learning and probabilistic inference", "venue": "arXiv preprint arXiv:2001.00805,", "year": 2020 }, { "authors": [ "M. Okada", "N. Kosaka", "T. Taniguchi" ], "title": "Planet of the bayesians: Reconsidering and improving deep planning network by incorporating bayesian inference", "venue": "arXiv preprint arXiv:2003.00370,", "year": 2020 }, { "authors": [ "A. v. d. Oord", "Y. Li", "O. Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": "arXiv preprint arXiv:1807.03748,", "year": 2018 }, { "authors": [ "D.A. Ortega", "P.A. Braun" ], "title": "Information, utility and bounded rationality", "venue": "In International Conference on Artificial General Intelligence,", "year": 2011 }, { "authors": [ "P.A. Ortega", "D.A. Braun" ], "title": "A minimum relative entropy principle for learning and acting", "venue": "Journal of Artificial Intelligence Research,", "year": 2010 }, { "authors": [ "I. Osband", "C. Blundell", "A. Pritzel", "B. Van Roy" ], "title": "Deep exploration via bootstrapped DQN", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "P.-Y. Oudeyer", "F. Kaplan", "V.V. Hafner" ], "title": "Intrinsic motivation systems for autonomous mental development", "venue": "IEEE transactions on evolutionary computation,", "year": 2007 }, { "authors": [ "T. Parr", "K.J. Friston" ], "title": "Generalised free energy and active inference", "venue": "Biological cybernetics,", "year": 2019 }, { "authors": [ "D. Pathak", "P. Agrawal", "A.A. Efros", "T. Darrell" ], "title": "Curiosity-driven exploration by self-supervised prediction", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops,", "year": 2017 }, { "authors": [ "J. Pearl" ], "title": "Causal diagrams for empirical research", "venue": "Biometrika, 82(4):669–688,", "year": 1995 }, { "authors": [ "C. Peterson" ], "title": "A mean field theory learning algorithm for neural networks", "venue": "Complex systems,", "year": 1987 }, { "authors": [ "V.H. Pong", "M. Dalal", "S. Lin", "A. Nair", "S. Bahl", "S. Levine" ], "title": "Skew-fit: State-covering self-supervised reinforcement learning", "venue": null, "year": 1903 }, { "authors": [ "B. Poole", "S. Ozair", "A. v. d. Oord", "A.A. Alemi", "G. Tucker" ], "title": "On variational bounds of mutual information", "venue": null, "year": 1905 }, { "authors": [ "J.W. Pratt" ], "title": "Risk aversion in the small and in the large", "venue": null, "year": 1964 }, { "authors": [ "K. Rawlik", "M. 
Toussaint", "S. Vijayakumar" ], "title": "Approximate inference and stochastic optimal control", "venue": "arXiv preprint arXiv:1009.3958,", "year": 2010 }, { "authors": [ "D.J. Rezende", "S. Mohamed", "D. Wierstra" ], "title": "Stochastic backpropagation and approximate inference in deep generative models", "venue": "arXiv preprint arXiv:1401.4082,", "year": 2014 }, { "authors": [ "C. Salge", "C. Glackin", "D. Polani" ], "title": "Approximation of empowerment in the continuous domain", "venue": "Advances in Complex Systems,", "year": 2013 }, { "authors": [ "C. Salge", "C. Glackin", "D. Polani" ], "title": "Changing the environment based on empowerment as intrinsic motivation", "venue": null, "year": 2014 }, { "authors": [ "N. Savinov", "A. Raichuk", "R. Marinier", "D. Vincent", "M. Pollefeys", "T. Lillicrap", "S. Gelly" ], "title": "Episodic curiosity through reachability", "venue": "arXiv preprint arXiv:1810.02274,", "year": 2018 }, { "authors": [ "J. Schmidhuber" ], "title": "Curious model-building control systems", "venue": "IEEE International Joint Conference on Neural Networks,", "year": 1991 }, { "authors": [ "E. Schrödinger" ], "title": "What is life? The physical aspect of the living cell and mind", "venue": null, "year": 1944 }, { "authors": [ "J. Schulman", "X. Chen", "P. Abbeel" ], "title": "Equivalence between policy gradients and soft q-learning", "venue": "arXiv preprint arXiv:1704.06440,", "year": 2017 }, { "authors": [ "R. Sekar", "O. Rybkin", "K. Daniilidis", "P. Abbeel", "D. Hafner", "D. Pathak" ], "title": "Planning to explore via self-supervised world models", "venue": "arXiv preprint arXiv:2005.05960,", "year": 2020 }, { "authors": [ "T. Shankar", "A. Gupta" ], "title": "Learning robot skills with temporal variational inference", "venue": "arXiv preprint arXiv:2006.16232,", "year": 2020 }, { "authors": [ "C.E. Shannon" ], "title": "A mathematical theory of communication", "venue": "Bell system technical journal,", "year": 1948 }, { "authors": [ "A. Sharma", "S. Gu", "S. Levine", "V. Kumar", "K. Hausman" ], "title": "Dynamics-aware unsupervised discovery of skills", "venue": "arXiv preprint arXiv:1907.01657,", "year": 2019 }, { "authors": [ "P. Shyam", "W. Jaśkowski", "F. Gomez" ], "title": "Model-based active exploration", "venue": "arXiv preprint arXiv:1810.12162,", "year": 2018 }, { "authors": [ "R. Stratonovich" ], "title": "Markov’s conditional processes", "venue": "Teoriya Veroyatn. Primen,", "year": 1960 }, { "authors": [ "S. Sun", "C. Chen", "L. Carin" ], "title": "Learning structured weight uncertainty in bayesian neural networks", "venue": "In Artificial Intelligence and Statistics,", "year": 2017 }, { "authors": [ "S. Sun", "G. Zhang", "J. Shi", "R. Grosse" ], "title": "Functional variational bayesian neural networks", "venue": "arXiv preprint arXiv:1903.05779,", "year": 2019 }, { "authors": [ "Y. Sun", "F. Gomez", "J. Schmidhuber" ], "title": "Planning to be surprised: Optimal bayesian exploration in dynamic environments", "venue": "In International Conference on Artificial General Intelligence,", "year": 2011 }, { "authors": [ "R.S. Sutton" ], "title": "Dyna, an integrated architecture for learning, planning, and reacting", "venue": "ACM SIGART Bulletin,", "year": 1991 }, { "authors": [ "R.S. Sutton", "A.G. Barto" ], "title": "Reinforcement learning: An introduction", "venue": "MIT press,", "year": 2018 }, { "authors": [ "R.S. Sutton", "D. Precup", "S. 
Singh" ], "title": "Between mdps and semi-mdps: A framework for temporal abstraction in reinforcement learning", "venue": "Artificial intelligence,", "year": 1999 }, { "authors": [ "Y. Teh", "V. Bapst", "W.M. Czarnecki", "J. Quan", "J. Kirkpatrick", "R. Hadsell", "N. Heess", "R. Pascanu" ], "title": "Distral: Robust multitask reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "D. Tirumala", "H. Noh", "A. Galashov", "L. Hasenclever", "A. Ahuja", "G. Wayne", "R. Pascanu", "Y.W. Teh", "N. Heess" ], "title": "Exploiting hierarchy for learning and transfer in kl-regularized rl", "venue": null, "year": 1903 }, { "authors": [ "N. Tishby", "D. Polani" ], "title": "Information theory of decisions and actions", "venue": "In Perception-action cycle,", "year": 2011 }, { "authors": [ "E. Todorov" ], "title": "General duality between optimal control and estimation", "venue": "IEEE Conference on Decision and Control,", "year": 2008 }, { "authors": [ "D. Tran", "M. Dusenberry", "M. van der Wilk", "D. Hafner" ], "title": "Bayesian layers: A module for neural network uncertainty", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "M. Tschannen", "J. Djolonga", "P.K. Rubenstein", "S. Gelly", "M. Lucic" ], "title": "On mutual information maximization for representation learning", "venue": null, "year": 1907 }, { "authors": [ "A. Wald" ], "title": "An essentially complete class of admissible decision functions", "venue": "The Annals of Mathematical Statistics,", "year": 1947 }, { "authors": [ "Y. Wen", "P. Vicol", "J. Ba", "D. Tran", "R. Grosse" ], "title": "Flipout: Efficient pseudo-independent weight perturbations on mini-batches", "venue": "arXiv preprint arXiv:1803.04386,", "year": 2018 }, { "authors": [ "N. Wiener" ], "title": "Cybernetics or Control and Communication in the Animal and the Machine", "venue": "MIT press,", "year": 1948 }, { "authors": [ "R.J. Williams", "J. Peng" ], "title": "Function optimization using connectionist reinforcement learning algorithms", "venue": "Connection Science,", "year": 1991 }, { "authors": [ "B. Xin", "H. Yu", "Y. Qin", "Q. Tang", "Z. Zhu" ], "title": "Exploration entropy for reinforcement learning", "venue": "Mathematical Problems in Engineering,", "year": 2020 }, { "authors": [ "D. Yarats", "A. Zhang", "I. Kostrikov", "B. Amos", "J. Pineau", "R. Fergus" ], "title": "Improving sample efficiency in model-free reinforcement learning from images", "venue": null, "year": 1910 }, { "authors": [ "G. Zhang", "S. Sun", "D. Duvenaud", "R. Grosse" ], "title": "Noisy natural gradient as variational inference", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "M. Zhang", "S. Vikram", "L. Smith", "P. Abbeel", "M. Johnson", "S. Levine" ], "title": "Solar: deep structured representations for model-based reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "R. Zhao", "P. Abbeel", "S. 
Tiomkin" ], "title": "Efficient online estimation of empowerment for reinforcement learning", "venue": "arXiv preprint arXiv:2007.07356,", "year": 2020 }, { "authors": [ "2017 Sun et al", "Louizos", "2017 Welling", "2018 Zhang et al", "Chang" ], "title": "2019), low rank posteriors (Izmailov et al., 2018; Dusenberry et al., 2020), and improved inference algorithms (Wen et al., 2018", "venue": "Immer et al.,", "year": 2020 }, { "authors": [ "Rezende et al", "Ha" ], "title": "2016) to learn an encoder that maps each input to its corresponding belief. The encoder is shared among inputs to reuse computation", "venue": null, "year": 2016 }, { "authors": [ "Schmidhuber", "Zhang" ], "title": "2019a) and accelerated RL", "venue": "Hafner et al.,", "year": 2018 }, { "authors": [ "Karl" ], "title": "2016), although the same graphical model applies to supervied learning with a BNN", "venue": null, "year": 2016 }, { "authors": [ "∝ exp(r(xt" ], "title": "KL-regularized control (Todorov, 2008) defines the preferences with an additional passive dynamics term τ(xt | x1:t−1", "venue": null, "year": 2008 }, { "authors": [ "VIC (Gregor" ], "title": "2016) introduced information-based skill discovery as an extension of empowerment, motivating a line of work including SNN (Florensa et al., 2017)", "venue": "DIAYN (Eysenbach et al., 2018), work by Hausman et al", "year": 2018 }, { "authors": [ "Friston" ], "title": "2012) and account for a variety of intrinsic objectives (Friston et al., 2020). However, typical implementations of active inference have been limited to simple tasks as of today, a problem that divergence minimization overcomes. Active inference differs from divergence minimization in the three aspects discussed below. Maximizing the input probability Divergence minimization aims to match the distribution", "venue": "(Wald,", "year": 1947 } ]
[ { "heading": null, "text": "" }, { "heading": "1 INTRODUCTION", "text": "To achieve goals in complex environments, intelligent agents need to perceive their environments and choose effective actions. These two processes, perception and action, are often studied in isolation. Despite the many objectives that have been proposed in the fields of representation learning and reinforcement learning, it remains unclear how the objectives relate to each other and which fundamentally new objectives remain yet to be discovered. Based on the KL divergence (Kullback and Leibler, 1951), we propose a unified framework for action and perception that connects a wide range of objectives to facilitate our understanding of them while providing a recipe for designing novel agent objectives. Our findings are conceptual in nature and this paper includes no empirical study. Instead, we offer a unified picture of a wide range of methods that have been shown to be successful in practice in prior work. The contributions of this paper are described as follows. Unified objective function for perception and action We propose joint KL minimization as a principled framework for designing and comparing agent objectives. KL minimization was proposed separately for perception as variational inference (Jordan et al., 1999; Alemi and Fischer, 2018) and for actions as KL control (Todorov, 2008; Kappen et al., 2009). Based on this insight, we formulate action and perception as jointly minimizing the KL from the world to a unified target distribution. The target serves both as the model to infer representations and as reward for actions. This extends variational inference to controllable inputs, while extending KL control to latent representations. We show a novel decomposition of joint KL divergence that explains several representation learning and exploration objectives. Divergence minimization additionally connects deep reinforcement learning to the free energy principle (Friston, 2010; 2019), while simplifying and overcoming limitations of its active inference implementations (Friston et al., 2017) that we discuss in Appendix B. Understanding latent variables for decision making Divergence minimization with an expressive target maximizes the mutual information between inputs and latents. Agents thus infer representations that are informative of past inputs and explore future inputs that are informative of the representations. For the past, this yields reconstruction (Hinton et al., 2006; Kingma and Welling, 2013) or contrastive learning (Gutmann and Hyvärinen, 2010; Oord et al., 2018). For the future, it yields information gain exploration (Lindley et al., 1956). Stochastic skills and actions are realized over time, so their past terms are constant. For the future, they lead to empowerment (Klyubin et al., 2005) and skill discovery (Gregor et al., 2016). RL as inference (Rawlik et al., 2010) does not maximize mutual information because its target is factorized. To optimize a consistent objective across past and future, latent representations should be accompanied by information gain exploration. Expressive world models for large ecological niches The more flexible an agent’s target or model, the better the agent can adapt to its environment. 
Minimizing the divergence between the world and the model, the agent converges to a natural equilibrium or niche where it can accurately predict its inputs and that it can inhabit despite external perturbations (Schrödinger, 1944; Wiener, 1948; Haken, 1981; Friston, 2013; Berseth et al., 2019). While surprise minimization can lead to trivial solutions, divergence minimization encourages the niche to match the agent’s model class, thus visiting all inputs proportionally to how well they can be understood. This suggests designing expressive world models of sensory inputs (Ebert et al., 2017; Hafner et al., 2018; Gregor et al., 2019) as a path toward building highly adaptive agents, while rendering task rewards optional." }, { "heading": "2 FRAMEWORK", "text": "This section introduces the framework of action and perception as divergence minimization (APD). To unify action and perception, we formulate the two processes as joint KL minimization with a shared target distribution. The target distribution expresses the agent’s preferences over system configurations and is also the probabilistic model under which the agent infers its representations. Using an expressive model as the target maximizes the mutual information between the latent variables and the sequence of sensory inputs, thus inferring latent representations that are informative of past inputs and exploring future inputs that are informative of the representations. We assume knowledge of basic concepts from probability and information theory that are reviewed in Appendix D." }, { "heading": "2.1 JOINT KL MINIMIZATION", "text": "Consider a stochastic system described by a joint probability distribution over random variables. For example, the random variables for supervised learning are the inputs and labels and for an agent they are the sequence of sensory inputs, internal representations, and actions. More generally, we combine all input variables into x and the remaining variables that we term latents into z. We will see that different latents correspond to different representation learning and exploration objectives.
The random variables are distributed according to their generative process or actual distribution pφ. Parts of the actual distribution can be unknown, such as the data distribution, and parts can be influenced by varying the parameter vector φ, such as the distribution of stochastic representations or actions. As a counterpart to the actual distribution, we define the desired target distribution τ over the same support. It describes our preferences over system configurations and can be unnormalized,
Actual distribution: x, z ∼ pφ(x, z), Target distribution: τ(x, z). (1)
We formulate the problem of joint KL minimization as changing the parameters φ to bring the actual distribution of all random variables as close as possible to the target distribution, as measured by the KL divergence (Kullback and Leibler, 1951; Li et al., 2017; Alemi and Fischer, 2018),
min_φ KL[pφ(x, z) ∥ τ(x, z)]. (2)
All expectations and KLs throughout the paper are integrals under the actual distribution, so they can be estimated from samples of the system and depend on φ. Equation 2 is the reverse KL or information projection used in variational inference (Csiszár and Matus, 2003).
Examples For representation learning, pφ is the joint of data and belief distributions and τ is a latent variable model. Note that we use pφ to denote not the model under which we infer beliefs but the generative process of inputs and their representations. 
For control, pφ is the trajectory distribution under the current policy and τ corresponds to the utility of the trajectory. The parameters φ include everything the optimizer can change directly, such as sufficient statistics of representations, model parameters, and policy parameters.
Target parameters There are two ways to denote deterministic values within our framework, also known as MAP estimates in the probabilistic modeling literature (Bishop, 2006). We can either use a fixed target distribution and use a latent variable that follows a point mass distribution (Dirac, 1958), or we explicitly parameterize the target using a deterministic parameter as τφ. In either case, τ refers to the fixed model class. The two approaches are equivalent because in both cases the target receives a deterministic value that has no entropy regularizer. For more details, see Appendix A.1.
Assumptions Divergence minimization uses only two inductive biases, namely that the agent optimizes an objective and that it uses random variables to represent uncertainty. Choosing the well-established KL as the divergence measure is an additional assumption. It corresponds to maximizing the expected log probability under the target while encouraging high entropy for all variables in the system to avoid overconfidence, as detailed in Appendix C. Common objectives with different degrees of entropy regularization are summarized in Table 1.
Generality Alternative divergence measures would lead to different optimization dynamics, different solutions if the target cannot be reached, and potentially novel objectives for representation learning and exploration. Nonetheless, the KL can describe any converged system, trivially by choosing its actual distribution as the target, and thus offers a simple and complete mathematical perspective for comparing a wide range of specific objectives that correspond to different latent variables and target distributions." }, { "heading": "2.2 INFORMATION BOUNDS", "text": "We show that for expressive targets that capture dependencies between the variables in the system, minimizing the joint KL increases both the preferences and the mutual information between inputs x and latents z. This property allows divergence minimization to explain a wide range of existing representation learning and exploration objectives. We use the term representation learning for inferring deterministic or stochastic variables from inputs, which includes local representations of individual inputs and global representations such as model parameters.
Latent preferences The joint KL can be decomposed in multiple ways, for example into a marginal KL plus a conditional KL or by grouping marginal with conditional terms. To reveal the mutual information maximization, we decompose the joint KL into a preference seeking term and an information seeking term. The decomposition can be done either with the information term expressed over inputs and the preferences expressed over latents or the other way around,
KL[pφ(x, z) ∥ τ(x, z)] (joint divergence) = E KL[pφ(z | x) ∥ τ(z)] (realizing latent preferences) − E[ln τ(x | z) − ln pφ(x)] (information bound). (3)
All expectations throughout the paper are over all variables, under the actual distribution, and thus depend on the parameters φ. The first term on the right side of Equation 3 is a KL regularizer that keeps the belief pφ(z | x) over latent variables close to the marginal latent preferences τ(z). The second term is a variational bound on the mutual information I[x; z] (Barber and Agakov, 2003). The bound is expressed in input space. Maximizing the conditional ln τ(x | z) seeks latent variables that accurately predict inputs while minimizing the marginal ln pφ(x) seeks diverse inputs.
Variational free energy When the agent cannot influence its inputs, such as when learning from a fixed dataset, the input entropy E[− ln pφ(x)] is not parameterized and can be dropped from Equation 3. This yields the free energy or ELBO objective used by variational inference to infer approximate posterior beliefs in latent variable models (Hinton and Van Camp, 1993; Jordan et al., 1999). The free energy regularizes the belief pφ(z | x) to stay close to the prior τ(z) while reconstructing inputs via τ(x | z). However, in reinforcement and active learning, inputs can be influenced and thus the input entropy should be kept, which makes the information bound explicit.
Input preferences Analogously, we decompose the joint KL the other way around. The first term on the right side of Equation 4 is a KL regularizer that keeps the conditional input distribution pφ(x | z) close to the marginal input preferences τ(x). This term is analogous to the objective in KL control (Todorov, 2008; Kappen et al., 2009), except that the inputs now depend upon latent variables via the policy. The second term is again a variational bound on the mutual information I[x; z], this time expressed in latent space. Intuitively, the bound compares the belief τ(z | x) after observing the inputs and the belief pφ(z) before observing any inputs to measure the gained information,
KL[pφ(x, z) ∥ τ(x, z)] (joint divergence) = E KL[pφ(x | z) ∥ τ(x)] (realizing input preferences) − E[ln τ(z | x) − ln pφ(z)] (information bound). (4)
The information bounds are tighter the better the target conditional approximates the actual conditional, meaning that the agent becomes better at maximizing mutual information as it learns more about the relation between the two variables. This requires an expressive target that captures correlations between inputs and latents, such as a latent variable model or deep neural network. Maximizing the mutual information accounts for both learning latent representations that are informative of inputs as well as exploring inputs that are informative of the latent representations." }, { "heading": "2.3 MODELS AS PREFERENCES", "text": "The target distribution defines our preferences over system configurations. However, we can also interpret it as a probabilistic model, or energy-based model if unnormalized (LeCun et al., 2006). This is because minimizing the joint KL infers beliefs over latent variables that approximate the posteriors under the model, as shown in Section 2.2. Because the target is not parameterized, it corresponds to the fixed model class, with parameters being inferred as latent variables, optionally using point mass distributions. As the agent brings the actual distribution closer to the target, the target also becomes a better predictor of the actual distribution. 
Divergence minimization thus emphasizes that the model class simply expresses preferences over latent representations and inputs and lets us interpret inference as bringing the joint of data and belief distributions toward the model joint.
Input preferences Minimizing the joint divergence also minimizes the divergence between the agent’s input distribution pφ(x) and the marginal input distribution under its target or model τ(x). The marginal input distribution of the model is thus the agent’s preferred input distribution, which the agent aims to sample from in the environment. Because τ(x) marginalizes out all latent variables and parameters, it describes how well an input sequence x can possibly be described by the model class, as used in the Bayes factor (Jeffreys, 1935; Kass and Raftery, 1995). Divergence minimizing agents thus seek out inputs proportionally to how well their models can learn to predict them through inference, while avoiding inputs that are inherently unpredictable given their model class. Because the target can be unnormalized, we can combine a latent variable model with a reward factor of the form exp(r(x)) to create a target that incorporates task rewards. The reward factor adds preferences for certain inputs without affecting the remaining variables in the model. We describe examples of such reward factors in Appendix A.4 and Section 3.1.
Action perception cycle Interpreting the target as a model shows that divergence minimization is consistent with the idea of perception as inference suggested by Helmholtz (Helmholtz, 1866; Gregory, 1980). Expressing preferences as models is inspired by the free energy principle and active inference (Friston, 2010; Friston et al., 2012; 2017), which we compare to in Appendix B. Divergence minimization inherits an interpretation of action and perception from active inference that we visualize in Figure 2a. While action and perception both minimize the same joint KL, they affect different variables. Perception is based on inputs and affects the beliefs over representations, while actions are based on the representations and affect inputs. Given a unified target, perception thus aligns the agent’s beliefs with the world while actions align the world with its beliefs.
Niche seeking The information bounds responsible for representation learning and exploration are tighter under expressive targets, as shown in Section 2.2. What happens when we move beyond task rewards and simply define the target as a flexible model? The more flexible the target and belief family, the better the agent can minimize the joint KL. Eventually, the agent will converge to a natural equilibrium or ecological niche where it can predict its inputs well and that it can inhabit despite external perturbations (Wiener, 1948; Ashby, 1961). Niche seeking connects to surprise minimization (Schrödinger, 1944; Friston, 2013; Berseth et al., 2019), which aims to maximize the marginal likelihood of inputs under a model. In environments without external perturbations, this can lead to trivial solutions once they are explored. Divergence minimization instead aims to match the marginal input distribution of the model. This encourages large niches that cover all inputs that the agent can learn to predict. Moreover, it suggests that expressive world models lead to autonomous agents that understand and inhabit large niches, rendering task rewards optional."
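To make the decompositions of Section 2.2 concrete, the following NumPy sketch verifies Equation 3 numerically for a small discrete system and checks that the information term lower-bounds the mutual information I[x; z] (Barber and Agakov, 2003). The distributions and random seed are illustrative assumptions rather than part of the framework.

import numpy as np

rng = np.random.default_rng(0)

# Actual joint p(x, z) and target joint tau(x, z) over a tiny discrete
# system with 4 input values and 2 latent values, both random.
p = rng.random((4, 2)); p /= p.sum()
tau = rng.random((4, 2)); tau /= tau.sum()

joint_kl = np.sum(p * np.log(p / tau))

# Decomposition of Equation 3: a KL between the belief p(z|x) and the
# latent preferences tau(z), minus the bound E[ln tau(x|z) - ln p(x)].
p_x = p.sum(axis=1, keepdims=True)           # p(x)
p_z_given_x = p / p_x                        # p(z | x)
tau_z = tau.sum(axis=0, keepdims=True)       # tau(z)
tau_x_given_z = tau / tau_z                  # tau(x | z)

latent_pref = np.sum(p * np.log(p_z_given_x / tau_z))
info_bound = np.sum(p * (np.log(tau_x_given_z) - np.log(p_x)))
assert np.isclose(joint_kl, latent_pref - info_bound)

# The information term lower-bounds the true mutual information I[x; z].
p_z = p.sum(axis=0, keepdims=True)
mutual_info = np.sum(p * np.log(p / (p_x * p_z)))
assert info_bound <= mutual_info + 1e-12
print(joint_kl, info_bound, mutual_info)

The same bookkeeping with the roles of x and z swapped verifies Equation 4.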
}, { "heading": "2.4 PAST AND FUTURE", "text": "Representations are computed from past inputs and exploration targets future inputs. To identify the two processes, we thus need to consider how an agent optimizes the joint KL after observing past inputs x< and before observing future inputs x>, as discussed in Figure 2b. For example, past inputs can be stored in an experience dataset and future inputs can be approximated by planning with a learned world model, on-policy trajectories, or replay of past inputs (Sutton, 1991). To condition the joint KL on past inputs, we first split the information bound in Equation 3 into two smaller bounds on the past mutual information I [ x<; z ] and additional future mutual information I [ x>; z\n∣∣ x<], E [ ln τ(z ∣∣ x)− ln pφ(z)]\ninformation bound\n= E [ ln τ(z ∣∣ x)− ln pφ(z | x<) + ln pφ(z | x<)− ln pφ(z) ]\n≥E [ ln τ(z ∣∣ x)− ln pφ(z | x<)\nfuture information bound\n+ ln τ(z | x<) − ln pφ(z) past information bound\n] . (5)\nEquation 5 splits the belief update from the prior pφ(z) to the posterior τ(z | x) into two updates via the intermediate belief pφ(z | x<) and then applies the variational bound from Barber and Agakov (2003) to allow both updates to be approximate. Splitting the information bound lets us separate past and future terms in the joint KL, or even separate individual time steps. It also lets us separately choose to express terms in input or latent space. This decomposition is one of our main contributions and shows how the joint KL divergence accounts for both representation learning and exploration,\nKL [ pφ(x, z) ∥∥ τ(x, z)] ≤ E KL[pφ(z | x<) ∥∥ τ(z)] realizing past latent preferences − E [ ln τ(x< ∣∣ z)− ln pφ(x<)] representation learning\n+ E KL [ pφ(x> | x<, z) ∥∥ τ(x> | x<)] realizing future input preferences − E [ ln τ(z ∣∣ x)− ln pφ(z | x<)] exploration . (6)\nConditioning on past inputs x< removes their expectation and renders pφ(x<) constant. While some latent variables in the set z are never realized, such as latent state estimates or model parameters, other latent variables become observed over time, such as stochastic actions or skills. Because the agent selects the values of these variables, we have to condition the objective terms on them as causal interventions (Pearl, 1995; Ortega and Braun, 2010). In practice, this means replacing all occurrences of z by the unobserved latents z> and conditioning those terms on the observed latents do(z<). To keep notation simple, we omit this step in our notation. To build an intuition about Equation 6, we discuss the four terms on the right-hand side. The first two terms involve the past while the last two terms involve the future. The first term keeps the agent’s belief pφ(z | x<) close to the prior τ(z) to incorporate inductive biases. The second term encourages the belief to be informative of the past inputs so that the inputs are reconstructed by τ(x< | z), where is pφ(x<) is a constant because x< are observed. The third term is the control objective that steers toward future inputs that match the preferred input distribution τ(x> | x<). The fourth term is an information bound that seeks out future inputs that are informative of the latent representations in z and encourages actions or skills in z that maximally influence future inputs. The decomposition shows that the joint KL accounts for both learning informative representations of past inputs and exploring informative future inputs as two sides of the same coin. 
From this, we derive several representation and exploration objectives by including different latent variables in the set z. These objectives are summarized in Table 2 and derived with detailed examples in Section 3.
Representation learning Because past inputs are observed, the past information bound only affects the latents. Expressed as Equation 3, it leads to reconstruction (Hinton et al., 2006), and as Equation 4, it leads to contrastive learning (Gutmann and Hyvärinen, 2010; Oord et al., 2018). This accounts for local representations of individual inputs, as well as global representations, such as latent parameters. Moreover, representations can be inferred online or amortized using an encoder (Kingma and Welling, 2013). Latents with point estimates are equivalent to target parameters and thus are optimized jointly to tighten the variational bounds. Because past actions and skills are realized, their mutual information with realized past inputs is constant and thus contributes no past objective terms.
Exploration Under a flexible target, latents in z result in information-maximizing exploration. For latent representations, this is known as expected information gain and encourages informative future inputs that convey the most bits about the latent variable, such as world model parameters, policy parameters, or state estimates (Lindley et al., 1956; Sun et al., 2011). For stochastic actions, a fully factorized target leads to maximum entropy RL. An expressive target yields empowerment, maximizing the agent’s influence over the world (Klyubin et al., 2005). For skills, it yields skill discovery or diversity that learns distinct modes of behavior that together cover many different trajectories (Gregor et al., 2016; Florensa et al., 2017; Eysenbach et al., 2018; Achiam et al., 2018)." }, { "heading": "3 EXAMPLES", "text": "We use the framework of action and perception as divergence minimization presented in Section 2 to derive a wide range of concrete objective functions that have been proposed in the literature, shown in Figure 1. For this, we analyze the cases of different latent variables and factorizations of the actual and target distributions. These derivations serve as practical examples for producing new objective functions within our framework. We start by describing maximum entropy RL because of its popularity in the literature. Due to space constraints, we refer to Appendix A for the remaining examples, which include variational inference, amortized inference, filtering, KL control, empowerment, skill discovery, and information gain.
Designing novel objectives In practice, an agent is determined by its target distribution, belief family, and optimization algorithm. Our framework thus suggests breaking down the implementation of an agent into the same three components that are typically considered in probabilistic modeling. As Section 2 showed, the target distribution is also the model under which the agent infers its beliefs about the world. We also saw that more expressive models allow agents to increase the mutual information between their inputs and latents further. To design an agent that learns a lot about the world, we should thus design expressive world models and use them as the target distribution. For example, these could include latent state estimates, latent parameters, latent skills, hierarchical latents, or temporal abstraction. 
Each world model corresponds to a new agent objective.
3.1 MAXIMUM ENTROPY RL
Maximum entropy RL (Williams and Peng, 1991; Kappen et al., 2009; Rawlik et al., 2010; Tishby and Polani, 2011; Fox et al., 2015; Schulman et al., 2017; Haarnoja et al., 2018) chooses stochastic actions to maximize a task reward while remaining close to an action prior. The action prior is typically independent of the inputs, corresponding to a factorized target. The objective thus does not contain a mutual information term. Despite factorized targets being common in practice, we suggest that expressive targets, such as world models, are preferable in the longer term.
Figure 3 shows the actual and target distributions for maximum entropy RL. The input sequence is x ≐ {xt} and the action sequence is a ≐ {at}. In the graphical model, these are grouped into past actions and inputs ax<, future actions a>, and future inputs x>. The actual distribution consists of the fixed environment dynamics and a stochastic policy. The target consists of a reward factor, an action prior that is often the same for all time steps, and the environment dynamics,
Actual: pφ(x, a) ≐ ∏t p(xt | x1:t−1, a1:t−1) (environment) pφ(at | x1:t, a1:t−1) (policy),
Target: τ(x, a) ∝ ∏t exp(r(xt)) (reward) p(xt | x1:t−1, a1:t−1) (environment) τ(at) (action prior). (7)
Minimizing the joint KL results in a complexity regularizer in action space and the expected reward. Including the environment dynamics in the target cancels out the curiosity term as in the expected reward case in Appendix A.4, leaving maximum entropy RL to explore only in action space. Moreover, including the environment dynamics in the target gives up direct control over the agent’s input preferences, as they depend not just on the reward but also on the environment dynamics marginal. Because the target distribution is factorized and does not capture dependencies between x and a, maximum entropy RL does not maximize their mutual information,
KL[pφ ∥ τ] = ∑t E KL[pφ(at | x1:t, a1:t−1) ∥ τ(at)] (complexity) − E[r(xt)] (expected reward). (8)
The action complexity KL can be simplified into an entropy regularizer by choosing a uniform action prior as in SQL (Haarnoja et al., 2017) and SAC (Haarnoja et al., 2018). The action prior can also depend on the past inputs and incorporate knowledge from previous tasks as in Distral (Teh et al., 2017) and work by Tirumala et al. (2019) and Galashov et al. (2019). Divergence minimization motivates combining maximum entropy RL with input density exploration by removing the environment dynamics from the target distribution. The resulting agent aims to converge to the input distribution that is proportional to the exponentiated task reward. Moreover, divergence minimization shows that the difference between maximum entropy RL and empowerment, which we describe in Appendix A.5, is whether the target factorizes actions and inputs or captures their dependencies." }, { "heading": "4 RELATED WORK", "text": "Divergence minimization Various problems have been formulated as minimizing a divergence between two distributions. TherML (Alemi and Fischer, 2018) studies representation learning as KL minimization. We follow their interpretation of the data and belief as actual distribution, although their target is only defined by its factorization. ALICE (Li et al., 2017) describes adversarial learning as joint distribution matching, while Kirsch et al. (2020) unify information-based objectives. Ghasemipour et al. 
(2019) describe imitation learning as minimizing divergences between the inputs of learned and expert behavior. None of these works consider combined representation learning and control. Thompson sampling minimizes the forward KL to explain action and perception as exact inference (Ortega and Braun, 2010). In comparison, we optimize the backward KL to support intractable models and connect to a wide range of practical objectives. Active inference The presented framework is inspired by the free energy principle, which studies the dynamics of agent and environment as stationary SDEs (Friston, 2010; 2019). We inherit the interpretations of active inference, which implements agents based on the free energy principle (Friston et al., 2017). While divergence minimization matches the input distribution under the model, active inference maximizes the probability of inputs under it, resulting in smaller niches. Moreover, active inference optimizes the exploration terms only with respect to actions, which requires a specific action prior. Finally, typical implementations of active inference involve an expensive Bayesian model average over possible action sequences, limiting its applications to date (Friston et al., 2015; 2020). We compare to active inference in detail in Appendix B. Generalized free energy (Parr and Friston, 2019) studies a unified objective similar to ours, although its entropy terms are defined heuristically rather than derived from a general principle. Control as inference It is well known that RL can be formulated as KL minimization over inputs and actions (Todorov, 2008; Kappen et al., 2009; Rawlik et al., 2010; Ortega and Braun, 2011; Levine, 2018), as well as skills (Hausman et al., 2018; Tirumala et al., 2019; Galashov et al., 2019). We build upon this literature and extend it to agents with latent representations, leading to variational inference on past inputs and information seeking exploration for future inputs. Divergence minimization relates the above methods and motivates an additional entropy regularizer for inputs (Todorov, 2008; Lee et al., 2019b; Xin et al., 2020). SLAC (Lee et al., 2019a) combines representation learning and control but does not consider the future mutual information, so their objective changes over time. In comparison, we derive the terms from a general principle and point out the information gain that results in an objective that is consistent over time. The information gain term may also address concerns about maximum entropy RL raised by O’Donoghue et al. (2020)." }, { "heading": "5 CONCLUSION", "text": "We introduce a general objective for action and perception of intelligent agents, based on minimizing the KL divergence. To unify the two processes, we formulate them as joint KL minimization with a shared target distribution. This target distribution is the probabilistic model under which the agent infers its representations and expresses the agent’s preferences over system configurations. We summarize the key takeaways as follows:\n• Unified objective for action and perception Divergence minimization with an expressive target maximizes the mutual information between latents and inputs. This leads to inferring representations that are informative of past inputs and exploration of future inputs that are informative of the representations. To optimize a consistent objective that does not change over time, any latent representation should be accompanied by a corresponding exploration term. 
• Understanding of latent variables for decision making Different latents lead to different\nobjective terms. Latent representations are never observed, leading to both representation learning\nand information gain exploration. Actions and skills become observed over time and thus do not encourage representation learning but lead to generalized empowerment and skill discovery. • Adaptive agents through expressive world models Divergence minimization agents with an\nexpressive target find niches where they can accurately predict their inputs and that they can inhabit despite external perturbations. The niches correspond to the inputs that the agent can learn to understand, which is facilitated by the exploration terms. This suggests designing powerful world models as a path toward building autonomous agents, without the need for task rewards. • General recipe for designing novel objectives When introducing new agent objectives, we\nrecommend deriving them from the joint KL by choosing a latent structure and target. For information maximizing agents, the target is an expressive model, leaving different latent structures to be explored. Deriving novel objectives from the joint KL facilitates comparison, renders explicit the target distribution, and highlights the intrinsic objective terms needed to reach that distribution. • Discovering new families of agent objectives Our work shows that a family of representation\nlearning and exploration objectives can be derived from minimizing a joint KL between the system and a target distribution. Different divergence measures give rise to new families of such agent objectives that could be easier to optimize or converge to better optima for infeasible targets. We leave exploring those objective families and comparing them empirically as future work.\nWithout constraining the class of targets, our framework is general and can describe any system. This by itself offers a framework for comparing many existing methods. However, interpreting the target as a model further suggests that intelligent agents may use especially expressive models as targets. This hypothesis should be investigated in future work by examining artificial agents with expressive world models or by modeling the behavior of natural agents as divergence minimization. Acknowledgements Hidden for review." }, { "heading": "A ADDITIONAL EXAMPLES", "text": "This section leverages the presented framework to explain a wide range of objectives in a unifying review, as outlined in Figure 1. For this, we include different variables in the actual distribution, choose different target distributions, and then rewrite the joint KL to recover familiar objectives. We start with perception, the case with latent representations but uncontrollable inputs and then turn to action, the case without latent representations but with controllable inputs. We then turn to combined action and perception. The derivations follow the general recipe described in Section 2. The same steps can be followed for new latent structures and target distributions to yield novel agent objectives.\nA.1 VARIATIONAL INFERENCE\nFollowing Helmholtz, we describe perception as inference under a model (Helmholtz, 1866; Gregory, 1980; Dayan et al., 1995). Inference computes a posterior over representations by conditioning the model on inputs. Because this has no closed form in general, variational inference optimizes a parameterized belief to approximate the posterior (Peterson, 1987; Hinton and Van Camp, 1993; Jordan et al., 1999). 
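To make the free-energy trade-off concrete before the examples below, here is a minimal sketch of variational inference on a toy conjugate-Gaussian model. The data, variances, and grid search are illustrative assumptions rather than part of the framework; in practice the belief parameters would be fit by gradient descent on the same objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: prior tau(w) = N(0, 1), likelihood tau(y | w) = N(w, sigma^2).
# Variational belief p(w) = N(mu, rho^2) with free parameters (mu, rho).
y = rng.normal(1.5, 0.5, size=20)   # observed data (illustrative)
sigma = 0.5

def free_energy(mu, rho, num_samples=1024):
    # Complexity: KL[N(mu, rho^2) || N(0, 1)] in closed form.
    complexity = 0.5 * (mu**2 + rho**2 - 1.0) - np.log(rho)
    # Accuracy: E_p(w)[ln tau(y | w)] estimated by Monte Carlo.
    w = rng.normal(mu, rho, size=num_samples)
    log_lik = -0.5 * ((y[None, :] - w[:, None]) ** 2 / sigma**2
                      + np.log(2 * np.pi * sigma**2))
    accuracy = log_lik.sum(axis=1).mean()
    return complexity - accuracy  # minimized by the approximate posterior

# A coarse grid search stands in for gradient-based optimization.
grid = [(mu, rho) for mu in np.linspace(-2, 2, 41)
        for rho in np.linspace(0.05, 1.0, 20)]
mu, rho = min(grid, key=lambda p: free_energy(*p))
print(f"approximate posterior: N({mu:.2f}, {rho:.2f}^2)")
```

The belief concentrates near the data mean while the complexity term keeps it from collapsing to a point, which is exactly the trade-off the free energy expresses.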
Figure 4 shows variational inference for the example of supervised learning using a BNN (Denker et al.,\n1987; MacKay, 1992a; Blundell et al., 2015). The inputs are images x .= {xi} and their classes y . = {yi} and we infer the latent parameters w as a global representation of the data set (Alemi and Fischer, 2018). The parameters depend on the inputs only through the optimization process that produces φ. The target consists of a parameter prior and a conditional likelihood that uses the parameters to predict classes from images,\nActual: pφ(x, y, w) . = pφ(w)\nbelief\n∏ i p(xi, yi)\ndata\n,\nTarget: τ(x, y, w) ·∝ τ(w) prior\n∏ i τ(yi | xi, w)\nlikelihood\n. (9)\nApplying the framework, we minimize the KL between the actual and target joints. Because the data distribution is fixed here, the input marginal p(x, y) is a constant. In this case, the KL famously results in the free energy or ELBO objective (Hinton and Van Camp, 1993; Jordan et al., 1999) that trades off remaining close to the prior and enabling accurate predictions. The objective can be interpreted as the description length of the data set under entropy coding (Huffman, 1952; MacKay, 2003) because it measures the nats needed for storing both parameter belief and prediction residuals,\nKL [ pφ ∥∥ τ] = KL[pφ(w) ∥∥ τ(w)]\ncomplexity\n− E [ ln τ(y ∣∣ x,w)]\naccuracy\n+ E [ ln p(x, y) ]\nconstant\n. (10)\nVariational methods for BNNs (Peterson, 1987; Hinton and Van Camp, 1993; Blundell et al., 2015) differ in their choices of prior and belief distributions and inference algorithm. This includes hierarchical priors (Louizos and Welling, 2016; Ghosh and Doshi-Velez, 2017), data priors (Louizos and Welling, 2016; Hafner et al., 2019b; Sun et al., 2019), flexible posteriors (Louizos and Welling, 2016; Sun et al., 2017; Louizos and Welling, 2017; Zhang et al., 2018; Chang et al., 2019), low rank posteriors (Izmailov et al., 2018; Dusenberry et al., 2020), and improved inference algorithms (Wen et al., 2018; Immer et al., 2020). BNNs have been leveraged for RL for robustness (Okada et al., 2020; Tran et al., 2019) and exploration (Houthooft et al., 2016; Azizzadenesheli et al., 2018). Target parameters While expressive beliefs over model parameters lead to a global search for their values, provide uncertainty estimates for predictions, and enable directed exploration in the RL setting, they can be computationally expensive. When these properties are not needed, we can choose a point mass distribution pφ(w) → δφ(w) . = {1 if w = φ else 0} to simplify the expectations and avoid the entropy and mutual information terms that are zero for this variable (Dirac, 1958),\nKL [ pφ(w) ∥∥ τ(w)] complexity − E [ ln τ(y ∣∣ x,w)] accuracy → ln τ(φ) complexity − E [ ln τ(y ∣∣ x, φ)] accuracy . = E [ − ln τφ(y ∣∣ x)] parameterized target . (11)\nPoint mass beliefs result in MAP or maximum likelihood estimates (Bishop, 2006; Murphy, 2012) that are equivalent to parameterizing the target as τφ. Parameterizing the target is thus a notational\nchoice for random variables with point mass beliefs. Technically, we also require the prior over target parameters to be integrable but this is true in practice where only finite parameter spaces exist.\nA.2 AMORTIZED INFERENCE\nLocal representations represent individual inputs. They can summarize inputs more compactly, enable interpolation between inputs, and facilitate generalization to unseen inputs. 
In this case, we can use amortized inference (Kingma and Welling, 2013; Rezende et al., 2014; Ha et al., 2016) to learn an encoder that maps each input to its corresponding belief. The encoder is shared among inputs to reuse computation. It can also compute beliefs for new inputs without further optimization, although optimization can refine the belief (Kim et al., 2018).\nFigure 5 shows amortized inference on the example of a VAE (Kingma and Welling, 2013; Rezende et al., 2014). The inputs are images $x \doteq \{x_i\}$ and we infer their latent codes $z = \{z_i\}$. The actual distribution consists of the unknown and fixed data distribution and the parameterized encoder $p_\phi(z_i \mid x_i)$. The target is a probabilistic model defined as the prior over codes and the decoder that computes the conditional likelihood of each image given its code. We parameterize the target here, but one could also introduce an additional latent variable to infer a distribution over decoder parameters as in Appendix A.1,\n$$\text{Actual:}\quad p_\phi(x, z) \doteq \prod_i \underbrace{p(x_i)}_{\text{data}}\, \underbrace{p_\phi(z_i \mid x_i)}_{\text{encoder}}, \qquad \text{Target:}\quad \tau_\phi(x, z) \doteq \prod_i \underbrace{\tau_\phi(x_i \mid z_i)}_{\text{decoder}}\, \underbrace{\tau(z_i)}_{\text{prior}}. \quad (12)$$\nBecause the data distribution is still fixed, minimizing the joint KL again results in the variational free energy or ELBO objective that trades off prediction accuracy and belief simplicity. However, by including the constant input marginal, we highlight that the prediction term is a variational bound on the mutual information that encourages the representations to be informative of their inputs,\n$$\mathrm{KL}\big[p_\phi \,\|\, \tau_\phi\big] = \underbrace{\mathbb{E}\,\mathrm{KL}\big[p_\phi(z \mid x) \,\|\, \tau(z)\big]}_{\text{complexity}} - \underbrace{\mathbb{E}\big[\ln \tau_\phi(x \mid z) - \ln p(x)\big]}_{\text{information bound}}. \quad (13)$$\nIn input space, the information bound leads to reconstruction as in DBNs (Hinton et al., 2006), VAEs (Kingma and Welling, 2013; Rezende et al., 2014), and latent dynamics (Krishnan et al., 2015; Karl et al., 2016). In latent space, it leads to contrastive learning as in NCE (Gutmann and Hyvärinen, 2010), CPC (Oord et al., 2018; Guo et al., 2018), CEB (Fischer, 2020), and SimCLR (Chen et al., 2020). To maximize their mutual information, x and z should be strongly correlated under the target distribution, which explains the empirical benefits of ramping up the decoder variance throughout learning (Bowman et al., 2015; Eslami et al., 2018) or scaling the temperature of the contrastive loss (Chen et al., 2020). The target defines the variational family and includes inductive biases (Tschannen et al., 2019). Both forms have enabled learning world models for planning (Ebert et al., 2018; Ha and Schmidhuber, 2018; Zhang et al., 2019; Hafner et al., 2018; 2019a) and accelerated RL (Lange and Riedmiller, 2010; Jaderberg et al., 2016; Lee et al., 2019a; Yarats et al., 2019; Gregor et al., 2019).\nA.3 FUTURE INPUTS\nBefore moving to actions, we discuss perception with unobserved future inputs that are outside of our control (Ghahramani and Jordan, 1995). This is typical in supervised learning where the test set is unavailable during training (Bishop, 2006), in online learning where training inputs become available over time (Amari, 1967), and in filtering where only inputs up to the current time are available (Kalman, 1960). Figure 6 shows missing inputs on the example of filtering with an HMM (Stratonovich, 1960; Kalman, 1960; Karl et al., 2016), although the same graphical model applies to supervised learning with a BNN or representation learning with a VAE given train and test data sets.
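Before detailing this filtering setting, the following sketch instantiates the amortized objective of Equation (13) numerically. The linear encoder and decoder with fixed random weights are illustrative assumptions; a real VAE would train them by stochastic gradient descent on this same quantity.

```python
import numpy as np

rng = np.random.default_rng(1)

# Linear "VAE" with fixed (untrained) weights, Gaussian encoder and decoder.
d_x, d_z, n = 8, 2, 512
X = rng.normal(size=(n, d_x))           # stand-in for the data distribution p(x)
W_enc = rng.normal(size=(d_x, d_z)) / np.sqrt(d_x)
W_dec = rng.normal(size=(d_z, d_x)) / np.sqrt(d_z)
log_std = -1.0                          # encoder standard deviation exp(-1)

# Encoder p(z | x) = N(X W_enc, exp(2 * log_std) I).
mu_z = X @ W_enc
std_z = np.exp(log_std)

# Complexity: E_x KL[p(z | x) || N(0, I)] in closed form for diagonal Gaussians.
complexity = 0.5 * ((mu_z**2).sum(axis=1) + d_z * std_z**2 - d_z) - d_z * log_std
complexity = complexity.mean()

# Information bound surrogate: E[ln tau(x | z)] with a unit-variance decoder
# (the constant -ln p(x) term is dropped since the data distribution is fixed).
z = mu_z + std_z * rng.normal(size=mu_z.shape)
recon = z @ W_dec
accuracy = (-0.5 * ((X - recon)**2 + np.log(2 * np.pi))).sum(axis=1).mean()

print(f"joint KL (up to a constant) = {complexity - accuracy:.2f}")
```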
The inputs x .= {x<, x>} consist of past images x< and future images x> that follow an unknown and fixed data distribution. We represent the input sequence using a chain z of corresponding compact latent states. However, the representations are computed only based on x< because x> is not yet available, as expressed in the factorization of the actual distribution,\nActual: pφ(x, z) . = p(x>, x<)\ndata\npφ(z | x<) belief ,\nTarget: τφ(x, z) . = τφ(x< | z)\nlikelihood τφ(x> | z) prediction τ(z) prior\n. (14)\nBayesian assumption Bayesian reasoning operates within the model class τ and makes the assumption that the model class is correct. Under this assumption, the future inputs x> ∼ p(x> | x<, z) = p(x> | x<) follow the target distribution τφ(x> | x<, z) = τφ(x> | z). This renders the divergence of future inputs given the other variables zero, so that x> does not need to be considered for optimization, recovering standard variational inference from Appendix A.1,\nKL [ pφ ∥∥ τφ] = KL[pφ(x<, z) ∥∥ τφ(x<, z)]\nvariational inference\n+ E KL [ p(x> | x<) ∥∥ τφ(x> | z)] uncontrolled future . (15)\nAssuming that future inputs follow the model distribution is appropriate when the model accurately reflects our knowledge about future inputs. However, the assumption does not always hold, for example for data augmentation or distillation (Hinton et al., 2015) that generate data from another distribution to improve the model. Importantly, assuming that future inputs already follow the target is not appropriate when they can be influenced, because there would be no need to intervene.\nA.4 CONTROL\nWe describe behavior as an optimal control problem where the agent chooses actions to move its distribution of sensory inputs toward a preference distribution over inputs that can be specified via rewards (Morgenstern and Von Neumann, 1953; Lee et al., 2019b). We first cover deterministic actions that lead to KL control (Kappen et al., 2009; Todorov, 2008) and input density exploration (Schmidhuber, 1991; Bellemare et al., 2016; Pathak et al., 2017). Figure 7 shows deterministic control with the input\nsequence x .= {xt} that the agent can partially influence by varying the parameters φ of the deterministic policy, control rule, or plan. In the graphical model, we group the input sequence into past inputs x< and future inputs x>. There are no internal latent variables. The target describes the preferences over input sequences that can be unnormalized,\nActual: pφ(x) . = ∏ t pφ(xt | x1:t−1)\ncontrolled dynamics\n,\nTarget: τ(x) .= ∏ t τ(xt | x1:t−1)\npreferences\n. (16)\nMinimizing the KL between the actual and target joints maximizes log preferences and the input entropy. Maximizing the input entropy is a simple form of exploration known as input density exploration that encourages rare inputs and aims for a uniform distribution over inputs (Schmidhuber, 1991; Oudeyer et al., 2007). This differs from the action entropy of maximum entropy RL in Section 3.1 and information gain in Appendix A.7 that takes inherent stochasticity into account,\nKL [ pφ ∥∥ τ] = −∑t ( E[ ln τ(xt ∣∣ x1:t−1)]\nexpected preferences\n+ H [ pφ(xt ∣∣ x1:t−1)] curiosity ) . (17)\nTask reward Motivated by risk-sensitivity (Pratt, 1964; Howard and Matheson, 1972), KL control (Kappen et al., 2009) defines the preferences as exponential task rewards τ(xt | x1:t−1) ·∝ exp(r(xt)). KL-regularized control (Todorov, 2008) defines the preferences with an additional passive dynamics term τ(xt | x1:t−1) ·∝ exp(r(xt))τ ′(xt | x1:t−1). 
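The one-step discrete special case of Equations (16) and (17) can be written out exactly, which makes the decomposition into expected preferences and curiosity easy to verify. The environment, rewards, and random search below are illustrative assumptions:

```python
import numpy as np

# One-step discrete control: actions lead stochastically to inputs x.
# Target preferences tau(x) are proportional to exp(r(x)) as in KL control.
r = np.array([0.0, 1.0, 2.0])                      # reward per input
tau = np.exp(r) / np.exp(r).sum()                   # target over inputs
P = np.array([[0.8, 0.2, 0.0],                      # p(x | a) for 3 actions
              [0.1, 0.8, 0.1],
              [0.0, 0.3, 0.7]])

def joint_kl(logits):
    pi = np.exp(logits - logits.max()); pi /= pi.sum()   # policy over actions
    px = pi @ P                                          # marginal over inputs
    # KL = -E[ln tau(x)] - H[x]: expected preferences plus a curiosity term.
    return np.sum(px * (np.log(px + 1e-12) - np.log(tau)))

# Naive random search stands in for gradient descent on the policy logits.
rng = np.random.default_rng(2)
best = min((rng.normal(size=3) * 3 for _ in range(5000)), key=joint_kl)
pi = np.exp(best - best.max()); pi /= pi.sum()
print("policy:", pi.round(2), " joint KL:", joint_kl(best).round(4))
```

The minimizer spreads probability over inputs in proportion to the exponentiated reward rather than committing to the single highest-reward input, which is the input-entropy effect discussed above.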
Expected reward (Sutton and Barto, 2018) corresponds to the preferences $\tau_\phi(x_t \mid x_{1:t-1}) \propto \exp(r(x_t))\, p_\phi(x_t \mid x_{1:t-1})$ that include the controlled dynamics. This cancels out the curiosity term in the joint KL, leading to a simpler objective that does not encourage rare inputs, which might limit exploration of the environment.\nInput density exploration Under divergence minimization, maximizing the input entropy is not an exploration heuristic but an inherent part of the control objective. In practice, the input entropy is often estimated by learning a density model of individual inputs as in pseudo-counts (Bellemare et al., 2016), latent variable models as in SkewFit (Pong et al., 2019), unnormalized models as in RND (Burda et al., 2018), and non-parametric models as in reachability (Savinov et al., 2018). More accurately, it can be estimated by a sequence model of inputs as in ICM (Pathak et al., 2017). The expectation over inputs is estimated by sampling episodes from either the actual environment, a replay buffer, or a learned model of the environment (Sutton, 1991).\nA.5 EMPOWERMENT\nRemaining in the stochastic control setting of Section 3.1, we consider a different target distribution that predicts actions from inputs. This corresponds to an exploration objective that we term generalized empowerment, which maximizes the mutual information between the sequence of future inputs and future actions. It encourages the agent to influence its environment in as many ways as possible while avoiding actions that have no predictable effect.\nFigure 8 shows stochastic control with an expressive target that captures correlations between inputs and actions. The input sequence is $x \doteq \{x_t\}$ and the action sequence is $a \doteq \{a_t\}$. In the graphical model, these are grouped into past actions and inputs $ax_<$, future actions $a_>$, and future inputs $x_>$. The actual distribution consists of the environment and the stochastic policy. The target predicts actions from the inputs before and after them using a reverse predictor. We use uniform input preferences here, but the target can also include an additional reward factor as in Section 3.1,\n$$\text{Actual:}\quad p_\phi(x, a) \doteq \prod_t \underbrace{p(x_t \mid x_{1:t-1}, a_{1:t-1})}_{\text{environment}}\, \underbrace{p_\phi(a_t \mid x_{1:t}, a_{1:t-1})}_{\text{policy}}, \qquad \text{Target:}\quad \tau_\phi(x, a) \propto \prod_t \underbrace{\tau_\phi(a_t \mid x_{1:T}, a_{1:t-1})}_{\text{reverse predictor}}. \quad (18)$$\nMinimizing the joint KL reveals an information bound between future actions and inputs and a control term that maximizes input entropy and, if specified, task rewards. Empowerment (Klyubin et al., 2005) was originally introduced as potential empowerment to “keep your options open” and was later studied as realized empowerment to “use your options” (Salge et al., 2014). Realized empowerment maximizes the mutual information $I[x_{t+k}; a_{t:t+k} \mid x_{1:t}, a_{1:t-1}]$. Divergence minimization generalizes this to the mutual information $I[x_{t:T}; a_{t:T} \mid x_{1:t}, a_{1:t-1}]$ between the sequences of future actions and future inputs. The k-step variant is recovered by a target that conditions the reverse predictor on fewer inputs.
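A minimal numerical sketch of this information bound, written for the one-step special case with a uniform action source; the channel and the count-based reverse predictor are illustrative assumptions standing in for an environment and a learned decoder:

```python
import numpy as np

rng = np.random.default_rng(3)

# One-step empowerment sketch: uniform action source, stochastic channel
# p(x' | a), and a reverse predictor fit by counting. The quantity
# E[ln q(a | x') - ln p(a)] lower-bounds the mutual information I[x'; a].
n_a, n_x = 4, 6
P = rng.dirichlet(np.ones(n_x) * 0.3, size=n_a)     # channel p(x' | a)

a = rng.integers(n_a, size=20000)                   # actions from uniform source
x = np.array([rng.choice(n_x, p=P[ai]) for ai in a])

counts = np.ones((n_x, n_a))                        # Laplace-smoothed counts
np.add.at(counts, (x, a), 1.0)
q = counts / counts.sum(axis=1, keepdims=True)      # reverse predictor q(a | x')

bound = np.mean(np.log(q[x, a]) - np.log(1.0 / n_a))
print(f"empowerment lower bound: {bound:.3f} nats")
```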
Realized empowerment measures the agent's influence on its environment and can be interpreted as maximizing information throughput with the action marginal $p_\phi(a_t \mid a_{t-1})$ as source, the environment as noisy channel, and the reverse predictor as decoder,\n$$\mathrm{KL}\big[p_\phi \,\|\, \tau_\phi\big] = \underbrace{\mathbb{E}\,\mathrm{KL}\big[p(x \mid a) \,\|\, \tau(x)\big]}_{\text{control}} - \underbrace{\mathbb{E}\big[\ln \tau_\phi(a \mid x) - \ln p_\phi(a)\big]}_{\text{generalized empowerment}},$$\n$$\underbrace{\mathbb{E}\big[\ln \tau_\phi(a \mid x) - \ln p_\phi(a)\big]}_{\text{generalized empowerment}} \;\ge\; \sum_t \mathbb{E}\big[\underbrace{\ln \tau_\phi(a_t \mid x, a_{1:t-1})}_{\text{decoder}} - \underbrace{\ln p_\phi(a_t \mid a_{1:t-1})}_{\text{source}}\big]. \quad (19)$$\nEmpowerment has been studied for continuous state spaces (Salge et al., 2013), for image inputs (Mohamed and Rezende, 2015), optimized using a variational bound (Karl et al., 2017), combined with input density exploration (de Abril and Kanai, 2018) and task rewards (Leibfried et al., 2019), and used for task-agnostic exploration of locomotion behaviors (Zhao et al., 2020). Divergence minimization suggests generalizing empowerment from the input k steps ahead to the sequence of all future inputs. This can be seen as combining empowerment terms of different horizons. Moreover, we offer a principled motivation for combining empowerment with input density exploration. In comparison to maximum entropy RL in Section 3.1, empowerment captures correlations between x and a in its target distribution and thus leads to information maximization. Moreover, it encourages the agent to converge to the input distribution that is proportional to the exponentiated reward.\nA.6 SKILL DISCOVERY\nMany complex tasks can be broken down into sequences of simpler steps. To leverage this idea, we can condition a policy on temporally abstract options or skills (Sutton et al., 1999). Skill discovery aims to learn useful skills, either for a specific task or without rewards to solve downstream tasks later on. Where empowerment maximizes the mutual information between inputs and actions, skill discovery can be formulated as maximizing the mutual information between inputs and skills (Gregor et al., 2016).\nFigure 9 shows skill discovery with the input sequence $x \doteq \{x_t\}$, action sequence $a \doteq \{a_t\}$, and the sequence of temporally abstract skills $z \doteq \{z_k\}$. The graphical model groups the sequences into past and future variables. The actual distribution consists of the fixed environment, an abstract policy that selects skills by sampling from a fixed distribution as shown here or as a function of past inputs, and the low-level policy that selects actions based on past inputs and the current skill. The target consists of an action prior and a reverse predictor for the skills and could further include a reward factor,\n$$\text{Actual:}\quad p_\phi(x, a, z) \doteq \prod_{k=1}^{T/K} \underbrace{p_\phi(z_k)}_{\text{abstract policy}} \prod_{t=1}^{T} \underbrace{p_\phi(a_t \mid x_{1:t}, a_{1:t-1}, z_{\lfloor t/K \rfloor})}_{\text{policy}}\, \underbrace{p(x_t \mid x_{1:t-1}, a_{1:t-1})}_{\text{environment}}, \qquad \text{Target:}\quad \tau_\phi(x, a, z) \propto \prod_{k=1}^{T/K} \underbrace{\tau_\phi(z_k \mid x)}_{\text{reverse predictor}} \prod_{t=1}^{T} \underbrace{\tau(a_t)}_{\text{action prior}}. \quad (20)$$\nMinimizing the joint KL results in a control term as in Appendix A.5, a complexity regularizer for actions as in Section 3.1, and a variational bound on the mutual information between the sequences of inputs and skills. The information bound is a generalization of skill discovery (Gregor et al., 2016; Florensa et al., 2017). Conditioning the reverse predictor only on inputs that align with the duration of the skill recovers skill discovery.
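The skill-discovery term admits the same kind of count-based sketch; the toy skill-conditioned channel below is an illustrative assumption standing in for a policy interacting with an environment, and the resulting per-step quantity is the intrinsic reward that would be fed to the policy:

```python
import numpy as np

rng = np.random.default_rng(4)

# Skill discovery sketch (DIAYN-flavored): skills z from a fixed prior,
# inputs x from a skill-conditioned channel, and a reverse predictor
# q(z | x) fit by counting. The per-step intrinsic reward is
# ln q(z | x) - ln p(z), a sample of the skill-discovery term in Eq. (21).
n_z, n_x = 3, 8
channel = rng.dirichlet(np.ones(n_x) * 0.2, size=n_z)   # p(x | z) under the policy

z = rng.integers(n_z, size=10000)
x = np.array([rng.choice(n_x, p=channel[zi]) for zi in z])

counts = np.ones((n_x, n_z))
np.add.at(counts, (x, z), 1.0)
q = counts / counts.sum(axis=1, keepdims=True)           # discriminator q(z | x)

intrinsic = np.log(q[x, z]) - np.log(1.0 / n_z)          # reward fed to the policy
print(f"mean intrinsic reward (MI estimate): {intrinsic.mean():.3f} nats")
```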
Maximizing the mutual information between skills and inputs encourages the agent to learn skills that together realize as many different input sequences as possible while avoiding overlap between the sequences realized by different skills,\nKL [ pφ ∥∥ τφ] = E KL[p(x | a) ∥∥ τ(x)]\ncontrol\n+ E KL [ pφ(a | x, z) ∥∥ τ(a)] complexity − E [ ln τφ(z ∣∣ x)− ln pφ(z)] skill discovery .\n(21) VIC (Gregor et al., 2016) introduced information-based skill discovery as an extension of empowerment, motivating a line of work including SNN (Florensa et al., 2017), DIAYN (Eysenbach et al., 2018), work by Hausman et al. (2018), VALOR (Achiam et al., 2018), and work by Tirumala et al. (2019) and (Shankar and Gupta, 2020). DADS (Sharma et al., 2019) estimates the mutual information in input space by combining a forward predictor of skills with a contrastive bound. Divergence minimization suggests a generalization of skill discovery where actions should not just consider the current skill but also seek out regions of the environment where many skills are applicable.\nA.7 INFORMATION GAIN\nAgents need to explore initially unknown environments to achieve goals. Learning about the world is beneficial even when it does not serve maximizing the currently known reward signal, because the knowledge might become useful later on during this or later tasks. Reducing uncertainty requires representing uncertainty about aspects we want to explore, such as dynamics parameters, policy parameters, or state representations. To efficiently reduce uncertainty, the agent should select actions that maximize the expected information gain (Lindley et al., 1956).\nFigure 10 shows information gain exploration on the example of latent model parameters and deterministic actions. The inputs are a sequence x .= {xt} and the latent parameters are a global representation w. The graphical model separates inputs into past inputs x< and future inputs x>. The actual distribution consists of the controlled dynamics and the parameter belief. Amortized latent state representations would include a link from x< to z. Latent policy parameters would include a\nlink from w to x>. The target distribution is a latent variable model that explains past inputs and predicts future inputs, as in Appendix A.3. The target could further include a reward factor,\nActual: pφ(x,w) . = pφ(w)\nbelief\n∏ t pφ(xt | x1:t−1)\ncontrolled dynamics\n,\nTarget: τ(x,w) .= τ(w) prior\n∏ t τ(xt | x1:t−1, w)\nlikelihood\n. (22)\nMinimizing the KL between the two joints reveals a control term as in previous sections and the information bound between inputs and the latent representation, as derived in Section 2.2. In contrast to Appendix A.3, we can now influence future inputs. This leads to learning representations that are informative of past inputs and exploring future inputs that are informative of the representations. The mutual information between the representation and future inputs is the expected information gain (Lindley et al., 1956; MacKay, 1992b) that encourages inputs that are expected to convey the most bits about the representation to maximally reduce uncertainty in the belief,\nKL [ pφ ∥∥ τφ] ≤ E KL[pφ(w | x<) ∥∥ τ(w)]\nsimplicity\n− E [ ln τφ(x< ∣∣ w)− ln pφ(x<)]\nrepresentation learning + E KL [ pφ(x> | x<, w) ∥∥ τφ(x> | x<)] control − E [ ln τφ(w ∣∣ x)− ln pφ(w | x<)] information gain ,\nE [ ln τφ(w ∣∣ x)− ln pφ(w | x<)]\ninformation gain\n≥ ∑ t′>t E [ ln τφ(w ∣∣ x1:t′)− ln pφ(w | x1:t′−1)] intrinsic reward . 
(23)\nInformation gain can be estimated by planning (Sun et al., 2011) or from past environment interaction (Schmidhuber, 1991). State representations lead to agents that disambiguate unobserved environment states, for example by opening doors to see objects behind them, such as in active inference (Da Costa et al., 2020), INDIGO (Azar et al., 2019), and DVBF-LM (Mirchev et al., 2018). Model parameters lead to agents that discover the rules of their environment, such as in active inference (Friston et al., 2015), VIME (Houthooft et al., 2016), MAX (Shyam et al., 2018), and Plan2Explore (Sekar et al., 2020). SLAM resolves uncertainty over both states and dynamics (Moutarlier and Chatila, 1989). Policy parameters lead to agents that explore to find the best behavior, such as bootstrapped DQN (Osband et al., 2016) and Bayesian DQN (Azizzadenesheli et al., 2018). One might think exploration should seek inputs with large error, but reconstruction and exploration optimize the same objective. Maximizing information gain minimizes the reconstruction error at future time steps by steering toward diverse but predictable inputs. Divergence minimization shows that every latent representation should be accompanied with an expected information gain term, so that the agent optimizes a consistent objective for past and future time steps. Moreover, it shows that representations should be optimized jointly with the policy to support both reconstruction and action choice (Lange and Riedmiller, 2010; Jaderberg et al., 2016; Lee et al., 2019a; Yarats et al., 2019)." }, { "heading": "B ACTIVE INFERENCE", "text": "Divergence minimization is motivated by the free energy principle (Friston, 2010; 2019) and its implementation active inference (Friston et al., 2017). Both approaches share the interpretation of models as preferences (Wald, 1947; Brown, 1981; Friston et al., 2012) and account for a variety of intrinsic objectives (Friston et al., 2020). However, typical implementations of active inference have been limited to simple tasks as of today, a problem that divergence minimization overcomes. Active inference differs from divergence minimization in the three aspects discussed below. Maximizing the input probability Divergence minimization aims to match the distribution of the system to the target distribution. Therefore, the agent aims to receive inputs that follow the marginal distribution of inputs under the model. In contrast, active inference aims to maximize the probability of inputs under the model. This is often described as minimizing Bayesian surprise. Therefore, the agent aims to receive inputs that are the most probable under its model. Mathematically, this difference stems from the conditional input entropy of the actual system that distinguishes the joint KL divergence in Equation 2 from the expected free energy used in active inference,\nKL [ pφ(x, z) ∥∥ τ(x, z)] joint divergence = E [ − ln τ(x ∣∣ z)]+ E KL[pφ(z | x) ∥∥ τ(z)] expected free energy − E [ − ln pφ(x) ] input entropy . (24)\nBoth formulations include the entropy of latent variables and thus the information gain that encourages the agent to explore informative future inputs. Moreover, in complex environments, it is unlikely\nthat the agent ever learns everything so that its beliefs concentrate and it stops exploring. However, in this hypothetical scenario, active inference converges to the input that is most probable under its model. 
In contrast, divergence minimization aims to converge to sampling from the marginal input distribution under the model, resulting in a larger niche. That said, it is possible to construct a target distribution that includes the input entropy of the actual system and thus overcome this difference. Expected free energy action prior Divergence minimization optimizes the same objective with respect to representations and actions. Therefore, actions optimize the expected information gain and representations optimize not just past accuracy but also change to support actions in maximizing the expected information gain. In contrast, active inference first optimizes the expected free energy to compute a prior over policies. After that, it optimizes the free energy with respect to both representations and actions. This means active inference optimizes the information gain only with respect to actions, without the representations changing to support better action choice based on future objective terms. Bayesian model average over policies Typical implementations of active inference compute the action prior using a Bayesian model average. This involves computing the expected free energy for every possible policy or action sequence that is available to the agent. The action prior is then computed as the softmax over the computed values. Enumerating all policies is intractable for larger action spaces or longer planning horizons, thus limiting the applicability of active inference implementations. In contrast, divergence minimization absorbs the objective terms for action and perception into a single variational optimization thereby finessing the computational complexity of computing a separate action prior. This leads to a simple framework, allowing us to draw close connections to the deep RL literature and to scale to challenging tasks, as evidenced by the many established methods that are explained under the divergence minimization framework." }, { "heading": "C KL INTERPRETATION", "text": "Minimizing the KL divergence has a variety of interpretations. In simple terms, it says “optimize a function but don’t be too confident.” Decomposing Equation 2 shows that we maximize the expected log target while encouraging high entropy of all the random variables. Both terms are expectations under pφ and thus depend on the parameter vector φ,\nKL [ pφ(x, z) ∥∥ τ(x, z)] = E[− ln τ(x, z)] energy − H [ x, z ] entropy . (25)\nThe energy term expresses which system configurations we prefer. It is also known as the cross entropy loss, expected log loss, (Bishop, 2006; Murphy, 2012), energy function when unnormalized (LeCun et al., 2006), and agent preferences in control (Morgenstern and Von Neumann, 1953). The entropy term prevents all random variables in the system from becoming deterministic, encouraging a global search over their possible values. It implements the maximum entropy principle to avoid overconfidence (Jaynes, 1957), Occam’s razor to prevent overfitting (Jefferys and Berger, 1992), bounded rationality to halt optimization before reaching the point solution (Ortega and Braun, 2011), and risk-sensitivity to account for model misspecification (Pratt, 1964; Howard and Matheson, 1972). Expected utility The entropy distinguishes the KL from the expected utility objective that is typical in RL (Sutton and Barto, 2018). Using a distribution as the optimization target is more general, as every system has a distribution but not every system has a utility function it is optimal for. 
Moreover, the dynamics of any stochastic system maximize only its log stationary distribution (Ao et al., 2013; Friston, 2013; Ma et al., 2015). This motivates using the desired distribution as the optimization target. Expected utility is recovered in the limit of a sharp target that outweighs the entropy." }, { "heading": "D BACKGROUND", "text": "This section introduces notation, defines basic information-theoretic quantities, and briefly reviews KL control and variational inference for latent variable models. Expectation A random variable x represents an unknown variable that could take on one of multiple values x̄, each with an associated probability mass or density p(x = x̄). Applying a function to a random variable yields a new random variable y = f(x). The expectation of a random variable is the weighted average of the values it could take on, weighted by their probability,\nE [ f(x) ] . = ∫ f(x)p(x) dx. (26)\nWe use integrals here, as used for random variables that take on continuous values. For discrete variables, the integrals simplify to sums. Information The information of an event x̄ measures the number of bits it contains (Shannon, 1948). Intuitively, rare events contain more information. The information is defined as the code length of the event under an optimal encoding for x ∼ p(x),\nI(x̄) . = ln\n( 1\np(x̄)\n) = − ln p(x̄). (27)\nThe logarithm base 2 measures information in bits and the natural base in the unit nats. Entropy The entropy of a random variable x is the expected information of its events. It quantifies the randomness or uncertainty of the random variable. Similarly, the conditional entropy measures the uncertainty of x that we expect to remain after observing another variable y,\nH [ x ] . = E [ − ln p(x) ] , H [ x ∣∣ y] .= E[− ln p(x ∣∣ y)]. (28)\nNote that the conditional entropy uses an expectation over both variables. A deterministic distribution reaches the minimum entropy of zero. The uniform distribution reaches the maximum entropy, the logarithm of the number of possible events. KL divergence The Kullback-Leibler divergence (Kullback and Leibler, 1951), measures the directed similarity of one distribution to another distribution. The KL divergence is defined as the expectation under p of the log difference between the two distributions p and τ ,\nKL [ p(x) ∥∥ τ(x)] .= E[ ln p(x)− ln τ(x)] = E[− ln τ(x)]−H[x]. (29) The KL divergence is non-negative and reaches zero if and only if p = τ . Also known as relative entropy, it is the expected number of additional bits needed to describe x when using the code for a different distribution τ to encode events from x ∼ p(x). This is shown by the decomposition as cross-entropy minus entropy shown above. Analogously to the conditional entropy, the conditional KL divergence is an expectation over both variables under the first distribution. Mutual information The mutual information, or simply information, between two random variables x and y measures how many bits the value of x carries about the unobserved value of y. It is defined as the entropy of one variable minus its conditional entropy given the other variable,\nI [ X;Y ] . = H [ X ] −H [ X ∣∣ Y ] = E[ ln p(x ∣∣ y)− ln p(x)] = KL[p(x, y) ∥∥ p(x)p(y)]. (30)\nThe mutual information is symmetric in its arguments and non-negative. It reaches zero if and only if x and y are independent so that p(x, y) = p(x)p(y). Intuitively, it is higher the better we can predict one variable from the other and the more random the variable is by itself. 
It can also be written as KL divergence between the joint and product of marginals. Variational bound Computing the exact mutual information requires access to both the conditional and marginal distributions. When the conditional is unknown, replacing it with another distribution bounds the mutual information from below (Barber and Agakov, 2003; Poole et al., 2019),\nI [ x; z ] ≥ I [ x; z ] − E KL [ p(x | z) ∥∥ τφ(x | z)] = E[ ln τφ(x ∣∣ z)− ln p(x)]. (31) Maximizing the bound with respect to the parameters φ tightens the bound, thus bringing τφ(x | z) closer to p(x | z). Improving the bound through optimization gives it the name variational bound. The more flexible the family of τφ(x | z), the more accurate the bound can become. Dirac distribution The Dirac distribution (Dirac, 1958), also known as point mass, represents a random variable x with certain event x̄. We show an intuitive definition here; for a rigorous definition using measure theory see Rudin (1966),\nδx̄(x) . = { 1 if x = x̄ 0 else.\n(32)\nThe expectation under a Dirac distribution is simply the inner expression evaluated at the certain event, Eδx̄(x) [ f(x) ] = f(x̄). The entropy of a Dirac distributed random variable is therefore\nH [ x ]\n= − ln δx̄(x̄) = 0 and its mutual information with another random variables is also zero. KL control KL control (Todorov, 2008; Kappen et al., 2009) minimizes the KL divergence between the trajectory x ∼ pφ(x) of inputs x . = {x1, x2, . . . , xT } and a target distribution τ(x) ·∝ exp(r(x)) defined with a reward r(x),\nKL [ pφ(x) trajectory ∥∥ τ(x) target ] = − E [ ln τ(x) ] expected reward − H [ x ] entropy . (33)\nThe KL between the two distributions is minimized with respect to the control rule or action sequence φ, revealing the expected reward and an entropy regularizer. Because the expectations are terms of the trajectory x, they are integrals under its distribution pφ. Variational inference Latent variable models explain inputs x using latent variables z. They define a prior τ(z) and an observation model τ(x | z). To infer the posterior τ(z | x̄) that represents a given input x̄, we need to condition the model on the input. However, this requires inverting the observation model using Bayes rule and has no closed form in general. To overcome this intractability, variational inference (Hinton and Van Camp, 1993; Jordan et al., 1999) optimizes a parameterized belief pφ(z | x̄) to approximate the posterior by minimizing the KL,\nKL [ pφ(z | x̄) ∥∥ τ(z | x̄)]+ ln τ(x̄) constant = KL [ pφ(z | x̄) ∥∥ τ(z)] complexity − E [ ln τ(x̄ ∣∣ z)] accuracy . (34)\nAdding the marginal τ(x) that does not depend on φ completes the intractable posterior to the joint that can be factorized into the available parts τ(z) and τ(x | z). This reveals a complexity regularizer that keeps the belief close to the prior and an accuracy term that encourages the belief to be representative of the input. This objective is known as the variational free energy or ELBO." } ]
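As a numerical illustration of the variational bound in Equation (31), the jointly Gaussian example below compares the exact mutual information with the bound under an exact and a mis-specified conditional; all distributions here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

# Variational bound on mutual information for a jointly Gaussian pair:
# z ~ N(0, 1), x = z + noise. True I[x; z] is available in closed form,
# and E[ln tau(x | z) - ln p(x)] with a Gaussian tau bounds it from below.
n, noise = 200000, 0.5
z = rng.normal(size=n)
x = z + noise * rng.normal(size=n)

var_x = 1.0 + noise**2
true_mi = 0.5 * np.log(var_x / noise**2)             # I[x; z] in nats

def bound(scale, sigma):
    # tau(x | z) = N(scale * z, sigma^2); the exact tau recovers the true MI.
    ln_tau = -0.5 * ((x - scale * z) ** 2 / sigma**2 + np.log(2 * np.pi * sigma**2))
    ln_px = -0.5 * (x**2 / var_x + np.log(2 * np.pi * var_x))
    return np.mean(ln_tau - ln_px)

print(f"true MI       : {true_mi:.3f}")
print(f"exact tau     : {bound(1.0, noise):.3f}")    # tight
print(f"mis-specified : {bound(0.7, 1.0):.3f}")      # strictly smaller
```

The looser the conditional, the smaller the estimate, which is why optimizing the bound with respect to the parameters of tau tightens it.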
2020
Action and Perception as Divergence Minimization
SP:9c77f92d9933964d7066aec0e5d3e33bb2ee1745
[ "Principal component analysis (PCA) is a well-known dimensionality reduction and feature learning technique in the literature that leads to uncorrelated features. While there are a plethora of algorithms for PCA, along with accompanying analysis, a majority of these works have been developed from an optimization perspective. This paper differs from existing works in that it motivates the $k$-PCA problem, which involves learning the $k$-dominant eigen vectors of the sample covariance matrix, as a competitive game between $k$ players in which each player is supposed to estimate one of the eigen vectors and the PCA solution is the unique strict-Nash equilibrium. The main contributions of the paper in this regard are the following:", "The authors present new insights on PCA analysis by reconceiving it in terms of a Nash equilibrium among different players, related to the different components. The importance of an objective function minimizing the off-diagonal elements of R is emphasized. The insights lead to parallel algorithms and are demonstrated on large scale problems, which is nice. Overall the new insights can be very valuable and also inspiring for future work and for new developments, from a broader perspective." ]
We present a novel view on principal component analysis (PCA) as a competitive game in which each approximate eigenvector is controlled by a player whose goal is to maximize their own utility function. We analyze the properties of this PCA game and the behavior of its gradient-based updates. The resulting algorithm—which combines elements from Oja’s rule with a generalized Gram-Schmidt orthogonalization—is naturally decentralized and hence parallelizable through message passing. We demonstrate the scalability of the algorithm with experiments on large image datasets and neural network activations. We discuss how this new view of PCA as a differentiable game can lead to further algorithmic developments and insights.
[ { "affiliations": [], "name": "NASH EQUILIBRIUM" }, { "affiliations": [], "name": "Ian Gemp" }, { "affiliations": [], "name": "Brian McWilliams" }, { "affiliations": [], "name": "Claire Vernade" } ]
[ { "authors": [ "P-A. Absil", "Robert Mahony", "Rodolphe Sepulchre" ], "title": "Optimization Algorithms on Matrix Manifolds", "venue": null, "year": 2009 }, { "authors": [ "Zeyuan Allen-Zhu", "Yuanzhi Li" ], "title": "First efficient convergence for streaming k-PCA: a global, gap-free, and near-optimal rate", "venue": "IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS),", "year": 2017 }, { "authors": [ "Ehsan Amid", "Manfred K Warmuth" ], "title": "An implicit form of Krasulina’s k-PCA update without the orthonormality constraint", "venue": "arXiv preprint arXiv:1909.04803,", "year": 2019 }, { "authors": [ "Anthony J Bell", "Terrence J Sejnowski" ], "title": "The “independent components” of natural scenes are edge filters", "venue": "Vision Research,", "year": 1997 }, { "authors": [ "Keith Bonawitz", "Hubert Eichner", "Wolfgang Grieskamp", "Dzmitry Huba", "Alex Ingerman", "Vladimir Ivanov", "Chloe Kiddon", "Jakub Konecny", "Stefano Mazzocchi", "H Brendan McMahan" ], "title": "Towards federated learning at scale: system design", "venue": "arXiv preprint arXiv:1902.01046,", "year": 2019 }, { "authors": [ "Nicolas Boumal", "Pierre-Antoine Absil", "Coralia Cartis" ], "title": "Global rates of convergence for nonconvex optimization on manifolds", "venue": "IMA Journal of Numerical Analysis,", "year": 2019 }, { "authors": [ "James Bradbury", "Roy Frostig", "Peter Hawkins", "Matthew James Johnson", "Chris Leary", "Dougal Maclaurin", "Skye Wanderman-Milne" ], "title": "JAX: composable transformations of Python+NumPy programs, 2018", "venue": "URL http://github.com/google/jax", "year": 2018 }, { "authors": [ "Xi Chen", "Yan Duan", "Rein Houthooft", "John Schulman", "Ilya Sutskever", "Pieter Abbeel" ], "title": "Infogan: interpretable representation learning by information maximizing generative adversarial nets", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Michael B Cohen", "Cameron Musco", "Christopher Musco" ], "title": "Input sparsity time low-rank approximation via ridge leverage score sampling", "venue": "In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms,", "year": 2017 }, { "authors": [ "Constantinos Daskalakis", "Paul W Goldberg", "Christos H Papadimitriou" ], "title": "The complexity of computing a Nash equilibrium", "venue": "SIAM Journal on Computing,", "year": 2009 }, { "authors": [ "Guillaume Desjardins", "Karen Simonyan", "Razvan Pascanu" ], "title": "Natural neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Dan Feldman", "Melanie Schmidt", "Christian Sohler" ], "title": "Turning big data into tiny data: Constant-size coresets for k-means, PCA, and projective clustering", "venue": "SIAM Journal on Computing,", "year": 2020 }, { "authors": [ "Arpita Gang", "Haroon Raja", "Waheed U Bajwa" ], "title": "Fast and communication-efficient distributed pca", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2019 }, { "authors": [ "Mina Ghashami", "Edo Liberty", "Jeff M Phillips", "David P Woodruff" ], "title": "Frequent directions: simple and deterministic matrix sketching", "venue": "SIAM Journal on Computing,", "year": 2016 }, { "authors": [ "Benyamin Ghojogh", "Fakhri Karray", "Mark Crowley" ], "title": "Eigenvalue and generalized eigenvalue problems: Tutorial", "venue": "arXiv preprint arXiv:1903.11240,", "year": 2019 }, { "authors": [ "Itzhak Gilboa", "Eitan Zemel" 
], "title": "Nash and correlated equilibria: some complexity considerations", "venue": "Games and Economic Behavior,", "year": 1989 }, { "authors": [ "Gene H Golub", "Henk A Van der Vorst" ], "title": "Eigenvalue computation in the 20th century", "venue": "Journal of Computational and Applied Mathematics,", "year": 2000 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Nathan Halko", "Per-Gunnar Martinsson", "Joel A Tropp" ], "title": "Finding structure with randomness: probabilistic algorithms for constructing approximate matrix decompositions", "venue": "SIAM review,", "year": 2011 }, { "authors": [ "Donald Olding Hebb" ], "title": "The Organization of Behavior: A Neuropsychological Theory", "venue": null, "year": 2005 }, { "authors": [ "Christina Heinze", "Brian McWilliams", "Nicolai Meinshausen" ], "title": "Dual-loco: distributing statistical estimation using random projections", "venue": "In Artificial Intelligence and Statistics,", "year": 2016 }, { "authors": [ "Christina Heinze-Deml", "Brian McWilliams", "Nicolai Meinshausen" ], "title": "Preserving privacy between features in distributed estimation", "venue": "Stat, 7(1):e189,", "year": 2018 }, { "authors": [ "Roger A Horn", "Charles R Johnson" ], "title": "Matrix Analysis", "venue": null, "year": 2012 }, { "authors": [ "Ian T Jolliffe" ], "title": "Principal components in regression analysis", "venue": "In Principal Component Analysis. Springer,", "year": 2002 }, { "authors": [ "Norman P Jouppi", "Cliff Young", "Nishant Patil", "David Patterson", "Gaurav Agrawal", "Raminder Bajwa", "Sarah Bates", "Suresh Bhatia", "Nan Boden", "Al Borchers" ], "title": "In-datacenter performance analysis of a tensor processing unit", "venue": "In Proceedings of the 44th Annual International Symposium on Computer Architecture,", "year": 2017 }, { "authors": [ "TP Krasulina" ], "title": "The method of stochastic approximation for the determination of the least eigenvalue of a symmetrical matrix", "venue": "USSR Computational Mathematics and Mathematical Physics,", "year": 1969 }, { "authors": [ "Gabriel Krummenacher", "Brian McWilliams", "Yannic Kilcher", "Joachim M Buhmann", "Nicolai Meinshausen" ], "title": "Scalable adaptive stochastic optimization using random projections", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Shengqiao Li" ], "title": "Concise formulas for the area and volume of a hyperspherical cap", "venue": "Asian Journal of Mathematics and Statistics,", "year": 2011 }, { "authors": [ "Francesco Locatello", "Stefan Bauer", "Mario Lucic", "Gunnar Raetsch", "Sylvain Gelly", "Bernhard Schölkopf", "Olivier Bachem" ], "title": "Challenging common assumptions in the unsupervised learning of disentangled representations", "venue": "In Proceedings of the International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Emile Mathieu", "Tom Rainforth", "N Siddharth", "Yee Whye Teh" ], "title": "Disentangling disentanglement in variational autoencoders", "venue": "arXiv preprint arXiv:1812.02833,", "year": 2018 }, { "authors": [ "Takeru Miyato", "Toshiki Kataoka", "Masanori Koyama", "Yuichi Yoshida" ], "title": "Spectral normalization for generative adversarial networks", "venue": "arXiv preprint arXiv:1802.05957,", "year": 2018 }, 
{ "authors": [ "Erkki Oja" ], "title": "Simplified neuron model as a principal component analyzer", "venue": "Journal of Mathematical Biology,", "year": 1982 }, { "authors": [ "Ali Rahimi", "Benjamin Recht" ], "title": "Reflections on random kitchen sinks, 2017", "venue": "URL http://www.argmin", "year": 2017 }, { "authors": [ "Haroon Raja", "Waheed U Bajwa" ], "title": "Distributed stochastic algorithms for high-rate streaming principal component analysis", "venue": "arXiv preprint arXiv:2001.01017,", "year": 2020 }, { "authors": [ "H Rutishauser" ], "title": "Simultaneous iteration method for symmetric matrices. In Handbook for Automatic Computation, pages 284–302", "venue": null, "year": 1971 }, { "authors": [ "Terence D Sanger" ], "title": "Optimal unsupervised learning in a single-layer linear feedforward neural network", "venue": "Neural Networks,", "year": 1989 }, { "authors": [ "Mhd Hasan Sarhan", "Abouzar Eslami", "Nassir Navab", "Shadi Albarqouni" ], "title": "Learning interpretable disentangled representations using adversarial VAEs. In Domain Adaptation and Representation Transfer and Medical Image Learning with Less Labels and Imperfect Data, pages", "venue": null, "year": 2019 }, { "authors": [ "Tamas Sarlos" ], "title": "Improved approximation algorithms for large matrices via random projections", "venue": "47th Annual IEEE Symposium on Foundations of Computer Science", "year": 2006 }, { "authors": [ "Ohad Shamir" ], "title": "A stochastic PCA and SVD algorithm with an exponential convergence rate", "venue": "In Proceedings of the International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Ohad Shamir" ], "title": "Convergence of stochastic gradient descent for PCA", "venue": "In Proceedings of the International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Ohad Shamir" ], "title": "Fast stochastic algorithms for SVD and PCA: Convergence properties and convexity", "venue": "In Proceedings of the International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Cheng Tang" ], "title": "Exponentially convergent stochastic k-PCA without variance reduction", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Aladin Virmaux", "Kevin Scaman" ], "title": "Lipschitz regularity of deep neural networks: analysis and efficient estimation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "The principal components of data are the vectors that align with the directions of maximum variance. These have two main purposes: a) as interpretable features and b) for data compression. Recent methods for principal component analysis (PCA) focus on the latter, explicitly stating objectives to find the k-dimensional subspace that captures maximum variance (e.g., (Tang, 2019)), and leaving the problem of rotating within this subspace to, for example, a more efficient downstream singular value (SVD) decomposition step1. This point is subtle, yet critical. For example, any pair of twodimensional, orthogonal vectors spans all of R2 and, therefore, captures maximum variance of any two-dimensional dataset. However, for these vectors to be principal components, they must, in addition, align with the directions of maximum variance which depends on the covariance of the data. By learning the optimal subspace, rather than the principal components themselves, objectives focused on subspace error ignore the first purpose of PCA. In contrast, modern nonlinear representation learning techniques focus on learning features that are both disentangled (uncorrelated) and low dimensional (Chen et al., 2016; Mathieu et al., 2018; Locatello et al., 2019; Sarhan et al., 2019).\nIt is well known that the PCA solution of the d-dimensional dataset X ∈ Rn×d is given by the eigenvectors of X>X or equivalently, the right singular vectors of X . Impractically, the cost of computing the full SVD scales with O(min{nd2, n2d})-time and O(nd)-space (Shamir, 2015; Tang, 2019). For moderately sized data, randomized methods can be used (Halko et al., 2011). Beyond this, stochastic—or online—methods based on Oja’s rule (Oja, 1982) or power iterations (Rutishauser, 1971) are common. Another option is to use streaming k-PCA algorithms such as Frequent Directions (FD) (Ghashami et al., 2016) or Oja’s algorithm2 (Allen-Zhu and Li, 2017) with storage complexity O(kd). Sampling or sketching methods also scale well, but again, focus on the top-k subspace (Sarlos, 2006; Cohen et al., 2017; Feldman et al., 2020).\nIn contrast to these approaches, we view each principal component (equivalently eigenvector) as a player in a game whose objective is to maximize their own local utility function in controlled competition with other vectors. The proposed utility gradients are interpretable as a combination of Oja’s rule and a generalized Gram-Schmidt process. We make the following contributions:\n• A novel formulation of PCA as finding the Nash equilibrium of a suitable game, • A sequential, globally convergent algorithm for approximating the Nash on full-batch data,\n1After learning the top-k subspace V ∈ Rd×k, the rotation can be recovered via an SVD of XV . 2FD approximates the top-k subspace; Oja’s algorithm approximates the top-k eigenvectors.\n• A decentralized algorithm with experiments demonstrating the approach as competitive with modern streaming k-PCA algorithms on synthetic and real data, • In demonstration of the scaling of the approach, we compute the top-32 principal components\nof the matrix of RESNET-200 activations on the IMAGENET dataset (n ≈ 106, d ≈ 20 ·106).\nEach of these contributions is important. Novel formulations often lead to deeper understanding of problems, thereby, opening doors to improved techniques. In particular, k-player games are in general complex and hard to analyze. In contrast, PCA has been well-studied. 
By combining the two fields we hope to develop useful analytical tools. Our specific formulation is important because it obviates the need for any centralized orthonormalization step and lends itself naturally to decentralization. And lastly, theory and experiments support the viability of this approach for continued research." }, { "heading": "2 PCA AS AN EIGEN-GAME", "text": "We adhere to the following notation. Vectors and matrices meant to approximate principal components (equivalently eigenvectors) are designated with hats, $\hat{v}$ and $\hat{V}$ respectively, whereas true principal components are $v$ and $V$. Subscripts indicate which eigenvalue a vector is associated with. For example, $v_i$ is the $i$th largest eigenvector. In this work, we will assume each eigenvalue is distinct. By an abuse of notation, $v_{j<i}$ refers to the set of vectors $\{v_j \mid j \in \{1, \ldots, i-1\}\}$ and are also referred to as the parents of $v_i$ ($v_i$ is their child). Sums over indices should be clear from context, e.g., $\sum_{j<i} = \sum_{j=1}^{i-1}$. The Euclidean inner product is written $\langle u, v \rangle = u^\top v$. We denote the unit sphere by $\mathcal{S}^{d-1}$ and the simplex by $\Delta^{d-1}$ in $d$-dimensional ambient space.\nOutline of derivation As argued in the introduction, the PCA problem is often misinterpreted as learning a projection of the data into a subspace that captures maximum variance (equiv. maximizing the trace of a suitable matrix $R$ introduced below). This is in contrast to the original goal of learning the principal components. We first develop the intuition for deriving our utility functions by (i) showing that only maximizing the trace of $R$ is not sufficient for recovering all principal components (equiv. eigenvectors), and (ii) showing that minimizing off-diagonal terms in $R$ is a complementary objective to maximizing the trace and can recover all components. We then consider learning only the top-k and construct utilities that are consistent with findings in (i) and (ii), equal the true eigenvalues at the Nash of the game we construct, and result in a game that is amenable to analysis.\nDerivation of player utilities. The eigenvalue problem for a symmetric matrix $X^\top X = M \in \mathbb{R}^{d \times d}$ is to find a matrix of $d$ orthonormal column vectors $V$ (implies $V$ is full-rank) such that $MV = V\Lambda$ with $\Lambda$ diagonal. Given a solution to this problem, the columns of $V$ are known as eigenvectors and corresponding entries in $\Lambda$ are eigenvalues. By left-multiplying by $V^\top$ and recalling $V^\top V = V V^\top = I$ by orthonormality (i.e., $V$ is unitary), we can rewrite the equality as\n$$V^\top M V = V^\top V \Lambda \overset{\text{unitary}}{=} \Lambda. \quad (1)$$\nLet $\hat{V}$ denote a guess or estimate of the true eigenvectors $V$ and define $R(\hat{V}) \overset{\text{def}}{=} \hat{V}^\top M \hat{V}$. The PCA problem is often posed as maximizing the trace of $R$ (equiv. minimizing reconstruction error):\n$$\max_{\hat{V}^\top \hat{V} = I} \Big\{ \sum_i R_{ii} = \mathrm{Tr}(R) = \mathrm{Tr}(\hat{V}^\top M \hat{V}) = \mathrm{Tr}(\hat{V}\hat{V}^\top M) = \mathrm{Tr}(M) \Big\}. \quad (2)$$\nSurprisingly, the objective in (2) is independent of $\hat{V}$, so it cannot be used to recover all (i.e., $k = d$) the eigenvectors of $M$—(i). Alternatively, Equation (1) implies the eigenvalue problem can be phrased as ensuring all off-diagonal terms of $R$ are zero, thereby ensuring $R$ is diagonal—(ii):\n$$\min_{\hat{V}^\top \hat{V} = I} \sum_{i \neq j} R_{ij}^2. \quad (3)$$\nIt is worth further examining the entries of $R$ in detail. Diagonal entries $R_{ii} = \langle \hat{v}_i, M\hat{v}_i \rangle$ are recognized as Rayleigh quotients because $\|\hat{v}_i\| = 1$ by the constraints. Off-diagonal entries $R_{ij} = \langle \hat{v}_i, M\hat{v}_j \rangle$ measure alignment between $\hat{v}_i$ and $\hat{v}_j$ under a generalized inner product $\langle \cdot, \cdot \rangle_M$.\nSo far, we have considered learning all the eigenvectors.
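As a quick numerical check of the two objectives above (the dataset size is an illustrative assumption), the following sketch confirms that the trace in Equation (2) is invariant to the choice of orthonormal basis, while the off-diagonal penalty in Equation (3) vanishes only at the eigenvectors:

```python
import numpy as np

rng = np.random.default_rng(6)

# Numerical check of Equations (1)-(3): for any orthonormal V_hat, the trace
# of R = V_hat^T M V_hat equals Tr(M), while the off-diagonals of R vanish
# only when V_hat contains eigenvectors of M.
d = 4
X = rng.normal(size=(50, d))
M = X.T @ X

V_rand, _ = np.linalg.qr(rng.normal(size=(d, d)))   # arbitrary orthonormal basis
_, V_eig = np.linalg.eigh(M)                        # true eigenvectors

for name, V in [("random orthonormal", V_rand), ("eigenvectors", V_eig)]:
    R = V.T @ M @ V
    off = R - np.diag(np.diag(R))
    print(f"{name:>19}: Tr(R) = {np.trace(R):8.2f}, "
          f"sum of squared off-diagonals = {np.sum(off**2):10.4f}")
print(f"{'Tr(M)':>19}: {np.trace(M):8.2f}")
```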
If we repeat the logic for the top-k eigenvectors with $k < d$, then by Equation (1), $R$ must still be diagonal. $V$ is not square, so $VV^\top \neq I$, but assuming $V$ is orthonormal as before, we have $VV^\top = P$, a projection matrix. Left-multiplying Equation (1) by $V$ now reads $(PM)V = V\Lambda$, so we are solving an eigenvalue problem for a subspace of $M$.\nIf we only desire the top-k eigenvectors, maximizing the trace encourages learning a subspace spanned by the top-k eigenvectors, but does not recover the eigenvectors themselves. On the other hand, Equation (3) places no preference on recovering large over small eigenvectors, but does enforce the columns of $\hat{V}$ to actually be eigenvectors. The preceding exercise is intended to introduce minimizing the off-diagonal terms of $R$ as a possible complementary objective for solving top-k PCA. Next, we will use these two objectives to construct utility functions for each eigenvector $\hat{v}_i$.\nWe want to combine the objectives to take advantage of both their strengths. A valid proposal is\n$$\max_{\hat{V}^\top \hat{V} = I} \sum_i R_{ii} - \sum_{i \neq j} R_{ij}^2. \quad (4)$$\nHowever, this objective ignores the natural hierarchy of the top-k eigenvectors. For example, $\hat{v}_1$ is penalized for aligning with $\hat{v}_k$ and vice versa, but $\hat{v}_1$, being the estimate of the largest eigenvector, should be free to search for the direction that captures the most variance independent of the locations of the other vectors. Instead, first consider solving for the top-1 eigenvector, $v_1$, in which case $R = [\langle \hat{v}_1, M\hat{v}_1 \rangle]$ is a $1 \times 1$ matrix. In this setting, Equation (3) is not applicable because there are no off-diagonal elements, so $\max_{\hat{v}_1^\top \hat{v}_1 = 1} \langle \hat{v}_1, M\hat{v}_1 \rangle$ is a sensible utility function for $\hat{v}_1$.\nIf considering the top-2 eigenvectors, $\hat{v}_1$’s utility remains as before, and we introduce a new utility for $\hat{v}_2$. Equation (3) is now applicable, so $\hat{v}_2$’s utility is\n$$\max_{\hat{v}_2^\top \hat{v}_2 = 1,\; \hat{v}_1^\top \hat{v}_2 = 0} \; \langle \hat{v}_2, M\hat{v}_2 \rangle - \frac{\langle \hat{v}_2, M\hat{v}_1 \rangle^2}{\langle \hat{v}_1, M\hat{v}_1 \rangle} \quad (5)$$\nwhere we have divided the off-diagonal penalty by $\langle \hat{v}_1, M\hat{v}_1 \rangle$ so a) the two terms in Equation (5) are on a similar scale and b) for reasons that ease analysis. Additionally, note that the constraint $\hat{v}_1^\top \hat{v}_2 = 0$ may be redundant at the optimum ($\hat{v}_1^* = v_1$, $\hat{v}_2^* = v_2$) because the second term, $\langle \hat{v}_2^*, M\hat{v}_1^* \rangle^2 = \langle v_2, Mv_1 \rangle^2 = \Lambda_{11}^2 \langle v_2, v_1 \rangle^2$, already penalizes such deviations ($\Lambda_{ii}$ is the $i$th largest eigenvalue). These reasons motivate the following set of objectives (utilities), one for each vector $i \in \{1, \ldots, k\}$:\n$$\max_{\hat{v}_i^\top \hat{v}_i = 1} \Big\{ u_i(\hat{v}_i \mid \hat{v}_{j<i}) = \hat{v}_i^\top M \hat{v}_i - \sum_{j<i} \frac{(\hat{v}_i^\top M \hat{v}_j)^2}{\hat{v}_j^\top M \hat{v}_j} = \|X\hat{v}_i\|^2 - \sum_{j<i} \frac{\langle X\hat{v}_i, X\hat{v}_j \rangle^2}{\langle X\hat{v}_j, X\hat{v}_j \rangle} \Big\} \quad (6)$$\nwhere the notation $u_i(a_i \mid b)$ emphasizes that player $i$ adjusts $a_i$ to maximize a utility conditioned on $b$. It is interesting to note that by incorporating knowledge of the natural hierarchy (see Figure 1), we are immediately led to constructing asymmetric utilities, and thereby inspired to formulate the PCA problem as a game, rather than a direct optimization problem as in Equation (4).\nA key concept in games is a Nash equilibrium. A Nash equilibrium specifies a variable for each player from which no player can unilaterally deviate and improve their outcome. In this case, $\hat{V}$ is a (strict-)Nash equilibrium if and only if for all $i$, $u_i(\hat{v}_i \mid \hat{v}_{j<i}) > u_i(z_i \mid \hat{v}_{j<i})$ for all $z_i \in \mathcal{S}^{d-1}$.\nTheorem 2.1 (PCA Solution is the Unique strict-Nash Equilibrium). Assume that the top-k eigenvalues of $X^\top X$ are positive and distinct.
Then the top-k eigenvectors form the unique strictNash equilibrium of the proposed game in Equation (6).3 The proof is deferred to Appendix L.\nSolving for the Nash of a game is difficult in general. Specifically, it belongs to the class of PPADcomplete problems (Gilboa and Zemel, 1989; Daskalakis et al., 2009). However, because the game\nis hierarchical and each player’s utility only depends on its parents, it is possible to construct a sequential algorithm that is convergent by solving each player’s optimization problem in sequence." }, { "heading": "3 METHOD", "text": "Utility gradient. In Section 2, we mentioned that normalizing the penalty term from Equation (5) had a motivation beyond scaling. Dividing by 〈v̂j ,Mv̂j〉 results in the following gradient for player i:\n∇v̂iui(v̂i|v̂j<i) = 2M [ v̂i − ∑ j<i v̂>i Mv̂j v̂>j Mv̂j\nv̂j︸ ︷︷ ︸ generalized Gram-Schmidt\n] = 2X> [ Xv̂i − ∑ j<i 〈Xv̂i, Xv̂j〉 〈Xv̂j , Xv̂j〉 Xv̂j ] . (7)\nThe resulting gradient with normalized penalty term has an intuitive meaning. It consists of a single generalized Gram-Schmidt step followed by the standard matrix product found in power iteration and Oja’s rule. Also, notice that applying the gradient as a fixed point operator in sequence (v̂i ← 1 2∇v̂iui(v̂i|v̂j<i)) on M = I recovers the standard Gram-Schmidt procedure for orthogonalization.\nA sequential algorithm. Each eigenvector can be learned by maximizing its utility. The vectors are constrained to the unit sphere, a non-convex Riemannian manifold, so we use Riemmanian gradient ascent with gradients given by Equation (7). In this case, Riemannian optimization theory simply requires an intermediate step where the gradient,∇v̂i , is projected onto the tangent space of the sphere to compute the Riemannian gradient, ∇Rv̂i . A more detailed illustration can be found in Appendix J. Recall that each ui depends on v̂j<i. If any of v̂j<i are being learned concurrently, then v̂i is maximizing a non-stationary objective which makes a convergence proof difficult. Instead, for completeness, we prove convergence assuming each v̂i is\nlearned in sequence. Algorithm 1 learns v̂i given fixed parents v̂j<i; we present the convergence guarantee in Section 4 and details on setting ρi and α in Appendix O.\nAlgorithm 1 EigenGameR-Sequential Given: matrix X ∈ Rn×d, maximum error tolerance ρi, initial vector v̂0i ∈ Sd−1, learned approximate parents v̂j<i, and step size α. v̂i ← v̂0i ti = d 54 min(||∇v̂0i ui||/2, ρi)\n−2e for t = 1 : ti do\nrewards← Xv̂i penalties← ∑ j<i 〈Xv̂i,Xv̂j〉 〈Xv̂j ,Xv̂j〉Xv̂j\n∇v̂i ← 2X> [ rewards− penalties ] ∇Rv̂i ← ∇v̂i − 〈∇v̂i , v̂i〉v̂i v̂′i ← v̂i + α∇Rv̂i v̂i ← v̂ ′ i\n||v̂′i|| end for return v̂i\nAlgorithm 2 EigenGameR (EigenGame—update with∇v̂i instead of∇Rv̂i )\nGiven: stream, Xt ∈ Rm×d, total iterations T , initial vector v̂0i ∈ Sd−1, and step size α. v̂i ← v̂0i for t = 1 : T do\nrewards← Xtv̂i penalties← ∑ j<i 〈Xtv̂i,Xtv̂j〉 〈Xtv̂j ,Xtv̂j〉Xtv̂j\n∇v̂i ← 2X>t [ rewards− penalties ] ∇Rv̂i ← ∇v̂i − 〈∇v̂i , v̂i〉v̂i v̂′i ← v̂i + α∇Rv̂i v̂i ← v̂ ′ i ||v̂′i|| broadcast(v̂i)\nend for return v̂i\nA decentralized algorithm. While Algorithm 1 enjoys a convergence guarantee, learning every parent v̂j<i before learning v̂i may be unnecessarily restrictive. Intuitively, as parents approach their respective optima, they become quasi-stationary, so we do not expect maximizing utilities in parallel to be problematic in practice. 
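As a concrete reference for the update shared by Algorithms 1 and 2, the following is a minimal numpy sketch of sequential EigenGameR: the Equation (7) gradient, projection onto the sphere's tangent space, and retraction by normalization. The spectrum, step size, and iteration counts are arbitrary illustrative choices, not the tuned settings of Appendix O.

```python
import numpy as np

def eigengame_update(v_i, parents, X, alpha=0.02):
    """One EigenGameR step: Eq. (7) gradient, tangent projection, retraction."""
    Xv = X @ v_i
    rewards = Xv.copy()
    for v_j in parents:                       # generalized Gram-Schmidt penalties
        Xw = X @ v_j
        rewards -= (Xv @ Xw) / (Xw @ Xw) * Xw
    grad = 2.0 * X.T @ rewards                # Euclidean gradient of u_i
    grad_R = grad - (grad @ v_i) * v_i        # project onto the sphere's tangent space
    v_new = v_i + alpha * grad_R
    return v_new / np.linalg.norm(v_new)      # retract back onto the unit sphere

# Toy problem with a known spectrum: X^T X = Q diag(lam) Q^T.
rng = np.random.default_rng(1)
lam = np.array([6.0, 5.0, 4.0, 3.0, 2.0, 1.0])
Q, _ = np.linalg.qr(rng.normal(size=(6, 6)))
X = np.sqrt(lam)[:, None] * Q.T

V_hat = []                                    # learn the top-3 vectors in sequence
for i in range(3):
    v = rng.normal(size=6)
    v /= np.linalg.norm(v)
    for _ in range(1000):
        v = eigengame_update(v, V_hat, X)
    V_hat.append(v)
    print(i, abs(v @ Q[:, i]))                # ≈ 1.0 (up to sign)
```

With the parents held fixed this is the inner loop of Algorithm 1; running the same update concurrently for all i against the parents' current iterates gives the decentralized variant.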
To this end, we propose Algorithm 2 visualized in Figure 2.\n3Unique up to a sign change; this is expected as both vi and −vi represent the same principal component.\nIn practice we can assign each eigenvector update to its own device (e.g. a GPU or TPU). Systems with fast interconnects may facilitate tens, hundreds or thousands of accelerators to be used. In such settings, the overhead of broadcast(v̂i) is minimal. We can also specify that the data stream is co-located with the update so v̂i updates with respect to its own Xi,t. This is a standard paradigm for e.g. data-parallel distributed neural network training. We provide further details in Section 6.\nMessage Passing on a DAG. Our proposed utilities enforce a strict hierarchy on the eigenvectors. This is a simplification that both eases analysis (see Appendix M) and improves convergence4, however, it is not optimal. We assume vectors are initialized randomly on the sphere and, for instance, v̂k may be initialized closer to v1 than even v̂1 and vice versa. The hierarchy shown in Figure 1 enforces a strict graph structure for broadcasting information of parents to the childrens’ utilities.\nTo our knowledge, our utility formulation in Equation (6) is novel. One disadvantage is that stochastic gradients of Equation (7) are biased. This is mitigated with large batch sizes (further discussion in Appendix I)." }, { "heading": "4 CONVERGENCE OF EIGENGAME", "text": "Here, we first show that Equation (6) has a simple form such that any local maximum of ui is also a global maximum. Player i’s utility depends on its parents, so we next explain how error in the parents propagates to children through mis-specification of player i’s utility. Using the first result and accounting for this error, we are then able to give global, finite-sample convergence guarantees in the full-batch setting by leveraging recent non-convex Riemannian optimization theory.\nThe utility landscape and parent-to-child error propagation. Equation (6) is abstruse, but we prove that the shape of player i’s utility is simply sinusoidal in the angular deviation of v̂i from the optimum. The amplitude of the sinusoid varies with the direction of the angular deviation along the unit-sphere and is dependent on the accuracy of players j < i. In the special case where players j < i have learned the top-(i− 1) eigenvectors exactly, player i’s utility simplifies (see Lemma N.1) to\nui(v̂i, {vj<i}) = Λii − sin2(θi) ( Λii − ∑ l>i zlΛll ) (8)\nwhere θi is the angular deviation and z ∈ ∆d−1 parameterizes the deviation direction. Note that sin2 has period π instead of 2π, which simply reflects the fact that vi and −vi are both eigenvectors.\n4EigenGame sans order learns max 1 PC and sans order+normalization 5 PCs on data in Figure 3a. 5EigenGame runtimes are longer than those of EigenGameR in the synthetic experiments despite strictly requiring fewer FLOPS; apparently this is due to low-level floating point arithmetic specific to the experiments.\nAn error propagation analysis reveals that it is critical to learn the parents to a given degree of accuracy. The angular distance between vi and the maximizer of player i’s utility with approximate parents has tan−1 dependence (i.e., a soft step-function; see Lemma N.5 and Figure 13 in Appendix N). Theorem 4.1 (Global convergence). Algorithm 1 achieves finite sample convergence to within θtol angular error of the top-k principal components, independent of initialization. 
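To make the closed form in Equation (8) concrete, here is a small self-contained numerical check under one consistent reading of its parameterization (exact parents, a diagonal test spectrum, and arbitrary choices of the index i, angle θi, and direction z):

```python
import numpy as np

lam = np.array([5.0, 4.0, 3.0, 2.0, 1.0])      # eigenvalues of a diagonal M
M = np.diag(lam)
i = 2                                          # player i (0-indexed); exact parents e_0, e_1
theta = 0.4                                    # angular deviation from v_i = e_i
z = np.array([0.7, 0.3])                       # simplex weights over components l > i

v = np.zeros(5)
v[i] = np.cos(theta)
v[i + 1:] = np.sin(theta) * np.sqrt(z)         # unit vector deviated toward span{e_3, e_4}

# Utility from Eq. (6) with exact parents versus the closed form in Eq. (8).
u = v @ M @ v - sum((v @ M @ e) ** 2 / (e @ M @ e) for e in np.eye(5)[:i])
u_closed = lam[i] - np.sin(theta) ** 2 * (lam[i] - z @ lam[i + 1:])
print(u, u_closed)                             # identical up to rounding
```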
Furthermore, if each v̂i is initialized to within π4 of vi, Algorithm 1 returns the components with angular error less than θtol\nin T = ⌈ O ( k [\n(k−1)! θtol k∏ i=1 ( 16Λ11 gi )]2)⌉ iterations. Proofs are deferred to Appendices O.4 and O.5.\nAngular error is defined as the angle between v̂i and vi: θi = sin−1( √\n1− 〈vi, v̂i〉2). The first k in the formula for T appears from a naive summing of worst case bounds on the number of iterations required to learn each v̂j<k individually. The constant 16 arises from the error propagation analysis; parent vectors, v̂j<i, must be learned to under 1/16th of a canonical error threshold, gi(i−1)Λ11 , for the child v̂i where gi = Λii − Λi+1,i+1. The Riemannian optimization theory we leverage dictates that 1 ρ2 iterations are required to meet a O(ρ) error threshold. This is why the squared inverse of the error threshold appears here. Breaking down the error threshold itself, the ratio Λ11/gi says that more iterations are required to distinguish eigenvectors when the difference between them (summarized by the gap gi) is small relative to the scale of the spectrum, Λ11. The (k − 1)! term appears because learning smaller eigenvectors requires learning a much more accurate v̂1 higher up the DAG.\nLastly, the utility function for each v̂i is sinusoidal, and it is possible that we initialize v̂i with initial utility arbitrarily close to the trough (bottom) of the function where gradients are arbitrarily small. This is why the global convergence rate depends on the initialization in general. Note that Algorithm 1 effectively detects the trough by measuring the norm of the initial gradient (∇v̂0i ui) and scales the number of required iterations appropriately. A complete theorem that considers the probability of initializing v̂i within π4 of vi is in Appendix O, but this possibility shrinks to zero in high dimensions.\nWe would also like to highlight that these theoretical findings are strong relative to some other claims. For example, the exponential convergence guarantee for Matrix Krasulina requires the initial guess at the eigenvectors capture the top-(k − 1) subspace (Tang, 2019), unlikely when d k. A similar condition is required in (Shamir, 2016b). These guarantees are given for the mini-batch setting while ours is for the full-batch, however, we provide global convergence without restrictions on initialization." }, { "heading": "5 RELATED WORK", "text": "PCA is a century-old problem and a massive literature exists (Jolliffe, 2002; Golub and Van Loan, 2012). The standard solution to this problem is to compute the SVD, possibly combined with randomized algorithms, to recover the top-k components as in (Halko et al., 2011) or with Frequent Directions (Ghashami et al., 2016) which combines sketching with SVD.\nIn neuroscience, Hebb’s rule (Hebb, 2005) refers to a connectionist rule that solves for the top eigenvector of a matrix M using additive updates of a vector v as v ← v + ηMv. Likewise, Oja’s rule (Oja, 1982; Shamir, 2015) refers to a similar update v ← v + η(I − vv>)Mv. In machine learning, using a normalization step of v ← v/||v|| with Hebb’s rule is somewhat confusingly referred to as Oja’s algorithm (Shamir, 2015), the reason being that the subtractive term in Oja’s rule can be viewed as a regularization term for implicitly enforcing the normalization. In the limit of infinite step size, η →∞, Oja’s algorithm effectively becomes the well known Power method. 
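These classical updates are compact enough to sanity-check directly; below is a minimal numpy sketch of "Oja's algorithm" in the sense used here, i.e., Hebb's rule plus an explicit normalization step (the test matrix, step size, and iteration count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6))
M = A @ A.T                                   # symmetric PSD test matrix
eta = 0.1

v = rng.normal(size=6)
v /= np.linalg.norm(v)
for _ in range(500):
    v = v + eta * M @ v                       # Hebb's rule
    v /= np.linalg.norm(v)                    # normalization -> "Oja's algorithm"

top = np.linalg.eigh(M)[1][:, -1]             # true top eigenvector
print(abs(v @ top))                           # ≈ 1.0; as eta -> inf this is power iteration
```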
If a normalization step is added to Oja’s rule, this is referred to as Krasulina’s algorithm (Krasulina, 1969). In the language of Riemannian manifolds, v/||v|| can be recognized as a retraction and (I − vv>) as projecting the gradient Mv onto the tangent space of the sphere (Absil et al., 2009).\nMany of the methods above have been generalized to the top-k components. Most generalizations involve adding an orthonormalization step after each update, typically accomplished with a QR factorization plus some minor sign accounting (e.g., see Algorithm 3 in Appendix A.1). An extension of Krasulina’s algorithm to the top-k setting, termed Matrix Krasulina (Tang, 2019), was recently proposed in the machine learning literature. This algorithm can be recognized as projecting the gradient onto the Stiefel manifold (the space of orthonormal matrices) followed by a QR step to maintain orthonormality, which is a well known retraction.\nMaintaining orthonormality via QR is computationally expensive. Amid and Warmuth (2019) propose an alternative Krasulina method which does not require re-orthonormalization but instead requires inverting a k × k matrix; in a streaming setting restricted to minibatches of size 1 (Xt ∈ Rd), Sherman-Morrison (Golub and Van Loan, 2012) can be used to efficiently replace the inversion step. Raja and Bajwa (2020) develop a data-parallel distributed algorithm for the top eigenvector. Alternatively, the Jacobi eigenvalue algorithm explicitly represents the matrix of eigenvectors as a Givens rotation matrix using sin’s and cos’s and rotates M until it is diagonal (Golub and Van der Vorst, 2000).\nIn contrast, other methods extract the top components in sequence by solving for the ith component using an algorithm such as power iteration or Oja’s, and then enforcing orthogonality by removing the learned subspace from the matrix, a process known as deflation. Alternatively, the deflation process may be intertwined with the learning of the top components. The generalized Hebbian algorithm (Sanger, 1989) (GHA) works this way as do Lagrangian inspired formulations (Ghojogh et al., 2019) as well as our own approach. We make the connection between GHA and our algorithm concrete in Prop. K.1. Note, however, that the GHA update is not the gradient of any utility (Prop. K.2) and therefore, lacks a clear game interpretation.\nOf these, Oja’s algorithm has arguably been the most extensively studied (Shamir, 2016a; Allen-Zhu and Li, 2017)6 Note that Oja’s algorithm converges to the actual principal components (Allen-Zhu and Li, 2017) and Matrix Krasulina (Tang, 2019) converges to the top-k subspace. However, neither can be obviously decentralized. GHA (Sanger, 1989) converges to the principal components asymptotically and can be decentralized (Gang et al., 2019). Each of these is applicable in the streaming k-PCA setting." }, { "heading": "6 EXPERIMENTS", "text": "We compare our approach against GHA, Matrix Krasulina, and Oja’s algorithm7. We present both EigenGame and EigenGameR which projects the gradient onto the tangent space of the sphere each step. We measure performance of methods in terms of principal component accuracy and subspace distance. We measure principal component accuracy by the number of consecutive components, or longest streak, that are estimated within an angle of π8 from ground truth. For example, if the angular errors of the v̂i’s returned by a method are, in order, [θ1, θ2, θ3, . . .] = [ π16 , π 4 , π 10 , . . 
.], then the method is credited with a streak of only 1 regardless of the errors θi>2. For Matrix Krasulina, we first compute the optimal matching from v̂i to ground truth before measuring angular error. We present the longest streak as opposed to “# of eigenvectors found” because, in practice, no ground truth is available and we think the user should be able to place higher confidence in the larger eigenvectors being correct. If an algorithm returns k vectors, k2 of which are accurate components but does not indicate which, this is less helpful. We measure normalized subspace distance using 1− 1k · Tr(U ∗P ) ∈ [0, 1] where U∗ = V V † and P = V̂ V̂ † similarly to Tang (2019).\nSynthetic data. Experiments on synthetic data demonstrate the viability of our approach (Figure 3a). Oja’s algorithm performs best on synthetic experiments because strictly enforcing orthogonalization with an expensive QR step greatly helps when solving for all eigenvectors. EigenGame is able to effectively parallelize this over k machines and the advantage of QR diminishes in Figure 3b. The remaining algorithms perform similarly on a linearly decaying spectrum, however, EigenGame performs better on an exponentially decaying spectrum due possibly to instability of Riemannian gradients near the equilibrium (see Appendix J for further discussion). GHA and EigenGameR are equivalent under specific conditions (see Proposition K.1).\nFigure 4a shows EigenGame solves for the eigenvectors up to a high degree of accuracy π32 , i.e. the convergence results in Figure 3a are not the result of using a loose tolerance of π8 . With the lower tolerance, all algorithms take slightly more iterations to learn the eigenvectors of the linear spectrum; it is difficult to see any performance change for the exponential spectrum. Although Theorem 4.1 assumes distinct eigenvalues, Figure 4b supports the claim that EigenGame does not require distinct eigenvalues for convergence. We leave proving convergence in this setting to future work.\n6See Table 1 in (Allen-Zhu and Li, 2017). 7A detailed discussion of Frequent Directions (Ghashami et al., 2016) can be found in Appendix H.\nMNIST handwritten digits. We compare EigenGame against GHA, Matrix Krasulina, and Oja’s algorithm on the MNIST dataset (Figure 3b). We flatten each image in the training set to obtain a 60, 000× 784 dimensional matrix. EigenGame is competitive with Oja’s in a high batch size regime (1024 samples per mini-batch). The performance gap between EigenGame and the other methods shrinks as the mini-batch size is reduced (see Appendix I), expectedly due to biased gradients.\nThe principal components of RESNET-200 activations on IMAGENET are edge filters. A primary goal of PCA is to obtain interpretable low-dimensional representations. To this end we present an example of using EigenGame to compute the top-32 principal components of the activations of a pretrained RESNET-200 on the IMAGENET dataset. We concatenate the flattened activations from the output of each residual block resulting in a d ≈ 20M dimensional vector representation for each of the roughly 1.2M input images. It is not possible to store the entire 195TB matrix in memory, nor incrementally compute the Gram/covariance matrix.\nWe implemented a data-and-model parallel version of EigenGame in JAX (Bradbury et al., 2018) where each v̂i is assigned to it’s own TPU (Jouppi et al., 2017). Each device keeps a local copy of the RESNET parameters and the IMAGENET datastream. 
Sampling a mini-batch (of size 128), computing the network activations and updating v̂i are all performed locally. The broadcast(v̂i) step is handled by the pmap and lax.all_gather functions. Computing the top-32 principal components takes approximately nine hours on 32 TPUv3s.\nFigure 5a shows the top principal components of the activations of the trained network organized by network topology (consisting of five residual blocks). Note that EigenGame is not applied block-wise, but on all 20M dimensions. We do not assume independence between blocks and the eigenvector has unit norm across all blocks. We observe that Block 1 (closest to input) of PC 1 has very small magnitude activations relative to the other PCs. This is because PC 1 should capture the variance which discriminates most between the classes in the dataset. Since Block 1 is mainly\nconcerned with learning low-level image filters, it stands to reason that although these are important for good performance, they do not necessarily extract abstract representations which are useful for classification. Conversely, we see that PC 1 has larger relative activations in the later blocks.\nWe visualize the average principal activation in Block 18 in Figure 5b. The higher PCs learn distinct filters (Gabor filters, Laplacian-of-Gaussian filters c.f. (Bell and Sejnowski, 1997)).\n7 CONCLUSION It seems easier to train a bi-directional LSTM with attention than to compute the SVD of a large matrix. –Chris Re NeurIPS 2017 Test-of-Time Award, Rahimi and Recht\n(Rahimi and Recht, 2017).\nIn this work we motivated PCA from the perspective of a multi-player game. This inspired a decentralized algorithm which enables large-scale principal components estimation. To demonstrate this we used EigenGame to analyze a large neural network through the lens of PCA. To our knowledge this is the first academic analysis of its type and scale (for reference, (Tang, 2019) compute the top-6 PCs of the d = 2300 outputs of VGG). EigenGame also opens a variety of research directions.\nScale. In experiments, we broadcast across all edges in Figure 1 every iteration. Introducing lag or broadcasting with dropout may improve efficiency. Can we further reduce our memory footprint by storing only scalars of the losses and avoiding congestion through online bandit or reinforcement learning techniques? Our decentralized algorithm may have implications for federated and privacy preserving learning as well (Heinze et al., 2016; Heinze-Deml et al., 2018; Bonawitz et al., 2019).\nGames. EigenGame has a unique Nash equilibrium due to the fixed DAG structure, but vectors are initialized randomly so v̂k may start closer to v1 than v̂1 does. Adapting the DAG could make sense, but might also introduce spurious fixed points or suboptimal Nash. Might replacing vectors with populations accelerate extraction of the top principal components?\nCore ML. EigenGame could be useful as a diagnostic or for accelerating training (Desjardins et al., 2015; Krummenacher et al., 2016); similarly, spectral normalization has shown to be a valuable tool for stabilizing GAN training (Miyato et al., 2018).\nLastly, GANs (Goodfellow et al., 2014) recently reformulated learning a generative model as a two-player zero-sum game. Here, we show how another fundamental unsupervised learning task can be formulated as a k-player game. While two-player, zero-sum games are well understood, research on k-player, general-sum games lies at the forefront in machine learning. 
We hope that marrying a fundamental, well-understood task in PCA with the relatively less understood domain of many-player games will help advance techniques on both ends." }, { "heading": "ACKNOWLEDGEMENTS", "text": "We are grateful to Trevor Cai for his help scaling the JAX implementation of EigenGame to handle the large IMAGENET experiment and to Daniele Calandriello for sharing his expert knowledge of related work and advice on revising parts of the manuscript." } ]
2021
null
SP:9feb34bfbe8bfbf1a99d90a74f36b2b0c7dc9985
[ "Real-world data contains noise in the annotated labels. To mitigate, the authors propose a supervised learning approach, Robust Temporal Ensembling (RTE). RTE combines 1) task loss correction, which is a generalized cross entropy loss, 2) different augmentations resulting from AugMix technique and the Jensen-Shannon divergence (JSD), 3) the ensemble consistency regularization and pseudo labeling.", "This submission deals with robust supervised learning in the presence of noisy labels. The label noise is modeled using a probabilistic (and conditionally independent) transition matrix that changes the label of one class to another one. In order to classify with noise, the network is trained with a mixture of three known losses including: 1) generalized cross entropy (GCE) rejects the outlier labels, 2) JSD divergence to assure the soft-max distribution matches the augmented data distributions, and 3) an ensemble consistency regularization (ECR) that penalizes the inconsistencies of the augmented data based on the mean teachers. Experiments with CIFAR-10, CIFAR-100, and ImageNet classification indicate substantial gains compared with state-of-the-art alternatives. " ]
Successful training of deep neural networks with noisy labels is an essential capability as most real-world datasets contain some amount of mislabeled data. Left unmitigated, label noise can sharply degrade typical supervised learning approaches. In this paper, we present robust temporal ensembling (RTE), a simple supervised learning approach which combines robust task loss, temporal pseudo-labeling, and an ensemble consistency regularization term to achieve noise-robust learning. We demonstrate that RTE achieves state-of-the-art performance across the CIFAR-10, CIFAR-100, and ImageNet datasets, while forgoing the recent trend of label filtering/fixing. In particular, RTE achieves 93.64% accuracy on CIFAR-10 and 66.43% accuracy on CIFAR-100 under 80% label corruption, and achieves 74.79% accuracy on ImageNet under 40% corruption. These are substantial gains over previous state-of-the-art accuracies of 86.6%, 60.2%, and 71.31%, respectively, achieved using three distinct methods. Finally, we show that RTE retains competitive corruption robustness to unforeseen input noise using CIFAR-10-C, obtaining a mean corruption error (mCE) of 13.50% even in the presence of an 80% noise ratio, versus 26.9% mCE with standard methods on clean data.
[]
[ { "authors": [ "Eric Arazo", "Diego Ortego", "Paul Albert", "Noel E O’Connor", "Kevin McGuinness" ], "title": "Unsupervised label noise modeling and loss correction", "venue": null, "year": 1904 }, { "authors": [ "Ben Athiwaratkun", "Marc Finzi", "Pavel Izmailov", "Andrew Gordon Wilson" ], "title": "There are many consistent explanations of unlabeled data: Why you should average", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "David Berthelot", "Nicholas Carlini", "Ian Goodfellow", "Nicolas Papernot", "Avital Oliver", "Colin Raffel" ], "title": "Mixmatch: A holistic approach to semi-supervised learning", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "David Berthelot", "Nicholas Carlini", "Ekin D. Cubuk", "Alex Kurakin", "Kihyuk Sohn", "Han Zhang", "Colin Raffel" ], "title": "Remixmatch: Semi-supervised learning with distribution matching and augmentation anchoring", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "G.E.P. Box", "D.R. Cox" ], "title": "An analysis of transformations", "venue": "Journal of the Royal Statistical Society. Series B (Methodological),", "year": 1964 }, { "authors": [ "Olivier Chapelle", "Alexander Zien" ], "title": "Semi-supervised classification by low density separation", "venue": "Proceedings of the Tenth International Workshop on Artificial Intelligence and Statistics,", "year": 2005 }, { "authors": [ "Olivier Chapelle", "Jason Weston", "Léon Bottou", "Vladimir Vapnik" ], "title": "Vicinal risk minimization", "venue": "Advances in Neural Information Processing Systems,", "year": 2001 }, { "authors": [ "Ekin D. Cubuk", "Barret Zoph", "Jonathon Shlens", "Quoc V. Le" ], "title": "Randaugment: Practical automated data augmentation with a reduced search space", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR Workshops", "year": 2020 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "IEEE conference on computer vision and pattern recognition,", "year": 2009 }, { "authors": [ "Li Fei-Fei", "Jia Deng" ], "title": "Imagenet: Where have we been? where are we going", "venue": null, "year": 2017 }, { "authors": [ "B. Frenay", "M. Verleysen" ], "title": "Classification in the presence of label noise: A survey", "venue": "IEEE Transactions on Neural Networks and Learning Systems,", "year": 2014 }, { "authors": [ "Jacob Goldberger", "Ehud Ben-Reuven" ], "title": "Training deep neural-networks using a noise adaptation", "venue": null, "year": 2016 }, { "authors": [ "Yves Grandvalet", "Y. 
Bengio" ], "title": "Semi-supervised learning by entropy minimization", "venue": "volume 17,", "year": 2004 }, { "authors": [ "Bo Han", "Quanming Yao", "Xingrui Yu", "Gang Niu", "Miao Xu", "Weihua Hu", "Ivor Tsang", "Masashi Sugiyama" ], "title": "Co-teaching: Robust training of deep neural networks with extremely noisy labels", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Dan Hendrycks", "Thomas Dietterich" ], "title": "Benchmarking neural network robustness to common corruptions and perturbations", "venue": "Proceedings of the International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Dan Hendrycks", "Mantas Mazeika", "Duncan Wilson", "Kevin Gimpel" ], "title": "Using trusted data to train deep networks on labels corrupted by severe noise", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Dan Hendrycks", "Norman Mu", "Ekin Dogus Cubuk", "Barret Zoph", "Justin Gilmer", "Balaji Lakshminarayanan" ], "title": "Augmix: A simple method to improve robustness and uncertainty under data shift", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Joel Hestness", "Sharan Narang", "Newsha Ardalani", "Gregory Diamos", "Heewoo Jun", "Hassan Kianinejad", "Md. Mostofa Ali Patwary", "Yang Yang", "Yanqi Zhou" ], "title": "Deep Learning Scaling is Predictable, Empirically", "venue": "arXiv e-prints, art", "year": 2017 }, { "authors": [ "Daniel Ho", "Eric Liang", "Ion Stoica", "Pieter Abbeel", "Xi Chen" ], "title": "Population Based Augmentation: Efficient Learning of Augmentation Policy Schedules", "venue": "arXiv e-prints, art", "year": 2019 }, { "authors": [ "Max Jaderberg", "Valentin Dalibard", "Simon Osindero", "Wojciech M. Czarnecki", "Jeff Donahue", "Ali Razavi", "Oriol Vinyals", "Tim Green", "Iain Dunning", "Karen Simonyan", "Chrisantha Fernando", "Koray Kavukcuoglu" ], "title": "Population Based Training of Neural Networks", "venue": "arXiv e-prints, art", "year": 2017 }, { "authors": [ "Simon Jenni", "Paolo Favaro" ], "title": "Deep bilevel learning", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Lu Jiang", "Zhengyuan Zhou", "Thomas Leung", "Li-Jia Li", "Li Fei-Fei" ], "title": "Mentornet: Learning datadriven curriculum for very deep neural networks on corrupted labels", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Alexander Kolesnikov", "Lucas Beyer", "Xiaohua Zhai", "Joan Puigcerver", "Jessica Yung", "Sylvain Gelly", "Neil Houlsby" ], "title": "Big Transfer (BiT): General Visual Representation Learning", "venue": "arXiv e-prints, art", "year": 2019 }, { "authors": [ "A. 
Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Alex Kurakin", "Chun-Liang Li", "Colin Raffel", "David Berthelot", "Ekin Dogus Cubuk", "Han Zhang", "Kihyuk Sohn", "Nicholas Carlini", "Zizhao Zhang" ], "title": "Fixmatch: Simplifying semi-supervised learning with consistency and confidence", "venue": "NeurIPS,", "year": 2020 }, { "authors": [ "Samuli Laine", "Timo Alia" ], "title": "Temporal ensembling for semi-supervised learning", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Dong-Hyun Lee" ], "title": "Pseudo-label : The simple and efficient semi-supervised learning method for deep neural networks", "venue": "ICML 2013 Workshop : Challenges in Representation Learning (WREPL),", "year": 2013 }, { "authors": [ "Kimin Lee", "Sukmin Yun", "Kibok Lee", "Honglak Lee", "Bo Li", "Jinwoo Shin" ], "title": "Robust inference via generative classifiers for handling noisy labels. volume", "venue": "Proceedings of Machine Learning Research,", "year": 2019 }, { "authors": [ "Ang Li", "Ola Spyra", "Sagi Perel", "Valentin Dalibard", "Max Jaderberg", "Chenjie Gu", "David Budden", "Tim Harley", "Pramod Gupta" ], "title": "A Generalized Framework for Population Based Training", "venue": "arXiv e-prints, art", "year": 2019 }, { "authors": [ "Junnan Li", "Yongkang Wong", "Qi Zhao", "Mohan S Kankanhalli" ], "title": "Learning to learn from noisy labeled data", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Junnan Li", "Richard Socher", "Steven C.H. Hoi" ], "title": "Dividemix: Learning with noisy labels as semisupervised learning", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Yuncheng Li", "Jianchao Yang", "Yale Song", "Liangliang Cao", "Jiebo Luo", "Li-Jia Li" ], "title": "Learning from noisy labels with distillation", "venue": "pp. 1928–1936,", "year": 2017 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "Sgdr: Stochastic gradient descent with warm restarts", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Dhruv Mahajan", "Ross Girshick", "Vignesh Ramanathan", "Kaiming He", "Manohar Paluri", "Yixuan Li", "Ashwin Bharambe", "Laurens van der Maaten" ], "title": "Exploring the Limits of Weakly Supervised Pretraining", "venue": "arXiv e-prints, art", "year": 2018 }, { "authors": [ "Aditya Krishna Menon", "Ankit Singh Rawat", "Sashank J. 
Reddi", "Sanjiv Kumar" ], "title": "Can gradient clipping mitigate label noise", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Nagarajan Natarajan", "Inderjit S Dhillon", "Pradeep K Ravikumar", "Ambuj Tewari" ], "title": "Learning with noisy labels", "venue": "Advances in Neural Information Processing Systems", "year": 2013 }, { "authors": [ "Duc Tam Nguyen", "Chaithanya Kumar Mummadi", "Thi Phuong Nhung Ngo", "Thi Hoai Phuong Nguyen", "Laura Beggel", "Thomas Brox" ], "title": "Self: Learning to filter noisy labels with selfensembling", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Giorgio Patrini", "Alessandro Rozza", "Aditya Menon", "Richard Nock", "Lizhen Qu" ], "title": "Making Deep Neural Networks Robust to Label Noise: a Loss Correction Approach", "venue": "arXiv e-prints, art", "year": 2016 }, { "authors": [ "Scott Reed", "Honglak Lee", "Dragomir Anguelov", "Christian Szegedy", "Dumitru Erhan", "Andrew Rabinovich" ], "title": "Training Deep Neural Networks on Noisy Labels with Bootstrapping", "venue": "arXiv e-prints, art", "year": 2014 }, { "authors": [ "Mengye Ren", "Wenyuan Zeng", "Bin Yang", "Raquel Urtasun" ], "title": "Learning to reweight examples for robust deep learning", "venue": "arXiv preprint arXiv:1803.09050,", "year": 2018 }, { "authors": [ "Mehdi Sajjadi", "Mehran Javanmardi", "Tolga Tasdizen" ], "title": "Regularization with stochastic transformations and perturbations for deep semi-supervised learning", "venue": "Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "Hwanjun Song", "Minseok Kim", "Jae-Gil Lee" ], "title": "SELFIE: Refurbishing unclean samples for robust deep learning", "venue": "Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Hwanjun Song", "Minseok Kim", "Dongmin Park", "Jae-Gil Lee" ], "title": "Learning from Noisy Labels with Deep Neural Networks: A Survey", "venue": "arXiv e-prints, art", "year": 2020 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: A simple way to prevent neural networks from overfitting", "venue": "Journal of Machine Learning Research,", "year": 1929 }, { "authors": [ "I. Sutskever", "J. Martens", "G. Dahl", "G. Hinton" ], "title": "On the importance of initialization and momentum in deep learning", "venue": "30th International Conference on Machine Learning,", "year": 2013 }, { "authors": [ "Antti Tarvainen", "Harri Valpola" ], "title": "Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Sunil Thulasidasan", "Tanmoy Bhattacharya", "Jeff Bilmes", "Gopinath Chennupati", "Jamal MohdYusof" ], "title": "Combating label noise in deep learning using abstention", "venue": null, "year": 1905 }, { "authors": [ "Yisen Wang", "Weiyang Liu", "Xingjun Ma", "James Bailey", "Hongyuan Zha", "Le Song", "Shu-Tao Xia" ], "title": "Iterative learning with open-set noisy labels", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Qizhe Xie", "Zihang Dai", "Eduard Hovy", "Minh-Thang Luong", "Quoc V. 
Le" ], "title": "Unsupervised Data Augmentation for Consistency Training", "venue": "arXiv e-prints, art", "year": 2019 }, { "authors": [ "Qizhe Xie", "Minh-Thang Luong", "Eduard Hovy", "Quoc V. Le" ], "title": "Self-training with Noisy Student improves ImageNet classification", "venue": "arXiv e-prints, art", "year": 2019 }, { "authors": [ "Kun Yi", "Jianxin Wu" ], "title": "Probabilistic end-to-end noise correction for learning with noisy labels", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide Residual Networks", "venue": "arXiv e-prints, art", "year": 2016 }, { "authors": [ "Hongyi Zhang", "Moustapha Cisse", "Yann N. Dauphin", "David Lopez-Paz" ], "title": "mixup: Beyond empirical risk minimization", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Zhilu Zhang", "Mert Sabuncu" ], "title": "Generalized cross entropy loss for training deep neural networks with noisy labels", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Zizhao Zhang", "Han Zhang", "Sercan O. Arik", "Honglak Lee", "Tomas Pfister" ], "title": "Distilling effective supervision from severe label noise", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "S. Zheng", "Y. Song", "T. Leung", "I. Goodfellow" ], "title": "Improving the robustness of deep neural networks via stability training", "venue": "In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Goldberger", "Ben-Reuven" ], "title": "2016) model the correct label as latent and having gone through a parameterized corruption process. Expectation maximization is used to estimate both the parameters of the corruption process and the underlying latent label. Jiang et al. (2018) introduce the idea of learning a curriculum-learning strategy with a mentor model to train a student model to be robust to label noise. Patrini et al. (2016) estimate the noise", "venue": null, "year": 2016 }, { "authors": [ "Wang" ], "title": "under the assumption of feature independent noise) and show that, given the true noise transition matrix, optimizing for the true underlying labels", "venue": null, "year": 2018 }, { "authors": [ "Ren" ], "title": "2018) use a meta-learning approach to dynamically weight examples to minimize loss using a set of validation examples with clean labels, however they also report a competitive baseline using a randomized weighting scheme which requires no clean validation", "venue": null, "year": 2018 }, { "authors": [ "Zhang" ], "title": "distinct desirable properties: cross-entropy exhibits better gradient properties for learning, while mean absolute error exhibits better theoretically-grounded robustness to noisy labels. Han et al. (2018) leverage co-teaching such that two networks are trained together, in which each network 1. identifies high-confidence examples, 2. passes this information in a message to its peer, and 3. leverages the incoming message", "venue": null, "year": 2018 }, { "authors": [ "Li" ], "title": "Under review as a conference paper at ICLR 2021 while a ‘compatibility loss’ is introduced to ensure that the estimated label distribution stays close to the noisy labels provided with the training", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep neural networks have enjoyed considerable success across a variety of domains, and in particular computer vision, where the common theme is that more labeled training data yields improved model performance (Hestness et al., 2017; Mahajan et al., 2018; Xie et al., 2019b; Kolesnikov et al., 2019). However, performance depends on the quality of the training data, which is expensive to collect and inevitably imperfect. For example, ImageNet (Deng et al., 2009) is one of the most widely-used datasets in the field of deep learning and despite over 2 years of labor from more than 49,000 human annotators across 167 countries, it still contains erroneous and ambiguous labels (FeiFei & Deng, 2017; Karpathy, 2014). It is therefore essential that learning algorithms in production workflows leverage noise robust methods.\nNoise robust learning has a long history and takes many forms (Natarajan et al., 2013; Frenay & Verleysen, 2014; Song et al., 2020). Common strategies include loss correction and reweighting (Patrini et al., 2016; Zhang & Sabuncu, 2018; Menon et al., 2020), label refurbishment (Reed et al., 2014; Song et al., 2019), abstention (Thulasidasan et al., 2019), and relying on carefully constructed trusted subsets of human-verified labeled data (Li et al., 2017; Hendrycks et al., 2018; Zhang et al., 2020). Additionally, recent methods such as SELF (Nguyen et al., 2020) and DivideMix (Li et al., 2020) convert the problem of learning with noise into a semi-supervised learning approach by splitting the corrupted training set into clean labeled data and noisy unlabeled data at which point semisupervised learning methods such as Mean Teacher (Tarvainen & Valpola, 2017) and MixMatch (Berthelot et al., 2019) can be applied directly. In essence, these methods effectively discard a majority of the label information so as to side-step having to learning with noise at all. The problem here is that noisy label filtering tactics are imperfect resulting in corrupted data in the small labeled partition and valuable clean samples lost to the large pool of unlabeled data. Moreover, caution is needed when applying semi-supervised methods where the labeled data is not sampled i.i.d. from the pool of unlabeled data (Oliver et al.). Indeed, filtering tactics can be biased and irregular, driven by specification error and the underlying noise process of the label corruption. Recognizing the success of semi-supervised approaches, we ask: can we leverage the underlying mechanisms of semi-supervised learning such as entropy regularization for learning with noise without discarding our most valuable asset, the labels?" }, { "heading": "2 ROBUST TEMPORAL ENSEMBLING", "text": "" }, { "heading": "2.1 PRELIMINARIES", "text": "Adopting the notation of Zhang & Sabuncu (2018), we consider the problem of classification where X ⊂ Rd is the feature space and Y = {1, . . . , c} is the label space where the classifier function is a deep neural network with a softmax output layer that maps input features to distributions over labels f : X → Rc. The dataset of training examples containing in-sample noise is defined as D = {(xi, ỹi)}ni=1 where (xi, ỹi) ∈ (X × Y) and ỹi is the noisy version of the true label yi such that p(ỹi = k|yi = j, xi) ≡ ηijk. We do not consider open-set noise (Wang et al., 2018), in which there is a particular type of noise that occurs on inputs, x̃, rather than labels. 
Following most prior work, we make the simplifying assumption that the noise is conditionally independent of the input, xi, given the true labels. In this setting, we can write ηijk = p(ỹi = k|yi = j) ≡ ηjk which is, in general, considered to be class-dependent noise1,2.

To aid in a simple and precise corruption procedure, we now depart from traditional notation and further decompose ηjk as pj · cjk, where pj ∈ [0, 1] is the probability of corruption of the j-th class and cjk ∈ [0, 1] is the relative probability that corrupted samples of class j are labeled as class k, with cjk ≥ 0 for k ≠ j, cjj = 0, and ∑k cjk = 1. A noisy dataset with m classes can then be described by transition probabilities specified by

F = diag(P) · C + diag(1 − P) · I    (1)

where C ∈ Rm×m defines the system confusion or noise structure, P ∈ Rm defines the noise intensity or ratio for each class, and I is the identity matrix. When cjk = ckj the noise is said to be symmetric and is considered asymmetric otherwise. If the ratio of noise is the same for all classes, then pj = p and the dataset is said to exhibit uniform noise. For the case of uniform noise, equation (1) interestingly takes the familiar form of the Google matrix equation as

Fp = p · C + (1 − p) · I    (2)

Note that, by this definition, ηjj = p · cjj = 0, which prohibits ỹi = yi. This ensures a true effective noise ratio of p. For example, suppose there are m = 10 classes and we wish to corrupt labels with 80% probability. Then if corrupted labels are sampled from Y rather than Y \ {y}, (1/10) · 0.8 = 8% of the corrupted samples will not actually be corrupted, leading to a true corruption rate of 72%. Therefore, despite prescribing p = 0.8, the true effective noise ratio would be 0.72, which in turn yields a 0.08/(1 − 0.8) = 40% increase in clean labels, and this is indeed the case in many studies (Zhang & Sabuncu, 2018; Nguyen et al., 2020; Li et al., 2020; Zhang et al., 2020)." }, { "heading": "2.2 METHODS", "text": "At a very high level, RTE is the combination of noise-robust task loss, augmentation, and pseudo-labeling for consistency regularization. We unify generalized cross entropy (Zhang & Sabuncu, 2018), the AugMix stochastic augmentation strategy (Hendrycks et al., 2020), an exponential moving average of model weights for generating pseudo-labels (Tarvainen & Valpola, 2017), and an augmentation anchoring-like approach (Berthelot et al., 2020) to form a robust approach for learning with noisy labels." }, { "heading": "2.2.1 NOISE-ROBUST TASK LOSS", "text": "Generalized cross entropy (GCE) (Zhang & Sabuncu, 2018) is a theoretically grounded noise-robust loss function that can be seen as a generalization of mean absolute error (MAE) and categorical cross entropy (CCE). The main idea is that CCE learns quickly, but more emphasis is put on difficult samples, which makes it prone to overfitting noisy labels, while MAE treats all samples equally, providing noise-robustness but learning slowly. To exploit the benefits of both MAE and CCE, a negative Box-Cox transformation (Box & Cox, 1964) is used as the loss function

Lq(f(xi), yi = j) = (1 − fj(xi)^q) / q    (3)

1See Lee et al. (2019) for treatment of conditionally dependent semantic noise such that ηijk ≠ ηjk. 2Note that Patrini et al. (2016) define the noise transition matrix T such that Tjk ≡ ηjk.
}, { "heading": "2.2.2 ENSEMBLE CONSISTENCY REGULARIZATION", "text": "Consistency regularization works under the assumption that a model should output similar predictions given augmented versions of the same input. This regularization strategy is a common component of semi-supervised learning algorithms with the general form of ‖pθ(y|xaug1)−pθ(y|xaug2)‖ where pθ(y|x) is the predicted class distribution produced by the model having parameters θ for input x (Zheng et al., 2016; Sajjadi et al., 2016). We build upon numerous variations from semisupervised learning (Laine & Alia, 2017; Tarvainen & Valpola, 2017; Berthelot et al., 2019; 2020) and leverage an ensemble consistency regularization (ECR) strategy as\nECR = 1 |Y|N∗ N∗∑ i=1 ‖pθ′(y|x)− pθ(y|A(x))‖ (4)\nwhere x is the training example, A is stochastic augmentation function reevaluated for each term in the summation, θ′t = αθ ′ t−1 + (1 − α)θt is a temporal moving average of model weights used to generate pseudo-label targets, and inputs are pre-processed with standard random horizontal flip and crop. In practice, this consists of initializing a copy of the initial model and maintaining an exponential moving average as training progresses. Some methods directly average multiple label predictions together at each optimization step to form a single pseudo-label target (Berthelot et al., 2019; Li et al., 2020) but we find pseudo-label target distributions generated by θ′ to be better suited for the learning with noise problem due to the intrinsic ensemble nature of the weight averaging process over many optimization steps (Tarvainen & Valpola, 2017). In semi-supervised learning techniques, it is common to leverage a large batch-size of unlabeled data for consistency regularization. However, we found that modulating N∗, rather than the batch size of the consistency term, yields a monotonic increase in model performance consistent with related works (Berthelot et al., 2020). Moreover, in semi-supervised learning, different batches are used for between supervised and unsupervised loss terms but we find (see section 4.3) that for the case of learning with noise, batches synchronized with GCE task loss term yields superior performance." }, { "heading": "2.2.3 AUGMENTATION", "text": "AugMix (Hendrycks et al., 2020) is a data augmentation technique which utilizes stochasticity, diverse augmentations, a Jensen-Shannon divergence consistency loss, and a formulation to mix multiple augmented inputs. Other augmentation strategies such as RandAugment (Cubuk et al., 2020), augmentations are applied sequentially with fixed intensity which can degrade input quickly. In AugMix, to mitigate input degradation but retain augmentation diversity, several stochastically sampled augmentation chains are layered together in a convex combination to generate highly diverse transformations. These mixing coefficients are randomly sampled from a Dirichlet distribution with shared concentration parameters, and the resulting augmented version of the input is combined with the original input through a second random convex combination sampled from a beta distribution, again with shared parameters." }, { "heading": "2.2.4 JENSEN-SHANNON DIVERGENCE", "text": "The Jensen-Shannon consistency loss is used to enforce a flat response of the classifier by incentivizing the model to be stable, consistent, and insensitive across a diverse range of inputs (Zheng et al., 2016). 
The Jensen-Shannon divergence (JSD) is minimized across distributions porig, paug1, and paug2 of the original sample xorig and its augmented variants xaug1 and xaug2 which can be understood to measure the average information that the sample reveals about the identity of its originating distribution (Hendrycks et al., 2020). This JSD term is computed withM = (porig +paug1 +paug2)/3 and is then\nJSD = 1\n3\n( KL(porig ‖M) + KL(paug1 ‖M) + KL(paug2 ‖M) ) (5)\nwhere KL(p ‖ q) is Kullback–Leibler divergence from q to p. In this way, the JSD term improves the stability of training in the presence of noisy labels and heavy data augmentation with a modest contribution to final classifier test accuracy as shown in Table 5." }, { "heading": "2.3 PUTTING IT ALL TOGETHER", "text": "We unify the various components defined in sections 2.2 together under a single parsimonious loss function at training defined as\nLRTE = Lq + λJSD · JSD+λECR · ECR (6)\nwhich is essentially composed of robust task loss and consistency regularization. Here the JSD term is synchronized with ECR by computing the clean distribution using pθ′ . Final performance is reported using θ′.\nTo understand the synergy of GCE and ECR it is helpful to first point out that because GCE leverages a Box-Cox power transform to stabilize loss variance, it can be shown in this case to be a form of maximum likelihood estimation (Ferrari & Yang, 2010). The ECR term itself is based on pseudolabeling and pseudo-labeling can be shown to be form of entropy regularization (Grandvalet & Bengio, 2004) which in the framework of maximum a posterior (MAP) estimation encourages lowdensity separation between classes by minimizing the conditional entropy of the class probabilities of the noisy data (Lee, 2013). That is, by minimizing entropy, the overlap of class probability distribution can be reduced. The implicit assumption here is that classes are, in fact, well separated (Chapelle & Zien, 2005). Moreover, MAP estimation itself acts as a regularization of MLE by incorporating a priori knowledge of related training examples in order to solve the ill-posed noisy learning objective and further prevent overfitting. Indeed, entropy regularization is favorable in situations for which the joint distribution, p(x, y), is mis-specified (Grandvalet & Bengio, 2004) which further underpins the motivation of pseudo-labeling as an apt basis for regularization of the GCE loss.\nPseudo-labeling and data augmentation often go hand-in-hand. Data augmentation serves dual purpose as a generic regularizer to mitigate over-fitting of noisy labels (Zhang et al., 2018) as well as provides additional information about the vicinity or neighborhood of the training examples which is formalized by Vicinal Risk Minimization (Chapelle et al., 2001). These augmented examples can be seen as drawn from a vicinity distribution of the training examples to enlarge support of the training distribution such that samples in the vicinity share the same class but does not model the relation across examples of different classes (Zhang et al., 2018). Therefore, data augmentations approximate samples of nearby elements of the data manifold where the difference, ξ(x) = A(x)−x, approximates elements of its tangent space (Athiwaratkun et al., 2019). In this way, ECR can loosely be seen as generating a set of stochastic differential constraints at each optimization step of the classification task loss. 
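To summarize how the terms of Equation (6) fit together, here is a minimal numpy sketch operating directly on predicted class distributions. All shapes, λ weights, and the stand-in "augmented" distributions are illustrative placeholders, not the training implementation, and the per-example L2 norm with batch averaging is one reading of the norm in Equation (4).

```python
import numpy as np

def gce(p, y, q=0.7):
    """Generalized cross entropy, Eq. (3): p is an (n, c) array of softmax outputs."""
    fy = p[np.arange(len(y)), y]
    return np.mean((1.0 - fy ** q) / q)

def kl(p, q_):
    return np.sum(p * (np.log(p + 1e-12) - np.log(q_ + 1e-12)), axis=-1)

def jsd(p_orig, p_aug1, p_aug2):
    """Jensen-Shannon consistency, Eq. (5)."""
    m = (p_orig + p_aug1 + p_aug2) / 3.0
    return np.mean((kl(p_orig, m) + kl(p_aug1, m) + kl(p_aug2, m)) / 3.0)

def ecr(p_teacher, p_student_augs):
    """Ensemble consistency, Eq. (4): EMA-teacher targets vs N* augmented views."""
    c = p_teacher.shape[1]
    return np.mean([np.linalg.norm(p_teacher - p_a, axis=1).mean() / c
                    for p_a in p_student_augs])

# Toy batch: 4 examples, 3 classes; random distributions stand in for model outputs.
rng = np.random.default_rng(0)
softmax = lambda z: np.exp(z) / np.exp(z).sum(-1, keepdims=True)
p_student = softmax(rng.normal(size=(4, 3)))                  # model on standard inputs
p_teacher = softmax(rng.normal(size=(4, 3)))                  # EMA-weight model (theta')
p_aug = [softmax(rng.normal(size=(4, 3))) for _ in range(3)]  # N* = 3 augmented views
y_noisy = np.array([0, 2, 1, 0])

# Clean distribution for JSD comes from the EMA model, as described in Sec. 2.3.
loss = gce(p_student, y_noisy) \
       + 1.0 * jsd(p_teacher, p_aug[0], p_aug[1]) \
       + 1.0 * ecr(p_teacher, p_aug)                          # lambda weights illustrative
print(loss)
```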
While stronger augmentation can enrich the vicinity distribution, augmentation methods such as MixUp (Zhang et al., 2018) and RandAugment (Cubuk et al., 2020) can overly degrade training examples and drift off the data manifold (Hendrycks et al., 2020). When learning with noise, it is therefore essential to leverage an augmentation process rich in variety but which also preserve the image semantics and local statistics such as AugMix (Hendrycks et al., 2020) so as to minimize the additional strain on an already ill-posed noisy learning objective. Consistent with this understanding, although RandAugment has been successfully leveraged in semi-supervised learning (Berthelot et al., 2020; Kurakin et al., 2020; Xie et al., 2019a), our experiments with RandAugment proved unsuccessful for extreme levels of label noise. Moreover, AugMix augmentation used together with the Jensen-Shannon consistency loss endows trained models with far superior model robustness to corrupted data in deployment as shown in Table 7." }, { "heading": "3 RELATED WORK", "text": "Some methods for learning with noise attempt to improve noisy learning performance head-on by leveraging augmentation as a strong regularizer to mitigate memorization of corrupted labels (Zhang et al., 2018) while others attempt to refurbish corrupted labels to control the accumulation of noise from mislabeled data (Song et al., 2019). A recent theme in learning with noisy labels has been to transform the learning with noise problem into a semi-supervised one by removing the labels of training data determined to be corrupted to form the requisite dichotomy of clean labeled data and a pool of unlabeled data (Nguyen et al., 2020; Li et al., 2020); then directly applying semisupervised approaches such as MixMatch (Berthelot et al., 2019) and MeanTeacher (Tarvainen & Valpola, 2017). Other methods go so far as to require trusted human verified data and combine re-weighting with re-labeling into a meta optimization approach (Zhang et al., 2020).\nSemi-supervised learning algorithms have advanced considerably in recent years, making heavy use of both data augmentation and consistency regularization. MixMatch (Berthelot et al., 2019) proposed a low-entropy label-guessing approach for augmented unlabeled data and mixes labeled and unlabeled data using MixUp. In MixMatch, pseudo-label targets are formed by averaging label distributions produce by the model on samples drawn from the vicinity distribution ( 1K ∑ K pθ(y|A(x))). However, this averaging requires artificial sharpening to generate low-entropy pseudo-labels. From the MAP estimation perspective, sharpening does not add auxiliary a priori knowledge for the optimization step but rather prescribes a desirable property of the model generated label distribution. Indeed, our experiments with the use of artificial label sharpening in RTE resulted in failed training at high levels of label noise and subsequent related work recognized that stronger augmentations can result in disparate predictions so their average may not generate meaningful targets (Berthelot et al., 2020). ReMixMatch (Berthelot et al., 2020) introduced augmentation anchoring which aims to minimize the entropy between label distributions produced by multiple weak and strong data augmentations of unlabeled data using a control theory augmentation approach. 
While pseudo-label guessing and augmentation anchoring motivate the utility of multiple augmentations of the same data, our proposed ECR for learning with noise differs in the following important ways: ECR does not use distribution alignment for ”fairness”, distribution averaging, or label-sharpening; ECR forms pseudo-label targets using an exponential average of model weights and is batch-synchronized with the GCE task loss term. Finally, the recent work, FixMatch (Kurakin et al., 2020), proposes a simplified semi-supervised approach where the consistency regularization term uses hard pseudo-labeling for low-entropy targets together with a filtering step to remove lowconfidence unlabeled examples but does not leverage multiple strong augmentations." }, { "heading": "4 EXPERIMENTS", "text": "In this section we analyze the performance of RTE against various uniform noise configurations for both symmetric and asymmetric settings. For asymmetric noise, we test both the traditional configuration (Patrini et al., 2016), typically reported by related works, and an additional configuration defined by (7) which is in the spirit of (Lee et al., 2019), where we define the asymmetric noise structure using the confusion matrix of a trained shallow network. In all of these experiments, RTE outperforms existing methods. Finally, we perform additional ablation studies to better understand the contribution and synergy of the terms in equation (6) as well as to probe the efficacy of ECR.\nIn our experiments we consider the standard CIFAR-10, CIFAR-100, and ImageNet datasets (Krizhevsky, 2009; Deng et al., 2009). CIFAR-10 and CIFAR-100 each contain 50,000 training and 10,000 test images of 10 and 100 classes, respectively; and ImageNet contains approximately 1,000,000 training images and 50,000 validation images of 1000 classes. Additionally, we test networks trained with noisy labels against unforeseen input corruptions using CIFAR-10-C (Hendrycks & Dietterich, 2019) which was constructed by corrupting the original CIFAR-10 test set with a total of 15 noise, blur, weather, and digital corruptions under different severity levels and intensities. Classifier performance is averaged across these corruption types and severity levels to yield mean corruption error (mCE). Since CIFAR-10-C is used to measure network behavior under data shift, these 15 corruptions are not included during the training procedure. Here, CIFAR-10-C helps to establish a rigorous benchmark for image classifier robustness to better understand how models trained with noisy data might perform in safety-critical applications.\nTo mitigate the sensitivity of experimental results to empirical, and perhaps arbitrary, choices of hyperparameters, we present additional results that leverage Population Based Training (PBT) (Jaderberg et al., 2017; Li et al., 2019) which is a simple asynchronous optimisation algorithm that jointly optimize a population of models and their hyperparameters. In particular, PBT discovers a per-epoch schedule of hyperparameter settings rather than a static fixed configuration used over the entirety of training. These PBT schedules, for example, allow task loss Lq to vary between CE and MAE loss dynamically during training and similarly the number of ECR terms N∗ can be modulated to realize a form of curriculum learning. 
Moreover, for our purposes, PBT schedules also provide a form of quasi-ablation study, as optimization of the task-loss parameter q, the number of ECR terms N∗, and the ECR weight λECR allows for the realization of a simplified loss which forgoes these components if determined maximally beneficial. We find, as in other studies, that this joint optimization of hyperparameter schedules typically results in faster wall-clock convergence and higher final performance (Ho et al., 2019; Li et al., 2019)." }, { "heading": "4.1 UNIFORM SYMMETRIC NOISE", "text": "Training Setup. Please see Section C in the Appendix.

Baselines. To best interpret the effectiveness of RTE, we compare our results to many techniques for learning with noise (Table 1). A description of each baseline method can be found in Appendix B. Only two of these references provide ImageNet results trained with label noise (Table 2).

Results. Experimental results with uniform symmetric noise for both CIFAR-10 and CIFAR-100 are presented in Table 1 with comparisons to related work, including current state-of-the-art methods. RTE establishes new state-of-the-art performance at all noise levels and exhibits especially large performance gaps at high noise levels. At 80% noise, previous state-of-the-art was achieved by (Arazo et al., 2019) in the case of CIFAR-10 and by (Li et al., 2020) in the case of CIFAR-100. RTE improves performance over these methods by 7.0 absolute percentage points and 6.2 absolute percentage points, respectively. Of all of these works, only two report results on ImageNet training with noisy labels. These are included alongside RTE results in Table 2, where once again we see that RTE performs favorably, improving state-of-the-art performance in terms of both top-1 accuracy and top-5 accuracy. As in (Arazo et al., 2019) and (Li et al., 2020), we also include loss distributions over clean and corrupt labels in Figure 1. Here we can see that RTE prevents rote memorization of noisy labels. Moreover, Table 7 shows that RTE retains strong corruption robustness with an mCE of 12.05% and 13.50% at noise ratios of 40% and 80%, respectively, as measured using CIFAR-10-C. Put in context, experiments summarized in Table 7 indicate that even with extreme levels of mislabeled training data, RTE-trained models have lower corruption error than models trained using standard methods on clean data." }, { "heading": "4.2 UNIFORM ASYMMETRIC NOISE", "text": "Training Setup. For consistency, uniform asymmetric noise experiments use the same hyperparameter configurations outlined for uniform symmetric noise. Here we test RTE performance using both the traditional asymmetric noise configuration (Patrini et al., 2016) typically reported by related works, defined by Equation 8 in Section G of the Appendix, as well as an additional configuration in the spirit of Lee et al. (2019), where we define the asymmetric noise structure using the confusion matrix of a trained shallow network, defined by Equation 7 in Section D of the Appendix.

The asymmetric noise defined by Patrini et al. (2016) in equation (8) does not corrupt all classes but rather attempts to capture a noise process whereby labelers confuse specific pairs of classes, which some argue is more realistic in practice (Han et al., 2018; Ren et al., 2018). We additionally consider a rich noise structure by training a shallow classifier (ResNet-10) on clean CIFAR-10 and using the resulting confusion matrix of this model to define the noise structure in equation (7).
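Concretely, structured label noise of this kind can be injected as in the following sketch (our illustration; we assume each row of C is renormalized to a probability vector with zero diagonal, and the function name is hypothetical):

```python
import numpy as np

def corrupt_labels(labels, C, noise_ratio, seed=0):
    """Flip a fraction `noise_ratio` of labels using confusion structure C,
    where row i of C gives the distribution over corrupted labels for
    true class i (zero diagonal)."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels).copy()
    flip_idx = rng.choice(len(labels), size=int(noise_ratio * len(labels)),
                          replace=False)
    for i in flip_idx:
        row = np.asarray(C[labels[i]], dtype=float)
        row = row / row.sum()  # renormalize against rounding in published C
        labels[i] = rng.choice(len(row), p=row)
    return labels
```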
For example, this asymmetric noise process readily captures the phenomenon that objects on blue backgrounds are often confused (e.g. birds, ships, and airplanes) and its natural asymmetry where p(ỹi = SHIP|yi = AIRPLANE) = 0.2772 while p(ỹi = AIRPLANE|yi = SHIP) = 0.4603 (locations [1, 9] and [9, 1] in Eq. 7). Dataset statistics are provided for an instance of CIFAR-10 with asymmetric label noise prescribed according to equation (7) with a uniform noise ratio of 60% in Table 9 of Appendix G.

Baselines. In the case of asymmetric noise as defined in Patrini et al. (2016), by equation (8), we compare the performance of RTE against existing work. A brief description of each baseline method can be found in Appendix B. In the case of asymmetric noise structure as defined in equation (7), to our knowledge, prior work does not exist, and we report RTE performance at varied noise levels.

Results. The results for asymmetric noise as presented in related works, defined in Patrini et al. (2016) by equation (8) with a uniform noise ratio of 40%, are shown in Table 3 alongside the performance of related methods. Again, RTE improves the state-of-the-art performance in this category, with a 1.1 absolute percentage point increase over (Li et al., 2020).

Test accuracies for different levels of asymmetric noise using C defined by (7) are shown in Table 4. Even with a 60% noise ratio, RTE achieves 93.99% test accuracy. The first significant decline in accuracy occurs around a 65% asymmetric noise ratio, when the majority of labels in a class are corrupted labels from another class. That is, for Fp=0.65 with C defined by (7), there are more AUTOMOBILE images labeled as TRUCK than actual TRUCK images labeled as TRUCK." }, { "heading": "4.3 ABLATION STUDIES", "text": "We perform various ablation studies to better understand the contribution of each term in equation (6), probe the efficacy of ECR, and compare with alternative regularization approaches. Our ablation results are presented in Table 5. These ablation studies use the training configurations defined in section 4.1 unless otherwise stated. We perform a component analysis where we remove one component at a time from equation (6) to better understand the performance contributions of each term. Removal of any term degrades performance. We also test alternative consistency regularization approaches using label guessing as proposed in MixMatch (Berthelot et al., 2019) and augmentation anchoring from ReMixMatch (Berthelot et al., 2020), which both underperform by significant margins compared to ECR. Moreover, our results show significant benefits in the use of EMA, whereas performance degrades with the augmentation anchoring approach, consistent with prior work (Berthelot et al., 2019). Additionally, we test if label sharpening could benefit ECR, but we find that the artificial sharpening process amplifies noisy pseudo-labels early in training and learning collapses for high noise ratios. Similarly, we find that the strong linear chains of augmentations performed by RandAugment lead to training instabilities. Figure 2 summarizes the comparison of ECR to a traditional semi-supervised approach where a larger batch size is used for unsupervised regularization terms. This comparison indicates improved noisy learning performance with batch synchronization and repeated augmentation over larger batch sizes with single augmentations, validating the use of ECR for learning with noise."
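To make the ECR mechanics probed in these ablations concrete, the following is a hedged sketch of an EMA-teacher consistency term over N∗ augmented views; this is our reading of the textual description (the full loss, equation (6), is defined elsewhere in the paper), and all names are illustrative:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, alpha=0.99):
    """Exponential moving average of student weights into the teacher.
    Typical setup: teacher = copy.deepcopy(student) before training."""
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(alpha).add_(s, alpha=1.0 - alpha)

def ecr_term(student, teacher, x, augment, n_star=10):
    """Consistency between student predictions on n_star augmented views
    and the EMA teacher's soft pseudo-label (no artificial sharpening)."""
    with torch.no_grad():
        target = F.softmax(teacher(x), dim=1)
    loss = 0.0
    for _ in range(n_star):
        log_p = F.log_softmax(student(augment(x)), dim=1)
        loss = loss + F.kl_div(log_p, target, reduction='batchmean')
    return loss / n_star
```

Because the same mini-batch feeds both the task loss and this consistency term, the two signals stay batch-synchronized, matching the description in Section 3.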
}, { "heading": "5 CONCLUSION", "text": "We introduced robust temporal ensembling (RTE), which unifies semi-supervised consistency regularization and a noise-robust task loss as an effective approach for learning with noisy labels. Rather than discarding noisy labels and applying semi-supervised methods, we successfully demonstrated a new approach for learning with noise which leverages all the data together without the need to filter, refurbish, or abstain from noisy training examples. Through various experiments, we showed that RTE performs quite well in practice, advancing state-of-the-art performance across the CIFAR-10, CIFAR-100, and ImageNet datasets by 7.0, 6.2, and 3.5 absolute percentage points, respectively. Moreover, experiments summarized in Tables 4 and 7 show that despite significant label noise, RTE-trained models retain lower corruption error on unforeseen data shifts than models trained using standard methods on clean data. Finally, the results of numerous ablations summarized in section 4.3 validate the composition of loss terms and their combined efficacy over alternative methods. In future work, we are interested in the application of RTE for different modalities such as natural language processing and speech, where label noise can be more pervasive and subjective." }, { "heading": "A APPENDIX: HYPERPARAMETERS", "text": "" }, { "heading": "B APPENDIX: BASELINES FOR TABLE 1", "text": "In this section we provide a brief summary of the baseline methods in the main text:

Reed et al. (2014) introduce two methods for achieving prediction consistency, one based on reconstruction and one based on bootstrapping, and demonstrate empirically that bootstrapping leads to better robustness to label noise. Goldberger & Ben-Reuven (2016) model the correct label as latent, having gone through a parameterized corruption process. Expectation maximization is used to estimate both the parameters of the corruption process and the underlying latent label. Jiang et al. (2018) introduce the idea of learning a curriculum-learning strategy with a mentor model to train a student model to be robust to label noise. Patrini et al. (2016) estimate the noise transition matrix (under the assumption of feature-independent noise) and show that, given the true noise transition matrix, optimizing for the true underlying labels is possible. Wang et al. (2018) introduce an iterative scheme that combines 1. outlier detection in feature space (acting as a proxy to noisy-label detection), 2. a Siamese network (taking either a (clean, clean) pair or a (clean, noisy) pair) to encourage separation, and 3. sample reweighting based on clean vs. noisy confidence levels in order to effectively filter out noisy labels during training. They focus primarily on open-set noise, but they also report performance of their system when used in the closed-set setting. Ren et al. (2018) use a meta-learning approach to dynamically weight examples to minimize loss using a set of validation examples with clean labels; however, they also report a competitive baseline using a randomized weighting scheme which requires no clean validation set. Jenni & Favaro (2018) formulate example weighting as a bilevel-optimization problem, in which performance on a validation set is maximized with respect to example weights, subject to the constraint that the model maximizes performance on the training set; and they argue that this approach should lead to better generalization when label noise is present.
Zhang & Sabuncu (2018) introduce a loss function that is a generalization of cross-entropy loss and mean absolute error, which is beneficial since each exhibits distinct desirable properties: cross-entropy exhibits better gradient properties for learning, while mean absolute error exhibits better theoretically-grounded robustness to noisy labels. Han et al. (2018) leverage co-teaching such that two networks are trained together, in which each network 1. identifies high-confidence examples, 2. passes this information in a message to its peer, and 3. leverages the incoming message to optimize using the examples selected by its peer. Zhang et al. (2018) train using convex combinations of both input images and their labels, arguing that this approach makes it more difficult for the network to memorize corrupt labels. Song et al. (2019) measure label consistency throughout training in order to determine which samples are ‘refurbishable’, and these samples are then ‘corrected’ by replacing their ground-truth label with the most frequently-predicted label. Lee et al. (2019) do not modify the training process of the underlying neural network but instead form a generative model over the final (pre-softmax) features of the neural network, and this generative distribution along with Bayes' rule is then used to estimate a more robust conditional distribution over the label. Arazo et al. (2019) fit a beta mixture model over the loss using two mixture components, representing clean and noisy labels, and each sample’s underlying component probabilities are used to weight each sample’s contribution during training. They combine this approach with MixUp (Zhang et al., 2018). Yi & Wu (2019) maintain a direct estimate of a distribution over true underlying labels during training, and train the parameters of a neural network by minimizing reverse KL divergence (from the model’s predicted distribution to these true-label estimates). Meanwhile, a ‘compatibility loss’ is introduced to ensure that the estimated label distribution stays close to the noisy labels provided with the training set. Li et al. (2019) subject a student model to artificial label noise during training while maintaining a teacher model that is not subjected to such noise. Here, alternating gradient steps are taken to 1. minimize classification loss and 2. minimize the KL divergence from the student’s predicted distributions to the teacher’s predicted distributions. Nguyen et al. (2020) use discrepancy between an ensemble-based teacher model and labels to identify and filter out incorrect labels, and continue to leverage these samples without the labels. This is done in a semi-supervised fashion by maintaining consistency between the teacher’s predictions and the student’s predictions. Li et al. (2020) maintain two networks, each of which models its loss using a mixture of Gaussians with two components (clean and noisy). Each network estimates which samples belong to each component, and the other network then uses the clean samples in a supervised manner along with the noisy samples in a semi-supervised manner."
}, { "heading": "C APPENDIX: UNIFORM SYMMETRIC NOISE EXPERIMENTAL SETUP", "text": "For CIFAR-10, we leverage equation (2) with C^10_{i≠j} = 1/9 and we employ a 28-layer residual network (He et al., 2016) with a widening factor of 6 (WRN 28x6) (Zagoruyko & Komodakis, 2016), a dropout rate of 0.01 (Srivastava et al., 2014), α = 0.99, AugMix with a mixture width and severity of 3, a batch size of 128, and 300 epochs of training. We optimize using SGD with Nesterov momentum of 0.9 (Sutskever et al., 2013), a weight decay of 0.001, and a cosine learning rate (Loshchilov & Hutter, 2017) of 0.03 · cos(7πk/16K), where k is the current training step and K is the total number of training steps. The RTE loss function (6) is configured with static λJSD, λECR and N∗ of 12, 1, and 10, respectively, whereas q is scheduled according to 0.6 · sin(13πk/16K) (which assigns small q-values in early training epochs, reaches a maximum of q = 0.6 after 180 epochs, and decreases to q = 0.33 over the remaining 120 epochs). For CIFAR-100, the setup is similar, but different hyperparameters are used; details are included in the Appendix in Table 6. In addition to manual configurations, we consider PBT with a population size of 35 to optimize learning rate, weight decay, q, λJSD, λECR and N∗. Fastidious readers will find the complete PBT configuration defined in Appendix F. For ImageNet, ResNet50 is used and trained with SGD for 300 epochs with a stepped learning rate of 0.1, 0.01, and 0.001, which begin at epochs 0, 100, and 200, respectively. ImageNet hyperparameters are also included in the Appendix in Table 6." }, { "heading": "D APPENDIX: CONFUSION MATRIX FOR UNIFORM ASYMMETRIC NOISE", "text": "C =
[ .0000 .0396 .2475 .0594 .0594 .0396 .0495 .0693 .2772 .1584
  .1765 .0000 .0294 .0000 .0000 .0000 .0294 .0000 .1765 .5882
  .1745 .0000 .0000 .1544 .1879 .1074 .2617 .0872 .0268 .0000
  .0388 .0116 .1473 .0000 .1240 .3682 .1899 .0853 .0155 .0194
  .0303 .0000 .2197 .1667 .0000 .0606 .2879 .2121 .0227 .0000
  .0324 .0000 .1435 .4676 .1019 .0000 .1204 .1157 .0093 .0093
  .0536 .0179 .3571 .3036 .1071 .0714 .0000 .0536 .0179 .0179
  .0704 .0000 .0986 .1268 .3803 .1831 .0986 .0000 .0000 .0423
  .4603 .0952 .0794 .0476 .0317 .0000 .0476 .0317 .0000 .2063
  .1711 .5132 .0263 .0526 .0263 .0132 .0658 .0395 .0921 .0000 ]   (7)" }, { "heading": "E APPENDIX: PERFORMANCE DATA ON CIFAR-10-C", "text": "" }, { "heading": "F APPENDIX: PBT EXPERIMENTS", "text": "" }, { "heading": "G APPENDIX: UNIFORM ASYMMETRIC NOISE ON CIFAR-10", "text": "The matrix C defines the noise structure for uniform asymmetric noise on CIFAR-10 with the following labels: AIRPLANE, AUTOMOBILE, BIRD, CAT, DEER, DOG, FROG, HORSE, SHIP, TRUCK.

C =
[ 0 0 0 0 0 0 0 0 0 0
  0 0 0 0 0 0 0 0 0 0
  1 0 0 0 0 0 0 0 0 0
  0 0 0 0 0 1 0 0 0 0
  0 0 0 0 0 0 0 1 0 0
  0 0 0 1 0 0 0 0 0 0
  0 0 0 0 0 0 0 0 0 0
  0 0 0 0 0 0 0 0 0 0
  0 0 0 0 0 0 0 0 0 0
  0 1 0 0 0 0 0 0 0 0 ]   (8)" }, { "heading": "H APPENDIX: EXTENDED DATA AND ANALYSIS", "text": "" } ]
2020
ROBUST TEMPORAL ENSEMBLING
SP:16392bc9174dde6ad7b569f3f40fa14a4ed48831
[ "This paper introduces a new condition for showing the existence of the solution of a deep equilibrium model (which defines an implicit mapping via the fixed point). The new formulation also comes with a convenient and accurate Lipschitz bound. The proposed condition can be satisfied via reparameterizing an unconstrained set of trainable parameters.", "> Summary: This paper studies a new and more general way of parameterizing the simplest equilibrium network of the form $\\sigma(Wz+Ux+b)$, a form that has been tackled by works like (Winston & Kolter 2020) and (El Ghaoui et al. 2019). The authors provide a computationally (relatively) efficient way of computing Lipschitz-bounded equilibrium networks and a detailed analysis of how the network should be constructed, along with the proof of the existence and uniqueness of the fixed point (and with less restrictive conditions when compared to MON). The empirical results on adversarial robustness show that the proposed approach is a bit more robust than prior layer-based networks and other implicit networks, and validate most of the theoretical claims made by the authors." ]
This paper introduces new parameterizations of equilibrium neural networks, i.e. networks defined by implicit equations. This model class includes standard multilayer and residual networks as special cases. The new parameterization admits a Lipschitz bound during training via unconstrained optimization: no projections or barrier functions are required. Lipschitz bounds are a common proxy for robustness and appear in many generalization bounds. Furthermore, compared to previous works, we show well-posedness (existence of solutions) under less restrictive conditions on the network weights and more natural assumptions on the activation functions: that they are monotone and slope-restricted. These results are proved by establishing novel connections with convex optimization, operator splitting on non-Euclidean spaces, and contracting neural ODEs. In image classification experiments we show that the Lipschitz bounds are very accurate and improve robustness to adversarial attacks.
[]
[ { "authors": [ "Cem Anil", "James Lucas", "Roger Grosse" ], "title": "Sorting out lipschitz function approximation", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Shaojie Bai", "J Zico Kolter", "Vladlen Koltun" ], "title": "Deep equilibrium models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Peter L Bartlett", "Dylan J Foster", "Matus J Telgarsky" ], "title": "Spectrally-normalized margin bounds for neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Heinz H Bauschke", "Patrick L Combettes" ], "title": "Convex analysis and monotone operator theory in Hilbert spaces, volume 408", "venue": null, "year": 2011 }, { "authors": [ "Stephen Boyd", "Neal Parikh", "Eric Chu" ], "title": "Distributed optimization and statistical learning via the alternating direction method of multipliers", "venue": "Now Publishers Inc,", "year": 2011 }, { "authors": [ "Ricky TQ Chen", "Yulia Rubanova", "Jesse Bettencourt", "David K Duvenaud" ], "title": "Neural ordinary differential equations", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Yun-Chung Chu", "Keith Glover" ], "title": "Bounds of the induced norm and model reduction errors for systems with repeated scalar nonlinearities", "venue": "IEEE Transactions on Automatic Control,", "year": 1999 }, { "authors": [ "Jeremy Cohen", "Elan Rosenfeld", "Zico Kolter" ], "title": "Certified adversarial robustness via randomized smoothing", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Fernando J D’Amato", "Mario A Rotea", "AV Megretski", "UT Jönsson" ], "title": "New results for analysis of systems with repeated nonlinearities", "venue": null, "year": 2001 }, { "authors": [ "Mahyar Fazlyab", "Alexander Robey", "Hamed Hassani", "Manfred Morari", "George Pappas" ], "title": "Efficient and accurate estimation of lipschitz constants for deep neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "WP Heath", "AG Wills" ], "title": "Zames-falb multipliers for quadratic programming", "venue": "IEEE Transactions on Automatic Control,", "year": 2007 }, { "authors": [ "R Bruce Kellogg" ], "title": "A nonlinear alternating direction method", "venue": "Mathematics of Computation,", "year": 1969 }, { "authors": [ "Diederik P Kingma", "Jimmy Lei Ba" ], "title": "Adam: A method for stochastic gradient descent", "venue": "In ICLR: International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Vishwesh V Kulkarni", "Michael G Safonov" ], "title": "All multipliers for repeated monotone nonlinearities", "venue": "IEEE Transactions on Automatic Control,", "year": 2002 }, { "authors": [ "Jia Li", "Cong Fang", "Zhouchen Lin" ], "title": "Lifted proximal operator machines", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Changliu Liu", "Tomer Arnon", "Christopher Lazarus", "Clark Barrett", "Mykel J Kochenderfer" ], "title": "Algorithms for verifying deep neural networks", "venue": null, "year": 1903 }, { "authors": [ "Winfried Lohmiller", 
"Jean-Jacques E Slotine" ], "title": "On contraction analysis for non-linear systems", "venue": null, "year": 1998 }, { "authors": [ "Alexandre Megretski", "Anders Rantzer" ], "title": "System analysis via integral quadratic constraints", "venue": "IEEE Transactions on Automatic Control,", "year": 1997 }, { "authors": [ "Arkadi Nemirovski" ], "title": "Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems", "venue": "SIAM Journal on Optimization,", "year": 2004 }, { "authors": [ "Yurii Nesterov" ], "title": "Dual extrapolation and its applications to solving variational inequalities and related problems", "venue": "Mathematical Programming,", "year": 2007 }, { "authors": [ "Yurii Nesterov", "Arkadii Nemirovskii" ], "title": "Interior-point polynomial algorithms in convex programming", "venue": null, "year": 1994 }, { "authors": [ "Patricia Pauli", "Anne Koch", "Julian Berberich", "Frank Allgöwer" ], "title": "Training robust neural networks using Lipschitz bounds", "venue": "arXiv preprint arXiv:2005.02929,", "year": 2020 }, { "authors": [ "Aditi Raghunathan", "Jacob Steinhardt", "Percy Liang" ], "title": "Certified defenses against adversarial examples", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Aditi Raghunathan", "Jacob Steinhardt", "Percy S Liang" ], "title": "Semidefinite relaxations for certifying robustness to adversarial examples", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Anders Rantzer" ], "title": "On the Kalman-Yakubovich-Popov lemma", "venue": "Systems & Control Letters,", "year": 1996 }, { "authors": [ "Jonas Rauber", "Roland Zimmermann", "Matthias Bethge", "Wieland Brendel" ], "title": "Foolbox native: Fast adversarial attacks to benchmark the robustness of machine learning models in pytorch, tensorflow, and jax", "venue": "Journal of Open Source Software,", "year": 2020 }, { "authors": [ "Max Revay", "Ruigang Wang", "Ian R Manchester" ], "title": "A convex parameterization of robust recurrent neural networks", "venue": "IEEE Control Systems Letters,", "year": 2020 }, { "authors": [ "Ernest K Ryu", "Stephen Boyd" ], "title": "Primer on monotone operator methods", "venue": "Appl. Comput. 
Math,", "year": 2016 }, { "authors": [ "John W Simpson-Porco", "Francesco Bullo" ], "title": "Contraction theory on riemannian manifolds", "venue": "Systems & Control Letters,", "year": 2014 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "In ICLR: International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Vincent Tjeng", "Kai Y Xiao", "Russ Tedrake" ], "title": "Evaluating robustness of neural networks with mixed integer programming", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Mark M Tobenkin", "Ian R Manchester", "Alexandre Megretski" ], "title": "Convex parameterizations and fidelity bounds for nonlinear identification and reduced-order modelling", "venue": "IEEE Transactions on Automatic Control,", "year": 2017 }, { "authors": [ "Yusuke Tsuzuku", "Issei Sato", "Masashi Sugiyama" ], "title": "Lipschitz-margin training: Scalable certification of perturbation invariance for deep neural networks", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Ezra Winston", "J. Zico Kolter" ], "title": "Monotone operator equilibrium networks", "venue": null, "year": 2006 }, { "authors": [ "G. Zames" ], "title": "Realizability Condition for Nonlinear Feedback Systems", "venue": "IEEE Transactions on Circuit Theory,", "year": 1964 }, { "authors": [ "SiQi Zhou", "Angela P Schoellig" ], "title": "An analysis of the expressiveness of deep neural network architectures based on their lipschitz constants", "venue": "arXiv preprint arXiv:1912.11511,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep neural network models have revolutionized the field of machine learning: their accuracy on practical tasks such as image classification and their scalability have led to an enormous volume of research on different model structures and their properties (LeCun et al., 2015). In particular, deep residual networks with skip connections (He et al., 2016) have had a major impact, and neural ODEs have been proposed as an analog with “implicit depth” (Chen et al., 2018). Recently, a new structure has gained interest: equilibrium networks (Bai et al., 2019; Winston & Kolter, 2020), a.k.a. implicit deep learning models (El Ghaoui et al., 2019), in which model outputs are defined by implicit equations incorporating neural networks. This model class is very flexible: it is easy to show that it includes many previous structures as special cases, including standard multi-layer networks, residual networks, and (in a certain sense) neural ODEs.

However, model flexibility in machine learning is always in tension with model regularity or robustness. While deep learning models have exhibited impressive generalisation performance in many contexts, it has also been observed that they can be very brittle, especially when targeted with adversarial attacks (Szegedy et al., 2014). In response to this, there has been a major research effort to understand and certify robustness properties of deep neural networks, e.g. Raghunathan et al. (2018a); Tjeng et al. (2018); Liu et al. (2019); Cohen et al. (2019) and many others. Global Lipschitz bounds (a.k.a. incremental gain bounds) provide a somewhat crude but nevertheless highly useful proxy for robustness (Tsuzuku et al., 2018; Fazlyab et al., 2019), and appear in several analyses of generalization (e.g. Bartlett et al., 2017; Zhou & Schoellig, 2019).

Inspired by both of these lines of research, in this paper we propose new parameterizations of equilibrium networks with guaranteed Lipschitz bounds. We build directly on the monotone operator framework of Winston & Kolter (2020) and the work of Fazlyab et al. (2019) on Lipschitz bounds.

The main contribution of our paper is the ability to enforce tight bounds on the Lipschitz constant of an equilibrium network during training with essentially no extra computational effort. In addition, we prove existence of solutions with less restrictive conditions on the weight matrix and more natural assumptions on the activation functions via novel connections to convex optimization and contracting dynamical systems. Finally, we show via small-scale image classification experiments that the proposed parameterizations can provide significant improvement in robustness to adversarial attacks with little degradation in nominal accuracy. Furthermore, we observe small gaps between certified Lipschitz upper bounds and observed lower bounds computed via adversarial attack." }, { "heading": "2 RELATED WORK", "text": "Equilibrium networks, Implicit Deep Models, and Well-Posedness. As mentioned above, it has been recently shown that many existing network architectures can be incorporated into a flexible model set called an equilibrium network (Bai et al., 2019; Winston & Kolter, 2020) or implicit deep model (El Ghaoui et al., 2019). In this unified model set, the network predictions are made not by forward computation of sequential hidden layers, but by finding a solution to an implicit equation involving a single layer of all hidden units.
One major question for this type of network is well-posedness, i.e. the existence and uniqueness of a solution to the implicit equation for all possible inputs. El Ghaoui et al. (2019) proposed a computationally verifiable but conservative condition on the spectral norm of the hidden unit weight matrix. In Winston & Kolter (2020), a less conservative condition was developed based on monotone operator theory. Similar monotonicity constraints were previously used to ensure well-posedness of a different class of implicit models in the context of nonlinear system identification (Tobenkin et al., 2017, Theorem 1). On the question of well-posedness, our contribution is a more flexible model set and more natural assumptions on the activation functions: that they are monotone and slope-restricted.

Neural Network Robustness and Lipschitz Bounds. The Lipschitz constant of a function measures the worst-case sensitivity of the function, i.e. the maximum “amplification” of differences in inputs to differences in outputs. The key features of a good Lipschitz bounded learning approach include a tight estimate of the Lipschitz constant and a computationally tractable training method that enforces the bound. For deep networks, Tsuzuku et al. (2018) proposed a computationally efficient but conservative approach, since its Lipschitz constant estimate is based on composing estimates for individual layers, while Anil et al. (2019) proposed a combination of a novel activation function and weight constraints. For equilibrium networks, El Ghaoui et al. (2019) proposed an estimation of Lipschitz bounds via input-to-state (ISS) stability analysis. The estimates of Fazlyab et al. (2019) for deep networks, based on incremental quadratic constraints and semidefinite programming (SDP), were shown to give state-of-the-art results; however, this was limited to analysis of an already-trained network. The SDP test was incorporated into training via the alternating direction method of multipliers (ADMM) in Pauli et al. (2020); however, due to the complexity of the SDP, the training times recorded were almost 50 times longer than for unconstrained networks. Our approach uses a similar condition to Fazlyab et al. (2019) applied to equilibrium networks; however, we introduce a novel direct parameterization method that enables learning robust models via unconstrained optimization, removing the need for computationally-expensive projections or barrier terms." }, { "heading": "3 PROBLEM FORMULATION AND PRELIMINARIES", "text": "" }, { "heading": "3.1 PROBLEM STATEMENT", "text": "We consider the weight-tied network in which x ∈ Rd denotes the input, z ∈ Rn the hidden units, and y ∈ Rp the output, given by the following implicit equation

z = σ(Wz + Ux + bz), y = Woz + by (1)

where W ∈ Rn×n, U ∈ Rn×d, and Wo ∈ Rp×n are the hidden unit, input, and output weights, respectively, and bz ∈ Rn and by ∈ Rp are bias terms. The implicit framework includes most current neural network architectures (e.g. deep and residual networks) as special cases. To streamline the presentation we assume that σ : R → R is a single nonlinearity applied elementwise, although our results also apply in the case that each channel has a different activation function, nonlinear or linear.

Equation (1) is called an equilibrium network since its solutions are equilibrium points of the difference equation zk+1 = σ(Wzk + Ux + bz) or the ODE ż(t) = −z(t) + σ(Wz(t) + Ux + bz).
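To make the model concrete, the following is a minimal numpy sketch (ours, not the paper's solver) that evaluates (1) by damped fixed-point iteration with σ = ReLU; the operator-splitting methods developed below are preferred in practice and converge under weaker conditions:

```python
import numpy as np

def equilibrium_forward(W, U, b_z, Wo, b_y, x, damping=0.5,
                        tol=1e-6, max_iter=500):
    """Evaluate (1) by damped fixed-point iteration with sigma = ReLU:
    z <- (1 - a) z + a * sigma(W z + U x + b_z)."""
    z = np.zeros(W.shape[0])
    for _ in range(max_iter):
        z_next = ((1 - damping) * z
                  + damping * np.maximum(W @ z + U @ x + b_z, 0.0))
        if np.linalg.norm(z_next - z) < tol:
            z = z_next
            break
        z = z_next
    return Wo @ z + b_y  # network output y
```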
Our goal is to learn equilibrium networks (1) possessing the following two properties:

• Well-posedness: For every input x and bias bz, equation (1) admits a unique solution z.

• γ-Lipschitz: It has a finite Lipschitz bound of γ, i.e., for any input-output pairs (x1, y1), (x2, y2) we have ‖y1 − y2‖2 ≤ γ‖x1 − x2‖2." }, { "heading": "3.2 PRELIMINARIES", "text": "Monotone operator theory. The theory of monotone operators on Euclidean space (see the survey Ryu & Boyd (2016)) has been extensively applied in the development of equilibrium networks (Winston & Kolter, 2020). In this paper, we will use monotone operator theory on non-Euclidean spaces (Bauschke et al., 2011); in particular, we are interested in a finite-dimensional Hilbert space H, which we identify with Rn equipped with a weighted inner product 〈x, y〉Q := y⊤Qx where Q ≻ 0. The main benefit is that we can construct a more expressive equilibrium network set. A brief summary of relevant theory can be found in Appendix C.1; here we give some definitions that are frequently used throughout the paper. An operator is a set-valued or single-valued function defined by a subset of the space A ⊆ H × H. A function f : H → R ∪ {∞} is proper if f(x) < ∞ for at least one x. The subdifferential and proximal operators of a proper function f are defined as

∂f(x) := {g ∈ H | f(y) ≥ f(x) + 〈y − x, g〉Q, ∀y ∈ H},

proxαf(x) := {z ∈ H | z = arg min_u ½‖u − x‖²Q + αf(u)}

respectively, where ‖x‖Q := √〈x, x〉Q is the induced norm. For n = 1, we only consider the case of Q = 1. An operator A is monotone if 〈u − v, x − y〉Q ≥ 0 and strongly monotone with parameter m if 〈u − v, x − y〉Q ≥ m‖x − y‖²Q for all (x, u), (y, v) ∈ A. The operator splitting problem is that of finding a zero of a sum of two operators A and B, i.e. find an x such that 0 ∈ (A + B)(x).

Dynamical systems theory. In this paper, we will also treat the solutions of (1) as equilibrium points of certain dynamical systems ż(t) = f(z(t)). Then, the well-posedness and robustness properties of (1) can be guaranteed by corresponding properties of the dynamical system's solution set. A central focus in robust and nonlinear control theory for more than 50 years – and largely unified by the modern theory of integral quadratic constraints (Megretski & Rantzer, 1997) – has been on systems which are interconnections of linear mappings and “simple” nonlinearities, i.e. those easily bounded in some sense by quadratic functions. Fortuitously, this characteristic is shared with deep, recurrent, and equilibrium neural networks, a connection that we use heavily in this paper and has previously been exploited by Fazlyab et al. (2019); El Ghaoui et al. (2019); Revay et al. (2020) and others. A particular property we are interested in is called contraction (Lohmiller & Slotine, 1998), i.e., any pair of solutions z1(t) and z2(t) exponentially converge to each other:

‖z1(t) − z2(t)‖ ≤ α‖z1(0) − z2(0)‖e−βt

for all t > 0 and some α, β > 0. Contraction can be established by finding a Riemannian metric with respect to which nearby trajectories converge, which is a differential analog of a Lyapunov function. A nice property of a contracting dynamical system is that if it is time-invariant, a unique equilibrium exists and it possesses a certain level of robustness. Moreover, contraction can also be linked to monotone operators, i.e. a system is contracting w.r.t. a constant (state-independent) metric Q if and only if the operator −f is strongly monotone w.r.t. the Q-weighted inner product.
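As a small illustration of these definitions (our sketch, using the eigenvalue criterion spelled out in Appendix C.1), monotonicity of a linear operator w.r.t. a weighted inner product can be checked numerically:

```python
import numpy as np

def is_monotone_linear(W, Q, tol=1e-9):
    """Monotonicity of A(x) = W x w.r.t. <x, y>_Q = y^T Q x holds iff
    Q W + W^T Q is positive semidefinite (see Appendix C.1)."""
    S = Q @ W + W.T @ Q
    return np.linalg.eigvalsh(S).min() >= -tol
```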
We collect some directly relevant results from systems theory in Appendix C.2." }, { "heading": "4 MAIN RESULTS", "text": "This section contains the main theoretical results of the paper: conditions implying well-posedness and Lipschitz-boundedness of equilibrium networks, and direct (unconstrained) parameterizations such that these conditions are automatically satisfied. Assumption 1. The activation function σ is monotone and slope-restricted in [0, 1], i.e.,

0 ≤ (σ(x) − σ(y))/(x − y) ≤ 1, ∀x, y ∈ R, x ≠ y. (2)

Remark 1. We will show below (Proposition 1 in Section 4.2) that Assumption 1 is equivalent to the assumption on σ in Winston & Kolter (2020), i.e. that σ(·) = prox¹f(·) for some proper convex function f. However, the above assumption is arguably more natural, since it is easily verified for standard activation functions. Note also that if different channels have different activation functions, then we simply require that they all satisfy (2).

The following conditions are central to our results on well-posedness and Lipschitz bounds: Condition 1. There exists a Λ ∈ D+, with D+ denoting diagonal positive-definite matrices, such that W satisfies 2Λ − ΛW − W⊤Λ ≻ 0. (3) Condition 2. Given a prescribed Lipschitz bound γ > 0, there exists Λ ∈ D+ such that W, Wo, U satisfy

2Λ − ΛW − W⊤Λ − (1/γ)Wo⊤Wo − (1/γ)ΛUU⊤Λ ≻ 0. (4)

Remark 2. Note that Condition 2 implies Condition 1 since (1/γ)(Wo⊤Wo + ΛUU⊤Λ) ⪰ 0. As a partial converse, if Condition 1 holds, then for any Wo, U there exists a sufficiently large γ such that Condition 2 is satisfied.

The main theoretical results of this paper are the following: Theorem 1. If Assumption 1 and Condition 1 hold, then the equilibrium network (1) is well-posed, i.e. for all x and bz, equation (1) admits a unique solution z. Moreover, it has a finite Lipschitz bound from x to y.

Theorem 2. If Assumption 1 and Condition 2 hold, then the equilibrium network (1) is well-posed and has a Lipschitz bound of γ.

As a consequence, we call (1) a Lipschitz bounded equilibrium network (LBEN) if its weights satisfy either (3) or (4). The full proofs appear in Appendices E.1 and E.2, but here we sketch some of the main ideas. We can represent (1) as an algebraic interconnection between linear and nonlinear parts:

v = Wz + Ux + bz, z = σ(v), y = Woz + by. (5)

It can be shown that for any pair of solutions to the nonlinear part za = σ(va), zb = σ(vb), if we define ∆v = va − vb and ∆z = za − zb, then Assumption 1 implies the following:

〈∆v − ∆z, ∆z〉Λ ≥ 0 (6)

for any Λ ∈ D+. This and Condition 1 can be used to prove global stability of a unique equilibrium of the differential equation v̇ = −v + Wσ(v) + Ux + bz, which proves there is a unique solution to (1) for any inputs. Next, straightforward manipulations of Condition 2 show that any pairs of inputs xa, xb and outputs ya, yb satisfy the following, where ∆x = xa − xb and ∆y = ya − yb:

γ‖∆x‖₂² − (1/γ)‖∆y‖₂² ≥ 2〈∆v − ∆z, ∆z〉Λ ≥ 0,

where the last inequality comes from (6). This implies the Lipschitz bound ‖∆y‖₂ ≤ γ‖∆x‖₂. Remark 3. In Fazlyab et al. (2019) it was claimed that (6) holds with a richer (more powerful) class of multipliers Λ, which were previously introduced for robust stability analysis of systems with repeated nonlinearities (Chu & Glover, 1999; D’Amato et al., 2001; Kulkarni & Safonov, 2002). However this is not true: a counterexample was given in Pauli et al.
(2020), and here we provide a brief explanation: even if the nonlinearities σ(vi) are repeated when considered as functions of vi, their increments ∆zi = σ(vi + ∆vi) − σ(vi) are not repeated when considered as functions of ∆vi, since they depend on the particular vi, which generally differs between units.

Example 1. We illustrate the extra flexibility of Condition 1 compared to the condition of Winston & Kolter (2020) by a toy example. Consider W ∈ R2×2 and take a slice near W = 0 of the form

W = [0 W12; 0 W22], for which we have: 2I − W − W⊤ = [2 −W12; −W12 2 − 2W22]. (7)

By Sylvester’s criterion, this matrix is positive-definite if and only if W22 < 1 and det(2I − W − W⊤) = 4(1 − W22) − W12² > 0, which defines a parabolic region in the W12, W22 plane. Applying our condition (3), without loss of generality take Λ = diag(1, α) with α > 0 and we have

2Λ − ΛW − W⊤Λ = [2 −W12; −W12 2α − 2αW22].

Figure 1 (caption): Gray region: the condition from Winston & Kolter (2020) is feasible: 2I − W − W⊤ ≻ 0. White region (including gray region): our well-posedness condition is feasible: ∃Λ ∈ D+ : 2Λ − ΛW − W⊤Λ ≻ 0. Black region: neither condition feasible.

The positivity test is now W22 < 1 and 4α(1 − W22) − W12² > 0. For each W12 there is a sufficiently large α such that the second condition is satisfied, since the first implies 1 − W22 > 0. Hence the only constraint on W is that W22 < 1, which yields a much larger region in the W12, W22 plane (see Figure 1). Interestingly, in this simple example with ReLU activation, the condition W22 < 1 is also a necessary condition for well-posedness (El Ghaoui et al., 2019, Theorem 2.8)." }, { "heading": "4.1 DIRECT PARAMETERIZATION FOR UNCONSTRAINED OPTIMIZATION", "text": "Training a network that satisfies Condition 1 or 2 can be formulated as an optimization problem with convex constraints. In fact, Condition 1 is a linear matrix inequality (LMI) in the variables Λ and ΛW, from which W can be determined uniquely. Similarly, via Schur complement, Condition 2 is an LMI in the variables Λ, ΛW, ΛU, Wo, and γ, from which all network weights can be determined. In a certain theoretical sense LMI constraints are tractable – Nesterov & Nemirovskii (1994) proved they are polynomial-time solvable – however even for moderate-scale networks (e.g. ≤ 100 activations) the associated barrier terms or projections become a major computational bottleneck.

In this paper, we propose a direct parameterization that allows learning via an unconstrained optimization problem, i.e. all network parameters are transformations of free (unconstrained) matrix variables, in such a way that the LMI constraints (3) or (4) are automatically satisfied.

For well-posedness, i.e. Condition 1, we parameterize via the following free variables: V ∈ Rn×n, d ∈ Rn, and a skew-symmetric¹ matrix S = −S⊤ ∈ Rn×n, from which the hidden unit weight is

W = I − Ψ(V⊤V + εI + S), (8)

where Ψ = diag(e^d) and ε > 0 is some small constant to ensure strict positive-definiteness. Then it follows from straightforward manipulations that Condition 1 holds with Λ = Ψ⁻¹ if and only if W can be written as (8). When Ψ = I, this reduces to the parameterization of Winston & Kolter (2020).

Similarly, for a specific Lipschitz bound, i.e. Condition 2, we add to the parameterization the free input and output weights U and Wo, and arbitrary γ > 0. We can construct

W = I − Ψ((1/(2γ))Wo⊤Wo + (1/(2γ))Ψ⁻¹UU⊤Ψ⁻¹ + V⊤V + εI + S), (9)

for which (4) is automatically satisfied.

[Footnote 1: Note that S can be parameterized via its upper or lower triangular components, or via S = N − N⊤ with N free, which can be more straightforward if W is defined implicitly via linear operators, e.g. convolutions.]
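A minimal numpy sketch of the direct parameterizations (8) and (9); the function signature is illustrative, and ε is the small constant above:

```python
import numpy as np

def lben_weight(V, d, N, Wo=None, U=None, gamma=None, eps=1e-3):
    """Construct W from free variables via (8), or via (9) when a
    Lipschitz bound gamma is given. S = N - N^T is skew-symmetric and
    Psi = diag(exp(d)), so Lambda = Psi^{-1} certifies the condition."""
    n = len(d)
    Psi = np.diag(np.exp(d))
    M = V.T @ V + eps * np.eye(n) + (N - N.T)
    if gamma is not None:  # parameterization (9) with bound gamma
        Psi_inv = np.diag(np.exp(-d))
        M = M + Wo.T @ Wo / (2 * gamma) \
              + Psi_inv @ (U @ U.T) @ Psi_inv / (2 * gamma)
    return np.eye(n) - Psi @ M
```

The construction can be sanity-checked by confirming that the smallest eigenvalue of 2Λ − ΛW − W⊤Λ (minus the two Lipschitz terms, in the case of (4)) is positive with Λ = Ψ⁻¹.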
Again, it can easily be verified that this construction is necessary and sufficient, i.e. any W satisfying (4) can be constructed via (9)." }, { "heading": "4.2 MONOTONE OPERATOR PERSPECTIVE", "text": "In this section, we will show that finding the solution to LBEN (1) is equivalent to solving a well-posed operator splitting problem, and hence a unique solution exists. First, we need the following observation on the activation function σ. Proposition 1. Assumption 1 holds if and only if there exists a convex proper function f : R → R ∪ {∞} such that σ(·) = prox¹f(·).

The proof of Proposition 1 with a construction of f appears in Appendix E.3, along with a list of f for popular σ. It is well known in monotone operator theory (Ryu & Boyd, 2016) that for any convex closed proper function f, the proximal operator prox¹f(x) is monotone and non-expansive (i.e. slope-restricted in [0, 1]). Proposition 1 is a converse result for scalar functions. Remark 4. To our knowledge Proposition 1 is novel; however, for several popular activation functions the corresponding functions f were computed in Li et al. (2019) (see also Table 3 in Appendix E.4). Compared with Li et al. (2019), our work gives a necessary and sufficient condition.

Now we connect LBEN (1) to an operator splitting problem. Proposition 2. Finding a solution of LBEN (1) is equivalent to solving the well-posed operator splitting problem 0 ∈ (A + B)(z) with the operators

A(z) = (I − W)(z) − (Ux + bz), B = ∂f (10)

where f(z) := Σ_{i=1}^{n} λi f(zi) with λi the ith diagonal element of Λ.

The proof appears in Appendix E.4, and Theorem 1 follows directly since the above operator splitting problem has a unique solution for any x, bz.

Computing an equilibrium. There exist various operator splitting algorithms to compute the solution of LBEN (1), e.g., ADMM (Boyd et al., 2011) and Peaceman-Rachford splitting (Kellogg, 1969). Winston & Kolter (2020) found that Peaceman-Rachford splitting converges very rapidly when properly tuned, and our experience agrees with this.

Gradient backpropagation. As shown in (Winston & Kolter, 2020, Section 3.5), the gradients of the loss function ℓ(·) can be represented by

∂ℓ/∂(·) = (∂ℓ/∂z⋆)(I − JW)⁻¹ J ∂(Wz⋆ + Ux + bz)/∂(·) (11)

where z⋆ denotes the solution of (1), (·) denotes some learnable parameters in the parameterization (8) or (9), and J ∈ Dσ(Wz⋆ + Ux + bz) with Dσ the Clarke generalized Jacobian of σ. Since σ is piecewise differentiable, the set Dσ(Wz⋆ + Ux + bz) is a singleton almost everywhere. The following proposition reveals that (11) is well-defined; see the proof in Appendix E.5. Proposition 3. The matrix I − JW is invertible for all z⋆, x and bz." }, { "heading": "4.3 CONNECTIONS TO CONVEX OPTIMIZATION", "text": "Since LBEN (1) is equivalent to an operator splitting problem, an interesting question is whether it can further be connected to a convex optimization problem. Here we construct an equivalent convex problem for the LBEN whose parameterization satisfies S = 0. Proposition 4. If the direct parameterization (either (8) or (9)) of an LBEN satisfies S = 0, then for all x and bz, the solution of (1) is the minimizer of the following strongly convex optimization problem:

min_z 〈½(I − W)z − Ux − bz, z〉Λ + f(z). (12)

The proof is in Appendix E.6.
Furthermore, for an important subclass of LBEN where σ is ReLU, it has an equivalent convex quadratic programming (QP) formulation. Proposition 5. Consider an LBEN (1) with ReLU activation. For all x and bz, the solution of (1) is the minimizer of the following strongly convex QP problem:

min_z ½ z⊤Hz + p⊤z s.t. z ≥ 0, (I − W)z ≥ Ux + bz (13)

where H = 2Λ − ΛW − W⊤Λ and p = −Λ(Ux + bz).

Note that the QP (13) also works for the case where S is non-zero. The proof (see Appendix E.7) is built on the “key insights” of ReLU activation from Raghunathan et al. (2018b). This allows one to compute the solution of LBEN (1) using any of the many free or commercial QP solvers." }, { "heading": "4.4 CONTRACTING NEURAL ODES", "text": "In this section, we will prove the existence of a solution to (1) from a different perspective: by showing it is the equilibrium of a contracting dynamical system (a “neural ODE”). We first add a smooth state v(t) ∈ Rn to avoid the algebraic loop in (5). This idea has long been recognized as helpful for well-posedness questions (Zames, 1964). We define the dynamics of v(t) by the following ODE: v̇(t) = −v(t) + Wz(t) + Ux + bz, z(t) = σ(v(t)). (14) The well-posedness of (1) is equivalent to the existence and uniqueness of an equilibrium of (14) for all x and bz, which is established by the following proposition. Proposition 6. If Assumption 1 and Condition 1 hold, then the neural ODE (14) is contracting w.r.t. some constant metric P ≻ 0.

The proof is in Appendix E.8. Moreover, the metric P can be found via semidefinite programming. The above proposition also proves that the nonlinear operator −f with f(v) = −v + Wσ(v) + Ux + bz, zeros of which define solutions of LBEN (1), is actually monotone w.r.t. the P-weighted inner product, which gives a first-order cutting-plane oracle for the zero location v⋆ such that f(v⋆) = 0. I.e. given a test point vt ≠ v⋆, it proves that v⋆ is in the half-space defined by 〈v⋆ − vt, f(vt)〉P > 0. This may offer alternative ways to solve LBEN (1), e.g. via Nemirovski (2004); Nesterov (2007)." }, { "heading": "4.5 FEEDFORWARD NETWORKS AS A SPECIAL CASE", "text": "Consider a multi-layer feedforward network of the form

z1 = U0x + b0, zℓ+1 = σ(Wℓzℓ + bℓ), ℓ = 1, . . . , L − 1, y = WLzL + bL, (15)

which can be rewritten as an equilibrium network (1) as shown in Appendix A. The above equilibrium network is obviously well-posed as a unique solution exists. The following proposition shows that (44) is also an LBEN. Proposition 7. The LBEN parameterization (8) contains all feedforward networks.

In Winston & Kolter (2020), a set of well-posed equilibrium networks, called monotone operator equilibrium networks (MON), was introduced via the following parameterization

W = (1 − m)I − A⊤A + B⊤ − B (16)

where m > 0 is a hyper-parameter and A, B are learnable matrices. The MON parameterization can be understood as a special case of LBEN with Ψ fixed to I. Proposition 8. The MON parameterization (16) does not contain all feedforward networks, and if m ≥ 1 it does not contain any feedforward networks.

As the proof shows (see Appendix E.11), the set of feedforward networks contained in MON shrinks as the hyper-parameter m increases. Most experiments in Winston & Kolter (2020) use m = 1, which excludes all feedforward networks.

In the feedforward case, our Lipschitz bound condition (4) is equivalent to the state-of-the-art bound estimation method in Fazlyab et al. (2019).
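For illustration, the following sketches the standard stacking that rewrites (15) in the form (1); the paper's exact construction (44) is in its Appendix A, which is not reproduced here. We assume the first block of hidden units uses a linear (identity) activation channel, which the framework permits:

```python
import numpy as np

def feedforward_as_equilibrium(layer_weights, U0):
    """Stack an L-layer feedforward net (15) into the implicit form (1):
    W is block strictly-lower-triangular (layer l feeds layer l+1 only),
    so the fixed point is obtained in one forward substitution pass.
    The first block of z corresponds to z_1 = U0 x + b_0 and would use a
    linear (identity) activation channel."""
    sizes = [U0.shape[0]] + [Wl.shape[0] for Wl in layer_weights]
    n = sum(sizes)
    W = np.zeros((n, n))
    row, col = sizes[0], 0
    for Wl in layer_weights:  # W_l maps block l to block l+1
        W[row:row + Wl.shape[0], col:col + Wl.shape[1]] = Wl
        row += Wl.shape[0]
        col += Wl.shape[1]
    U = np.zeros((n, U0.shape[1]))
    U[:sizes[0], :] = U0  # input enters only the first block
    return W, U
```

The output map then reads off the last block, y = WL z_L + b_L, and the block strictly-lower-triangular structure of W makes well-posedness immediate.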
The major benefit of our direct parameterization (9) is that it allows such bounds to be imposed during training without any additional computational cost. The details are given in Appendix D." }, { "heading": "5 EXPERIMENTS", "text": "In this section we test our approach on the MNIST and CIFAR-10 image classification problems. Our numerical experiments focus on model robustness, the trade-off between model performance and the Lipschitz constant, and the tightness of the Lipschitz bound. We compare the proposed LBEN to unconstrained equilibrium networks, the monotone operator equilibrium network (MON) of Winston & Kolter (2020), and fully connected networks trained using Lipschitz margin training (LMT) (Tsuzuku et al., 2018). When studying model robustness to adversarial attacks, we use the L2 Fast Gradient Method, implemented as part of the Foolbox toolbox (Rauber et al., 2020). All models are trained on either a standard desktop computer with an NVIDIA GeForce RTX 2080 graphics card or a Google Cloud instance with an NVIDIA Tesla V100 graphics card. Details of the models and training procedure can be found in Appendix F; all code will be made available online, but links are omitted due to the double-blind review process." }, { "heading": "5.1 MNIST EXPERIMENTS WITH FULLY-CONNECTED NETWORKS", "text": "Figure 2a shows the test error versus the observed Lipschitz constant, computed via adversarial attack, for each of the models trained. We can see clearly that the parameter γ in LBEN offers a trade-off between test error and Lipschitz constant. Comparing the LBENγ=5 with both MON and LBENγ<∞, we also note a slight regularizing effect in the lower test error.

By comparison, LMT (Tsuzuku et al., 2018), with c as a tunable regularization parameter, displays a qualitatively similar trade-off, but underperforms LBEN in terms of both test error and robustness. If we examine the unconstrained equilibrium model, we observe a Lipschitz constant more than an order of magnitude higher, i.e. this model has regions of extremely high sensitivity, without gaining any accuracy in terms of test error.

For the LBEN models, the lower and upper bounds on the Lipschitz constant are very close: the markers are very close to their corresponding lines in Figure 2a; see also the table of numerical results in Appendix A, in which the approximation accuracy is in many cases around 90%.

Next we tested robustness of classification accuracy to adversarial attacks of various sizes; the results are shown in Figure 2b and summarized in Table 1. We can clearly see that decreasing γ (i.e. stronger regularization) in the LBEN models results in a far more gradual degradation of performance as perturbation size increases, with only a mild impact on nominal (zero perturbation) test error.

Next, we examined the impact of our parameterization on computational complexity compared to other equilibrium models. The test and training errors versus number of epochs are plotted in Figure 5, and we can see that all models converge similarly, and also take roughly the same amount of time per epoch. This is a clear contrast to the results of Pauli et al. (2020), in which imposing Lipschitz constraints resulted in a fifty-fold increase in training time. Interestingly, we can also see in Figure 5 the effect of regularisation for LBEN with γ = 5: higher training error but lower test error.
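For reference, an observed lower bound of this kind can be computed with Foolbox's L2 fast gradient method roughly as follows; this is our sketch, assuming a trained PyTorch model with inputs scaled to [0, 1], and the exact attack configuration used in the paper may differ:

```python
import torch
import foolbox as fb

def lipschitz_lower_bound(model, images, labels, eps=0.5):
    """Empirical lower bound gamma >= ||f(x') - f(x)|| / ||x' - x|| from
    L2 fast-gradient adversarial perturbations."""
    fmodel = fb.PyTorchModel(model.eval(), bounds=(0, 1))
    attack = fb.attacks.L2FastGradientAttack()
    _, x_adv, _ = attack(fmodel, images, labels, epsilons=eps)
    with torch.no_grad():
        dy = (model(x_adv) - model(images)).flatten(1).norm(dim=1)
        dx = (x_adv - images).flatten(1).norm(dim=1).clamp_min(1e-12)
    return (dy / dx).max().item()
```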
We have observed several cases where the unconstrained equilibrium model became unstable during training; LBEN never exhibits this problem.

Finally, we examined the quality of the Lipschitz bounds as a function of network size, comparing the upper and lower bounds on fully connected networks with widths from 20 to 1000. The results are shown in Figure 6. It can be observed that network size only has a mild effect on the quality of the Lipschitz bounds, which decrease slightly as width is increased by a factor of 50." }, { "heading": "5.2 CIFAR-10 EXPERIMENTS WITH CONVOLUTIONAL NETWORKS", "text": "The previous example looked at simple fully connected networks; however, our approach can also be applied to structured layers such as convolutions. Here, we perform several experiments exploring the use of convolutional layers on the CIFAR-10 dataset. To study the improved expressiveness we will compare the LBEN to the LBEN with its metric set to the identity, denoted LBENΛ=I. Note that the model set LBENΛ=I,γ<∞ corresponds to the MON. Additional model details can be found in Appendix F.2.

In Figure 3a, we have plotted the test performance versus the observed Lipschitz constant for the LBEN and LBENΛ=I for varying Lipschitz bound γ = 1, 2, 3, 5, 50, along with the LBENγ<∞, MON, and feed-forward convolutional networks with 40, 81, 160, and 200 channels. Again, we see that the Lipschitz bound has a regularizing effect, trading off between nominal fit and robustness. Additionally, we see that the LBEN provides both better performance and robustness than the traditional feed-forward convolutional networks of similar sizes, highlighting the benefit of the equilibrium network structure.

Comparing LBEN and LBENΛ=I, we can see that the metric gives higher quality models for LBEN with specified γ, but it is slightly worse for LBENγ<∞ compared to MON. This is likely due to the extra expressiveness of the model leading to some overfitting. This can also be seen in the training curves in Figure 7.

Figure 3b shows the test error versus the size of adversarial perturbation for the LBEN and the 162-channel feed-forward convolutional network. We observe that the LBEN provides a much more gradual loss in performance than the feed-forward network, with γ = 5 offering an excellent mix of nominal performance and robustness. The feed-forward networks of different sizes exhibited similar results; however, only one is plotted in Figure 3b for clarity." }, { "heading": "6 CONCLUSIONS", "text": "In this paper, we have shown that the flexible framework of equilibrium networks can be made robust via a simple and direct parameterization which results in guaranteed Lipschitz bounds. These results can also be directly applied (as a special case) to standard multilayer and residual deep neural networks, and also provide a direct parameterization of nonlinear ODEs satisfying strong stability and robustness properties.

Extension to equilibrium network structures more general than (1) is an interesting area for future research. Our results can be extended to more general multivariable “activations” if they can be described accurately via monotonicity properties or integral quadratic constraints. One particular example where this is possible is where the “activation” computes the arg min of a quadratic program of the sort that appears in constrained model predictive control (Heath & Wills, 2007)."
}, { "heading": "A EXPERIMENTAL RESULTS ON MNIST CHARACTER RECOGNITION", "text": "This appendix contains tables of results on MNIST and CIFAR-10 data sets.\nLegend:\n• Err: Test error (%), • ‖a‖2: `2 norm of adversarial attack. • γup: certified upper bound on Lipschitz constant (for models that provide one). • γlow: observed lower bound on Lipschitz constant via adversarial attack.\n• γ approx: approximation ratio of Lipschitz constant as percentage = 100× ( γlow γup ) .\nModels:\n• LBEN: the proposed Lipschitz bounded equilibrium network.. • MON: the monotone operator equilibrium network of Winston & Kolter (2020). • UNC: an unconstrained equilibrium network, i.e. W directly parameterized. • LMT: Lipschitz Margin Training model as in Tsuzuku et al. (2018). • Lip-NN: The Lipschitz Neural Network model of Pauli et al. (2020). Note these figures\nare as reported in (Pauli et al., 2020), all other figures are calculated by the authors of the present paper.\nModel Err: ‖a‖2 = 0 Err: ‖a‖2 ≤ 5 Err: ‖a‖2 ≤ 10 γup γlow γ approx LBENγ<∞ 2.03 56.0 82 - 9.8 - LBENγ=5 1.81 46.4 95.4 5 2.912 58.2% LBENγ=1 2.36 19.4 85.5 1 0.865 86.5% LBENγ=0.8 2.59 17.4 80.1 0.8 0.715 89.4% LBENγ=0.4 4.44 16.1 65.0 0.4 0.372 93% LBENγ=0.2 7.41 14.4 42.6 0.2 0.184 92%\nMON 2.04 55.8 88.6 - 7.75 - UNC 2.08 48.75 77.9 - 239.0 - LMTc=1 2.3 59.4 88.1 - 17.5 - LMTc=100 3.4 65.4 92.0 - 7.66 - LMTc=250 6.92 61.8 98.4 - 6.92 - LMTc=1000 12.23 78.4 98.9 - 3.10 -\nLip-NN 3.55 - - 8.74 - -\nTable 1: Results from MNIST experiments.\nFi gu\nre 4:\nR an\ndo m\nse le\nct io\nn of\nM N\nIS T\nad ve\nrs ar\nia le\nxa m\npl es\nfr om\nFi gu\nre 2b\n.T op\nto bo\ntto m\nis in\ncr ea\nsi ng\npe rt\nur ba\ntio n\nsi ze\n.L ef\ntt o\nri gh\nta re\ndi ff\ner en\nte xa\nm pl\nes ." }, { "heading": "B EXPERIMENTAL RESULTS ON CIFAR-10 DATASET", "text": "" }, { "heading": "C PRELIMINARIES", "text": "" }, { "heading": "C.1 MONOTONE OPERATORS WITH NON-EUCLIDEAN INNER PRODUCTS", "text": "We present some basic properties of monotone operators on a finite-dimensional Hilbert space H, which we identify with Rn equipped with a weighted inner product 〈x, y〉Q = y>Qx with Q 0. For n = 1, we only consider the case of Q = 1. The induced norm ‖x‖Q is defined as √ 〈x, x〉Q. A relation or operator is a set-valued or single-valued map defined by a subset of the spaceA ⊆ H×H; we use the notation A(x) = {y | (x, y) ∈ A}. If A(x) is a singleton, we called A a function. Some commonly used operators include: the linear operator A(x) = {(x,Ax) | x ∈ H}; the operator sum A + B = {(x, y + z) | (x, y) ∈ A, (x, z) ∈ B}; the inverse operator A−1 = {(y, x) | (x, y) ∈ A}; and the subdifferential operator ∂f = {(x, ∂f(x))} with x = dom f and ∂f(x) = {g ∈ H | f(y) ≥ f(x) + 〈y − x, g〉Q, ∀y ∈ H}. An operator A has Lipschitz constant L if for any (x, u), (y, v) ∈ A\n‖u− v‖Q ≤ L‖x− y‖Q. (17)\nAn operator A is non-expansive if L = 1 and contractive if L < 1. An operator A is monotone if\n〈u− v, x− y〉Q ≥ 0, ∀(x, u), (y, v) ∈ A. (18)\nIt is strongly monotone with parameter m if\n〈u− v, x− y〉Q ≥ m‖x− y‖2Q, ∀(x, u), (y, v) ∈ A. (19)\nA monotone operator A is maximal monotone if no other monotone operator strictly contains it, which is a property required for the convergence of most fixed point iterations. Specifically, an affine operator A(x) = Wx + b is (maximal) monotone if and only if QW + W>Q 0 and strongly monotone if QW + W>Q mI . 
A subdifferential ∂f is maximal monotone if and only if f is a convex closed proper function.

The resolvent and Cayley operators for an operator A are denoted RA and CA and are respectively defined as

RA = (I + αA)⁻¹, CA = 2RA − I (20)

for any α > 0. When A(x) = Wx + b, then

RA(x) = (I + αW)⁻¹(x − αb) (21)

and when A = ∂f for some CCP function f, the resolvent is given by a proximal operator:

RA(x) = prox_f^α(x) := arg min_z ½‖x − z‖²_Q + αf(z). (22)

The resolvent and Cayley operators are non-expansive for any maximal monotone A, and are contractive for strongly monotone A. Operator splitting methods consider finding a zero in a sum of operators (assumed here to be maximal monotone), i.e., find z such that 0 ∈ (A + B)(z). For example, the convex optimization problem in (12) can be formulated as an operator splitting problem with A(z) = (I − W)z − b and B = ∂f. Proposition 2 shows that A is strongly monotone and Lipschitz with some parameters m and L. Some popular operator splitting methods for this problem are as follows.

• Forward-backward splitting: z^{k+1} = RB(z^k − αA(z^k)), i.e.,
u^k = ((1 − α)I + αW)z^k + αb, z^{k+1} = prox_f^α(u^k). (23)

• Peaceman-Rachford splitting: u^{k+1} = CACB(u^k), z^k = RB(u^k), i.e.,
u^{k+1/2} = 2z^k − u^k, z^{k+1/2} = (I + α(I − W))⁻¹(u^{k+1/2} + αb), u^{k+1} = 2z^{k+1/2} − u^{k+1/2}, z^{k+1} = prox_f^α(u^{k+1}). (24)

• Douglas-Rachford splitting (or ADMM): u^{k+1} = ½(I + CACB)(u^k), z^k = RB(u^k), i.e.,
u^{k+1/2} = 2z^k − u^k, z^{k+1/2} = (I + α(I − W))⁻¹(u^{k+1/2} + αb), u^{k+1} = u^k + z^{k+1/2} − z^k, z^{k+1} = prox_f^α(u^{k+1}). (25)

A sufficient condition for forward-backward splitting to converge is α < 2m/L². The Peaceman-Rachford and Douglas-Rachford methods converge for any α > 0, although the convergence speed will often vary substantially with α." }, { "heading": "C.2 DYNAMICAL SYSTEM THEORY", "text": "In this section, we present some concepts and results from dynamical system theory that are used in this paper. We consider a nonlinear system of the form

ż(t) = f(z(t)) (26)

where z(t) ∈ Rⁿ is the state and the function f is assumed to be Lipschitz continuous. By Picard's existence theorem we have a unique solution for any initial condition. The above system is time-invariant since f does not explicitly depend on t. System (26) is called a linear time-invariant (LTI) system if f(z) = Az + b for some matrix A ∈ Rⁿˣⁿ and b ∈ Rⁿ. The point z⋆ ∈ Rⁿ is called an equilibrium of (26) if f(z⋆) = 0.

The central concern in dynamical system theory is stability. While there are many different stability notions (Khalil, 2002), here we mainly focus on two of them: exponential stability and contraction w.r.t. a constant metric Q ≻ 0. System (26) is said to be locally exponentially stable at the equilibrium z⋆ w.r.t. the metric Q if there exist positive constants α, β, δ such that for any initial condition z(0) ∈ Bδ(z⋆) := {z | ‖z − z⋆‖_Q < δ}, the following condition holds:

‖z(t) − z⋆‖_Q ≤ α‖z(0) − z⋆‖_Q e^{−βt}, ∀t > 0. (27)

It is said to be globally exponentially stable if the above condition also holds for any δ > 0. Exponential stability can be verified via Lyapunov's second method, i.e., by finding a Lyapunov function V(z) = ‖z − z⋆‖²_P with P ≻ 0 such that V̇(t) ≤ −2βV(t) along the solutions, i.e.,

(z − z⋆)ᵀPf(z) + f(z)ᵀP(z − z⋆) + 2β(z − z⋆)ᵀP(z − z⋆) ≤ 0. (28)

System (26) is said to be contracting w.r.t. the metric Q if there exist positive constants α, β such that for any pair of solutions z1(t) and z2(t), we have

‖z1(t) − z2(t)‖_Q ≤ α‖z1(0) − z2(0)‖_Q e^{−βt}, ∀t > 0. (29)
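To make the splitting iterations above concrete, here is a minimal numpy sketch of the Peaceman-Rachford scheme (24) applied to a toy ReLU equilibrium problem z = relu(Wz + b); following Proposition 2, the proximal step for the ReLU-inducing f reduces to a projection onto the nonnegative orthant (the toy W and b are hypothetical):

```python
import numpy as np

def peaceman_rachford(W, b, alpha=1.0, iters=300):
    # Splitting for 0 in (A + B)(z) with A(z) = (I - W) z - b and B = df:
    # the resolvent of A is a linear solve; the resolvent of B is relu.
    n = len(b)
    RA = np.linalg.inv(np.eye(n) + alpha * (np.eye(n) - W))
    u = np.zeros(n)
    z = np.maximum(u, 0.0)              # z^k = R_B(u^k)
    for _ in range(iters):
        u_half = 2.0 * z - u            # C_B(u^k)
        z_half = RA @ (u_half + alpha * b)
        u = 2.0 * z_half - u_half       # C_A(u^{k+1/2})
        z = np.maximum(u, 0.0)          # prox step (projection for ReLU)
    return z

rng = np.random.default_rng(1)
S = rng.standard_normal((5, 5))
W = 0.5 * (S - S.T)                     # skew W, so I - W is strongly monotone
b = rng.standard_normal(5)
z = peaceman_rachford(W, b)
print(np.abs(z - np.maximum(W @ z + b, 0.0)).max())   # ~0 at the equilibrium
```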
Note that contraction is a much stronger notion than global exponential stability, as Condition (27) is implied by Condition (29) by setting z1 = z and z2 = z⋆. However, unlike Lyapunov analysis, contraction analysis can be carried out via simple local analysis which does not require any prior knowledge about the equilibrium z⋆. Specifically, contraction can be established by the local exponential stability of the associated differential system defined by

∆̇z = Df(z)∆z

where ∆z(t) is the infinitesimal variation between z(t) and its neighboring solutions, and Df is the Clarke generalized Jacobian. The condition for (26) to be contracting can be represented as a state-dependent Linear Matrix Inequality (LMI) as follows:

PDf(z) + Df(z)ᵀP + 2βP ≺ 0 (30)

for some P ≻ 0 and all z ∈ Rⁿ. For an LTI system, exponential stability and contraction are equivalent, and the stability condition holds if and only if A is Hurwitz stable (i.e., all eigenvalues of A have strictly negative real part).

For most applications, the dynamical system usually involves an external input x(t) ∈ Rᵐ and an output y(t) ∈ Rᵖ, whose state-space representation takes the form

ż(t) = f(z(t), x(t)), y(t) = h(z(t), x(t)). (31)

Here we measure the robustness of the above system under input perturbation by the incremental L2-gain. That is, system (31) has an incremental L2-gain bound of γ if for any pair of inputs x1(·), x2(·) with ∫₀ᵀ ‖x1(t) − x2(t)‖²₂ dt < ∞ for all T > 0, and any initial conditions z1(0) and z2(0), the solutions of (31) exist and satisfy

∫₀ᵀ ‖y1(t) − y2(t)‖²₂ dt ≤ γ² ∫₀ᵀ ‖x1(t) − x2(t)‖²₂ dt + κ(z1(0), z2(0)) (32)

for some function κ(z1, z2) ≥ 0 with κ(z, z) = 0. Note that γ can be viewed as a Lipschitz bound of all the mappings defined by (31), with some initial condition, from the input signal x(·) to y(·). For any two constant inputs x1, x2, let z1, z2 and y1, y2 be the corresponding equilibria and steady-state outputs, respectively. From (32) we have

‖y1 − y2‖²₂ ≤ γ²‖x1 − x2‖²₂ + κ(z1, z2)/T,

which implies a Lipschitz bound of γ as T → ∞. A particular class of nonlinear systems that has strong connections to various neural networks is the so-called Lur'e system, which takes the form

ż(t) = Az(t) + Bφ(Cz(t)) (33)

where A, B, C are constant matrices of appropriate size, and φ is a static nonlinearity with sector bound [α, β]: for all solutions (v, w) with w = φ(v),

(w − αv)ᵀ(βv − w) ≥ 0 (34)

or equivalently [v; w]ᵀ Π [v; w] ≥ 0 with

Π = [−2αβI, (α + β)I; (α + β)I, −2I]. (35)

This implies that the origin is an equilibrium since φ(0) = 0. The above system can be viewed as a feedback interconnection of a linear system

G : { ż(t) = Az(t) + Bw(t), v(t) = Cz(t) } (36)

and a nonlinear memoryless component w(t) = φ(v(t)). The above linear system can also be described by a transfer function G(s) with s ∈ C. We refer to Hespanha (2018) for details about frequency-domain concepts and results for linear systems. The frequency-domain representation of the sector-bound condition (34) can be written as

[v̂(jω); ŵ(jω)]* Π [v̂(jω); ŵ(jω)] ≥ 0, ∀ω ∈ R (37)

where v̂(jω) and ŵ(jω) are the Fourier transforms of v and w, respectively, and (·)* denotes the complex conjugate transpose. Then, the closed-loop stability of the feedback interconnection can be verified by the Integral Quadratic Constraint (IQC) theorem (Megretski & Rantzer, 1997). Although the IQC framework allows for more general dynamic multipliers, here we only focus on the simple constant multiplier defined in (35). Theorem 3.
Let G be stable and φ be a static nonlinearity with sector bound [α, β]. The feedback interconnection of G and φ is stable if there exists ε > 0 such that

[G(jω); I]* Π [G(jω); I] ⪯ −εI, ∀ω ∈ R. (38)

The Kalman-Yakubovich-Popov (KYP) lemma (Rantzer, 1996) can be applied to demonstrate the equivalence of condition (38) in Theorem 3 to an LMI condition. The result is stated as follows. Theorem 4. There exists an ε > 0 such that (38) holds if and only if there exists a matrix P = Pᵀ such that

[AᵀP + PA, PB; BᵀP, 0] + [Cᵀ, 0; 0, I] Π [C, 0; 0, I] ≺ 0." }, { "heading": "D LBEN PARAMETERIZATION FOR FEEDFORWARD NETWORKS", "text": "Given an equilibrium network (1) with weights U, W, and Wo, we can estimate its Lipschitz bound γ by solving the following SDP with (n + 1) decision variables:

min_{γ>0, Λ∈D+} γ s.t. [2Λ − ΛW − WᵀΛ, −ΛU, Wᵀo; −UᵀΛ, γI, 0; Wo, 0, γI] ⪰ 0. (39)

Note that the above LMI constraint is equivalent to (4) via the Schur complement. A tight upper bound is then obtained by minimizing γ. When a deep neural network (a special case of an equilibrium network) is considered, the above SDP yields the same bound estimate as LipSDP-Neuron in Fazlyab et al. (2019), since both formulations involve minimizing the gain bound γ subject to an equivalent constraint (41).

Training a feedforward network with a prescribed Lipschitz bound is a challenging problem due to the LMI constraint (39) as well as the sparse structure of W. Following a similar idea of direct parameterization, we construct a parameterization built on (9) to represent the following weight:

W = the block matrix with W1, ..., WL−1 on the first block subdiagonal and zeros elsewhere. (40)

We first look at a simple case where W is a dense strictly lower triangular matrix. Given a square matrix H, its LDU partition is defined as H = [H]D + [H]L + [H]U, where [H]D is a diagonal matrix and [H]L ([H]U) is a strictly lower (upper) triangular matrix. Given any hyper-parameter γ > 0, the parameterization contains the following free variables: V ∈ Rⁿˣⁿ, Wo ∈ Rᵖˣⁿ, and Û ∈ Rⁿˣᵈ. Let S = [H]L − [H]ᵀL, Ψ = [H]⁻¹_D and U = ΨÛ, where H = VᵀV + I + (WᵀoWo + ÛÛᵀ)/(2γ). Then the LBEN parameterization (9) yields

W = I − Ψ( (1/2γ)WᵀoWo + (1/2γ)Ψ⁻¹UUᵀΨ⁻¹ + VᵀV + I + S ) = −2[H]⁻¹_D [H]L,

which is a dense, strictly lower triangular matrix. To impose a sparsity pattern like (40), we need H to be block tridiagonal:

H = [Λ1, Hᵀ1; H1, Λ2, Hᵀ2; H2, Λ3, Hᵀ3; ...; HL−2, ΛL−1, HᵀL−1; HL−1, ΛL],

where Λi belongs to D+ for 1 ≤ i ≤ L, and Hj has the same dimensions as Wj for 1 ≤ j ≤ L − 1. To make VᵀV have the same band structure as H, we further parameterize V as a block lower bidiagonal matrix:

V = [Γ1; Φ1V1, Γ2; ...; ΦL−1VL−1, ΓL],

where Γi, Φj ∈ D+ and VᵀjVj = I. The orthogonal matrix Vj can be parameterized as Vj = e^{Sj} with Sᵀj = −Sj. The diagonal blocks of VᵀV are Γ²ᵢ + Φ²ᵢ with ΦL = 0, while the lower off-diagonal blocks are Γj+1ΦjVj for 1 ≤ j ≤ L − 1. Similar techniques can be applied to the parameterization of Wo and Û."
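The following numpy sketch illustrates the dense lower-triangular construction above: arbitrary free variables are mapped to weights that satisfy the Lipschitz LMI by construction (a toy check, taking Λ = Ψ⁻¹ = [H]_D as in the proof of Proposition 7; all concrete sizes are hypothetical):

```python
import numpy as np

def lben_dense_lower(V, Wo, U_hat, gamma):
    # Map free variables (V, Wo, U_hat) to (W, U, Lambda) such that
    # 2*Lam - Lam@W - W.T@Lam - (Wo.T@Wo + Lam@U@U.T@Lam)/gamma > 0.
    n = V.shape[1]
    H = V.T @ V + np.eye(n) + (Wo.T @ Wo + U_hat @ U_hat.T) / (2.0 * gamma)
    D = np.diag(np.diag(H))                  # [H]_D
    L = np.tril(H, k=-1)                     # [H]_L
    W = -2.0 * np.linalg.solve(D, L)         # strictly lower triangular
    Lam = D                                  # Lambda = Psi^{-1} = [H]_D
    U = np.linalg.solve(Lam, U_hat)          # U = Psi @ U_hat
    return W, U, Lam

rng = np.random.default_rng(2)
n, d, p, gamma = 6, 3, 2, 5.0
V = rng.standard_normal((n, n))
Wo = rng.standard_normal((p, n))
W, U, Lam = lben_dense_lower(V, Wo, rng.standard_normal((n, d)), gamma)
M = 2*Lam - Lam@W - W.T@Lam - (Wo.T@Wo + Lam@U@U.T@Lam) / gamma
print(np.linalg.eigvalsh(0.5*(M + M.T)).min() > 0)   # True: M = 2 V^T V + 2 I
```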
 }, { "heading": "E PROOFS", "text": "" }, { "heading": "E.1 PROOF OF THEOREM 1", "text": "We present two proofs of the well-posedness of the equilibrium network (1). Both proofs are based on the following lemma. Lemma 1 (Simpson-Porco & Bullo (2014)). For a time-invariant contracting dynamical system, all solutions converge to a unique equilibrium.

(Monotone operator perspective): This proof is mainly based on Proposition 2, which states that the solution of (1) is also a zero of the operator splitting problem 0 ∈ (A + B)(z), where the operators A and B are given in (10). Condition 1 implies that the operator A is strongly monotone, while Assumption 1 implies that the operator B is maximal monotone. Furthermore, the Cayley operator CA is contractive and CB is non-expansive. Thus, applying the Peaceman-Rachford algorithm to 0 ∈ (A + B)(z) yields a contracting discrete-time system (24), since CACB is a contractive operator. Since (24) is time-invariant, it yields a unique solution z for any x and bz.

(Neural ODE perspective): This proof is built on Proposition 6, which states that the neural ODE (14) is a contracting continuous-time dynamical system under Assumption 1 and Condition 1. For any fixed input x and bz, system (14) is also time-invariant and hence its solution converges to a unique equilibrium, which is also the solution of (1).

We now prove the Lipschitz boundedness of a well-posed equilibrium network. Condition 1 implies that there exists a constant ε > 0 such that

2Λ − ΛW − WᵀΛ ⪰ εI.

For any δ ∈ (0, ε) and weights Wo, U, we can find a sufficiently large but finite γ such that

(1/γ)(WᵀoWo + ΛUUᵀΛ) ⪯ (ε − δ)I.

Then Condition 2 holds for Λ and γ since

2Λ − ΛW − WᵀΛ − (1/γ)(WᵀoWo + ΛUUᵀΛ) ⪰ δI ≻ 0.

From Theorem 2, γ is a Lipschitz bound for the well-posed equilibrium network (1)." }, { "heading": "E.2 PROOF OF THEOREM 2", "text": "Rearranging Eq. (4) yields

2Λ − ΛW − WᵀΛ ⪰ (1/γ)(WᵀoWo + ΛUUᵀΛ) ⪰ 0.

The well-posedness of the equilibrium network (1) then follows by Theorem 1. To obtain the Lipschitz bound, we first apply the Schur complement to (4):

[2Λ − ΛW − WᵀΛ − (1/γ)WᵀoWo, −ΛU; −UᵀΛ, γI] ⪰ 0.

Left-multiplying by [∆ᵀz ∆ᵀx] and right-multiplying by [∆ᵀz ∆ᵀx]ᵀ gives

2∆ᵀzΛ∆z − 2∆ᵀzΛW∆z − (1/γ)∆ᵀzWᵀoWo∆z − 2∆ᵀzΛU∆x + γ‖∆x‖²₂ ≥ 0.

Since (5) implies ∆v = W∆z + U∆x and ∆y = Wo∆z, the above inequality is equivalent to

γ‖∆x‖²₂ − (1/γ)‖∆y‖²₂ ≥ 2∆ᵀzΛ∆v − 2∆ᵀzΛ∆z = 2〈∆v − ∆z, ∆z〉Λ. (41)

Then the Lipschitz bound of γ for the equilibrium network (1) follows by (6)." }, { "heading": "E.3 PROOF OF PROPOSITION 1", "text": "(if): It is well known that if f is a convex closed proper function, then prox¹f is monotone and non-expansive, i.e., it is slope-restricted in [0, 1]. Here f is not necessarily closed, as dom f (i.e., the range of σ) could be an open interval (zl, zr) or a half-open interval (zl, zr] or [zl, zr). This can be resolved by defining f̂ as the restriction of f to the closed interval [ẑl, ẑr], and then letting ẑl → zl and ẑr → zr.

(only if): Assumption 1 implies that σ is a non-decreasing and piecewise differentiable function on R. Then the range of σ is an interval, denoted by Z. We will construct the derivative function f′ on Z first and then integrate it to obtain f. Let {zj ∈ Z}j∈Z be the sequence containing all points such that either σ′(x−) = 0 or σ′(x+) = 0 for all x ∈ σ⁻¹(zj). Note that σ⁻¹(z) is a singleton for all z ∈ (zj, zj+1), whereas σ⁻¹(zj) is a closed interval of the form (−∞, xr], [xl, xr] or [xl, ∞). Then we define f′ as follows:
Finally, the definition of f ′ implies that 0 ∈ z − σ−1(z) + ∂f(z), which implies that z = σ(x) is the unique minimizer of 1/2(z − x)2 + f(z). Furthermore, since σ is well-defined, we can conclude that f is bounded from below. We also provide a list of f for common activation functions in Table 3. A similar list can also be found in Li et al. (2019)." }, { "heading": "E.4 PROOF OF PROPOSITION 2", "text": "Similar to Winston & Kolter (2020), we first show that the solution of (1), if it exists, is an fixed point of the forward-backward iteration (23) with α = 1:\nzk+1 = RB(z k − αAzk) = prox1f (zk − α(I −W )zk + α(Ux+ bz)) = σ(Wzk + Ux+ bz).\nThe last equality follows by\nσ(x) = arg minz1 1 2 (z1 − x1)2 + f(z1)\n... arg minzn 1 2 (zn − xn)2 + f(zn)\n = arg min z 1 2 ‖z − x‖2Λ + n∑ i=1 λif(zi) = prox 1 f (x).\nNote that the necessary condition for σ(·) to be diagonal is that the weight Λ is positive diagonal. Now we prove the well-posedness of LBEN by showing that the operator splitting problem 0 ∈ (A+B)(z) has a unique solution for any x and bz . Both Condition 1 and 2 implies that the operator A is strongly monotone and its Cayley operator CA is contractive. Then, the Peaceman-Rachford iteration (24) is contracting and hence it converges to a unique fixed point." }, { "heading": "E.5 PROOF OF PROPOSITION 3", "text": "The matrix J is diagonal with elements in [0, 1]. Decompose Λ = Π(J+µI) for some small µ > 0, i.e. Π = Λ(J + µI)−1, which is diagonal and positive-definite. By denoting H = Π(I −W ) + (I −W )TΠ we obtain the following inequality from (3):\nΠJ(I −W ) + (I −W )TJΠ + µH I, which can be rearranged as\nΠ(I − JW ) + (I − JW )TΠ I + 2Π(I − J)− µH. Since 2Π(I − J) 0, we can choose a sufficiently small µ such that\nΠ(I − JW ) + (I − JW )TΠ 0, which further implies that I − JW is strongly monotone w.r.t. Π-weighted inner product, and is therefore invertible." }, { "heading": "E.6 PROOF OF PROPOSITION 4", "text": "First, we show that (12) is strongly convex. Since f(z) is a conic combination of convex functions f(zi), we only need to show that the quadratic term is strongly convex, i.e.,\n∇2J = Λ(I −W ) + (I −W )>Λ 0 where\nJ(z) =\n〈 1\n2 (I −W )z − Ux− bz, z 〉 Λ\nwhich follows by either Condition 1 or (2). Moreover, since S = 0 for the direction parameterization of W , we have Λ(I −W ) = (I −W )>Λ and hence ∂J = A. Then, finding the global minimizer of the strongly convex optimization problem (12) is equivalent to finding a zero for the operator splitting problem 0 ∈ ∂(J + f)(z) = (A+B)(z)." }, { "heading": "E.7 PROOF OF PROPOSITION 5", "text": "The proof is based on the “key insights” of ReLU activation from Raghunathan et al. (2018b). That is, a ReLU constraint z = max(x, 0) is equivalent to the following three linear and quadratic constraints between z and x: (i) z(z − x) = 0, (ii) z ≥ x, and (iii) z ≥ 0. From this observation an equilibrium network (1) can be equivalently expressed as the following constraints (I) z>(z−q) = 0, (II) z ≥ q, and (III) z ≥ 0, where q = Wz + Ux+ b. Note that (II) and (III) can be rewritten as the linear constraints in the QP problem (13) while (I) is equivalent to J(z) = 0 with\nJ(z) := z>Λ(z − q) = 1 2 z>Hz + p>z\nfor any Λ ∈ D+. It is obvious that J(z) ≥ 0 for all z satisfying (II) and (III), and hence the solution of (1) is a global minimizer of the QP problem (13). If Λ satisfies either Condition 1 or 2, then H is positive-definite and(13) is a strongly convex QP problem. Thus, its global minimizer is unique, which is also the solution of LBEN (1)." 
}, { "heading": "E.8 PROOF OF PROPOSITION 6", "text": "From (14) the dynamics of ∆v and ∆z can be formulated as a feedback interconnection of a linear system ∆̇v = −∆v + W∆z and a static nonlinearity ∆z = σ(va) − σ(vb). The linear system can be represented by a transfer function is G(s) = 1/(s + 1)W . The nonlinear component can be rewritten as ∆z = Φ(va, vb)∆v where Φ as a diagonal matrix with each Φii ∈ [0, 1]. For the nonlinear component Φ, its input and output signals satisfies the quadratic constraint (6). For the linear system G, we have the following lemma. Lemma 2. If Condition 1 holds, then for all ω ∈ {R ∪∞}[\nG(jω) I ]∗ [ 0 Λ Λ −2Λ ] [ G(jω) I ] ≺ 0. (42)\nThe KYP Lemma (Theorem 4) states that (42) is equivalent to the existence of a P = P> such that[ −2P PW WTP 0 ] + [ 0 Λ Λ −2Λ ] ≺ 0.\nIt is clear from the upper-left block that P 0. The above inequality also implies 2〈−∆v +W∆z,∆v〉P ≤ 〈∆z −∆v,∆z〉Λ − (‖∆z‖22 + ‖∆v‖22) ≤ − (‖∆z‖22 + ‖∆v‖22) for some > 0. The contraction property of the neural ODE (14 follows since d\ndt ‖∆v‖2P = 2〈−∆v +W∆z,∆v〉P ≤ − (‖∆z‖22 + ‖∆v‖22) ≤ −2β‖∆v‖2P\nfor some sufficiently small β > 0. As a byproduct of the above inequality, we will show that the operator −f with with f(v) = −v +Wσ(v) + Ux+ bz is strictly monotone w.r.t. the P -weighted inner product since\n〈−f(va) + f(vb), va − vb〉P = 〈∆v −W∆z,∆v〉P ≥ β‖∆v‖2P ." }, { "heading": "E.9 PROOF OF LEMMA 2", "text": "Note that (42) is equivalent to 2Λ−G0(jω)ΛW −G0(−jω)WTΛ µI (43) where G0(jω) = 11+jω . For some ω ∈ (R ∪∞) let g = <G0(jω) = <G0(−jω), where < denotes real part. It is easy to verify that g = 1/(ω2 + 1) ∈ [0, 1]. From (3) we have\n2gΛ− gΛW − gWTΛ g I for some > 0. Rearranging the above inequality yields\n2Λ− gΛW − gWTΛ g I + (1− g)2Λ Now, since g ∈ [0, 1] the right-hand-side is a convex combination of two positive definite matrices: I and 2Λ, therefore (43) holds for some µ > 0 and all ω ∈ (R ∪∞)." }, { "heading": "E.10 PROOF OF PROPOSITION 7", "text": "It is straightforward to verify that an equilibrium network with the following weights is identical to the feedforward network (15):\nz = z1 z2 ... zL , W = 0 W1 . . . ... . . . 0\n0 · · · WL−1 0\n , U = U0 0 ... 0 , Wo = [0 · · · 0 WL] . (44) To construct an LBEN parameterization in the form (8) for W , we first need the following lemma. Lemma 3. Condition 1 holds for any strictly lower triangular W .\nProof. We prove it by showing that for any δ > 0, there exists a Λ ∈ D+ such that H(Λn,Wn) := Λn(I −Wn) + (I −Wn)>Λn 22−nδI. (45) where Λn,Wn are the upper left n × n elements of Λ,W , respectively. For n = 1, λ1 > δ is sufficient since W1 = 0. Assuming that (45) holds for Λn and Wn, then we have\nH(Λn+1,Wn+1)− 21−nδI = [ H(Λn,Wn)− 21−nδI −Λnw>n+1\n−wn+1Λn 2(λn+1 − 2−nδ)\n] , (46)\nwhere Λn+1 = diag(Λn, λn+1) and Wn+1 = [ [Wn 0 ] 0 wn+1 0 ] . By applying Schur complement to (46), Inequality (45) holds for the case of n+ 1 if λn+1 > 2−nδ + 2n−2|Λnwn+1|2/δ.\nBased on the above lemma, we can construct a V such that V >V = 1/2[Λ(I−W )+(I−W )>Λ]− I where = 21−nδ. By choosing Ψ = Λ−1 and S = (ΛW −W>Λ)/2, the LBEN parameterization (8) recovers the exact W . Thus, LBEN contains all feedforward networks (44).\nWe note that “skip connections” as in a residual network can easily be added to the above structure via additional non-zero blocks in the lower-left part of the weight W ." }, { "heading": "E.11 PROOF OF PROPOSITION 8", "text": "From the MON parameterization (16) we have\nH(m,W ) := 2(1−m)I −W −W> = 2A>A 0. 
Let Wm be the set of non-zero, strictly lower triangular W such that H(m, W) ⪰ 0. Note that Wm1 ⊂ Wm2 if m1 > m2, because H(m1, W) ⪰ 0 implies H(m2, W) = H(m1, W) + 2(m1 − m2)I ⪰ 0 for all m2 < m1. Proposition 8 follows if lim_{m→0} Wm does not contain all strictly lower triangular W. Since W is strictly lower triangular, the diagonal elements of H(0, W) are all equal to 2. As the norm of W increases, H(0, W) becomes indefinite. Taking the feedforward network (44) with L = 2 as an example, the set W0 is characterized by W1Wᵀ1 ⪯ 4I since

H(0, W) = [2I, −Wᵀ1; −W1, 2I] ⪰ 0.

Now we show that Wm = ∅ for all m ≥ 1. Since the diagonal elements of H(m, W) are non-positive when m ≥ 1, the matrix H(m, W) is not positive semidefinite for any non-zero strictly lower triangular W." }, { "heading": "F TRAINING DETAILS", "text": "" }, { "heading": "F.1 MNIST EXAMPLE", "text": "This section contains the model structures and the details of the training procedure used for the MNIST examples. All models are trained using the ADAM optimizer Kingma & Ba (2015) with an initial learning rate of 1 × 10⁻³. All models are trained for 40 epochs, and the learning rate is reduced by a factor of 10 every 10 epochs.

The models in the MNIST example are all fully connected models with 80 hidden neurons and ReLU activations. For the equilibrium models, the forward and backward passes are performed using the Peaceman-Rachford iteration scheme with = 1 and a tolerance of 1 × 10⁻². When evaluating the models, we decrease the tolerance of the splitting method to 1 × 10⁻⁴. We use the same α tuning procedure as Winston & Kolter (2020). All models were trained from the same initial point. Note that for LBEN, this requires initializing the metric Λ = I.

The feed-forward models trained using Lipschitz margin training were trained using the original authors' code, which can be found at https://github.com/ytsmiling/lmt." }, { "heading": "F.2 CIFAR-10 EXAMPLE", "text": "This section contains the model structures and the details of the training procedure used for the CIFAR-10 examples. All models are trained using the ADAM optimizer Kingma & Ba (2015) with an initial learning rate of 1 × 10⁻³. The models were trained for 25 epochs, and the learning rate was reduced by a factor of 10 after 15 epochs. Each model contains a single convolutional layer, an average pooling layer with kernel size 2, and a linear output layer.

The convolutional LBEN has 81 channels and is parametrized as discussed below. The MON similarly has 81 channels. Unless otherwise stated, the feed-forward convolutional network has 162 channels, which gives it approximately the same number of parameters as the LBEN.

The MON was evaluated using the Peaceman-Rachford iteration scheme." }, { "heading": "CONVOLUTIONAL LBEN", "text": "Following the approach of Winston & Kolter (2020), we parametrize U and V in equation 9 via convolutions. The skew-symmetric matrix is constructed by taking the skew-symmetric part of a convolution S̄, so that S = ½(S̄ − S̄ᵀ). Similarly to Winston & Kolter (2020), we find that using a weight-normalized parametrization improves performance. Specifically, we use the following parametrization: V = √α V̂/|V̂|, S̄ = β Ŝ/|Ŝ|, U = √η Û/|Û| and Wo = √ξ Ŵo/|Ŵo|.

In Winston & Kolter (2020), Peaceman-Rachford is used and the operator I − W can be quickly inverted using the fast Fourier transform. This situation is more complicated in our case, as the term WᵀoutWout cannot be represented as a strict convolution and is not diagonalized by the Fourier matrix. Instead, we apply the forward-backward splitting algorithm shown in equation 23, which does not require a matrix inversion.
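A minimal numpy sketch of the weight-normalized parametrization above, interpreting |·| as the Frobenius norm of the raw parameter tensor (an assumption; the raw tensors and scales here are hypothetical placeholders):

```python
import numpy as np

def weight_normalize(V_hat, S_hat, U_hat, Wo_hat, alpha, beta, eta, xi):
    # Rescale each raw parameter tensor to a learned norm, then take the
    # skew-symmetric part of S_bar as in the construction above.
    nrm = np.linalg.norm
    V = np.sqrt(alpha) * V_hat / nrm(V_hat)
    S_bar = beta * S_hat / nrm(S_hat)
    S = 0.5 * (S_bar - S_bar.T)          # skew-symmetric part
    U = np.sqrt(eta) * U_hat / nrm(U_hat)
    Wo = np.sqrt(xi) * Wo_hat / nrm(Wo_hat)
    return V, S, U, Wo
```

Decoupling the direction of each tensor from its learned scale in this way tends to make the scales α, β, η, ξ easy to optimize directly.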
We have observed that the rate of convergence of the forward-backward splitting algorithm is highly dependent on the monotonicity parameter m. In particular, for the convolutional models, we found there was a strong trade-off between the ease of solving for the equilibrium on the one hand, and the model expressibility and the accuracy of the Lipschitz bound on the other." } ]
2020
null
SP:d7c00cd82b5d4cd035635e74b8525cf5603d305b
[ "In this paper, the authors proposed a new constrained policy optimization algorithm and a worst-case version of the constrained MDP framework. Tho proposed constrained policy optimization algorithm is based on CPO, and a novel advantage function (CSAE) based on the concept of a \"safe\" state. Experiments in control simulation tasks are provided.", "The authors propose to improve a safe RL algorithm, constrained policy optimizaiton, that can learn the optimal safe policy while exploring unsafe states less often during the training process. In particular, they dampen the estimated advantage associated with unsafe states, which encourages the RL algorithm to explore safe states more often during the learning process. In addition, the authors aim to find a policy that satisfies the constraints with high probability, rather than only in expectation, by considering the worst-case constraints. The empirical results show that a safe RL algo that dampens the advantage and respects worst-case constraints are able to learn policies with large returns and avoid unsafe states." ]
Reinforcement Learning (RL) with safety guarantee is critical for agents performing tasks in risky environments. Recent safe RL algorithms, developed based on Constrained Markov Decision Process (CMDP), mostly take the safety requirement as additional constraints when learning to maximize the return. However, they usually make unnecessary compromises in return for safety and only learn sub-optimal policies, due to the inability of differentiating safe and unsafe state-actions with high rewards. To address this, we propose Cost-sensitive Advantage Estimation (CSAE), which is simple to deploy for policy optimization and effective for guiding the agents to avoid unsafe state-actions by penalizing their advantage value properly. Moreover, for stronger safety guarantees, we develop a Worst-case Constrained Markov Decision Process (WCMDP) method to augment CMDP by constraining the worst-case safety cost instead of the average one. With CSAE and WCMDP, we develop new safe RL algorithms with theoretical justifications on their benefits for safety and performance of the obtained policies. Extensive experiments clearly demonstrate the superiority of our algorithms in learning safer and better agents under multiple settings.
[]
[ { "authors": [ "Joshua Achiam", "David Held", "Aviv Tamar", "Pieter Abbeel" ], "title": "Constrained policy optimization", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Eitan Altman" ], "title": "Constrained Markov decision processes, volume 7", "venue": "CRC Press,", "year": 1999 }, { "authors": [ "Felix Berkenkamp", "Matteo Turchetta", "Angela Schoellig", "Andreas Krause" ], "title": "Safe model-based reinforcement learning with stability guarantees", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Richard Cheng", "Gábor Orosz", "Richard M Murray", "Joel W Burdick" ], "title": "End-to-end safe reinforcement learning through barrier functions for safety-critical continuous control", "venue": null, "year": 1903 }, { "authors": [ "Yinlam Chow", "Mohammad Ghavamzadeh" ], "title": "Algorithms for cvar optimization in mdps", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Yinlam Chow", "Ofir Nachum", "Edgar Duenez-Guzman", "Mohammad Ghavamzadeh" ], "title": "A lyapunovbased approach to safe reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Gal Dalal", "Krishnamurthy Dvijotham", "Matej Vecerik", "Todd Hester", "Cosmin Paduraru", "Yuval Tassa" ], "title": "Safe exploration in continuous action spaces", "venue": "arXiv preprint arXiv:1801.08757,", "year": 2018 }, { "authors": [ "Yan Duan", "Xi Chen", "Rein Houthooft", "John Schulman", "Pieter Abbeel" ], "title": "Benchmarking deep reinforcement learning for continuous control", "venue": "Proceedings of The 33rd International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Javier Garcıa", "Fernando Fernández" ], "title": "A comprehensive survey on safe reinforcement learning", "venue": "Journal of Machine Learning Research,", "year": 2015 }, { "authors": [ "Javier Garcıa", "Daniel Acera", "Fernando Fernández" ], "title": "Safe reinforcement learning through probabilistic policy reuse", "venue": "RLDM", "year": 2013 }, { "authors": [ "Chris Gaskett" ], "title": "Reinforcement learning under circumstances beyond its control", "venue": null, "year": 2003 }, { "authors": [ "Peter Geibel", "Fritz Wysotzki" ], "title": "Risk-sensitive reinforcement learning applied to control under constraints", "venue": "Journal of Artificial Intelligence Research,", "year": 2005 }, { "authors": [ "Alexander Hans", "Daniel Schneegaß", "Anton Maximilian Schäfer", "Steffen Udluft" ], "title": "Safe exploration for reinforcement learning", "venue": "In ESANN, pp", "year": 2008 }, { "authors": [ "Matthias Heger" ], "title": "Consideration of risk in reinforcement learning", "venue": "In Machine Learning Proceedings", "year": 1994 }, { "authors": [ "Oliver Mihatsch", "Ralph Neuneier" ], "title": "Risk-sensitive reinforcement learning", "venue": "Machine learning,", "year": 2002 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A. Rusu", "Joel Veness", "Marc G. Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K. 
Fidjeland", "Georg Ostrovski", "Stig Petersen", "Charles Beattie", "Amir Sadik", "Ioannis Antonoglou", "Helen King", "Dharshan Kumaran", "Daan Wierstra", "Shane Legg", "Demis Hassabis" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "Martin Pecka", "Tomas Svoboda" ], "title": "Safe exploration techniques for reinforcement learning–an overview", "venue": "In International Workshop on Modelling and Simulation for Autonomous Systems,", "year": 2014 }, { "authors": [ "LA Prashanth" ], "title": "Policy gradients for cvar-constrained mdps", "venue": "In International Conference on Algorithmic Learning Theory,", "year": 2014 }, { "authors": [ "John Schulman", "Sergey Levine", "Pieter Abbeel", "Michael Jordan", "Philipp Moritz" ], "title": "Trust region policy optimization", "venue": "Proceedings of the 32nd International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "John Schulman", "Sergey Levine", "Pieter Abbeel", "Michael Jordan", "Philipp Moritz" ], "title": "Trust region policy optimization", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "John Schulman", "Philipp Moritz", "Sergey Levine", "Michael Jordan", "Pieter Abbeel" ], "title": "Highdimensional continuous control using generalized advantage estimation", "venue": "arXiv preprint arXiv:1506.02438,", "year": 2015 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "David Silver", "Aja Huang", "Chris J. Maddison", "Arthur Guez", "Laurent Sifre", "George van den Driessche", "Julian Schrittwieser", "Ioannis Antonoglou", "Vedavyas Panneershelvam", "Marc Lanctot", "Sander Dieleman", "Dominik Grewe", "John Nham", "Nal Kalchbrenner", "Ilya Sutskever", "Timothy P. Lillicrap", "Madeleine Leach", "Koray Kavukcuoglu", "Thore Graepel", "Demis Hassabis" ], "title": "Mastering the game of go with deep neural networks and tree", "venue": "search. Nature,", "year": 2016 }, { "authors": [ "David Silver", "Julian Schrittwieser", "Karen Simonyan", "Ioannis Antonoglou", "Aja Huang", "Arthur Guez", "Thomas Hubert", "Lucas Baker", "Matthew Lai", "Adrian Bolton", "Yutian Chen", "Timothy Lillicrap", "Fan Hui", "Laurent Sifre", "George van den Driessche", "Thore Graepel", "Demis Hassabis" ], "title": "Mastering the game of go without human knowledge", "venue": "URL http://dx. doi.org/10.1038/nature24270", "year": 2017 }, { "authors": [ "Richard S Sutton", "Andrew G Barto" ], "title": "Introduction to reinforcement learning, volume 135", "venue": "MIT press Cambridge,", "year": 1998 }, { "authors": [ "Aviv Tamar", "Yonatan Glassner", "Shie Mannor" ], "title": "Optimizing the cvar via sampling", "venue": "In Twenty-Ninth AAAI Conference on Artificial Intelligence,", "year": 2015 }, { "authors": [ "Chen Tessler", "Daniel J Mankowitz", "Shie Mannor" ], "title": "Reward constrained policy optimization", "venue": "arXiv preprint arXiv:1805.11074,", "year": 2018 }, { "authors": [ "Emanuel Todorov", "Tom Erez", "Yuval Tassa" ], "title": "Mujoco: A physics engine for model-based control", "venue": "In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems,", "year": 2012 } ]
[ { "heading": "1 INTRODUCTION", "text": "In recent years, Reinforcement Learning (RL) has achieved remarkable success in learning skillful AI agents in various applications ranging from robot locomotion (Schulman et al., 2015a; Duan et al., 2016; Schulman et al., 2015c), video games (Mnih et al., 2015) and the game of Go (Silver et al., 2016; 2017). These agents are either trained in simulation or in risk-free environments, and the deployed RL algorithms can focus on maximizing the cumulative return by exploring the environment arbitrarily. However, this is barely workable for real-world RL problems where the safety of the agent is important. For example, a navigating robot cannot take the action of crashing into a front obstacle even if the potential return on reaching the target faster is higher. Actually, in reality, some states or actions might be unsafe and harmful to the system, and the agent should learn to avoid them in deployment when performing certain tasks. Conventional RL algorithms do not particularly consider such safety-constrained environments, which limits their practical application.\nRecently, Safe Reinforcement Learning (Garcıa & Fernández, 2015; Mihatsch & Neuneier, 2002; Altman, 1999) has been proposed and drawn increasing attention. Existing safe RL algorithms generally fall into two categories based on whether or not the agents are required to always stay safe during learning and exploration. The algorithms with exploration safety (Dalal et al., 2018; Pecka & Svoboda, 2014) insist that safety constraints never be violated even during learning, and thus they usually require certain prior knowledge of the environment to be available, e.g., in the form of human demonstrations. Comparatively, deployment safety (Achiam et al., 2017; Chow et al., 2018) RL algorithms train the agents from interaction with the environment and allow safety constraints violation during learning to some extent. This is reasonable since whether a state is safe will not be clear until the agent visits that state. Since human demonstrations are too difficult or expensive to collect in some cases and may not cover the whole state space, we focus on deployment safety in this work.\nRL problems with deployment safety are typically formulated as Constrained Markov Decision Process (CMDP) (Altman, 1999) that extends MDP by requiring the agent to satisfy cumulative cost constraints in expectation in the meanwhile of maximizing the expected return. Leveraging the\nsuccess of recent deep learning powered policy optimization methods (Schulman et al., 2015b), Constrained Policy Optimization (CPO) (Achiam et al., 2017) makes the first attempt on highdimensional control tasks in continuous CMDPs. However, CPO only considers the total cost of a trajectory of a sequence of state-action pairs during policy optimization. It does not differentiate the safe state-action pairs from the unsafe ones in the trajectories. Due to such incapability of exploiting the intrinsic structure of environments and trajectories, CPO sacrifices too much on the expected return for learning the safety policy.\nIn this work, we propose Cost-sensitive Advantage Estimation (CSAE) which generalizes the conventional advantage estimation for safe RL problems by differentiating safe and unsafe states, based on the cost information returned by the environment during training. CSAE depresses the advantage value of unsafe state-action pairs but controls effects upon their adjacent safe state-actions in the trajectories. 
Thus, the learned policy can maximally gain rewards from the safe states. Based on CSAE, we develop a new safe RL algorithm with provable monotonic policy performance improvement in terms of both safety and return from safe states, showing superiority over other safe RL algorithms. Moreover, to further enhance the agent's ability to enforce safety constraints, we propose the Worst-case Constrained Markov Decision Process (WCMDP), an extension of CMDP that constrains the cumulative cost in the worst cases through the Conditional Value-at-Risk (Tamar et al., 2015), instead of in expectation. This augmentation makes the learned policy not only safer but also better, both experimentally and theoretically.

With CSAE and WCMDP, we develop a new safe RL algorithm by relating them to trust region methods. We conduct extensive experiments to evaluate our algorithm on several constrained robot locomotion tasks based on Mujoco (Todorov et al., 2012), and compare it with well-established baselines. The results demonstrate that the agent trained by our algorithm can collect a higher reward, while satisfying the safety constraints with less cost." }, { "heading": "2 RELATED WORK", "text": "Safe Reinforcement Learning has drawn growing attention. There are various definitions of 'safety' in RL (Garcıa & Fernández, 2015; Pecka & Svoboda, 2014), e.g., the variance of return (Heger, 1994; Gaskett, 2003), fatal transitions (Hans et al., 2008) and unknown states (Garcıa et al., 2013). In this paper, we focus on RL problems with trajectory-based safety cost, under the constrained MDP (CMDP) framework. Through the Lagrangian method, Geibel & Wysotzki (2005) propose to convert CMDP into an unconstrained problem of maximizing the expected return with a cost penalty. Though such a problem can be easily solved with well-designed RL algorithms, e.g., (Schulman et al., 2015b; 2017), the trade-off between return and cost is manually balanced with a fixed Lagrange multiplier, which cannot guarantee safety during learning. To address this, inspired by trust region methods (Schulman et al., 2015b), Constrained Policy Optimization (CPO) (Achiam et al., 2017) establishes a linear approximation to the safety constraint and solves the corresponding optimization problem in the dual form. Compared with previous CMDP algorithms, CPO scales well to high-dimensional continuous state-action spaces. However, CPO does not distinguish the safe states from the unsafe ones in the training process, limiting its performance in the return.

Besides developing various optimization algorithms, some recent works also explore other approaches to enhance the safety constraints, e.g., adopting the Conditional Value-at-Risk (CVaR) of the cumulative cost as the safety constraint (Tamar et al., 2015). Along this direction, Tamar et al. (2015) develop a gradient estimator through sampling to optimize CVaR with gradient descent. Prashanth (2014) further applies this estimator to CVaR-Constrained MDP to solve the stochastic shortest path (SSP) problem.

Our work considers a similar framework to CPO (Achiam et al., 2017), but it treats states differently by extending Generalized Advantage Estimation (Schulman et al., 2015c) to be safety-sensitive. Our proposed CSAE can boost the policy performance in terms of the return while ensuring the safety property.
Moreover, our algorithm with WCMDP is safer than CPO in terms of the constraint violation ratio during learning.

There are also some non-CMDP based algorithms for safe RL that are not in the scope of this work. In (Dalal et al., 2018), a linear safety-signal model is built to estimate per-step cost from state-action pairs and rectify the action into a safe one. However, this method requires a pre-collected dataset to fit the linear cost estimation model, which limits its application. Similarly, Cheng et al. (2019) augment the model-free controller to enforce safety per step by designing a model-based controller with control barrier functions (CBFs). Some works introduce Lyapunov functions to build safe RL algorithms. For example, Berkenkamp et al. (2017) apply Lyapunov functions for safely recovering from exploratory actions, while Chow et al. (2018) construct Lyapunov functions that explicitly model constraints." }, { "heading": "3 PRELIMINARIES", "text": "A standard Markov Decision Process (MDP) (Sutton et al., 1998) is defined by a tuple (S, A, P, R, γ, µ), where S and A denote the sets of states and actions respectively, P : S × A × S → [0, 1] is the transition dynamics modeling the probability of transitioning from state s to s′ after taking action a, R(s, a, s′) represents the reward function for this transition, γ ∈ [0, 1] is the discount factor, and µ : S → [0, 1] denotes the starting state distribution. An MDP agent is usually equipped with a policy π(a|s), which denotes the probability distribution over actions a given a state s. The performance of a policy π is measured by the expected discounted total reward J(π) = E_{τ∼π, s0∼µ}[Σ_{t=0}^∞ γᵗR(st, at, st+1)], where τ = (s0, a0, s1, ...) is a trajectory generated by following policy π. RL algorithms for MDPs try to find the policy π* that achieves the highest reward, i.e., π* = arg max_π J(π). They commonly use the value function Vπ(s) = E_{τ∼π}[Σ_{t=0}^∞ γᵗR(st, at, st+1) | s0 = s], the action value function Qπ(s, a) = E_{τ∼π}[Σ_{t=0}^∞ γᵗR(st, at, st+1) | s0 = s, a0 = a] and the advantage function Aπ(s, a) = Qπ(s, a) − Vπ(s). The discounted future state distribution will also be useful, which is defined as dπ(s) = (1 − γ)Σ_{t=0}^∞ γᵗP(st = s | π).

Constrained Markov Decision Process (CMDP) (Altman, 1999) extends MDP to environments with safety costs that could harm the agent when undesired actions are taken. As various safety costs may exist in a single CMDP, we associate them with m cost functions {C1(s, a, s′), ..., Cm(s, a, s′)}, each of which denotes the cost an agent receives for each transition (s, a, s′) (similar to reward functions). Let Ci(τ) = Σ_{t=0}^∞ γᵗCi(st, at, st+1) denote the cumulative cost along a trajectory τ generated from policy π. We consider a trajectory-based cost constraint in CMDP, which limits the expected cumulative cost JCi = E_{τ∼π, s0∼µ}[Ci(τ)] by a value di. Then safe RL aims to learn the policy π under CMDP by solving the following problem:

π* = arg max J(π), s.t. JCi = E_{τ∼π, s0∼µ}[Ci(τ)] ≤ di, i = 1, ..., m. (1)

Safe RL algorithms search for the policy π* that achieves the maximal cumulative reward and meanwhile does not violate the imposed safety constraints on the costs. In the following, analogous to the definition of value functions (i.e., Vπ, Qπ and Aπ), we use V^{Ci}_π, Q^{Ci}_π and A^{Ci}_π to denote the cost-value functions w.r.t. cost function Ci."
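To make the CMDP objective and constraint in Equation 1 concrete, here is a minimal numpy sketch of their Monte-Carlo estimates from sampled rollouts (a hypothetical helper, not the paper's code):

```python
import numpy as np

def discounted_sum(xs, gamma):
    xs = np.asarray(xs, dtype=float)
    return float(np.sum(gamma ** np.arange(len(xs)) * xs))

def estimate_return_and_cost(trajectories, gamma=0.995):
    # trajectories: list of (rewards, costs) sequences, one pair per rollout.
    # Returns Monte-Carlo estimates of J(pi) and J_C(pi) from Equation 1.
    J = np.mean([discounted_sum(r, gamma) for r, _ in trajectories])
    JC = np.mean([discounted_sum(c, gamma) for _, c in trajectories])
    return J, JC
```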
}, { "heading": "4 METHOD", "text": "In this section, we develop a policy gradient based algorithm for solving the safe Reinforcement Learning problem in Equation 1. We will first derive a novel cost-sensitive advantage estimation method and present theoretical guarantees on the performance of its learned policy in terms of rewards from safe states. Then, we further develop a worst-case constrained MDP to augment the safety guarantee for learning policies. Finally, we present our safe RL algorithm in details." }, { "heading": "4.1 COST-SENSITIVE ADVANTAGE ESTIMATION", "text": "Conventional policy optimization methods (either for RL or for Safe RL) usually model the policy with a parametric function approximator (e.g., neural networks), and directly optimize the expected return J(πθ), where πθ denotes the policy parameterized with θ. The gradient estimator g for policy gradient methods (Schulman et al., 2015b;c) generally takes the following form:\ng = E [ ∞∑ t=0 Φ(st, at)∇θπθ(at|st) ] , (2)\nwhere Φ(st, at) is responsible for guiding the policy updating direction and one popular choice for Φ(st, at) is Generalized Advantage Estimator (GAE) (Schulman et al., 2015c) which substantially reduces the variance of policy gradient estimate. The formulation for GAE1 is given by\n GAE(γ,λ) t := ∞∑ l=0 (γλ)lδt+l, (3)\nwhere λ ∈ [0, 1] is a hyper-parameter. When λ = 0, it reduces to one-step TD error estimator; when λ = 1, it reduces to the empirical return estimator.\nCost-sensitive Advantage Estimation Existing safe RL algorithms directly deploy these estimators without adaptation to the specific feature of safe RL problems and fail to consider the safety requirement within the gradient estimation. For example, CPO (Achiam et al., 2017) uses environment reward to estimate the advantage function for policy optimization, without considering that some high-reward states may also be unsafe.\nIn safe RL, an unsafe state with high reward would bias policy update towards favoring such a state and wrongly encourage the agent to violate cost constraints, if directly applying the GAE estimator. A natural solution is to penalize the reward for unsafe states. However, it is difficult to adjust the penalty appropriately. Specifically, over-penalization would suppress visiting the nearby safe states with high reward as their Φ(st, at) will be negatively affected during bootstraping. On the other hand, the unsafe state cannot be avoided when the penalty is too small.\nSince δt can be considered as an estimate of the advantage value of taking action at at step t, the policy gradient estimator g points to the direction of increasing π(at|st) only if the advantage of at is greater than zero. Therefore, to guarantee that agents can gain rewards mainly from safe states, we propose to generalize GAE for safe RL by zeroing the TD error δ of unsafe states to avoid the agents from further exploring these regions. This is given by\n CSAE(γ,λ) t := ∞∑ l=0 (γλ)lαt+lδt+l, (4)\nwhere αt is a binary variable denoting whether a transition (st, at, st+1) is safe (αt = 1) or not (αt = 0). Following standard assumption in safe RL (Achiam et al., 2017), given the returned cost from the environment in the training phase, αt can be obtained by binarizing the cost value C(st, at, st+1), i.e., αt = 1[C(st, at, st+1) > 0]. 
With this new advantage estimation, the policy gradient estimator for safe RL is given by\ngCSAE = E [ ∞∑ t=0  CSAE(γ,λ) t ∇θπθ(at|st) ] ,\nwhich is compatible with any policy gradient based methods.\nCSAE and Reward Reshaping The above CSAE is equivalent to a moderate reward reshaping to penalize the reward for unsafe states. More specifically, it replaces the reward value for an unsafe state with the expected one-step reward an agent can receive at this state:\nR̄(st, at, st+1) = { R(st, at, st+1), if αt = 1, Ea,s′∼τ [R(st, a, s′)] , if αt = 0.\n(5)\nUsing this reshaped reward function induces the above CSAE advantage estimator. To see this, we use rt and r̄t to substituteR(st, at, st+1) and R̄(st, at, st+1), respectively, in the following and drop subscript π from the value function for notation simplicity2. Following standard definition, at time step t, a k-step advantage estimation A(k)t using the value function V and our revised reward signal r̄ can be expressed as\nA (k) t = −V (st) + r̄t + γr̄t+1 + · · ·+ γk−1r̄t+k−1 + γkV (st+k). (6)\n1We use ÂGAE(γ,λ)t to denote  GAE(γ,λ)(st, at).\n2Note that the reward revision mechanism in Equation 5 is only used for advantage estimation. For fitting the value function during learning, we still use the original reward function R(s, a, s′).\nBy substituting one-step TD error δt and reward function (Equation 5) into Equation 6, the above advantage can be rewritten as\nA (k) t = k−1∑ l=0 γlαt+lδt+l. (7)\nSee the appendix for the complete proof. Analogous to GAE, CSAE can be obtained by taking the exponentially-weighted average of above k-step advantage: ÂCSAE(γ,λ)t := (1 − λ) ∑∞ k=1 λ k−1A (k) t = ∑∞ l=0(γλ)\nlαt+lδt+l. This provides another perspective, from reward reshaping, to interpret the proposed CSAE. As policy optimization methods will automatically force agents to find high-reward regions in the state space, using the averaged reward can prevent unsafe yet highreward states from attracting the agent during learning.\nFrom the reward reshaping perspective, another possible approach to deal with the cost is to include the cost ct in the reward by reshaping rt toRt = rt+λ×ct. But it is difficult to properly choose the trade-off parameter λ due to: 1) if λ has fixed value, it is not easy to balance rt and ct as their best trade-off varies across environments, as verified by Tessler et al. (2018). In contrast, our proposed method is free of hyperparameter tuning and easy to deploy. 2) if λ is treated as the dual variable for safety hard constraints and updated in a similar way as PDO, the performance is worse than our method, due to the optimization difficulties, as justified in our experiments.\nWorst-Case Constraints As discussed in Sec. 3, in a CMDP, the trajectory-based safety cost for cost function Ci is computed and constrained in expectation, i.e., JCi(π) = Eτ∼π[ ∑∞ t=0 γ\ntCi(st, at, st+1)] ≤ di. However, this will certainly lead the agent to violate the constraints frequently during learning. To further enhance safety, we instead consider the worst cases and constrain the cost from the trajectories incurring largest cost.\nWe propose the Worst-case Constrained MDP (WCMDP), an MDP with a constraint on the CVaR of cost values (Tamar et al., 2015; Prashanth, 2014) in safe RL. It tries to find a policy that maximizes the cumulative return, while ensuring the conditional expectation of other cost functions given some confidence level β, to be bounded. 
Formally, for a cost function Ci and a given β ∈ (0, 1), the worst case constraint is given by\nJβCi(π) = Eτ∼∆π,β [ ∞∑ t=0 γtCi(st, at, st+1) ] , (8)\nwhere ∆π,β is the set of top β worst trajectories with the largest costs. We found the performance is robust to the value of β and we empirically set β = 0.1. Accordingly, the safety constraint related to cost function Ci is expressed as J β Ci (π) ≤ di." }, { "heading": "4.2 SAFE RL ALGORITHM WITH CSAE", "text": "Different from general RL problems, for safe RL, it is critical to ensure that the agent mostly gains reward from safe states and transitions. Thus, we are concerned with the following cost-sensitive return developed from the reshaped rewards in Equation 5 in safe RL:\nJsafe(π) := Eτ∼π,s0∼µ [ γt ∞∑ t=0 R̄(st, at, st+1) ] , (9)\nwhere R̄(st, at, st+1) = αtR(st, at, s′) + (1 − αt)Ea,st+1 [R(st, a, s′)]. Different from the conventional return that accumulates the rewards from both safe and unsafe states, the above reshaped return characterizes how much the agent can gain reward from safe state-actions. In this section, we demonstrate adopting the proposed CSAE in policy optimization would naturally optimize Jsafe. To this end, we establish the following theoretical result that gives performance guarantees for the policies in terms of the cost-sensitive return Jsafe(π). Theorem 1. For any policies π′, π with π ′ .\n= maxs |Ea∼π′ [ÂCSAE(γ,λ)π (s, a)]|, the following bound holds:\nJsafe(π ′)− Jsafe(π) ≥\n1\n1− γ Es∼dπ a∼π′\n[ ÂCSAE(γ,λ)π (s, a)− 2γ π ′\n1− γ DTV (π\n′||π)[s] ] . (10)\nHereDTV denotes the total variance divergence, which is defined asDTV (p||q) = 12 ∑ i |pi−qi| for discrete probability distributions p and q. Due to space limit, we defer all the proofs to the appendix.\nThe above result bounds the difference of two policies in terms of the cost-sensitive return via the CSAE. Leveraging such a result, our safe RL algorithm updates the policy by\nπk+1 = arg max π\nEs∼dπk ,a∼π[ÂCSAE(γ,λ)πk (s, a)]− νkDTV (π||πk)[s]\ns.t. JβCi = Eτ∼∆π,β [Ci(τ)] ≤ di, i = 1, . . . ,m. (11)\nIn particular, from Equation 10, for appropriate coefficients νk, the above update ensures monotonically non-decreasing return from safe states. Details of the practical implementation of this algorithm are provided in the appendix." }, { "heading": "5 EXPERIMENTS", "text": "As this work targets at obtaining safer and better policies, through experiments we aim to investigate: 1) whether our designed CSAE is effective for guiding the policy optimization algorithm to achieve higher cumulative reward while satisfying safety constraints; 2) whether the new policy search algorithm induced from WCMDP can guarantee stronger safety without sacrificing the performance; and 3) whether our method is able to adjust the advantage value of each transition properly to better guide policy optimization. Therefore, we evaluate our methods on multiple high-dimensional control problems that mainly include two different tasks. 1) Circle (Schulman et al., 2015b) where the agent is required to walk in a circle to achieve the highest cumulative reward, but the safe region is restricted to lie in the middle of two vertical lines. 2) Gather where several apples are randomly placed in both safe and unsafe regions, and an agent should collect as many apples as possible from the safe regions and avoid entering the unsafe regions. In our experiments, the reward for collecting one apple is 10, and the cost is 1 for each time the agent walks into an unsafe region. See Fig. 
" }, { "heading": "4.2 SAFE RL ALGORITHM WITH CSAE", "text": "Different from general RL problems, for safe RL it is critical to ensure that the agent mostly gains reward from safe states and transitions. Thus, we are concerned with the following cost-sensitive return, developed from the reshaped rewards in Equation 5:
$$J_{\text{safe}}(\pi) := \mathbb{E}_{\tau\sim\pi,\, s_0\sim\mu}\left[ \sum_{t=0}^{\infty} \gamma^t \bar{R}(s_t, a_t, s_{t+1}) \right], \quad (9)$$
where $\bar{R}(s_t, a_t, s_{t+1}) = \alpha_t R(s_t, a_t, s_{t+1}) + (1 - \alpha_t)\,\mathbb{E}_{a, s_{t+1}}\left[R(s_t, a, s_{t+1})\right]$. Different from the conventional return, which accumulates the rewards from both safe and unsafe states, the above reshaped return characterizes how much reward the agent can gain from safe state-actions. In this section, we demonstrate that adopting the proposed CSAE in policy optimization naturally optimizes $J_{\text{safe}}$. To this end, we establish the following theoretical result, which gives performance guarantees for policies in terms of the cost-sensitive return $J_{\text{safe}}(\pi)$.

Theorem 1. For any policies $\pi'$, $\pi$ with $\epsilon^{\pi'} := \max_s \left|\mathbb{E}_{a\sim\pi'}[\hat{A}^{\mathrm{CSAE}(\gamma,\lambda)}_\pi(s, a)]\right|$, the following bound holds:
$$J_{\text{safe}}(\pi') - J_{\text{safe}}(\pi) \ge \frac{1}{1-\gamma}\,\mathbb{E}_{s\sim d^\pi,\, a\sim\pi'}\left[ \hat{A}^{\mathrm{CSAE}(\gamma,\lambda)}_\pi(s, a) - \frac{2\gamma\epsilon^{\pi'}}{1-\gamma}\, D_{TV}(\pi'\|\pi)[s] \right]. \quad (10)$$

Here $D_{TV}$ denotes the total variation divergence, defined as $D_{TV}(p\|q) = \frac{1}{2}\sum_i |p_i - q_i|$ for discrete probability distributions $p$ and $q$. Due to space limits, we defer all proofs to the appendix.

The above result bounds the difference between two policies in terms of the cost-sensitive return via the CSAE. Leveraging this result, our safe RL algorithm updates the policy by
$$\pi_{k+1} = \arg\max_{\pi}\; \mathbb{E}_{s\sim d^{\pi_k},\, a\sim\pi}\big[\hat{A}^{\mathrm{CSAE}(\gamma,\lambda)}_{\pi_k}(s, a)\big] - \nu_k\, D_{TV}(\pi\|\pi_k)[s] \quad \text{s.t. } J^{\beta}_{C_i} = \mathbb{E}_{\tau\sim\Delta_{\pi,\beta}}[C_i(\tau)] \le d_i,\; i = 1, \ldots, m. \quad (11)$$
In particular, from Equation 10, for appropriate coefficients $\nu_k$, the above update ensures monotonically non-decreasing return from safe states. Details of the practical implementation of this algorithm are provided in the appendix." }, { "heading": "5 EXPERIMENTS", "text": "As this work targets obtaining safer and better policies, through experiments we aim to investigate: 1) whether our designed CSAE is effective for guiding the policy optimization algorithm to achieve higher cumulative reward while satisfying safety constraints; 2) whether the new policy search algorithm induced from WCMDP can guarantee stronger safety without sacrificing performance; and 3) whether our method is able to adjust the advantage value of each transition properly to better guide policy optimization. We therefore evaluate our methods on multiple high-dimensional control problems that mainly include two different tasks. 1) Circle (Schulman et al., 2015b), where the agent is required to walk in a circle to achieve the highest cumulative reward, but the safe region is restricted to lie between two vertical lines. 2) Gather, where several apples are randomly placed in both safe and unsafe regions, and the agent should collect as many apples as possible from the safe regions while avoiding the unsafe regions. In our experiments, the reward for collecting one apple is 10, and the cost is 1 each time the agent walks into an unsafe region. See Fig. 3 for an example of the gather environment. For the circle environment, we use three different robot agents in Mujoco (Todorov et al., 2012): point mass, ant, and humanoid. For the gather environment, we conduct experiments with point mass and ant.

We use CSAE (Sec. 4.2) to denote the safe policy search algorithm equipped with our proposed cost-sensitive advantage estimation, and CSAE-WC to denote the algorithm that further includes worst-case constraints. We compare these two methods with three well-established baselines: TRPO (Schulman et al., 2015b), the most widely used policy optimization method; CPO (Achiam et al., 2017), the state-of-the-art safe RL algorithm for large-scale CMDPs; and PDO, a primal-dual optimization based safe RL algorithm (Achiam et al., 2017). For all the experiments, we use a multi-layer perceptron with two hidden layers of (64, 32) units as the policy network. Our implementation is based on rllab (Duan et al., 2016) and the public CPO repository.³ The hyperparameters for the environments and algorithms are given in the supplementary material.

Results The learning curves for all the methods and environments are plotted and compared in Fig. 1. The first row is the cumulative reward. As we are dealing with environments with safety cost, we only accumulate the rewards collected through safe transitions, as an optimal safe RL algorithm should be able to acquire rewards from safe states and avoid high-reward unsafe states. We also visualize the full returns in Fig. 1 (second row) for completeness. From the results, one can observe that our CSAE surpasses CPO throughout all the environments. This demonstrates the effectiveness of CSAE for learning safe agents with higher rewards. Furthermore, with the help of worst-case constraints, CSAE-WC performs the best in terms of rewards from safe states on PointCircle and PointGather, and comparably well on AntCircle, HumanCircle, and AntGather, outperforming CPO. The remaining rows in Fig. 1 plot the cumulative cost and the ratio of safe trajectories⁴ among all trajectories at each sampling step. Specifically, a safe ratio of 1 means all the collected trajectories are safe. From the results, the cost value of TRPO agents explodes as training proceeds, while the other three methods converge. Among them, CSAE achieves a cost value comparable to CPO and a higher safe ratio. CSAE-WC surpasses the other methods: it not only satisfies the constraint with less cost but also achieves the highest safe ratio (nearly 1). These results clearly show that our method is effective at both enforcing safety and collecting more rewards, or in short, it is safer and better.

³https://github.com/jachiam/cpo/
⁴A trajectory is counted as safe if its cumulative cost is smaller than or equal to the constraint value d.

Visualization To intuitively verify that our method indeed learns agents that take safer and better actions, we visualize agent trajectories for the circle task (Fig. 2) and the gather task (Fig. 3). Fig. 2 shows that the TRPO agent follows the circle specified by the reward function without considering constraints. The other safe RL agents can learn to obey the constraints to some extent. However, they do not perform well, as they usually get stuck in a corner (e.g., PDO and CPO). Our CSAE-WC agents, in contrast, can walk along the arcs and safe boundaries. Similar observations can be made in AntGather, where the TRPO agent inevitably violates the constraint and rushes into unsafe regions (i.e., the red squares).
The other agents learn to avoid this cost but sacrifice reward. However, CSAE and CSAE-WC work better and collect more reward than the others. In summary, the visualizations in both Fig. 2 and Fig. 3 demonstrate the effectiveness of our method for learning better agents that generate more reasonable and safer trajectories.

Analysis We now investigate how the proposed CSAE helps the training process and the resulting agents, using PointCircle as the environment. First, we justify the method of replacing the reward with the expected one-step reward (Equation 5) for unsafe states. We compare it with a simple reward reshaping method that zeros out the reward of unsafe transitions and plot their learning curves (of average return) in Fig. 4a. The results show that our method (denoted by "Mean" in Fig. 4a) performs much better. This indicates that our method overcomes the shortcomings of improperly penalizing the reward of unsafe transitions.

Second, it is important for safe RL algorithms to help the agent distinguish high-reward but unsafe states from safe ones. To investigate how the safe RL algorithms (PDO, CPO, and our CSAE-WC) differ in this ability, we sample 300 trajectories (100 from each method). For each algorithm, we use its deployed reward and value functions to estimate the advantage value of every transition in these trajectories. The advantage values are visualized in Fig. 4b, where redder means higher relative advantage value and bluer means lower value. From this visualization, one can observe that all three methods can recognize high-reward and safe state-actions by assigning them higher advantage values, as shown in the left-bottom and right-top of Fig. 4b. However, our algorithm CSAE-WC prefers these safe and high-reward regions more, with higher advantage values. Importantly, as shown in the right-bottom (unsafe but high-reward regions), our method gives state-actions within such regions much lower advantage. In contrast, PDO and CPO even assign above-average advantages to them. This result clearly demonstrates the superior and desired ability of our method to distinguish unsafe states from safe ones for policy learning." }, { "heading": "6 CONCLUSION", "text": "In this paper we consider safe reinforcement learning and propose a novel CSAE method to appropriately estimate the advantage value for policy optimization in risky environments. Compared to conventional advantage estimation, CSAE eliminates the negative effect of high-reward but unsafe state-actions by depressing their advantages. To further enforce safety constraints, we augment the CMDP with a worst-case cost constraint and propose WCMDP. We theoretically analyze their performance and safety benefits. We then develop a new safe RL algorithm that is shown to be effective for learning safer and better agents in multiple large-scale continuous control environments." }, { "heading": "7 APPENDIX", "text": "" }, { "heading": "7.1 POLICY OPTIMIZATION WITH WORST-CASE CONSTRAINTS", "text": "Since solving for the exactly optimal policy is intractable in large-scale problems, policy gradient based methods represent the policy with a $\theta$-parameterized model and search for the best policy within the parameter space $\Pi_\theta$, i.e., $\pi^* = \arg\max_{\pi\in\Pi_\theta} J(\pi_\theta)$. Similarly, in the optimization problem induced by worst-case constrained policy optimization, we additionally require the policy to satisfy a set of safety constraints $\Pi^\beta_C$.
In other words, we optimize the policy to achieve the highest cumulative reward over the intersection of $\Pi_\theta$ and $\Pi^\beta_C$, which is formulated as
$$\max_{\pi\in\Pi_\theta} J(\pi), \quad \text{s.t. } J^{\beta}_{C_i}(\pi) \le d_i,\; i = 1, \ldots, m.$$
Compared to the CMDP objective in Equation 1, our proposed method requires the worst $\beta$-quantile trajectories (instead of the average cost) to still satisfy the safety constraints. This yields a safer policy, as proved later. Before presenting our algorithm in full detail, we give the following result, which is useful for connecting the worst-case safety costs of two different policies to their difference.

Theorem 2. Let $P_\beta$ denote the state transition probability $P$ of the $\beta$-worst-case trajectories. For any policies $\pi$ and $\pi'$, define $\epsilon^{\pi'}_{C_i} := \max_s \left|\mathbb{E}_{a\sim\pi',\, s'\sim P_\beta}\left[C_i(s, a, s') + \gamma V_{C_i}(s') - V_{C_i}(s)\right]\right|$. Let $d^\pi_\beta$ denote the discounted future state distribution for the $\beta$-worst trajectories. Then the following bound holds:
$$J^{\beta}_{C_i}(\pi') \le J^{\beta}_{C_i}(\pi) + \frac{1}{1-\gamma}\,\mathbb{E}_{s\sim d^\pi_\beta,\, a\sim\pi'}\left[ A^{C_i}_\pi(s, a) + \frac{2\gamma\epsilon^{\pi'}_{C_i}}{1-\gamma}\, D_{TV}(\pi'\|\pi)[s] \right]. \quad (12)$$

The above gives an upper bound on the worst-case cost of policy $\pi'$. Explicitly constraining this upper bound during policy learning enforces less cost-constraint violation. Compared with risk-sensitive CVaR models (Chow & Ghavamzadeh, 2014), this work is among the first to introduce such worst-case constraints into safe RL problems. It is also the first to present a theoretical analysis of the expected worst-case cost of two policies, which is of independent interest.

We now show how to develop a practical algorithm for safe RL based on WCMDP and CSAE. Inspired by trust region methods (Schulman et al., 2015b), by replacing $J_{C_i}$ with $J^{\beta}_{C_i}$ and applying Theorem 2, we reformulate the update in Equation 11 as
$$\pi_{k+1} = \arg\max_{\pi}\; \mathbb{E}_{s\sim d^{\pi_k},\, a\sim\pi}\big[\hat{A}^{\mathrm{CSAE}}_{\pi_k}(s, a)\big] \quad \text{s.t. } J^{\beta}_{C_i}(\pi_k) + \frac{1}{1-\gamma}\,\mathbb{E}_{s\sim d^{\pi_k}_\beta,\, a\sim\pi}\left[ A^{C_i}_{\pi_k}(s, a) \right] \le d_i, \quad D_{KL}(\pi\|\pi_k) \le \delta, \; i = 1, \ldots, m, \quad (13)$$
which is guaranteed to produce policies with monotonically non-decreasing returns from safe state-actions. Meanwhile, the policies satisfy the original safety cost constraints." }, { "heading": "7.2 ALGORITHM DETAILS", "text": "To efficiently solve Equation 13, we linearize the objective and cost constraints around $\pi_k$ and expand the trust region constraint to second order, similar to (Schulman et al., 2015b; Achiam et al., 2017). We use $\theta_k$ to denote the parameters of policy $\pi_k$. Denote the gradients of the objective and of the constraint on $J^{\beta}_{C_i}$ by $g$ and $b_i$, respectively, and the Hessian of the KL-divergence by $H$. The approximation to Equation 13 is given by
$$\theta_{k+1} = \arg\max_{\theta}\; g^\top(\theta - \theta_k) \quad \text{s.t. } b_i^\top(\theta - \theta_k) + J^{\beta}_{C_i}(\pi_k) - d_i \le 0,\; i = 1, \ldots, m, \quad \frac{1}{2}(\theta - \theta_k)^\top H (\theta - \theta_k) \le \delta. \quad (14)$$

Algorithm 1 Worst-case Constrained Policy Optimization
Input: initial policy $\pi_0 \in \Pi_\theta$, tolerance $\alpha$, and confidence level $\beta$
for $i = 0, 1, 2, \ldots$ do
  Sample trajectories $D_i = \{\tau\}$, $\tau \sim \pi_{\theta_i}$.
  Form sample estimates $\hat{g}, \hat{b}, \hat{H}, \hat{c}$ with $D_i$.
  if the primal problem in Equation 14 is feasible then
    Solve the dual problem in Equation 15 to get $\lambda^*, \nu^*$.
    Compute updated policy parameters $\theta^*$ with Equation 16.
  else
    Compute recovery policy parameters $\theta^*$ with
    $$\theta^* = \theta_k - \sqrt{\frac{2\delta}{b^\top H^{-1} b}}\, H^{-1} b.$$
  end if
  Obtain $\theta_{k+1}$ by backtracking line search to enforce satisfaction of the sample estimates of the constraints in Equation 13.
end for

As $H$ is always positive semi-definite, the above optimization problem can be efficiently solved in its dual form once the gradients $g$ and $b_i$ are appropriately estimated. Here, $g$ is easily obtained by taking the derivative of the objective after replacing GAE with our proposed CSAE. For estimating the gradient $b_i$ of the CVaR constraint $J^{\beta}_{C_i}$, we adopt the likelihood estimate proposed by Tamar et al. (2015):
$$b_i = \nabla_\theta J^{\beta}_{C_i} = \mathbb{E}_{\tau\sim\Delta_{\pi,\beta},\, s_0\sim\mu}\left[ \left( J^{\beta}_{C_i}(s_0) - \mathrm{VaR}_\beta\big(J^{\beta}_{C_i}(s_0)\big) \right) \nabla_\theta \log \pi_\theta(a\mid s) \right].$$
Here $\mathrm{VaR}_\beta(J^{\beta}_{C_i}(s_0))$ is empirically estimated from the batch of sampled trajectories used for each update. We then use the same algorithm as CPO (Achiam et al., 2017) to learn the policy.

We now derive the full algorithm to solve the optimization problem in Equation 14, which can also be found in CPO (Achiam et al., 2017). Let $c_i$ denote $J^{\beta}_{C_i}(\pi_k) - d_i$, $B := [b_1, \ldots, b_m]$, and $c := [c_1, \ldots, c_m]^\top$. The dual to Equation 14 can be expressed as
$$\max_{\lambda \ge 0,\, \nu \succeq 0}\; -\frac{1}{2\lambda}\left( g^\top H^{-1} g - 2 r^\top \nu + \nu^\top S \nu \right) + \nu^\top c - \frac{\lambda\delta}{2}, \quad (15)$$
where $r = g^\top H^{-1} B$ and $S = B^\top H^{-1} B$. Solving this problem is much easier than solving the primal, especially when the number of constraints is low. Let $\lambda^*, \nu^*$ denote a solution to the dual problem; the solution to the primal is then given by
$$\theta^* = \theta_k + \frac{1}{\lambda^*} H^{-1}(g - B\nu^*). \quad (16)$$
We are now ready to present the full algorithm in Algorithm 1.
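For concreteness, the two parameter updates used by Algorithm 1 are straightforward to implement once $\lambda^*, \nu^*$ are available from a generic dual solver. The following is a minimal sketch under that assumption (NumPy, with function names of our own choosing):

```python
import numpy as np

def primal_step(theta_k, g, H, B, lam_star, nu_star):
    """Primal update of Eq. 16: theta* = theta_k + (1/lam*) H^{-1} (g - B nu*)."""
    return theta_k + np.linalg.solve(H, g - B @ nu_star) / lam_star

def recovery_step(theta_k, b, H, delta):
    """Infeasible case: move back toward the constraint set along H^{-1} b."""
    Hinv_b = np.linalg.solve(H, b)
    return theta_k - np.sqrt(2 * delta / (b @ Hinv_b)) * Hinv_b
```

Using np.linalg.solve rather than explicitly forming $H^{-1}$ keeps the updates numerically stable; in large-scale trust-region implementations, products with $H^{-1}$ are usually computed with conjugate gradients instead.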
" }, { "heading": "7.3 EXPERIMENTAL PARAMETERS", "text": "For circle tasks, the cost function is given by
$$C(s, a, s') = \mathbb{1}[\,|x| > x_{\text{lim}}\,], \quad (17)$$
where $x$ is the horizontal position of the agent after the transition and $x_{\text{lim}}$ is a hyperparameter specifying the location of the two vertical lines defining the safe region.

For all the experiments, we set the discount factor to 0.995 and the KL step size for the trust region to 0.01. The other environment and algorithm parameters in our experiments are listed in the following table.

7.4 PROOF OF $k$-STEP ADVANTAGE

We have $V(s_t) = \mathbb{E}_{a, s_{t+1}}[r_t + \gamma V(s_{t+1})]$. Rearranging gives $\mathbb{E}_{a_t, s_{t+1}}[r_t] = V(s_t) - \gamma\,\mathbb{E}_{s_{t+1}}[V(s_{t+1})]$, which provides an unbiased estimator $\hat{r}_t$ of the expected one-step reward $\mathbb{E}_{a_t, s_{t+1}}[r_t]$, given by
$$\hat{r}_t = V(s_t) - \gamma V(s_{t+1}). \quad (18)$$
Rewriting $\bar{r}_t$ as $\bar{r}_t = \alpha_t r_t + (1 - \alpha_t)\hat{r}_t$, we have
$$\begin{aligned}
A^{(k)}_t &= -V(s_t) + \bar{r}_t + \gamma\bar{r}_{t+1} + \cdots + \gamma^{k-1}\bar{r}_{t+k-1} + \gamma^k V(s_{t+k}) \\
&= -V(s_t) + \alpha_t r_t + \gamma\alpha_{t+1} r_{t+1} + \cdots + \gamma^{k-1}\alpha_{t+k-1} r_{t+k-1} + \gamma^k V(s_{t+k}) \\
&\quad + (1-\alpha_t)\hat{r}_t + \gamma(1-\alpha_{t+1})\hat{r}_{t+1} + \cdots + \gamma^{k-1}(1-\alpha_{t+k-1})\hat{r}_{t+k-1} \\
&= -V(s_t) + \alpha_t r_t + \gamma\alpha_{t+1} r_{t+1} + \cdots + \gamma^{k-1}\alpha_{t+k-1} r_{t+k-1} + \gamma^k V(s_{t+k}) \\
&\quad + (1-\alpha_t)\big[V(s_t) - \gamma V(s_{t+1})\big] + \gamma(1-\alpha_{t+1})\big[V(s_{t+1}) - \gamma V(s_{t+2})\big] + \cdots + \gamma^{k-1}(1-\alpha_{t+k-1})\big[V(s_{t+k-1}) - \gamma V(s_{t+k})\big] \\
&= -V(s_t) + \alpha_t\big[r_t + \gamma V(s_{t+1}) - V(s_t)\big] + \big[V(s_t) - \gamma V(s_{t+1})\big] + \gamma^k V(s_{t+k}) \\
&\quad + \gamma\alpha_{t+1}\big[r_{t+1} + \gamma V(s_{t+2}) - V(s_{t+1})\big] + \gamma\big[V(s_{t+1}) - \gamma V(s_{t+2})\big] + \cdots \\
&\quad + \gamma^{k-1}\alpha_{t+k-1}\big[r_{t+k-1} + \gamma V(s_{t+k}) - V(s_{t+k-1})\big] + \gamma^{k-1}\big[V(s_{t+k-1}) - \gamma V(s_{t+k})\big] \\
&= \sum_{l=0}^{k-1}\gamma^l\alpha_{t+l}\delta_{t+l},
\end{aligned}$$
where $\delta_{t+l} := r_{t+l} + \gamma V(s_{t+l+1}) - V(s_{t+l})$; the unmasked value-difference terms telescope to $V(s_t) - \gamma^k V(s_{t+k})$ and cancel against $-V(s_t) + \gamma^k V(s_{t+k})$.
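This identity is easy to sanity-check numerically. Below is an illustrative sketch (our own, not part of the paper) that draws random rewards, values, and safety flags and verifies that Equation 6 with the reshaped rewards matches the closed form of Equation 7:

```python
import numpy as np

rng = np.random.default_rng(0)
k, gamma = 4, 0.99
r = rng.normal(size=k)               # rewards r_t .. r_{t+k-1}
V = rng.normal(size=k + 1)           # values V(s_t) .. V(s_{t+k})
alpha = rng.integers(0, 2, size=k)   # safety indicators alpha_t .. alpha_{t+k-1}

r_hat = V[:-1] - gamma * V[1:]       # estimator of the expected one-step reward (Eq. 18)
r_bar = alpha * r + (1 - alpha) * r_hat

# Left-hand side: k-step advantage with reshaped rewards (Eq. 6)
lhs = -V[0] + sum(gamma**l * r_bar[l] for l in range(k)) + gamma**k * V[k]
# Right-hand side: sum of masked TD errors (Eq. 7)
delta = r + gamma * V[1:] - V[:-1]
rhs = sum(gamma**l * alpha[l] * delta[l] for l in range(k))
assert np.isclose(lhs, rhs)
```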
" }, { "heading": "7.5 PROOF OF THEOREM 1", "text": "Lemma 1 (Achiam et al. (2017)). For any function $f: S \to \mathbb{R}$ and any policy $\pi$,
$$(1-\gamma)\,\mathbb{E}_{s\sim\mu}[f(s)] + \mathbb{E}_{s\sim d^\pi,\, a\sim\pi,\, s'\sim P}[\gamma f(s')] - \mathbb{E}_{s\sim d^\pi}[f(s)] = 0. \quad (19)$$
Combining this with Equation 9, we obtain the following, for any function $f$ and any policy $\pi$:
$$J^{\pi}_{\text{safe}} = \mathbb{E}_{s\sim\mu}[f(s)] + \frac{1}{1-\gamma}\,\mathbb{E}_{s\sim d^\pi,\, a\sim\pi,\, s'\sim P}\left[\bar{r}(s, a, s') + \gamma f(s') - f(s)\right]. \quad (20)$$
In particular, choosing the function $f(s)$ to be the value function $V^\pi(s)$, we have
$$J^{\pi}_{\text{safe}} = \mathbb{E}_{s\sim\mu}[V^\pi(s)] + \frac{1}{1-\gamma}\,\mathbb{E}_{s\sim d^\pi,\, a\sim\pi,\, s'\sim P}\left[\bar{r}(s, a, s') + \gamma V^\pi(s') - V^\pi(s)\right].$$

Lemma 2. For any function $f: S \to \mathbb{R}$ and any policies $\pi$ and $\pi'$, define
$$L_{\pi, f}(\pi') := \mathbb{E}_{s\sim d^\pi,\, a\sim\pi,\, s'\sim P}\left[\left(\frac{\pi'(a\mid s)}{\pi(a\mid s)} - 1\right)\big(\bar{r}(s, a, s') + \gamma f(s') - f(s)\big)\right], \quad (21)$$
and $\epsilon^{\pi'}_f := \max_s \left|\mathbb{E}_{a\sim\pi',\, s'\sim P}\left[\bar{r}(s, a, s') + \gamma f(s') - f(s)\right]\right|$. Then the following bounds hold:
$$J_{\text{safe}}(\pi') - J_{\text{safe}}(\pi) \ge \frac{1}{1-\gamma}\left( L_{\pi, f}(\pi') - 2\epsilon^{\pi'}_f\, D_{TV}(d^{\pi'}\|d^{\pi}) \right),$$
$$J_{\text{safe}}(\pi') - J_{\text{safe}}(\pi) \le \frac{1}{1-\gamma}\left( L_{\pi, f}(\pi') + 2\epsilon^{\pi'}_f\, D_{TV}(d^{\pi'}\|d^{\pi}) \right), \quad (22)$$
where $D_{TV}$ is the total variation divergence.

Proof. The proof follows that of Lemma 2 in Achiam et al. (2017), substituting $J^{\pi}_{\text{safe}}$ from Equation 20.

Lemma 3 (Achiam et al. (2017)). The divergence between discounted future state visitation distributions, $\|d^{\pi'} - d^{\pi}\|_1$, is bounded by an average divergence of the policies $\pi'$ and $\pi$:
$$\|d^{\pi'} - d^{\pi}\|_1 \le \frac{2\gamma}{1-\gamma}\,\mathbb{E}_{s\sim d^{\pi}}\left[D_{TV}(\pi'\|\pi)[s]\right], \quad (23)$$
where $D_{TV}(\pi'\|\pi)[s] = \frac{1}{2}\sum_a |\pi'(a\mid s) - \pi(a\mid s)|$.

Now, with Lemma 2 and Lemma 3, we are ready to prove Theorem 1.

Proof. Choosing $f(s_t) = V^\pi(s_t)$ to be the safety value function in Lemma 2, we have
$$\begin{aligned}
L_{\pi, f}(\pi') &= \mathbb{E}_{s_t\sim d^\pi,\, a\sim\pi',\, s_{t+1}\sim P}\left[\bar{r}_t + \gamma V^\pi(s_{t+1}) - V^\pi(s_t)\right] - \mathbb{E}_{s_t\sim d^\pi,\, a_t\sim\pi,\, s_{t+1}\sim P}\left[\bar{r}_t + \gamma V^\pi(s_{t+1}) - V^\pi(s_t)\right] \\
&= \mathbb{E}_{s_t\sim d^\pi,\, a\sim\pi',\, s_{t+1}\sim P}\left[\bar{r}_t + \gamma\bar{r}_{t+1} + \gamma^2 V^\pi(s_{t+2}) - V^\pi(s_t)\right] - \mathbb{E}_{s_t\sim d^\pi,\, a_t\sim\pi,\, s_{t+1}\sim P}\left[\bar{r}_t + \gamma\bar{r}_{t+1} + \gamma^2 V^\pi(s_{t+2}) - V^\pi(s_t)\right] \\
&= \mathbb{E}_{s_t\sim d^\pi,\, a\sim\pi',\, s_{t+1}\sim P}\left[\bar{r}_t + \cdots + \gamma^{k-1}\bar{r}_{t+k-1} + \gamma^k V^\pi(s_{t+k}) - V^\pi(s_t)\right] - \mathbb{E}_{s_t\sim d^\pi,\, a_t\sim\pi,\, s_{t+1}\sim P}\left[\bar{r}_t + \cdots + \gamma^{k-1}\bar{r}_{t+k-1} + \gamma^k V^\pi(s_{t+k}) - V^\pi(s_t)\right] \\
&= \mathbb{E}_{s_t\sim d^\pi,\, a\sim\pi',\, s_{t+1}\sim P}\big[A^{(k)}_t\big] - \mathbb{E}_{s_t\sim d^\pi,\, a\sim\pi,\, s_{t+1}\sim P}\big[A^{(k)}_t\big].
\end{aligned}$$
Thus, computing the exponentially weighted average of $L_{\pi, f}(\pi')$ with $\lambda$ as the weighting coefficient gives
$$L_{\pi, f}(\pi') = \mathbb{E}_{s_t\sim d^\pi,\, a\sim\pi',\, s_{t+1}\sim P}\big[A^{\mathrm{CSAE}}_t\big] - \mathbb{E}_{s_t\sim d^\pi,\, a\sim\pi,\, s_{t+1}\sim P}\big[A^{\mathrm{CSAE}}_t\big] \ge \mathbb{E}_{s_t\sim d^\pi,\, a\sim\pi',\, s_{t+1}\sim P}\big[A^{\mathrm{CSAE}}_t\big]. \quad (24)$$
The last inequality comes from the fact that $\mathbb{E}_{s_t\sim d^\pi,\, a\sim\pi,\, s_{t+1}\sim P}[A^{\mathrm{CSAE}}_t] \le 0$. Applying Lemma 3 then gives the result." }, { "heading": "7.6 PROOF OF THEOREM 2", "text": "Define $\xi$ to be the $\beta$-worst-case distribution over trajectories, i.e., $\xi(\tau) = 1/\beta$ if $C(\tau)$ is among the top $\beta$ most costly trajectories, and $\xi(\tau) = 0$ otherwise. Denote by $P_\beta = \xi \circ P$ the weighted probability distribution and by $d^\pi_\beta$ the discounted future state distribution for the $\beta$-worst cases.

Then the expected cost over the $\beta$-worst-case trajectories can be expressed compactly as
$$J^{\beta}_{C}(\pi) = \frac{1}{1-\gamma}\,\mathbb{E}_{s\sim d^\pi_\beta,\, a\sim\pi,\, s'\sim P_\beta}\left[C(s, a, s')\right]. \quad (25)$$
We also have the following identity:
$$(I - \gamma P_\beta)\, d^\pi_\beta = (1-\gamma)\mu. \quad (26)$$
With the above relation, we can obtain the following lemma.

Lemma 4. For any function $f: S \to \mathbb{R}$ and any policy $\pi$,
$$(1-\gamma)\,\mathbb{E}_{s\sim\mu}[f(s)] + \mathbb{E}_{s\sim d^\pi_\beta,\, a\sim\pi,\, s'\sim P_\beta}[\gamma f(s')] - \mathbb{E}_{s\sim d^\pi_\beta}[f(s)] = 0. \quad (27)$$
Combining with Equation 25, we have
$$J^{\beta}_{C}(\pi) = \mathbb{E}_{s\sim\mu}[f(s)] + \frac{1}{1-\gamma}\,\mathbb{E}_{s\sim d^\pi_\beta,\, a\sim\pi,\, s'\sim P_\beta}\left[C(s, a, s') + \gamma f(s') - f(s)\right]. \quad (28)$$
Choosing the cost value function $V^\pi_C$ as $f$ gives
$$J^{\beta}_{C}(\pi) = \mathbb{E}_{s\sim\mu}[V^\pi_C(s)] + \frac{1}{1-\gamma}\,\mathbb{E}_{s\sim d^\pi_\beta,\, a\sim\pi,\, s'\sim P_\beta}\left[C(s, a, s') + \gamma V^\pi_C(s') - V^\pi_C(s)\right]. \quad (29)$$
Following the proof for Theorem 1, we obtain Theorem 2." }, { "heading": "7.7 EXPERIMENTS ON WCMDP", "text": "To further study how our two contributions (CSAE and WCMDP) contribute to the final algorithm, we perform an ablation study in which the safe algorithm does not dampen the advantage function but respects the worst-case constraints; we refer to this variant as WC below. We compare WC with CSAE, CSAE-WC, and the other baseline methods in Fig. 5. Compared to CSAE, although WC gives a better safety guarantee, it produces inferior returns, especially on PointCircle and AntGather. CSAE also converges faster than WC. By combining both components in the same algorithm, CSAE-WC merges their strengths and overcomes their weaknesses, resulting in superior return performance and safety guarantees." } ]
2,020
null
SP:87fb323fc2a1b385c9a695c7669f509c835ef0aa
[ "This paper presents C&S method that predicts node labels in the transductive semi-supervised node classification setting. C&S uses the three-stage-pipeline approach. First, label probabilities are predicted with simple and scalable classifiers such as MLP. Then, the predicted errors are diffused over graphs. Finally, the labels are further smoothened to give the final node label prediction. The authors demonstrate that their simple C&S approach beats many existing GNN approaches.", "This paper shows modified label propagation can perform better than GCN. The idea is as follows: it first uses MLP on node features to get the initial labels, and then use two steps--correction and smoothness to postprocessing the labels. And this postprocessing is based on the traditional label propagation algorithm. It shows that this simple method matches GCN performances on various datasets. " ]
Graph Neural Networks (GNNs) are a predominant technique for learning over graphs. However, there is relatively little understanding of why GNNs are successful in practice and whether they are necessary for good performance. Here, we show that for many standard transductive node classification benchmarks, we can exceed or match the performance of state-of-the-art GNNs by combining shallow models that ignore the graph structure with two simple post-processing steps that exploit correlation in the label structure: (i) an “error correlation” that spreads residual errors in training data to correct errors in test data and (ii) a “prediction correlation” that smooths the predictions on the test data. We call this overall procedure Correct and Smooth (C&S), and the post-processing steps are implemented via simple modifications to standard label propagation techniques that have long been used in graph-based semi-supervised learning. Our approach exceeds or nearly matches the performance of state-of-the-art GNNs on a wide variety of benchmarks, with just a small fraction of the parameters and orders of magnitude faster runtime. For instance, we exceed the best-known GNN performance on the OGB-Products dataset with 137 times fewer parameters and greater than 100 times less training time. The performance of our methods highlights how directly incorporating label information into the learning algorithm (as is common in traditional methods) yields easy and substantial performance gains. We can also incorporate our techniques into big GNN models, providing modest gains in some cases.
[ { "affiliations": [], "name": "Qian Huang" }, { "affiliations": [], "name": "Horace He" }, { "affiliations": [], "name": "Abhay Singh" }, { "affiliations": [], "name": "Ser-Nam Lim" }, { "affiliations": [], "name": "Austin R. Benson" } ]
[ { "authors": [ "P. Battaglia", "Jessica B. Hamrick", "V. Bapst", "A. Sanchez-Gonzalez", "V. Zambaldi", "Mateusz Malinowski", "Andrea Tacchetti", "D. Raposo", "Adam Santoro", "R. Faulkner", "Çaglar Gülçehre", "H. Song", "A. Ballard", "J. Gilmer", "G. Dahl", "Ashish Vaswani", "Kelsey R. Allen", "C. Nash", "V. Langston", "Chris Dyer", "N. Heess", "Daan Wierstra", "Pushmeet Kohli", "M. Botvinick", "Oriol Vinyals", "Y. Li", "Razvan Pascanu" ], "title": "Relational inductive biases, deep learning, and graph networks", "venue": null, "year": 2018 }, { "authors": [ "Aleksandar Bojchevski", "Johannes Klicpera", "Bryan Perozzi", "Amol Kapoor", "Martin Blais", "Benedek Rózemberczki", "Michal Lukasik", "Stephan Günnemann" ], "title": "Scaling graph neural networks with approximate PageRank", "venue": "In International Conference on Knowledge Discovery and Data Mining,", "year": 2020 }, { "authors": [ "Olivier Chapelle", "Jason Weston", "Bernhard Schölkopf" ], "title": "Cluster kernels for semi-supervised learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2003 }, { "authors": [ "Kamalika Chaudhuri", "Fan Chung", "Alexander Tsiatas" ], "title": "Spectral clustering of graphs with general degrees in the extended planted partition model", "venue": "In The Conference on Learning Theory,", "year": 2012 }, { "authors": [ "Ming Chen", "Zhewei Wei", "Zengfeng Huang", "Bolin Ding", "Yaliang Li" ], "title": "Simple and deep graph convolutional networks", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Alex Chin", "Yatong Chen", "Kristen M. Altenburger", "Johan Ugander" ], "title": "Decoupled smoothing on graphs", "venue": "In The Web Conference,", "year": 2019 }, { "authors": [ "David Easley", "Jon Kleinberg" ], "title": "Networks, Crowds, and Markets", "venue": null, "year": 2010 }, { "authors": [ "Buchnik Eliav", "Edith Cohen" ], "title": "Bootstrapped graph diffusions: Exposing the power of nonlinearity", "venue": "International Conference on Measurement and Modeling of Computer Systems,", "year": 2018 }, { "authors": [ "Matthias Fey", "Jan Eric Lenssen" ], "title": "Fast graph representation learning with pytorch geometric", "venue": "In ICLR Workshop Representation Learning on Graphs and Manifolds,", "year": 2019 }, { "authors": [ "Fabrizio Frasca", "Emanuele Rossi", "Davide Eynard", "Benjamin Chamberlain", "Michael Bronstein", "Federico Monti" ], "title": "SIGN: Scalable inception graph neural networks", "venue": "In ICML Workshop on Graph Representation Learning and Beyond,", "year": 2020 }, { "authors": [ "Hongchang Gao", "Jian Pei", "Heng Huang" ], "title": "Conditional random field enhanced graph convolutional neural networks", "venue": "In The 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2019 }, { "authors": [ "Lise Getoor" ], "title": "Link-based classification. 
In Advanced Methods for Knowledge Discovery from Complex Data", "venue": null, "year": 2005 }, { "authors": [ "Lise Getoor", "Nir Friedman", "Daphne Koller", "Benjamin Taskar" ], "title": "Learning probabilistic models of relational structure", "venue": "In International Conference on Machine Learning,", "year": 2001 }, { "authors": [ "David F Gleich", "Michael W Mahoney" ], "title": "Using local spectral methods to robustify graph-based learning algorithms", "venue": "In International Conference on Knowledge Discovery and Data Mining,", "year": 2015 }, { "authors": [ "Will Hamilton", "Zhitao Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "William L Hamilton", "Rex Ying", "Jure Leskovec" ], "title": "Representation learning on graphs: Methods and applications", "venue": "IEEE Data Engineering Bulletin,", "year": 2017 }, { "authors": [ "Keith Henderson", "Brian Gallagher", "Lei Li", "Leman Akoglu", "Tina Eliassi-Rad", "Hanghang Tong", "Christos Faloutsos" ], "title": "It’s who you know: graph mining using recursive structural features", "venue": "In International Conference on Knowledge Discovery and Data Mining,", "year": 2011 }, { "authors": [ "Keith Henderson", "Brian Gallagher", "Tina Eliassi-Rad", "Hanghang Tong", "Sugato Basu", "Leman Akoglu", "Danai Koutra", "Christos Faloutsos", "Lei Li" ], "title": "RolX: structural role extraction & mining in large graphs", "venue": "In International Conference on Knowledge Discovery and Data Mining,", "year": 2012 }, { "authors": [ "Weihua Hu", "M. Fey", "M. Zitnik", "Yuxiao Dong", "H. Ren", "Bowen Liu", "Michele Catasta", "J. Leskovec" ], "title": "Open graph benchmark: Datasets for machine learning on graphs", "venue": "In Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Rania Ibrahim", "David Gleich" ], "title": "Nonlinear diffusion for community detection and semi-supervised learning", "venue": "In The Web Conference,", "year": 2019 }, { "authors": [ "Junteng Jia", "Austin R. Benson" ], "title": "Residual correlation in graph neural network regression", "venue": "In ICML Workshop on Graph Representation Learning and Beyond workshop,", "year": 2020 }, { "authors": [ "Junteng Jia", "Austin R Benson" ], "title": "A unifying generative model for graph learning algorithms: Label propagation, graph convolutions, and combinations", "venue": null, "year": 2021 }, { "authors": [ "Thorsten Joachims" ], "title": "Transductive learning via spectral graph partitioning", "venue": "In International Conference on Machine Learning,", "year": 2003 }, { "authors": [ "Thomas N. 
Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Johannes Klicpera", "Aleksandar Bojchevski", "Stephan Günnemann" ], "title": "Predict then propagate: Graph neural networks meet personalized pagerank", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Danai Koutra", "Tai-You Ke", "U Kang", "Duen Horng Polo Chau", "Hsing-Kuo Kenneth Pao", "Christos Faloutsos" ], "title": "Unifying guilt-by-association approaches: Theorems and fast algorithms", "venue": "In Joint European Conference on Machine Learning and Knowledge Discovery in Databases,", "year": 2011 }, { "authors": [ "Jure Leskovec", "Jon Kleinberg", "Christos Faloutsos" ], "title": "Graph evolution: Densification and shrinking diameters", "venue": "ACM Transactions on Knowledge Discovery from Data,", "year": 2007 }, { "authors": [ "Guohao Li", "Matthias Müller", "Ali Thabet", "Bernard Ghanem" ], "title": "Deepgcns: Can gcns go as deep as cnns", "venue": "In IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Leland McInnes", "John Healy", "Nathaniel Saul", "Lukas" ], "title": "Großberger. UMAP: Uniform Manifold Approximation and Projection", "venue": "Journal of Open Source Software,", "year": 2018 }, { "authors": [ "Miller McPherson", "Lynn Smith-Lovin", "James M Cook" ], "title": "Birds of a feather: Homophily in social networks", "venue": "Annual Review of Sociology,", "year": 2001 }, { "authors": [ "Galileo Namata", "Ben London", "Lise Getoor", "Bert Huang" ], "title": "Query-driven active surveying for collective classification", "venue": "In International Workshop on Mining and Learning with Graphs,", "year": 2012 }, { "authors": [ "Mark EJ Newman" ], "title": "Mixing patterns in networks", "venue": "Physical Review E,", "year": 2003 }, { "authors": [ "Adam Paszke", "Sam Gross", "Francisco Massa", "Adam Lerer", "James Bradbury", "Gregory Chanan", "Trevor Killeen", "Zeming Lin", "Natalia Gimelshein", "Luca Antiga", "Alban Desmaison", "Andreas Kopf", "Edward Yang", "Zachary DeVito", "Martin Raison", "Alykhan Tejani", "Sasank Chilamkurthy", "Benoit Steiner", "Lu Fang", "Junjie Bai", "Soumith Chintala" ], "title": "Pytorch: An imperative style, high-performance deep learning library", "venue": "Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Leto Peel" ], "title": "Graph-based semi-supervised learning for relational networks", "venue": "In SIAM International Conference on Data Mining", "year": 2017 }, { "authors": [ "Meng Qu", "Yoshua Bengio", "Jian Tang" ], "title": "GMNN: Graph markov neural networks", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Usha Nandini Raghavan", "Réka Albert", "Soundar Kumara" ], "title": "Near linear time algorithm to detect community structures in large-scale networks", "venue": "Physical Review E,", "year": 2007 }, { "authors": [ "Yu Rong", "Wenbing Huang", "Tingyang Xu", "Junzhou Huang" ], "title": "Dropedge: Towards deep graph convolutional networks on node classification", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Cosma Shalizi" ], "title": "Advanced data analysis from an elementary point of view", "venue": null, "year": 2013 }, { "authors": [ "Yunsheng Shi", "Zhengjie Huang", "Shikun Feng", "Yu Sun" ], "title": "Masked label 
prediction: Unified message passing model for semi-supervised classification", "venue": null, "year": 2020 }, { "authors": [ "Amanda L Traud", "Peter J Mucha", "Mason A Porter" ], "title": "Social structure of facebook networks", "venue": "Physica A: Statistical Mechanics and its Applications,", "year": 2012 }, { "authors": [ "Francesco Tudisco", "Austin R Benson", "Konstantin Prokopchik" ], "title": "Nonlinear higher-order label spreading", "venue": "In The Web Conference,", "year": 2021 }, { "authors": [ "Petar Veličković", "Guillem Cucurull", "Arantxa Casanova", "Adriana Romero", "Pietro Liò", "Yoshua Bengio" ], "title": "Graph Attention Networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Fei Wang", "Changshui Zhang" ], "title": "Label propagation through linear neighborhoods", "venue": "IEEE Transactions on Knowledge and Data Engineering,", "year": 2007 }, { "authors": [ "Hongwei Wang", "Jure Leskovec" ], "title": "Unifying graph convolutional neural networks and label propagation", "venue": null, "year": 2020 }, { "authors": [ "Felix Wu", "Amauri Souza", "Tianyi Zhang", "Christopher Fifty", "Tao Yu", "Kilian Weinberger" ], "title": "Simplifying graph convolutional networks", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Zonghan Wu", "Shirui Pan", "Fengwen Chen", "Guodong Long", "C. Zhang", "Philip S. Yu" ], "title": "A comprehensive survey on graph neural networks", "venue": "IEEE Transactions on Neural Networks and Learning Systems,", "year": 2020 }, { "authors": [ "Keyulu Xu", "Weihua Hu", "Jure Leskovec", "Stefanie Jegelka" ], "title": "How powerful are graph neural networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Hao Yin", "Austin R Benson", "Jure Leskovec", "David F Gleich" ], "title": "Local higher-order graph clustering", "venue": "In International Conference on Knowledge Discovery and Data Mining,", "year": 2017 }, { "authors": [ "Hanqing Zeng", "Hongkuan Zhou", "Ajitesh Srivastava", "Rajgopal Kannan", "Viktor Prasanna" ], "title": "GraphSAINT: Graph sampling based inductive learning method", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Yilin Zhang", "Karl Rohe" ], "title": "Understanding regularized spectral clustering via graph conductance", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Dengyong Zhou", "Olivier Bousquet", "Thomas N Lal", "Jason Weston", "Bernhard Schölkopf" ], "title": "Learning with local and global consistency", "venue": "In Advances in Neural Information Processing Systems,", "year": 2004 }, { "authors": [ "Xiaojin Zhu", "Zoubin Ghahramani", "J. Lafferty" ], "title": "Semi-supervised learning using gaussian fields and harmonic functions", "venue": "In International Conference on Machine Learning,", "year": 2003 }, { "authors": [ "Xiaojin Jerry Zhu" ], "title": "Semi-supervised learning literature survey", "venue": "Technical report, University of Wisconsin-Madison Department of Computer Sciences,", "year": 2005 } ]
[ { "heading": "1 INTRODUCTION", "text": "Following the success of neural networks in computer vision and natural language processing, there are now a wide range of graph neural networks (GNNs) for making predictions involving relational data (Battaglia et al., 2018; Wu et al., 2020). These models have had much success and sit atop leaderboards such as the Open Graph Benchmark (Hu et al., 2020). Often, the methodological developments for GNNs revolve around creating strictly more expressive architectures than basic variants such as the Graph Convolutional Network (GCN) (Kipf & Welling, 2017) or GraphSAGE (Hamilton et al., 2017a); examples include Graph Attention Networks (Veličković et al., 2018), Graph Isomorphism Networks (Xu et al., 2018), and various deep models (Li et al., 2019; Rong et al., 2019; Chen et al., 2020). Many ideas for new GNN architectures are adapted from new architectures in models for language (e.g., attention) or vision (e.g., deep CNNs) with the hopes that success will translate to graphs. However, as these models become more complex, understanding their performance gains is a major challenge, and scaling them to large datasets is difficult.\nHere, we see how far we can get by combining much simpler models, with an emphasis on understanding where there are easy opportunities for performance improvements in graph learning, particularly transductive node classification. We propose a simple pipeline with three main parts (Figure 1): (i) a base prediction made with node features that ignores the graph structure (e.g., a shallow multi-layer perceptron or just a linear model); (ii) a correction step, which propagates uncertainties from the training data across the graph to correct the base prediction; and (iii) a smoothing of the predictions over the graph. Steps (ii) and (iii) are post-processing and implemented with classical methods for graph-based semi-supervised learning, namely, label propagation techniques ∗Equal contribution †Work done while at Cornell University\n(Zhu, 2005).1 With a few modifications and new deployment of these classic ideas, we achieve stateof-the-art performance on several node classification tasks, outperforming big GNN models. In our framework, the graph structure is not used to learn parameters (which is done in step (i)) but instead as a post-processing mechanism. This simplicity leads to models with orders of magnitude fewer parameters that take orders of magnitude less time to train and can easily scale to large graphs. We can also combine our ideas with state-of-the-art GNNs, although the performance gains are modest.\nA major source of our performance improvements is directly using labels for predictions. This idea is not new — early diffusion-based semi-supervised learning algorithms on graphs such as the spectral graph transducer (Joachims, 2003), Gaussian random field models (Zhu et al., 2003), and and label spreading (Zhou et al., 2004) all use this idea. However, the motivation for these methods was semi-supervised learning on point cloud data, so the “node features” were used to construct the graph itself. Since then, these techniques have been used for learning on relational data consisting of a graph and some labels but no node features (Koutra et al., 2011; Gleich & Mahoney, 2015; Peel, 2017; Chin et al., 2019); however, they have largely been ignored in the context of GNNs. (That being said, we still find that even simple label propagation, which ignores features, does surprisingly well on a number of benchmarks.) 
This provides motivation for combining two orthogonal sources of prediction power — one coming from the node features (ignoring graph structure) and one coming from using the known labels directly in predictions.\nRecent research connects GNNs to label propagation (Wang & Leskovec, 2020; Jia & Benson, 2020; 2021) as well as Markov Random fields (Qu et al., 2019; Gao et al., 2019), and some techniques use ad hoc incorporation of label information in the features (Shi et al., 2020). However, these approaches are usually still expensive to train, while we use label propagation in two understandable and low-cost ways. We start with a cheap “base prediction” from a model that uses only node features and ignores the graph structure. After, we use label propagation for error correction and then to smooth final predictions. These post-processing steps are based on the fact that errors and labels on connected nodes tend to be positively correlated. Assuming similarity between connected nodes is at the center of much network analysis and corresponds to homophily or assortative mixing (McPherson et al., 2001; Newman, 2003; Easley & Kleinberg, 2010). In the semi-supervised learning literature, the analog is the smoothness or cluster assumption (Chapelle et al., 2003; Zhu, 2005). The good performance of label propagation that we see across a wide variety of datasets suggests that these correlations hold on common benchmarks.\n1One of the main methods that we use (Zhou et al., 2004) is often called label spreading. The term “label propagation” is used in a variety of contexts (Zhu, 2005; Wang & Zhang, 2007; Raghavan et al., 2007; Gleich & Mahoney, 2015). The salient point for this paper is that we assume positive correlations on neighboring nodes and that the algorithms work by “propagating” information from one node to another.\nOverall, our methodology demonstrates that combining several simple ideas yields excellent performance in transductive node classification at a fraction of the cost, in terms of both model size (i.e., number of parameters) and training time. For example, on the OGB-Products benchmark, we out-perform the current best-known GNN with more than two orders of magnitude fewer parameters and more than two orders of magnitude less training time. However, our goal is not to say that current graph learning methods are poor or inappropriate. Instead, we aim to highlight easier ways in which to improve prediction performance in graph learning and to better understand the source of performance gains. Our main finding is that more direct incorporation of labels into the learning algorithms is key. We hope that our approach spurs new ideas that can help in other graph learning tasks, such as inductive node classification, link prediction, and graph prediction." }, { "heading": "1.1 ADDITIONAL RELATED WORK", "text": "The Approximate Personalized Propagation of Neural Predictions (APPNP) framework is most relevant to our work, as they also smooth base predictions (Klicpera et al., 2018). However, they focus on integrating this smoothing into the training process so that their model can be trained end to end. Not only is this significantly more computationally expensive, it also prevents APPNP from incorporating label information at inference. Compared to APPNP, our framework produces more accurate predictions, is faster to train, and more easily scales to large datasets. That being said, APPNP can also be used without end-to-end training, which can make it faster but less accurate. 
Our framework also complements the Simplified Graph Convolution (Wu et al., 2019) and other algorithms designed to increase scalability (Bojchevski et al., 2020; Zeng et al., 2019; Frasca et al., 2020). The primary focus of our approach, however, is using labels directly, and scalability is a byproduct. There is also prior work connecting GCNs and label propagation. Wang & Leskovec (2020) use label propagation as a pre-processing step to weight edges for GNNs, whereas we use label propagation as a post-processing step and avoid GNNs. Jia & Benson (2020; 2021) use label propagation with GNNs for regression tasks, and our error correction step adapts some of their ideas for the case of classification. Finally, there are several recent approaches that incorporate nonlinearity into label propagation methods to compete with GNNs and achieve scalability (Eliav & Cohen, 2018; Ibrahim & Gleich, 2019; Tudisco et al., 2021), but these methods focus on settings with low label rates and do not use feature-based learning." }, { "heading": "2 CORRECT AND SMOOTH (C&S) MODEL", "text": "We start with some notation. We assume that we have an undirected graph $G = (V, E)$, where there are $n = |V|$ nodes with features on each node represented by a matrix $X \in \mathbb{R}^{n\times p}$. Let $A$ be the adjacency matrix of the graph, $D$ the diagonal degree matrix, and $S$ the normalized adjacency matrix $D^{-1/2} A D^{-1/2}$. For the prediction problem, the node set $V$ is split into a disjoint set of unlabeled nodes $U$ and labeled nodes $L$, which are subsets of the indices $\{1, \ldots, n\}$. We further split the labeled nodes into a training set $L_t$ and a validation set $L_v$. We represent the labels by a one-hot-encoding matrix $Y \in \mathbb{R}^{n\times c}$, where $c$ is the number of classes (i.e., $Y_{ij} = 1$ if $i \in L$ is known to be in class $j$, and $0$ otherwise; the $i$th row of $Y$ is all zeros if $i \in U$). Our problem is transductive node classification: assign each node $j \in U$ a label in $\{1, \ldots, c\}$, given $G$, $X$, and $Y$. Our approach starts with a simple base predictor on node features that does not rely on any learning over the graph. After, we perform two types of label propagation (LP): one that corrects the base predictions by modeling correlated error and one that smooths the final prediction. We call the combination of these two methods Correct and Smooth (C&S; Figure 1). The LPs are only post-processing steps, and our pipeline is not trained end-to-end. Furthermore, the graph is only used in the post-processing steps (and in a pre-processing step to augment the features $X$), but not for the base predictions. This makes training fast and scalable compared to standard GNN models. Moreover, we take advantage of both LP (which performs fairly well on its own without features) and the node features. We find that combining these complementary signals yields excellent predictions." }, { "heading": "2.1 SIMPLE BASE PREDICTOR", "text": "To start, we use a simple base predictor that does not rely on the graph structure. More specifically, we train a model $f$ to minimize $\sum_{i\in L_t} \ell(f(x_i), y_i)$, where $x_i$ is the $i$th row of $X$, $y_i$ is the $i$th row of $Y$, and $\ell$ is a loss function. For this paper, $f$ is either a linear model or a shallow multi-layer perceptron (MLP) followed by a softmax, and $\ell$ is the cross-entropy loss. The validation set $L_v$ is used to tune hyperparameters such as learning rates and the hidden layer dimensions of the MLP. From $f$, we get a base prediction $Z \in \mathbb{R}^{n\times c}$, where each row of $Z$ is a probability distribution resulting from the softmax. Omitting the graph structure for these base predictions avoids most of the scalability issues with GNNs. In principle, though, we can use any base predictor for $Z$, including those based on GNNs, and we explore this in Section 3. However, for our pipeline to be simple and scalable, we just use linear classifiers or MLPs with subsequent post-processing, which we describe next.
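To make the base predictor concrete, here is a minimal sketch, assuming PyTorch; the class and function names, layer sizes, and training loop are illustrative assumptions rather than the paper's exact configuration (Appendix A gives the per-dataset settings).

```python
import torch
import torch.nn as nn

class BasePredictor(nn.Module):
    """Shallow MLP base predictor f: ignores the graph, maps features to class scores."""
    def __init__(self, in_dim, hidden_dim, n_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.BatchNorm1d(hidden_dim),
            nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(hidden_dim, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def train_base(model, X, y_int, train_idx, epochs=100, lr=0.01):
    """Minimize cross-entropy on training nodes only; y_int holds integer class labels."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X[train_idx]), y_int[train_idx])
        loss.backward()
        opt.step()
    model.eval()
    with torch.no_grad():
        Z = torch.softmax(model(X), dim=1)  # base predictions Z: rows are distributions
    return Z
```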
" }, { "heading": "2.2 CORRECTING BASE PREDICTIONS WITH ERROR CORRELATION", "text": "Next, we improve the accuracy of the base prediction $Z$ by incorporating labels to model correlated errors. The key idea is that we expect errors in the base prediction to be positively correlated along edges in the graph. In other words, an error at node $i$ increases the chance of a similar error at neighbors of $i$. Thus, we should "spread" such uncertainty over the graph. Our approach here is inspired in part by residual propagation (Jia & Benson, 2020), where a similar concept is used for node regression tasks, as well as by generalized least squares and correlated error models more broadly (Shalizi, 2013). To this end, we first define an error matrix $E \in \mathbb{R}^{n\times c}$, where the error is the residual on the training data and zero elsewhere:
$$E_{L_t,:} = Y_{L_t,:} - Z_{L_t,:}, \qquad E_{L_v,:} = 0, \qquad E_{U,:} = 0. \quad (1)$$
The residuals in rows of $E$ corresponding to training nodes are zero only when the base predictor makes a perfect prediction. We smooth the error using the label spreading technique of Zhou et al. (2004), optimizing the objective
$$\hat{E} = \arg\min_{W\in\mathbb{R}^{n\times c}} \operatorname{trace}\big(W^\top (I - S) W\big) + \mu\|W - E\|_F^2. \quad (2)$$
The first term encourages smoothness of the error estimate over the graph and is equal to $\sum_{k=1}^{c}\sum_{(i,j)\in E}\big( W_{ik}/\sqrt{D_{ii}} - W_{jk}/\sqrt{D_{jj}} \big)^2$. The second term keeps the solution close to the initial guess $E$ of the error. As derived in Zhou et al. (2004), the solution can be obtained via the iteration $E^{(t+1)} = (1-\alpha)E + \alpha S E^{(t)}$, where $\alpha = 1/(1+\mu)$ and $E^{(0)} = E$, which converges rapidly to $\hat{E}$. This iteration is a propagation (or diffusion or spreading) of the error, and we add the smoothed errors to the base prediction to get corrected predictions $Z^{(r)} = Z + \hat{E}$. We emphasize that this is a post-processing technique; there is no coupled training with the base predictions.

This type of propagation is motivated by a particular correlated Gaussian error assumption for regression problems (Jia & Benson, 2020; 2021). For the classification problems we consider, we find that the smoothed errors $\hat{E}$ might not be at the right scale. We know that
$$\|E^{(t+1)}\|_2 \le (1-\alpha)\|E\|_2 + \alpha\|S\|_2\|E^{(t)}\|_2 = (1-\alpha)\|E\|_2 + \alpha\|E^{(t)}\|_2. \quad (3)$$
When $E^{(0)} = E$, we then have $\|E^{(t)}\|_2 \le \|E\|_2$. Thus, the propagation cannot completely correct the errors on all nodes in the graph, as it does not have enough "total mass," and we find that adjusting the scale of the residual can help substantially in practice. To do this, we propose two variations of scaling the residual.

Autoscale. Intuitively, we want to scale the size of the errors in $\hat{E}$ to be approximately the size of the errors in $E$. We only know the true errors at labeled nodes, so we approximate the scale with the average error over the training nodes. Formally, let $e_j^\top \in \mathbb{R}^c$ and $\hat{e}_j^\top$ be the $j$th rows of $E$ and $\hat{E}$, and define $\sigma = \frac{1}{|L_t|}\sum_{j\in L_t}\|e_j\|_1$. Then we define the corrected prediction on an unlabeled node $i \in U$ to be $Z^{(r)}_{i,:} = Z_{i,:} + \frac{\sigma}{\|\hat{e}_i\|_1}\,\hat{e}_i^\top$.

Scaled Fixed Diffusion (FDiff-scale). Alternatively, we can use a diffusion like the one from Zhu et al. (2003), which keeps the known errors at training nodes fixed. More specifically, we iterate $E^{(t+1)}_{U,:} = [D^{-1}AE^{(t)}]_{U,:}$ while keeping $E^{(t)}_{L,:} = E_{L,:}$ fixed, starting with $E^{(0)} = E$, until convergence to $\hat{E}$. Intuitively, this fixes the error values where we know them (on the labeled nodes $L$), while the other nodes keep averaging over the values of their neighbors until convergence. With this type of propagation, the maximum and minimum entries of $E^{(t)}$ do not go beyond those in $E_L$. We still find it effective to select a scaling hyperparameter $s$ to produce $Z^{(r)} = Z + s\hat{E}$.
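The correct step is only a few lines on top of sparse matrix-vector products. Below is a rough sketch, assuming NumPy with a SciPy sparse matrix S; for brevity it applies the Autoscale rescaling to every row, while the paper restricts it to unlabeled nodes, and the function name is our own.

```python
import numpy as np

def correct(Z, Y, train_idx, S, alpha=0.9, n_iters=50):
    """Correct step: spread training residuals E over the graph (Eqs. 1-2) and
    rescale them with the Autoscale heuristic before adding them back to Z.

    S is the normalized adjacency D^{-1/2} A D^{-1/2} (scipy sparse matrix).
    """
    E = np.zeros_like(Z)
    E[train_idx] = Y[train_idx] - Z[train_idx]        # residuals on training nodes only
    E_hat = E.copy()
    for _ in range(n_iters):                          # E <- (1 - a) E + a S E
        E_hat = (1 - alpha) * E + alpha * (S @ E_hat)
    sigma = np.abs(E[train_idx]).sum(axis=1).mean()   # average training-error scale
    norms = np.abs(E_hat).sum(axis=1, keepdims=True)
    norms[norms == 0] = 1.0                           # avoid division by zero
    return Z + sigma * E_hat / norms                  # corrected predictions Z^{(r)}
```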
" }, { "heading": "2.3 SMOOTHING FINAL PREDICTIONS WITH PREDICTION CORRELATION", "text": "At this point, we have corrected scores $Z^{(r)}$, obtained by correcting the base prediction $Z$ with a model for the correlated error $\hat{E}$. To make a final prediction, we further smooth the corrected predictions. The motivation is that adjacent nodes in the graph are likely to have similar labels, which is expected given the homophily or assortative properties of a network. Thus, we can encourage smoothness over the distribution over labels by another label propagation. First, we start with our best guess $H \in \mathbb{R}^{n\times c}$ of the labels:
$$H_{L_t,:} = Y_{L_t,:}, \qquad H_{L_v\cup U,:} = Z^{(r)}_{L_v\cup U,:}. \quad (4)$$
Here, the true labels are used at the training nodes and the corrected predictions are used for the validation and unlabeled nodes, the latter of which no longer correspond to probability distributions. We can (and should) also use the true labels at the validation nodes, which we discuss later in the experiments, but the setup in Equation (4) aligns more closely with standard GNN evaluation. We then iterate $H^{(t+1)} = (1-\alpha)H + \alpha S H^{(t)}$ with $H^{(0)} = H$ until convergence to give the final prediction $\hat{Y}$. The classification for a node $i \in U$ is $\arg\max_{j\in\{1,\ldots,c\}} \hat{Y}_{ij}$.

As with the error correlation, the smoothing here is a post-processing step, decoupled from the other steps. This type of prediction smoothing is similar in spirit to APPNP (Klicpera et al., 2018), which we compare against later. However, APPNP is typically trained end-to-end, propagates final-layer representations instead of softmaxes, does not use labels, and is motivated differently.
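Continuing the sketch above (reusing S, Y, and train_idx from the correct step), the smooth step is the Zhou et al. (2004) iteration run on the corrected predictions, with the training labels clamped as in Equation (4):

```python
def smooth(Z_corrected, Y, train_idx, S, alpha=0.8, n_iters=50):
    """Smooth step: clamp known training labels (Eq. 4), then iterate
    H <- (1 - a) H + a S H to obtain the final predictions Y_hat."""
    H = Z_corrected.copy()
    H[train_idx] = Y[train_idx]        # best guess: true labels where known
    H0 = H.copy()
    for _ in range(n_iters):
        H = (1 - alpha) * H0 + alpha * (S @ H)
    return H.argmax(axis=1)            # predicted class per node
```

The same function also covers the variant that uses validation labels at inference: one simply includes the validation indices in train_idx.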
" }, { "heading": "2.4 SUMMARY AND ADDITIONAL CONSIDERATIONS", "text": "To summarize, we start with a cheap base prediction $Z$, using only node features but not the graph structure. After, we estimate errors $\hat{E}$ by propagating the errors on the training data. Then, we add these errors back to the base predictions, forming corrected predictions. Finally, we treat the corrected predictions as score vectors on unlabeled nodes and combine them with the known labels via another LP step for smoothed final predictions. We call this pipeline Correct and Smooth (C&S).

Before showing that this pipeline achieves state-of-the-art performance on transductive node classification, we briefly describe another simple way of improving performance: feature augmentation. The hallmark of deep learning is that we can learn features instead of engineering them. However, GNNs still rely on informative input features to make predictions. There are numerous ways to get useful features from just the graph topology to augment the raw node features (Henderson et al., 2011; 2012; Hamilton et al., 2017b). In our pipeline, we augment features with a regularized spectral embedding (Chaudhuri et al., 2012; Zhang & Rohe, 2018) coming from the leading $k$ eigenvectors of the matrix $D_\tau^{-1/2}\big(A + \frac{\tau}{n}\mathbf{1}\mathbf{1}^\top\big)D_\tau^{-1/2}$, where $\mathbf{1}$ is the vector of all ones, $\tau$ is a regularization parameter set to the average degree, and $D_\tau$ is diagonal with $i$th diagonal entry equal to $D_{ii} + \tau$. The underlying matrix is dense, but we can apply matrix-vector products in time linear in the number of edges and use iterative eigensolvers to compute the embeddings quickly.
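To make the matrix-free computation concrete, here is a small sketch, assuming SciPy; the function name and the choice of eigsh are our own, and the rank-one term is applied implicitly so the dense matrix is never formed.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

def regularized_spectral_embedding(A, k=128):
    """Leading-k eigenvectors of D_tau^{-1/2} (A + (tau/n) 1 1^T) D_tau^{-1/2},
    where A is a scipy sparse adjacency matrix."""
    n = A.shape[0]
    deg = np.asarray(A.sum(axis=1)).ravel()
    tau = deg.mean()                                  # regularization: average degree
    d_inv_sqrt = 1.0 / np.sqrt(deg + tau)

    def matvec(x):
        y = A @ (d_inv_sqrt * x)                      # sparse part: O(|E|)
        y = y + (tau / n) * np.sum(d_inv_sqrt * x)    # rank-one part: O(n)
        return d_inv_sqrt * y

    op = LinearOperator((n, n), matvec=matvec, dtype=np.float64)
    _, vecs = eigsh(op, k=k, which="LA")              # largest algebraic eigenvalues
    return vecs
```

Each matrix-vector product costs one sparse multiply plus a vector sum, so the embedding scales to large graphs.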
" }, { "heading": "3 EXPERIMENTS ON TRANSDUCTIVE NODE CLASSIFICATION", "text": "To demonstrate the effectiveness of our methods, we use nine datasets (Table 1). The Arxiv and Products datasets are from the Open Graph Benchmark (OGB) (Hu et al., 2020); Cora, Citeseer, and Pubmed are three classic citation network benchmarks (Getoor et al., 2001; Getoor, 2005; Namata et al., 2012); and wikiCS is a web graph (Mernyei & Cangea, 2020). In these datasets, classes are categories of papers, products, or pages, and features are derived from text. We also use a Facebook social network of Rice University, where classes are dorm residences and features are attributes such as gender, major, and class year (Traud et al., 2012), as well as a geographic dataset of US counties, where classes are 2016 election outcomes and features are demographic (Jia & Benson, 2020). Finally, we use an email dataset of a European research institute, where classes are department membership and there are no features (Leskovec et al., 2007; Yin et al., 2017).

Data splits. The training/validation/test splits for Arxiv and Products are given by the benchmark, and the splits for wikiCS come from Mernyei & Cangea (2020). For the Rice, US counties, and email data, we use 40%/10%/50% random splits. For the smaller citation networks, we use 60%/20%/20% random splits, as in Wang & Leskovec (2020). Standard deviations in prediction accuracy over splits are < 1% in most experiments, and such variance does not change our qualitative comparisons.

C&S setup and baselines. We use Linear and MLP models as simple base predictors based on node features. When a spectral embedding is included as a node feature, we refer to these models as Linear-SE and MLP-SE. We also evaluate Label Propagation itself (LP; specifically, the Zhou et al. (2004) version), which only uses labels. In all cases, the number of LP iterations is fixed to 50.

For GNN models comparable to our framework in terms of simplicity or style, we use GCN, SGC, and APPNP. For GCNs, we add residual connections from the input to every layer and from every layer to the output, as well as dropout. Thus, GCN is not the original model of Kipf & Welling (2017) and instead serves as a fairly strong representative of out-of-the-box GNN capabilities. The numbers of layers and hidden layer dimensions for the GCNs are the same as for the MLPs used by our base predictors. The GCN only uses raw node features, and additional results in Appendix C show that including spectral embeddings minimally changes performance. APPNP uses a linear model for base predictions, also with the raw node features.

Finally, we include several "state-of-the-art" (SOTA) baselines. For Arxiv and Products, this is UniMP (Shi et al., 2020) (top of the OGB leaderboard as of October 1, 2020). For Cora, Citeseer, and Pubmed, we use the top scores from Chen et al. (2020). For Email and US County, we use GCNII (Chen et al., 2020). For Rice31, we use GCN with the spectral embedding as additional features, which is the best GNN-based model that we found. For wikiCS, we use APPNP as reported by Mernyei & Cangea (2020). All of the above models select hyperparameters using the validation set; see Appendix A for additional model architecture details." }, { "heading": "3.1 FIRST RESULTS ON NODE CLASSIFICATION", "text": "In our first set of results, we only use the training labels in our C&S framework, as these are what GNNs typically use to train models. For the results discussed here, this is generous to our baselines. The ability to include validation labels is an advantage of our approach (and LP in general), and this improves the performance of our framework even further (Table 1). We discuss this in the next section.

Table 2 reports the results, and we highlight a few important findings. First, within our model, there are substantial gains from the LP post-processing steps (e.g., the MLP-SE base prediction accuracy increases from 63% to 84% on Products). Second, even Linear with C&S outperforms GCNs in many cases, and simple LP is often competitive with GCNs. This is striking given that the main motivation for GCNs was to address the fact that connected nodes may not have similar labels (Kipf & Welling, 2017). Our results suggest that directly incorporating correlation in the graph with simple use of the features is often a better idea. Results in Appendix B show that both label propagation post-processing steps are important for performance. Third, our model variants can out-perform SOTA on Products, Cora, Email, Rice31, and US County (often substantially so). On the other datasets, there is not much difference between the best C&S model and the SOTA.

Method             Arxiv  Products
Linear-SE-smooth   71.42  78.73
MLP-SE-smooth      72.48  80.34
GCN                71.74  75.64

To get a sense of how much using ground-truth labels directly helps, we also evaluate a version of C&S where we smooth base predictions from a linear model or MLP, using the Zhou et al. (2004) version of label propagation. We call these Linear-SE-smooth and MLP-SE-smooth and find that they often outperform GCNs (see the table above). Again, these results suggest that smoothed outputs are important, aligning with recent research (Wu et al., 2019; Bojchevski et al., 2020), and that the original motivations for GCNs might be misleading. However, there are still gaps in performance between these models and those in Table 2 that directly use labels. Next, we see how to improve the performance of C&S even further by using more labels.
" }, { "heading": "3.2 FURTHER IMPROVEMENTS BY USING MORE LABELS", "text": "We improve the C&S performance by using both training and validation labels, instead of just the training labels as in Equation (4). Importantly, we do not use validation labels to update the base prediction model — they are just used to select hyperparameters. Using validation labels boosts performance even further: Table 3 shows accuracies and Table 1 shows gains over SOTA. The ability to incorporate validation labels is a benefit of our approach. On the other hand, GNNs do not have this advantage, as they often rely on early stopping to prevent overfitting, may not always benefit from more data (e.g., under distributional shift), and do not directly use labels. Thus, our comparisons in Table 2 are more generous than needed. With validation labels, our best model out-performs SOTA in seven of nine datasets, often by substantial margins (Table 1).

Table 3: Performance of C&S, using both training and validation labels as ground truth in the final prediction smoothing (cf. Equation (4), Table 2).

Method                          Arxiv  Products  Cora   Citeseer  Pubmed
Linear + C&S (Autoscale)        72.71  80.55     89.54  76.83     90.01
Linear-SE + C&S (Autoscale)     73.78  80.56     89.77  77.11     89.98
MLP-SE + C&S (Autoscale)        74.02  79.29     88.55  76.36     89.50
Linear + C&S (FDiff-scale)      72.42  82.89     89.47  77.08     89.74
Linear-SE + C&S (FDiff-scale)   72.93  83.27     89.53  77.29     89.57
MLP-SE + C&S (FDiff-scale)      73.46  84.55     88.18  76.41     89.38
SOTA                            73.65  82.56     88.49  77.99     90.30

Method                          Email  Rice31  US County  wikiCS
Linear + C&S (Autoscale)        —      76.59   85.22      81.87
Linear-SE + C&S (Autoscale)     73.33  87.25   86.38      81.57
MLP-SE + C&S (Autoscale)        73.45  86.13   89.71      80.75
Linear + C&S (FDiff-scale)      —      75.31   88.16      81.18
Linear-SE + C&S (FDiff-scale)   72.57  87.89   88.06      81.06
MLP-SE + C&S (FDiff-scale)      76.22  86.26   90.05      80.83
SOTA                            71.96  86.50   88.08      79.84

[Figure 2: Accuracy and model size on Products, comparing C&S (ours) against OGB leaderboard entries; the x-axis is the number of parameters (10^4 to 10^6, log scale) and the y-axis is accuracy (76 to 84).]

Table 4: C&S with GNN base predictions.

Dataset     Model          Performance
ogbn-arxiv  GAT            73.56
            GAT + C&S      73.86
            SOTA           73.79
US County   GCNII (SOTA)   88.08
            GCNII + C&S    89.59

The evaluation procedure for GNN benchmarks differs from that for LP. For GNNs, a sizable validation set is often used (and needed) for substantial hyperparameter tuning, as well as early stopping. With LP, one can use the entire set of labeled nodes L with cross-validation to select the single hyperparameter α. Given the setup of transductive node classification, there is no reason not to use validation labels at inference if they are helpful (e.g., via LP in our case). The results in Tables 1 and 3 show the true performance of our model and are the proper point of comparison.

Overall, our results highlight two important findings. First, big and expensive-to-train GNN models are not actually necessary to achieve top performance for transductive node classification on many datasets. Second, combining classical label propagation ideas with simple base predictors outperforms graph neural networks on these tasks." }, { "heading": "3.3 TRAINING TIME AND IMPROVING EXISTING GNNS", "text": "Our C&S framework often has significantly fewer parameters than GNNs and other SOTA solutions. As an example, we plot parameters vs. performance for the Products dataset in Figure 2. While having fewer parameters is useful, the real gain is in faster training time. Our models are typically orders of magnitude faster to train than models with comparable accuracy because we do not use the graph structure for our base prediction models. As one example, although our MLP-SE + C&S model for the Arxiv dataset has a similar number of parameters compared to the "GCN+linear+labels" method on the OGB leaderboard (Wang, 2020), our model runs 7 times faster per epoch and converges much faster. In addition, compared to the SOTA for the Products dataset, our framework with a linear base predictor has higher accuracy, trains over 100 times faster, and has 137 times fewer parameters.

We also evaluated our methods on an even larger dataset, the papers100M OGB benchmark (Hu et al., 2020). Here, we obtain 65.33% accuracy using C&S with the linear model as the base predictor, which out-performs the state-of-the-art (63.29%, as of October 1, 2020).
Here, we obtain 65.33% using C&S with the Linear model as the base predictor, which outperforms the state-of-the-art (63.29%, as of October 1, 2020).\nOur pipeline can also be used to improve the performance of GNNs in general. We used C&S with base predictions given by GCNII or GAT. This improves our results on some datasets, such as ogbn-arxiv (Table 4). However, the performance improvements are sometimes only minor, suggesting that big models might be capturing the same signal as our simple C&S framework." }, { "heading": "3.4 PERFORMANCE VISUALIZATION", "text": "To aid in understanding the performance of our C&S framework, we visualize the predictions on the US County dataset (Figure 3). As expected, the residual error correlation tends to correct nodes where neighboring counties provide relevant information. For example, we see that many errors in the base predictions are corrected by the residual correlation (Figure 3b, left and right panels). In these cases, which correspond to parts of Texas and Hawaii, the demographic features of the counties are outliers compared to the rest of the country, leading both the linear model and GCN astray. The error correlation from neighboring counties is able to fix the predictions. We also see that the final prediction correlation can fix errors when nearby nodes are correctly classified, as shown in the center panel of Figure 3b. We observe similar behavior on the Rice31 dataset (Appendix D)." }, { "heading": "4 DISCUSSION", "text": "GNN models are becoming more expressive, more parameterized, and more expensive to train. Our results suggest that we should explore other techniques for improving performance, such as label propagation and feature augmentation. In particular, label propagation and its variants are longstanding, powerful ideas. More directly incorporating them into graph learning models has major benefits, and we have shown that these can lead to both better predictions and faster training.\nAcknowledgments. This research was supported by Facebook AI, NSF Award DMS-1830274, ARO Award W911NF19-1-0057, ARO MURI, and JP Morgan Chase & Co. We also thank Cornell University Artificial Intelligence for their support, as well as Marc Brockschmidt, Matthias Fey, Stephan Günnemann, Weihua Hu, and Junteng Jia for insightful discussions." }, { "heading": "A MODEL DETAILS", "text": "Here we provide some more details on the models that we use. In all cases we use the Adam optimizer and tune the learning rate. We follow the models and hyperparameters provided in OGB (Hu et al., 2020) and wikiCS (Mernyei & Cangea, 2020) and manually tune some hyperparameters on the validation data for potentially better performance.\nFor our MLPs, every linear layer is followed by batch normalization, ReLU activation, and 0.5 dropout. The other parameters depend on the dataset as follows.\n• Products and Arxiv: 3 layers and 256 hidden channels with learning rate equal to 0.01.\n• Cora, Citeseer, and Pubmed (Getoor et al., 2001; Getoor, 2005; Namata et al., 2012) and Email (Leskovec et al., 2007; Yin et al., 2017): 3 layers and 64 hidden channels with learning rate equal to 0.01.\n• wikiCS: 3 layers and 256 hidden channels with learning rate equal to 0.005.\n• US County (Jia & Benson, 2020) and Rice31 (Traud et al., 2012): 5 layers and 256 hidden channels with learning rate equal to 0.005.\nSOTA models for most datasets are taken from existing benchmarks. We determined SOTA for Email, US County, and Rice31 by evaluating several models discussed in the paper. 
The best-performing baselines were as follows. For Email, GCNII with 5 layers, 256 hidden channels, and learning rate equal to 0.01. For US County, GCNII with 8 layers, 256 hidden channels, and learning rate equal to 0.03. For Rice31, we reused our GCN architecture and trained it over the spectral embedding, which substantially outperformed the other GNN variants.\nAll models were implemented with PyTorch (Paszke et al., 2019) and PyTorch Geometric (Fey & Lenssen, 2019)." }, { "heading": "B PERFORMANCE RESULTS WITH ONLY THE CORRECTION STEP", "text": "Table 5 shows results with and without smoothing in the final predictions, i.e., just the “C step” vs. C&S. Including final prediction smoothing provides a substantial performance boost in many cases." }, { "heading": "C ANALYSIS OF PERFORMANCE GAINS FROM SPECTRAL EMBEDDINGS", "text": "Table 6 shows the effect of including spectral embeddings as node features on the accuracy of the MLP-based and GCN models. In the case of the Arxiv dataset, including the spectral embedding improves the MLP base prediction performance substantially and the C&S performance modestly, but hardly changes the performance of the GCN. For Pubmed, including the spectral embeddings barely changes the performance of any model." }, { "heading": "D ADDITIONAL VISUALIZATION", "text": "Full visualizations of C&S and GCN-SE performance for the US County dataset are in Figures 4 to 6. Similar visualizations for Rice31 are in Figures 7 to 9, which are generated by projecting the 128-dimensional spectral embedding used in the main text down to two dimensions with UMAP (McInnes et al., 2018)." } ]
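Label propagation is the workhorse of both post-processing steps discussed above (the "correct" and "smooth" steps). The following is a minimal, illustrative sketch of the Zhou et al. (2004)-style propagation referenced in Section 3.1; it is not the authors' released code, and the function names, the value of alpha, and the iteration count are our assumptions (the Autoscale/FDiff-scale residual scaling is omitted).

```python
import numpy as np
import scipy.sparse as sp

def normalized_adjacency(adj):
    """S = D^{-1/2} A D^{-1/2}, the symmetrically normalized adjacency."""
    deg = np.asarray(adj.sum(axis=1)).ravel()
    d_inv_sqrt = np.zeros(deg.shape, dtype=float)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5
    d_mat = sp.diags(d_inv_sqrt)
    return d_mat @ adj @ d_mat

def propagate(adj, y0, alpha=0.8, num_iters=50):
    """Zhou et al. (2004)-style propagation: Y <- (1 - alpha) * Y0 + alpha * S @ Y.
    For the correct step, y0 holds residual errors on labeled nodes (zeros
    elsewhere); for the smooth step, y0 holds base predictions with ground-truth
    labels substituted on the labeled (train and, per Section 3.2, validation) nodes."""
    s = normalized_adjacency(adj)
    y = y0.copy()
    for _ in range(num_iters):
        y = (1 - alpha) * y0 + alpha * (s @ y)
    return y
```

Selecting alpha by cross-validation over all labeled nodes, as described in Section 3.2, is what lets this step absorb validation labels at inference time.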
2021
null
SP:6dbb656031537976500fc17775a52c782ef46729
[ "The paper proposes a method called neighbor2seq that converts the hierarchical structure of the center node to a sequence during message passing in graph neural networks. The proposed method aims to mitigate the issue of excessive computation and memory requirement of training graph neural networks. The proposed models Neighbor2Seq+Conv and Neighbor2Seq+Attn are tested on several datasets including a large scale benchmark dataset (ogbn-papers100M). The result shows some improvement especially on ogbn-papers100M while the improvement is not very obvious on other datasets.", "This paper proposed a simple graph neural network architecture that is easy to scale up and perform stochastic training. Instead of performing message passing as commonly used GNN, this paper first performs weighted combinations of node features per each hop of the neighbors of a center node, and then performs either CNN or attention mechanism to aggregate the features and obtain center node embedding. Since the feature aggregation can be performed offline, and the computation can easily be decomposed and stochastic training is straightforward, the method can easily scale up to graphs with 10M nodes. Experiments on median size or large size graphs show the comparable or better performance than alternatives. " ]
Modern graph neural networks (GNNs) use a message passing scheme and have achieved great success in many fields. However, this recursive design inherently leads to excessive computation and memory requirements, making such models inapplicable to massive real-world graphs. In this work, we propose the Neighbor2Seq to transform the hierarchical neighborhood of each node into a sequence. This novel transformation enables the subsequent use of general deep learning operations, such as convolution and attention, that are designed for grid-like data. Therefore, our Neighbor2Seq naturally endows GNNs with the efficiency and advantages of deep learning operations on grid-like data by precomputing the Neighbor2Seq transformations. In addition, our Neighbor2Seq can alleviate the over-squashing issue suffered by GNNs based on message passing. We evaluate our method on a massive graph, with more than 111 million nodes and 1.6 billion edges, as well as several medium-scale graphs. Results show that our proposed method is scalable to massive graphs and achieves superior performance across massive and medium-scale graphs.
[]
[ { "authors": [ "Uri Alon", "Eran Yahav" ], "title": "On the bottleneck of graph neural networks and its practical implications", "venue": "arXiv preprint arXiv:2006.05205,", "year": 2020 }, { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Peter Battaglia", "Razvan Pascanu", "Matthew Lai", "Danilo Jimenez Rezende" ], "title": "Interaction networks for learning about objects, relations and physics", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Peter W Battaglia", "Jessica B Hamrick", "Victor Bapst", "Alvaro Sanchez-Gonzalez", "Vinicius Zambaldi", "Mateusz Malinowski", "Andrea Tacchetti", "David Raposo", "Adam Santoro", "Ryan Faulkner" ], "title": "Relational inductive biases, deep learning, and graph networks", "venue": "arXiv preprint arXiv:1806.01261,", "year": 2018 }, { "authors": [ "Kush Bhatia", "Kunal Dahiya", "Himanshu Jain", "Yashoteja Prabhu", "Manik Varma" ], "title": "The extreme classification repository: multi-label datasets & code", "venue": "URL http://manikvarma. org/downloads/XC/XMLRepository", "year": 2016 }, { "authors": [ "Aleksandar Bojchevski", "Johannes Klicpera", "Bryan Perozzi", "Amol Kapoor", "Martin Blais", "Benedek Rózemberczki", "Michal Lukasik", "Stephan Günnemann" ], "title": "Scaling graph neural networks with approximate pagerank", "venue": "In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2020 }, { "authors": [ "Joan Bruna", "Wojciech Zaremba", "Arthur Szlam", "Yann LeCun" ], "title": "Spectral networks and locally connected networks on graphs", "venue": "In International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Chen Cai", "Yusu Wang" ], "title": "A note on over-smoothing for graph neural networks", "venue": "arXiv preprint arXiv:2006.13318,", "year": 2020 }, { "authors": [ "Deli Chen", "Yankai Lin", "Wei Li", "Peng Li", "Jie Zhou", "Xu Sun" ], "title": "Measuring and relieving the oversmoothing problem for graph neural networks from the topological view", "venue": "In Thirty-fourth AAAI Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Jianfei Chen", "Jun Zhu", "Le Song" ], "title": "Stochastic training of graph convolutional networks with variance reduction", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Jie Chen", "Tengfei Ma", "Cao Xiao" ], "title": "Fastgcn: Fast learning with graph convolutional networks via importance sampling", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Wei-Lin Chiang", "Xuanqing Liu", "Si Si", "Yang Li", "Samy Bengio", "Cho-Jui Hsieh" ], "title": "Cluster-gcn: An efficient algorithm for training deep and large graph convolutional networks", "venue": "In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2019 }, { "authors": [ "Michaël Defferrard", "Xavier Bresson", "Pierre Vandergheynst" ], "title": "Convolutional neural networks on graphs with fast localized spectral filtering", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Matthias Fey", "Jan Eric Lenssen" ], "title": "Fast graph representation learning with pytorch geometric", "venue": "arXiv 
preprint arXiv:1903.02428,", "year": 2019 }, { "authors": [ "Hongyang Gao", "Zhengyang Wang", "Shuiwang Ji" ], "title": "Large-scale learnable graph convolutional networks", "venue": "In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2018 }, { "authors": [ "Justin Gilmer", "Samuel S Schoenholz", "Patrick F Riley", "Oriol Vinyals", "George E Dahl" ], "title": "Neural message passing for quantum chemistry", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Aditya Grover", "Jure Leskovec" ], "title": "node2vec: Scalable feature learning for networks", "venue": "In Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining,", "year": 2016 }, { "authors": [ "Will Hamilton", "Zhitao Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Kaveh Hassani", "Amir Hosein Khasahmadi" ], "title": "Contrastive multi-view representation learning on graphs", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Weihua Hu", "Bowen Liu", "Joseph Gomes", "Marinka Zitnik", "Percy Liang", "Vijay Pande", "Jure Leskovec" ], "title": "Strategies for pre-training graph neural networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Weihua Hu", "Matthias Fey", "Marinka Zitnik", "Yuxiao Dong", "Hongyu Ren", "Bowen Liu", "Michele Catasta", "Jure Leskovec" ], "title": "Open graph benchmark: Datasets for machine learning on graphs", "venue": "arXiv preprint arXiv:2005.00687,", "year": 2020 }, { "authors": [ "Ziniu Hu", "Yuxiao Dong", "Kuansan Wang", "Kai-Wei Chang", "Yizhou Sun" ], "title": "Gpt-gnn: Generative pre-training of graph neural networks", "venue": "In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2020 }, { "authors": [ "Wenbing Huang", "Tong Zhang", "Yu Rong", "Junzhou Huang" ], "title": "Adaptive sampling towards fast graph representation learning", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Sergey Ivanov", "Evgeny Burnaev" ], "title": "Anonymous walk embeddings", "venue": "In Jennifer Dy and Andreas Krause (eds.), Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Wei Jin", "Tyler Derr", "Haochen Liu", "Yiqi Wang", "Suhang Wang", "Zitao Liu", "Jiliang Tang" ], "title": "Self-supervised learning on graphs: Deep insights and new direction", "venue": null, "year": 2006 }, { "authors": [ "Thomas N Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Johannes Klicpera", "Aleksandar Bojchevski", "Stephan Günnemann" ], "title": "Predict then propagate: Graph neural networks meet personalized pagerank", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks. 
In Advances in neural information processing", "venue": null, "year": 2012 }, { "authors": [ "Yann LeCun", "Bernhard Boser", "John S Denker", "Donnie Henderson", "Richard E Howard", "Wayne Hubbard", "Lawrence D Jackel" ], "title": "Backpropagation applied to handwritten zip code recognition", "venue": "Neural computation,", "year": 1989 }, { "authors": [ "Qimai Li", "Zhichao Han", "Xiao-Ming Wu" ], "title": "Deeper insights into graph convolutional networks for semi-supervised learning", "venue": "In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence", "year": 2018 }, { "authors": [ "Yujia Li", "Daniel Tarlow", "Marc Brockschmidt", "Richard Zemel" ], "title": "Gated graph sequence neural networks", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Meng Liu", "Hongyang Gao", "Shuiwang Ji" ], "title": "Towards deeper graph neural networks", "venue": "In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2020 }, { "authors": [ "Meng Liu", "Zhengyang Wang", "Shuiwang Ji" ], "title": "Non-local graph neural networks", "venue": "arXiv preprint arXiv:2005.14612,", "year": 2020 }, { "authors": [ "Zheng Ma", "Ming Li", "Yuguang Wang" ], "title": "Pan: Path integral based convolution for deep graph neural networks", "venue": "arXiv preprint arXiv:1904.10996,", "year": 2019 }, { "authors": [ "Tomas Mikolov", "Ilya Sutskever", "Kai Chen", "Greg S Corrado", "Jeff Dean" ], "title": "Distributed representations of words and phrases and their compositionality", "venue": "In Advances in neural information processing systems,", "year": 2013 }, { "authors": [ "Hoang NT", "Takanori Maehara" ], "title": "Revisiting graph neural networks: All we have is low-pass filters", "venue": "arXiv preprint arXiv:1905.09550,", "year": 2019 }, { "authors": [ "Kenta Oono", "Taiji Suzuki" ], "title": "Graph neural networks exponentially lose expressive power for node classification", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Lawrence Page", "Sergey Brin", "Rajeev Motwani", "Terry Winograd" ], "title": "The pagerank citation ranking: Bringing order to the web", "venue": "Technical report, Stanford InfoLab,", "year": 1999 }, { "authors": [ "Adam Paszke", "Sam Gross", "Soumith Chintala", "Gregory Chanan", "Edward Yang", "Zachary DeVito", "Zeming Lin", "Alban Desmaison", "Luca Antiga", "Adam Lerer" ], "title": "Automatic differentiation in pytorch", "venue": null, "year": 2017 }, { "authors": [ "Hongbin Pei", "Bingzhe Wei", "Kevin Chen-Chuan Chang", "Yu Lei", "Bo Yang" ], "title": "Geom-gcn: Geometric graph convolutional networks", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Jeffrey Pennington", "Richard Socher", "Christopher D Manning" ], "title": "Glove: Global vectors for word representation", "venue": "In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP),", "year": 2014 }, { "authors": [ "Jiezhong Qiu", "Qibin Chen", "Yuxiao Dong", "Jing Zhang", "Hongxia Yang", "Ming Ding", "Kuansan Wang", "Jie Tang" ], "title": "Gcc: Graph contrastive coding for graph neural network pre-training", "venue": "In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2020 }, { "authors": [ "Emanuele Rossi", "Fabrizio Frasca", "Ben Chamberlain", "Davide Eynard", "Michael Bronstein", "Federico Monti" ], 
"title": "Sign: Scalable inception graph neural networks", "venue": "arXiv preprint arXiv:2004.11198,", "year": 2020 }, { "authors": [ "Jonathan M Stokes", "Kevin Yang", "Kyle Swanson", "Wengong Jin", "Andres Cubillos-Ruiz", "Nina M Donghia", "Craig R MacNair", "Shawn French", "Lindsey A Carfrae", "Zohar Bloom-Ackerman" ], "title": "A deep learning approach to antibiotic", "venue": "discovery. Cell,", "year": 2020 }, { "authors": [ "Fan-Yun Sun", "Jordan Hoffman", "Vikas Verma", "Jian Tang" ], "title": "Infograph: Unsupervised and semisupervised graph-level representation learning via mutual information maximization", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Petar Veličković", "Guillem Cucurull", "Arantxa Casanova", "Adriana Romero", "Pietro Lio", "Yoshua Bengio" ], "title": "Graph attention networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Petar Velickovic", "William Fedus", "William L Hamilton", "Pietro Liò", "Yoshua Bengio", "R Devon Hjelm" ], "title": "Deep graph infomax", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Kuansan Wang", "Zhihong Shen", "Chiyuan Huang", "Chieh-Han Wu", "Yuxiao Dong", "Anshul Kanakia" ], "title": "Microsoft academic graph: When experts are not enough", "venue": "Quantitative Science Studies,", "year": 2020 }, { "authors": [ "Yue Wang", "Yongbin Sun", "Ziwei Liu", "Sanjay E Sarma", "Michael M Bronstein", "Justin M Solomon" ], "title": "Dynamic graph cnn for learning on point clouds", "venue": "Acm Transactions On Graphics (tog),", "year": 2019 }, { "authors": [ "Boris Weisfeiler", "Andrei A Lehman" ], "title": "A reduction of a graph to a canonical form and an algebra arising during this reduction", "venue": "Nauchno-Technicheskaya Informatsia,", "year": 1968 }, { "authors": [ "Felix Wu", "Amauri Souza", "Tianyi Zhang", "Christopher Fifty", "Tao Yu", "Kilian Weinberger" ], "title": "Simplifying graph convolutional networks", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Keyulu Xu", "Weihua Hu", "Jure Leskovec", "Stefanie Jegelka" ], "title": "How powerful are graph neural networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Zichao Yang", "Diyi Yang", "Chris Dyer", "Xiaodong He", "Alex Smola", "Eduard Hovy" ], "title": "Hierarchical attention networks for document classification", "venue": "In Proceedings of the 2016 conference of the North American chapter of the association for computational linguistics: human language technologies,", "year": 2016 }, { "authors": [ "Rex Ying", "Ruining He", "Kaifeng Chen", "Pong Eksombatchai", "William L Hamilton", "Jure Leskovec" ], "title": "Graph convolutional neural networks for web-scale recommender systems", "venue": "In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2018 }, { "authors": [ "Yuning You", "Tianlong Chen", "Zhangyang Wang", "Yang Shen" ], "title": "When does self-supervision help graph convolutional networks", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": 
[ "Hanqing Zeng", "Hongkuan Zhou", "Ajitesh Srivastava", "Rajgopal Kannan", "Viktor Prasanna" ], "title": "Graphsaint: Graph sampling based inductive learning method", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Difan Zou", "Ziniu Hu", "Yewen Wang", "Song Jiang", "Yizhou Sun", "Quanquan Gu" ], "title": "Layer-dependent importance sampling for training deep and large graph convolutional networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Graph neural networks (GNNs) have shown effectiveness in many fields with rich relational structures, such as citation networks (Kipf & Welling, 2016; Veličković et al., 2018), social networks (Hamilton et al., 2017), drug discovery (Gilmer et al., 2017; Stokes et al., 2020), physical systems (Battaglia et al., 2016), and point clouds (Wang et al., 2019). Most current GNNs follow a message passing scheme (Gilmer et al., 2017; Battaglia et al., 2018), in which the representation of each node is recursively updated by aggregating the representations of its neighbors. Various GNNs (Li et al., 2016; Kipf & Welling, 2016; Veličković et al., 2018; Xu et al., 2019) mainly differ in the forms of aggregation functions.\nReal-world applications usually generate massive graphs, such as social networks. However, message passing methods have difficulties in handling such large graphs as the recursive message passing mechanism leads to prohibitive computation and memory requirements. To date, sampling methods (Hamilton et al., 2017; Ying et al., 2018; Chen et al., 2018a;b; Huang et al., 2018; Zou et al., 2019; Zeng et al., 2020; Gao et al., 2018; Chiang et al., 2019; Zeng et al., 2020) and precomputing methods (Wu et al., 2019; Rossi et al., 2020; Bojchevski et al., 2020) have been proposed to scale GNNs on large graphs. While the sampling methods can speed up training, they might result in redundancy, still incur high computational complexity, lead to loss of performance, or introduce bias (see Section 2.2). Generally, precomputing methods can scale to larger graphs as compared to sampling methods as recursive message passing is still required in sampling methods.\nIn this work, we propose the Neighbor2Seq that transforms the hierarchical neighborhood of each node to a sequence in a precomputing step. After the Neighbor2Seq transformation, each node and its associated neighborhood tree are converted to an ordered sequence. Therefore, each node can be viewed as an independent sample and is no longer constrained by the topological structure. This novel transformation from graphs to grid-like data enables the use of mini-batch training for subsequent models. As a result, our models can be used on extremely large graphs, as long as the Neighbor2Seq step can be precomputed.\nAs a radical departure from existing precomputing methods, we consider the hierarchical neighborhood of each node as an ordered sequence. The order information corresponds to hops between nodes. As a result of our Neighbor2Seq transformation, generic deep learning operations for gridlike data, such as convolution and attention, can be applied in subsequent models. In addition, our Neighbor2Seq can alleviate the over-squashing issue (Alon & Yahav, 2020) suffered by current GNNs. Experimental results indicate that our proposed method can be used on a massive graph, where most current methods cannot be applied. Furthermore, our method achieves superior performance as compared with previous sampling and precomputing methods." }, { "heading": "2 ANALYSIS OF CURRENT GRAPH NEURAL NETWORK METHODS", "text": "We start by defining necessary notations. A graph is formally defined as G = (V,E), where V is the set of nodes and E ⊆ V × V is the set of edges. We use n = |V | and m = |E| to denote the numbers of nodes and edges, respectively. The nodes are indexed from 1 to n. We consider a node feature matrix X ∈ Rn×d, where each row xi ∈ Rd is the d-dimensional feature vector associated with node i. 
The topology information of the graph is encoded in the adjacency matrix A ∈ R^{n×n}, where A_(i,j) = 1 if an edge exists between node i and node j, and A_(i,j) = 0 otherwise." }, { "heading": "2.1 GRAPH NEURAL NETWORKS VIA MESSAGE PASSING", "text": "There are two primary deep learning methods on graphs (Bronstein et al.): spectral methods and spatial methods. The spectral method in Bruna et al. (2014) extends convolutional neural networks (LeCun et al., 1989) to the graph domain based on the spectrum of the graph Laplacian. The main limitation of spectral methods is their high complexity. ChebNet (Defferrard et al., 2016) and GCN (Kipf & Welling, 2016) simplify the spectral methods and can be understood from the spatial perspective. In this work, we focus on the analysis of the current mainstream spatial methods. Generally, most existing spatial methods, such as ChebNet (Defferrard et al., 2016), GCN (Kipf & Welling, 2016), GG-NN (Li et al., 2016), GAT (Veličković et al., 2018), and GIN (Xu et al., 2019), can be understood from the message passing perspective (Gilmer et al., 2017; Battaglia et al., 2018). Specifically, node representations are iteratively updated by aggregating representations from their immediate neighbors. These message passing methods have been shown to be effective in many fields. However, they have inherent difficulties when applied to large graphs due to their excessive computation and memory requirements, as described in Section 2.2." }, { "heading": "2.2 GRAPH NEURAL NETWORKS ON LARGE GRAPHS", "text": "The above message passing methods are often trained in full batch. This requires the whole graph, i.e., all the node representations and edge connections, to be in memory to allow recursive message passing on the whole graph. Usually, the number of neighbors grows very rapidly with the increase of the receptive field. Hence, these methods cannot be applied directly on large-scale graphs due to the prohibitive requirements on computation and memory. To enable deep learning on large graphs, two families of methods have been proposed; those are methods based on sampling and precomputing.\nTo circumvent the recursive expansion of neighbors across layers, sampling methods apply GNNs on a sampled subset of nodes with mini-batch training. Sampling methods can be further divided into three categories. First, node-wise sampling methods perform message passing for each node in its sampled neighborhood. This strategy was first proposed in GraphSAGE (Hamilton et al., 2017), where neighbors are randomly sampled. This is extended by PinSAGE (Ying et al., 2018), which selects neighbors based on random walks. VR-GCN (Chen et al., 2018a) further proposes to use variance reduction techniques to obtain a convergence guarantee. Although these node-wise sampling methods can reduce computation, the remaining computation is still very expensive and some redundancy might have been introduced, as described in Huang et al. (2018). Second, layer-wise sampling methods sample a fixed number of nodes for each layer. In particular, FastGCN (Chen et al., 2018b) samples a fixed number of nodes for each layer independently based on the degree of each node. AS-GCN (Huang et al., 2018) and LADIES (Zou et al., 2019) introduce between-layer dependencies during sampling, thus alleviating the loss of information. Layer-wise sampling methods can avoid the redundancy introduced by node-wise sampling methods. 
However, the expensive sampling algorithms that aim to ensure performance may themselves incur high computational cost, as pointed out in Zeng et al. (2020). Third, graph-wise sampling methods build mini-batches on sampled subgraphs. Specifically, LGCN (Gao et al., 2018) proposes to leverage mini-batch training on subgraphs selected by breadth-first-search algorithms. ClusterGCN (Chiang et al., 2019) conducts mini-batch training on sampled subgraphs that are obtained by a graph clustering algorithm. GraphSAINT (Zeng et al., 2020) proposes to derive subgraphs by importance sampling and introduces normalization techniques to eliminate biases. These graph-wise sampling methods usually have high efficiency. The main limitation is that the nodes in a sampled subgraph are usually clustered together. This implies that two distant nodes in the original graph usually cannot be fed into the GNNs in the same mini-batch during training, potentially leading to bias in the trained models.\nThe second family of methods for enabling GNN training on large graphs is based on precomputing. Specifically, SGC (Wu et al., 2019) removes the non-linearity between GCN layers, resulting in a simplification as Y = softmax(Â^L X W). In this formulation, Â = D̃^{-1/2} Ã D̃^{-1/2} is the symmetrically normalized adjacency matrix, Ã = A + I is the adjacency matrix with self-loops, D̃ is the corresponding diagonal node degree matrix with D̃_(i,i) = ∑_j Ã_(i,j), L is the size of the receptive field (i.e., the number of considered neighboring hops), which is the same as an L-layer GCN, and Y is the output of the softmax classifier. Since there are no learnable parameters in Â^L X, this term can be precomputed as a feature pre-processing step. Similarly, SIGN (Rossi et al., 2020) applies an inception-like model to the precomputed features Â^ℓ X for ℓ ∈ {1, · · · , L}, where L is the predefined size of the receptive field. Instead of precomputing the smoothed features as in SGC and SIGN, PPRGo (Bojchevski et al., 2020) extends the idea of PPNP (Klicpera et al., 2018) by approximately precomputing the personalized PageRank (Page et al., 1999) matrix, thereby enabling model training on large graphs using mini-batches. Generally, the precomputing methods can scale to larger graphs than sampling methods because the latter still need to perform recursive message passing during training. Differing from these precomputing methods, we consider the hierarchical neighborhood of each node as an ordered sequence, thus retaining the useful information about hops between nodes and enabling subsequent powerful and efficient operations." }, { "heading": "3 THE PROPOSED NEIGHBOR2SEQ METHOD AND ANALYSIS", "text": "In this section, we describe our proposed method, known as Neighbor2Seq, which transforms the hierarchical neighborhood of each node into an ordered sequence, thus enabling the subsequent use of general deep learning operations. We analyze the scalability of our method (see Section 3.5) and describe how our method can alleviate the over-squashing issue suffered by current message passing methods (see Section 3.6)." }, { "heading": "3.1 OVERVIEW", "text": "As described in Section 2.1, existing message passing methods recursively update each node's representation by aggregating information from its immediate neighbors. Hence, what these methods aim at capturing for each node is essentially its corresponding hierarchical neighborhood, i.e., the neighborhood tree rooted at the current node, as illustrated in Figure 1 (b). 
In this work, we attempt to go beyond the message passing scheme to overcome the limitations mentioned in Section 2. We propose to capture the information of this hierarchical neighborhood by transforming it into an ordered sequence, instead of recursively squashing it into a fixed-length vector. Our proposed method is composed of three steps. First, we transform a neighborhood to a sequence for each node. Second, we apply a normalization technique to the derived sequence features. Third, we use general deep learning operations, i.e., convolution and attention, to learn from these sequence features and then make predictions for nodes. In the following, we describe these three steps in detail." }, { "heading": "3.2 NEIGHBOR2SEQ: TRANSFORMING NEIGHBORHOODS TO SEQUENCES", "text": "The basic idea of Neighbor2Seq is to transform the hierarchical neighborhood of each node to an ordered sequence by integrating the features of nodes in each layer of the neighborhood tree. Following the notations defined in Section 2, we let z^i_0, z^i_1, · · · , z^i_L denote the resulting sequence for node i, where L is the height (i.e., the number of hops we consider) of the neighborhood tree rooted at node i. z^i_ℓ ∈ R^d denotes the ℓ-th feature of the sequence. The length of the resulting sequence for each node is L + 1. Formally, for each node i, our Neighbor2Seq can be expressed as\nz^i_ℓ = ∑_{j=1}^{n} w(i, j, ℓ) x_j, ∀ℓ ∈ {0, 1, 2, · · · , L}, (1)\nwhere w(i, j, ℓ) denotes the number of walks with length ℓ between node i and j. n is the number of nodes in the graph. We define w(i, j, 0) as 1 for j = i and 0 otherwise. Hence, z^i_0 is the original node feature x_i. Intuitively, z^i_ℓ is obtained by computing a weighted sum of features of all nodes with walks of length ℓ to i, and the numbers of qualified walks are used as the corresponding weights. Our Neighbor2Seq is illustrated in Figure 1 (c). Note that the derived sequence has meaningful order information, indicating the hops between nodes. After we obtain ordered sequences from the original hierarchical neighborhoods, we can use generic deep learning operations to learn from these sequences, as detailed below." }, { "heading": "3.3 NORMALIZATION", "text": "Since the number of nodes in the hierarchical neighborhood grows exponentially as the hop number increases, different layers in the neighborhood tree have drastically different numbers of nodes. Hence, feature vectors of a sequence computed by Equation (1) have very different scales. In order to make the subsequent learning easier, we propose a layer to normalize the sequence features. We use a normalization technique similar to layer normalization (Ba et al., 2016). In particular, each feature of a sequence is normalized based on the mean and the standard deviation of its own feature values. Formally, our normalization process for each node i can be written as\ny^i_ℓ = W_ℓ z^i_ℓ,  o^i_ℓ = ((y^i_ℓ − μ^i_ℓ) / σ^i_ℓ) ⊙ γ_ℓ + β_ℓ, ∀ℓ ∈ {0, 1, 2, · · · , L},\nμ^i_ℓ = (1/d′) ∑_{c=1}^{d′} y^i_ℓ[c],  σ^i_ℓ = √( (1/d′) ∑_{c=1}^{d′} (y^i_ℓ[c] − μ^i_ℓ)^2 ). (2)\nWe first apply a linear transformation W_ℓ ∈ R^{d′×d} to produce a low-dimensional representation y^i_ℓ ∈ R^{d′} for the ℓ-th feature of the sequence, since the original feature dimension d is usually large. μ^i_ℓ ∈ R and σ^i_ℓ ∈ R are the mean and standard deviation of the corresponding representation y^i_ℓ. γ_ℓ ∈ R^{d′} and β_ℓ ∈ R^{d′} denote the learnable affine transformation parameters. ⊙ denotes the element-wise multiplication. 
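A minimal PyTorch sketch of this per-hop normalization (Equation (2)) may help make the indexing concrete. The module name, the batched tensor layout, and the small eps added for numerical stability are our assumptions rather than details from the paper.

```python
import torch
import torch.nn as nn

class PerHopNorm(nn.Module):
    """Sketch of Equation (2): each hop level l has its own linear map W_l
    (no bias, as in the equation) and its own affine parameters (gamma_l,
    beta_l); features are standardized per node and per hop over d' dims."""
    def __init__(self, num_hops, d_in, d_out, eps=1e-5):
        super().__init__()
        self.linears = nn.ModuleList(
            [nn.Linear(d_in, d_out, bias=False) for _ in range(num_hops + 1)])
        self.gamma = nn.Parameter(torch.ones(num_hops + 1, d_out))
        self.beta = nn.Parameter(torch.zeros(num_hops + 1, d_out))
        self.eps = eps

    def forward(self, z):  # z: (batch, L + 1, d_in), the precomputed sequences
        out = []
        for l, lin in enumerate(self.linears):
            y = lin(z[:, l])                                   # y_l = W_l z_l
            mu = y.mean(dim=-1, keepdim=True)                  # per-node mean
            sigma = y.std(dim=-1, unbiased=False, keepdim=True)
            out.append((y - mu) / (sigma + self.eps) * self.gamma[l] + self.beta[l])
        return torch.stack(out, dim=1)                         # (batch, L + 1, d_out)
```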
Note that the learnable parameters in this normalization layer are associated with ℓ, implying that each feature of the sequence is normalized separately. Using this normalization layer, we obtain the normalized feature vector o^i_ℓ ∈ R^{d′} for every ℓ ∈ {0, 1, 2, · · · , L}." }, { "heading": "3.4 NEIGHBOR2SEQ+CONV AND NEIGHBOR2SEQ+ATTN", "text": "After obtaining an ordered sequence for each node, we can view each node in the graph as a sequence of feature vectors. We can use general deep learning techniques to learn from these sequences. In this work, we propose two models, namely Neighbor2Seq+Conv and Neighbor2Seq+Attn, in which convolution and attention are applied on the sequences of each node.\nAs illustrated in Figure 1 (d), Neighbor2Seq+Conv applies a 1-D convolutional neural network to the sequence features and then uses average pooling to yield a representation for the sequence. Formally, for each node i,\n(ô^i_0, ô^i_1, · · · , ô^i_L) = CNN(o^i_0, o^i_1, · · · , o^i_L),  r^i = (1/(L + 1)) ∑_{ℓ=0}^{L} ô^i_ℓ, (3)\nwhere CNN(·) denotes a 1-D convolutional neural network. r^i denotes the obtained representation of node i, which is used as the input to a linear classifier to make predictions for this node. Specifically, we implement CNN(·) as a 2-layer convolutional neural network composed of two 1-D convolutions. The kernel size is set according to the length of the input sequence. The activation function between layers is ReLU (Krizhevsky et al., 2012).\nIncorporating attention is another natural idea to learn from sequences. As shown in Figure 1 (d), Neighbor2Seq+Attn uses an attention mechanism (Bahdanau et al., 2015) to integrate sequential feature vectors in order to derive a representation. Unlike convolutional neural networks, the vanilla attention mechanism cannot make use of the order of the sequence. Hence, we add positional encodings (Vaswani et al., 2017) to the features such that the position information of different features in the sequence can be incorporated. Formally, for each node i, we add a positional encoding to each feature in the sequence as\nk^i_ℓ = o^i_ℓ + p^i_ℓ,  p^i_ℓ[m] = sin(ℓ / 10000^{2n/d′}) if m = 2n, and p^i_ℓ[m] = cos(ℓ / 10000^{2n/d′}) if m = 2n + 1. (4)\nThe positional encoding for the ℓ-th feature of node i is denoted as p^i_ℓ ∈ R^{d′}. m ∈ {1, 2, · · · , d′} is the dimensional index. Intuitively, a position-dependent vector is added to each feature such that the order information can be captured. Then we use the attention mechanism with a learnable query (Yang et al., 2016) to combine these sequential feature vectors to obtain the final representation r^i for each node i. Formally,\nr^i = ∑_{ℓ=0}^{L} α^i_ℓ k^i_ℓ,  α^i_ℓ = exp((k^i_ℓ)^T q) / ∑_{ℓ′=0}^{L} exp((k^i_{ℓ′})^T q). (5)\nq ∈ R^{d′} is the learnable query vector that is trained along with other model parameters. The derived representation r^i will be taken as the input to a linear classifier to make predictions for node i." }, { "heading": "3.5 ANALYSIS OF SCALABILITY", "text": "Precomputing Neighbor2Seq. A well-known fact is that the value of w(i, j, ℓ) in Equation (1) can be obtained by computing powers of the original adjacency matrix A. Following GCN, we add self-loops to make each node connected to itself. Concretely, w(i, j, ℓ) = Ã^ℓ_(i,j). Hence, the Neighbor2Seq can be implemented by computing the matrix multiplications Ã^ℓ X for all ℓ ∈ {0, 1, 2, · · · , L}. 
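As a rough illustration of this precomputation step (a sketch under the assumption of a SciPy sparse adjacency matrix, not the authors' pipeline), each hop requires only one sparse-times-dense product:

```python
import numpy as np
import scipy.sparse as sp

def neighbor2seq_precompute(adj, x, num_hops):
    """Compute the sequence features of Equation (1): since
    w(i, j, l) = (A_tilde^l)_(i,j), the l-th feature for all nodes is
    A_tilde^l @ X, built iteratively so only sparse-dense products occur."""
    a_tilde = (adj + sp.eye(adj.shape[0])).tocsr()  # add self-loops
    feats = [x]                                     # hop 0: raw node features
    cur = x
    for _ in range(num_hops):
        cur = a_tilde @ cur                         # one more hop per iteration
        feats.append(cur)
    return np.stack(feats, axis=1)                  # (n, L + 1, d)
```

On extremely large graphs, the per-hop results could be written to disk instead of being stacked in memory.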
Since there are no learnable parameters in the Neighbor2Seq step, these matrix multiplications can be precomputed sequentially for large graphs on CPU platforms with large memory.\nThis can be easily precomputed because the matrix Ã is usually sparse. For extremely large graphs, this precomputation can even be performed on distributed systems.\nEnabling mini-batch training. After we obtain the precomputed sequence features, each node in the graph corresponds to a sequence of feature vectors. Therefore, each node can be viewed as an independent sample. That is, we are no longer restricted by the original graph connectivity. Then, we can randomly sample from all the training nodes to conduct mini-batch training. This is more flexible and unbiased compared with the sampling methods reviewed in Section 2.2. Our mini-batches can be randomly extracted over all nodes, opening the possibility that any pair of nodes can be sampled in the same mini-batch. In contrast, mini-batches in sampling methods are usually restricted by the fixed sampling strategies. This advantage opens the door for subsequent model training on extremely large graphs, as long as the corresponding Neighbor2Seq step can be precomputed.\nComputational complexity comparison. We compare our methods with several existing sampling and precomputing methods in terms of computational complexity. We let L denote the number of hops we consider. For simplicity, we assume the feature dimension d is fixed for all layers. For sampling methods, s is the number of sampled neighbors for each node. The computation of Neighbor2Seq+Conv mainly lies in the linear transformation (i.e., O(Ld^2 n)) in the normalization step and the 1-D convolutional neural networks (i.e., O(Lkd^2 n), where k is the kernel size). Hence, the computational complexity for the forward pass of Neighbor2Seq+Conv is O((Ld^2 + Lkd^2)n). Neighbor2Seq+Attn has a computational complexity of O((Ld^2 + Ld)n) because the attention mechanism is more efficient than 1-D convolutional neural networks. As shown in Table 1, the forward pass complexities of precomputing methods, including our Neighbor2Seq+Conv and Neighbor2Seq+Attn, are all linear with respect to the number of nodes n and do not depend on the number of edges m. Hence, the training processes of our models are computationally efficient." }, { "heading": "3.6 ALLEVIATING THE OVER-SQUASHING ISSUE", "text": "An inherent problem in message passing methods is known as over-squashing (Alon & Yahav, 2020). In particular, recursively propagating information between neighbors creates a bottleneck because the number of nodes in the receptive field grows exponentially with the number of layers. This bottleneck causes the over-squashing issue; that is, information from the exponentially growing receptive field is compressed into fixed-length vectors. Consequently, message passing methods fail to capture messages flowing from distant nodes and perform poorly when long-range information is essential for the prediction tasks. Note that the over-squashing issue is not identical to the over-smoothing issue. Over-smoothing is related to the phenomenon that node representations converge to indistinguishable limits when the number of layers increases (Li et al., 2018; Wu et al., 2019; NT & Maehara, 2019; Liu et al., 2020a; Oono & Suzuki, 2020; Cai & Wang, 2020; Chen et al., 2020). The virtual edges added in Gilmer et al. 
(2017) and recent non-local aggregations (Pei et al., 2020; Liu et al., 2020b) can be viewed as attempts to alleviate the over-squashing issue by incorporating distant nodes. Another study (Ma et al., 2019) considers message passing along all possible paths between two nodes, instead of propagating information between neighbors.\nOur Neighbor2Seq can alleviate the over-squashing issue because we transform the exponentially growing set of nodes in hierarchical neighborhoods into an ordered sequence, instead of recursively squashing them into a fixed-size vector. With our Neighbor2Seq, capturing long-range information on graphs becomes similar to achieving this on sequence data, such as texts." }, { "heading": "4 DISCUSSIONS", "text": "" }, { "heading": "4.1 INFORMATION LOSS IN NEIGHBOR2SEQ", "text": "As shown in Figure 1 (c), Neighbor2Seq obtains the sequence by integrating features of nodes in each layer of the neighborhood tree. This transformation may lose the cross-layer dependency information in the tree. Specifically, the Neighbor2Seq ignores the identities of nodes that each walk passes through and only considers which nodes appear in each layer of the neighborhood tree. Nevertheless, this information cannot be captured by message passing methods either, because the aggregation is usually permutation-invariant. This implies that messages from different neighbors cannot be distinguished, as pointed out in Pei et al. (2020). According to our experimental results in Table 5, our models without this information can outperform message passing methods, such as GCN. It is intriguing to have an in-depth exploration of whether such information is useful and how it can be captured." }, { "heading": "4.2 RELATIONS WITH THE WEISFEILER-LEHMAN HIERARCHY", "text": "As shown in Xu et al. (2019), most current GNNs are at most as powerful as the Weisfeiler-Lehman (WL) graph isomorphism test (Weisfeiler & Lehman, 1968) in distinguishing graph structures. Our Neighbor2Seq is still under the WL hierarchy because the neighborhood tree used to obtain the sequence is indeed the one that the WL test uses to distinguish different graphs. We would be interested in exploring how Neighbor2Seq can be extended to go beyond the WL hierarchy as a future direction." }, { "heading": "4.3 BRIDGING THE GAP BETWEEN GRAPH AND GRID-LIKE DATA", "text": "The main difference between graph and grid-like data lies in the notion and properties of locality. Specifically, the numbers of neighbors differ for different nodes, and there is no order information among the neighbors of a node in graphs. These are the main obstacles preventing the use of generic deep learning operations on graphs. Our Neighbor2Seq is an attempt to bridge the gap between graph and grid-like data. Based on our Neighbor2Seq, many effective strategies for grid-like data can be naturally transferred to graph data. These include self-supervised learning and pre-training on graphs (Hu et al., 2019; Velickovic et al., 2019; Sun et al., 2019; Hassani & Khasahmadi, 2020; You et al., 2020; Hu et al., 2020b; Qiu et al., 2020; Jin et al., 2020).\nWe note an existing work, AWE (Ivanov & Burnaev, 2018), which also embeds the information in a graph as a sequence. To avoid confusion, we clarify the fundamental differences between AWE and our Neighbor2Seq. First, AWE produces a sequence embedding for the entire graph, while our Neighbor2Seq yields a sequence embedding for each node in the graph. 
Second, each element in the obtained sequence in AWE is the probability of an anonymous walk embedding. In our Neighbor2Seq, each feature vector in the obtained sequence for one node is computed by summing up the features of all nodes in the corresponding layer of the neighborhood tree. This point distinguishes the two methods fundamentally." }, { "heading": "5 EXPERIMENTAL STUDIES", "text": "" }, { "heading": "5.1 EXPERIMENTAL SETUP", "text": "Datasets. We evaluate our proposed models on one massive-scale graph and four medium-scale graphs using node classification tasks. The massive-scale graph ogbn-papers100M, provided by the Open Graph Benchmark (OGB) (Hu et al., 2020a), is the largest existing benchmark dataset for node classification. Medium-scale graphs include ogbn-products (Hu et al., 2020a), Reddit (Hamilton et al., 2017), Yelp (Zeng et al., 2020), and Flickr (Zeng et al., 2020). These tasks cover inductive and transductive settings. The statistics of these datasets are summarized in Table 2. Detailed descriptions of these datasets are provided in Appendix A.1.\nImplementation. We implemented our methods using PyTorch (Paszke et al., 2017) and PyTorch Geometric (Fey & Lenssen, 2019). For our proposed methods, we conduct the precomputation on the CPU, after which we train our models on a GeForce RTX 2080 Ti GPU. We perform a grid search for the following hyperparameters: number of hops L, batch size, learning rate, hidden dimension d′, dropout rate, weight decay, and convolutional kernel size k. The chosen hyperparameters for our Neighbor2Seq+Conv and Neighbor2Seq+Attn are summarized in Appendix A.2 for reproducibility." }, { "heading": "5.2 RESULTS ON MASSIVE-SCALE GRAPHS", "text": "Since ogbn-papers100M is a massive graph with more than 111 million nodes and 1.6 billion edges, most existing methods have difficulty handling such a graph. We consider three baselines that have available results evaluated by OGB: Multilayer Perceptron (MLP), Node2Vec (Grover & Leskovec, 2016), and SGC (Wu et al., 2019). The results under the transductive setting are reported in Table 3. Following OGB, we report accuracies for all models on the training, validation, and test sets. The previous state-of-the-art result on ogbn-papers100M is obtained by the precomputing method SGC. Our models outperform the baselines consistently in terms of training, validation, and test accuracy, which demonstrates the expressive power and the generalization ability of our method on massive graphs." }, { "heading": "5.3 RESULTS ON MEDIUM-SCALE GRAPHS", "text": "We also evaluate our models on medium-scale graphs, thus enabling comparison with more existing works. We conduct transductive learning on ogbn-products, a medium-scale graph from OGB. We also conduct inductive learning on Reddit, Yelp, and Flickr, which are frequently used for inductive learning by the community. The following baselines are considered: MLP, Node2Vec (Grover & Leskovec, 2016), GCN (Kipf & Welling, 2016), SGC (Wu et al., 2019), GraphSAGE (Hamilton et al., 2017), FastGCN (Chen et al., 2018b), VR-GCN (Chen et al., 2018a), AS-GCN (Huang et al., 2018), ClusterGCN (Chiang et al., 2019), GraphSAINT (Zeng et al., 2020), and SIGN (Rossi et al., 2020).\nThe ogbn-products dataset is challenging because the splitting is not random but follows a more realistic procedure. The nodes (i.e., products) are sorted according to their sales ranking, and the top 8% of nodes are used for training, the next 2% for validation, and the remaining 90% for testing. 
This matches the real-world application where manual labeling is prioritized for important nodes and models are subsequently used to make predictions on less important nodes. Hence, ogbn-products is an ideal benchmark dataset for evaluating out-of-distribution prediction. As shown in Table 4, our Neighbor2Seq+Conv and Neighbor2Seq+Attn outperform the baselines on the test set (i.e., 90% of the nodes), which further demonstrates the generalization ability of our method.\nThe results on inductive tasks are summarized in Table 5. On Reddit, our models perform better than all sampling methods and achieve results competitive with SIGN. On Flickr, our models obtain significantly better results. Specifically, our Neighbor2Seq+Conv outperforms the previous state-of-the-art models by a clear margin. Although our models do not perform as well as GraphSAINT on Yelp, we consistently outperform the other sampling methods and the precomputing model SIGN on this dataset." }, { "heading": "5.4 COMPARISONS OF COMPUTATIONAL EFFICIENCY", "text": "To show the computational efficiency, we conduct an empirical comparison with existing methods in terms of the time consumed during preprocessing, training, and inference. We consider the following representative sampling methods and precomputing methods: ClusterGCN (Chiang et al., 2019), GraphSAINT (Zeng et al., 2020), SGC (Wu et al., 2019), and SIGN (Rossi et al., 2020). The comparison is performed on ogbn-products, and a similar trend can be observed on other datasets. As demonstrated in Table 6, our approaches, like existing precomputing methods, are more computationally efficient than sampling methods in terms of training and inference. Compared with existing precomputing methods, our methods achieve a better balance between performance and efficiency." }, { "heading": "5.5 ABLATION STUDY ON ORDER INFORMATION", "text": "Intuitively, the order information in the sequence obtained by Neighbor2Seq indicates the hops between nodes. Hence, we conduct an ablation study to verify the significance of this order information. We remove the positional encoding in Neighbor2Seq+Attn, leading to a model without the ability to capture the order information. The comparison is demonstrated in Table 7. Note that Neighbor2Seq+Attn and Neighbor2Seq+Attn w/o PE have the same number of parameters. Hence, comparing the results of these two models, we can conclude that the order information is usually necessary. Both Neighbor2Seq+Conv and Neighbor2Seq+Attn can capture the order information. There are two possible reasons why Neighbor2Seq+Conv performs better. First, Neighbor2Seq+Conv has more learnable parameters than Neighbor2Seq+Attn, which only has a learnable query. Second, the convolutional neural network in Neighbor2Seq+Conv can additionally investigate the dependencies between feature dimensions because each feature dimension of the output depends on every feature dimension of the input." }, { "heading": "6 CONCLUSIONS AND OUTLOOK", "text": "In this work, we propose Neighbor2Seq for transforming hierarchical neighborhoods into ordered sequences. Neighbor2Seq enables the subsequent use of powerful general deep learning operations, leading to the proposed Neighbor2Seq+Conv and Neighbor2Seq+Attn. Our models can be deployed on massive graphs and trained efficiently. Extensive experiments demonstrate the scalability and the promising performance of our method. As discussed in Section 4, based on our Neighbor2Seq, several significant directions can be explored in future research." 
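For completeness, here is a minimal sketch of the Neighbor2Seq+Attn readout of Section 3.4 (Equations (4) and (5)); the module layout and an even hidden dimension d′ are our assumptions.

```python
import math
import torch
import torch.nn as nn

class AttnReadout(nn.Module):
    """Sketch of Equations (4)-(5): sinusoidal positional encodings over hop
    positions are added to the normalized sequence features, and a single
    learnable query vector pools the sequence into one node representation."""
    def __init__(self, num_hops, d):  # assumes d is even
        super().__init__()
        pe = torch.zeros(num_hops + 1, d)
        pos = torch.arange(num_hops + 1, dtype=torch.float).unsqueeze(1)
        div = torch.exp(torch.arange(0, d, 2).float() * (-math.log(10000.0) / d))
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        self.register_buffer("pe", pe)               # (L + 1, d), not trained
        self.query = nn.Parameter(torch.randn(d))    # learnable query q

    def forward(self, o):                             # o: (batch, L + 1, d)
        k = o + self.pe                               # Equation (4)
        alpha = torch.softmax(k @ self.query, dim=1)  # Equation (5), weights
        return (alpha.unsqueeze(-1) * k).sum(dim=1)   # r_i, fed to a classifier
```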
}, { "heading": "A APPENDIX", "text": "A.1 DATASET DESCRIPTIONS\nogbn-papers100M (Hu et al., 2020a) is the existing largest benckmark dataset for node classification. It is a directed citation graph of 111 million papers indexed by Microsoft Academic Graph (MAG) (Wang et al., 2020). For simplicity, it is converted to an undirected graph in baselines and our method. Each node is a paper and each directed edge indicates that one paper cites another one. Each node is associated with a 128-dimensional feature vector obtained by averaging the word2vec (Mikolov et al., 2013) embeddings of words in its title and abstract. Among the node set, approximately 1.5 millione of them are ARXIV papers, each of which has a label with one of ARXIV’s subject areas. The rest nodes (i.e., non-ARXIV papers) are not associated with label information. The task is to leverage the entire citation graph to infer the labels of the ARXIV papers. The time-based splitting is used as the splitting strategy. To be more specifical, the training nodes are all ARXIV papers published until 2017, while the validation nodes are the ARXIV papers published in 2018, and the ARXIV papers published since 2019 are treated as test nodes.\nogbn-products (Hu et al., 2020a) is an undirected Amazon product co-purchasing graph (Bhatia et al., 2016). Nodes denote products and edges between two nodes indicate that the corresponding products are purchased together. Node features are derived by extracting bag-of-words representations from the product descriptions. Further, a Principal Component Analysis is applied to these features to reduce the dimension to 100. The task is to predict the category of a product. A realistic splitting scheme is used in this data. Specifically, the products are firstly sorted according to their sales ranking, and then the top 10% products are used for training, next 2% for validation, and the rest for testing. This strategy matches the real-world situation where manual labeling is prioritized to important nodes and models are subsequently deployed to predict the less important ones.\nReddit (Hamilton et al., 2017), Yelp (Zeng et al., 2020), and Flickr (Zeng et al., 2020) are widely used datasets for inductive learning. During training, only the node features of training nodes and the edges between training nodes are visible. Reddit is a social netowork extracted from Reddit forum. Nodes represent posts and edges between two posts indicate the same user comments on both posts. Node features are fromed by GloVe CommonCrawl word vectors Pennington et al. (2014) of the posts. The task is to predict which community different posts belong to. The splitting is also time-based. Yelp is a social netowork constructed from Yelp website. Nodes are users and edges between two nodes indicate they are friends. Node features of users are obtained by the word2vec embeddings of their corresponding reviews. The task is to predict the categories of businesses reviewed by different users, which is multi-label classification task. Flickr is a social network based on Flickr, a photo sharing website. Nodes represent images and there is an edge between two nodes if two images share some common properties. The node features are fromed by the bag-of-words representations of the images. The task is to predict the category each image belongs to.\nA.2 HYPERPARAMETER CONFIGURATIONS\nWe conduct a grid search for hyperparameters. Table 8 summarizes the chosen hyperparameters for our models." } ]
2020
null
SP:c7f896d15bb66637e8ad0b80f7baa713d9da6c30
[ "The authors propose methods to encourage the emergence of layout structures in neural module networks (NMNs) for visual QA problems. The methods are motivated by work on language emergence for communication between multiple agents and by the observation that newborns acquire language from their parents with limited data. The proposed ‘iterated learning’ (IL) methods form two agents (program generators and execution engines) that play VQA games. The basic architectures and learning methods appear very similar to the semi-supervised learning approach introduced in [ICCV17]. ", "The authors apply iterated learning - a procedure originating in CogSci analyses of how human languages might develop - to the training of neural module networks. The goal is for iterated learning to encourage these networks to develop compositional structures that support systematic generalization without requiring explicit pressures for compositional structures (in past work, such explicit pressures have generally been necessary). The proposed approach brings substantial improvements in systematic generalization across two datasets, SHAPES and CLEVR." ]
Although neural module networks have an architectural bias towards compositionality, they require gold standard layouts to generalize systematically in practice. When instead learning layouts and modules jointly, compositionality does not arise automatically and an explicit pressure is necessary for the emergence of layouts exhibiting the right structure. We propose to address this problem using iterated learning, a cognitive science theory of the emergence of compositional languages in nature that has primarily been applied to simple referential games in machine learning. Considering the layouts of module networks as samples from an emergent language, we use iterated learning to encourage the development of structure within this language. We show that the resulting layouts support systematic generalization in neural agents solving the more complex task of visual question-answering. Our regularized iterated learning method can outperform baselines without iterated learning on SHAPES-SyGeT (SHAPES Systematic Generalization Test), a new split of the SHAPES dataset we introduce to evaluate systematic generalization, and on CLOSURE, an extension of CLEVR also designed to test systematic generalization. We demonstrate superior performance in recovering ground-truth compositional program structure with limited supervision on both SHAPES-SyGeT and CLEVR.
[ { "affiliations": [], "name": "Ankit Vani" }, { "affiliations": [], "name": "Yuchen Lu" } ]
[ { "authors": [ "Jacob Andreas", "Marcus Rohrbach", "Trevor Darrell", "Dan Klein" ], "title": "Neural module networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": "In 3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Dzmitry Bahdanau", "Harm de Vries", "Timothy J O’Donnell", "Shikhar Murty", "Philippe Beaudoin", "Yoshua Bengio", "Aaron Courville" ], "title": "CLOSURE: assessing systematic generalization of clevr models", "venue": "arXiv preprint arXiv:1912.05783,", "year": 2019 }, { "authors": [ "Dzmitry Bahdanau", "Shikhar Murty", "Michael Noukhovitch", "Thien Huu Nguyen", "Harm de Vries", "Aaron Courville" ], "title": "Systematic generalization: What is required and can it be learned", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Matthew M Botvinick", "David C Plaut" ], "title": "Empirical and computational support for contextdependent representations of serial order: Reply to bowers, damian, and davis", "venue": null, "year": 2009 }, { "authors": [ "Jeffrey S Bowers", "Markus F Damian", "Colin J Davis" ], "title": "A fundamental limitation of the conjunctive codes learned in pdp models of cognition: Comment on botvinick and plaut", "venue": null, "year": 2006 }, { "authors": [ "Philémon Brakel", "Stefan Frank" ], "title": "Strong systematicity in sentence processing by simple recurrent networks", "venue": "In Proceedings of the Annual Meeting of the Cognitive Science Society,", "year": 2009 }, { "authors": [ "Henry Brighton", "Simon Kirby" ], "title": "Understanding linguistic evolution by visualizing the emergence of topographic mappings", "venue": "Artificial life,", "year": 2006 }, { "authors": [ "Paco Calvo", "John Symons" ], "title": "The architecture of cognition: Rethinking Fodor and Pylyshyn’s systematicity challenge", "venue": null, "year": 2014 }, { "authors": [ "Rahma Chaabouni", "Eugene Kharitonov", "Diane Bouchacourt", "Emmanuel Dupoux", "Marco Baroni" ], "title": "Compositionality and generalization in emergent languages", "venue": "In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics,", "year": 2020 }, { "authors": [ "Franklin Chang" ], "title": "Symbolically speaking: A connectionist model of sentence production", "venue": "Cognitive science,", "year": 2002 }, { "authors": [ "Edward Choi", "Angeliki Lazaridou", "Nando de Freitas" ], "title": "Multi-agent compositional communication learning from raw visual input", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "M Christiansen", "Nick Chater" ], "title": "Generalization and connectionist language learning", "venue": "Mind and Language,", "year": 1994 }, { "authors": [ "Michael Cogswell", "Jiasen Lu", "Stefan Lee", "Devi Parikh", "Dhruv Batra" ], "title": "Emergence of compositional language with deep generational transmission", "venue": null, "year": 1904 }, { "authors": [ "Gautier Dagan", "Dieuwke Hupkes", "Elia Bruni" ], "title": "Co-evolution of language and agents in referential games", "venue": "arXiv preprint arXiv:2001.03361,", "year": 2020 }, { "authors": [ "Katrina Evtimova", "Andrew Drozdov", "Douwe Kiela", "Kyunghyun Cho" ], "title": "Emergent communication in a multi-modal, multi-step referential 
game", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Jerry A Fodor", "Ernest Lepore" ], "title": "The compositionality papers", "venue": null, "year": 2002 }, { "authors": [ "Jerry A Fodor", "Zenon W Pylyshyn" ], "title": "Connectionism and cognitive architecture: A critical analysis", "venue": null, "year": 1988 }, { "authors": [ "Jakob Foerster", "Ioannis Alexandros Assael", "Nando De Freitas", "Shimon Whiteson" ], "title": "Learning to communicate with deep multi-agent reinforcement learning", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Shangmin Guo", "Yi Ren", "Serhii Havrylov", "Stella Frank", "Ivan Titov", "Kenny Smith" ], "title": "The emergence of compositional languages for numeric concepts through iterated learning in neural agents", "venue": null, "year": 2019 }, { "authors": [ "Serhii Havrylov", "Ivan Titov" ], "title": "Emergence of language with multi-agent games: Learning to communicate with sequences of symbols", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Ronghang Hu", "Jacob Andreas", "Marcus Rohrbach", "Trevor Darrell", "Kate Saenko" ], "title": "Learning to reason: End-to-end module networks for visual question answering", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Drew A Hudson", "Christopher D Manning" ], "title": "Compositional attention networks for machine reasoning", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Drew A Hudson", "Christopher D Manning" ], "title": "GQA: a new dataset for real-world visual reasoning and compositional question answering", "venue": "Conference on Computer Vision and Pattern Recognition", "year": 2019 }, { "authors": [ "Eric Jang", "Shixiang Gu", "Ben Poole" ], "title": "Categorical reparameterization with gumbel-softmax", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Justin Johnson", "Bharath Hariharan", "Laurens van der Maaten", "Li Fei-Fei", "C Lawrence Zitnick", "Ross Girshick" ], "title": "Clevr: A diagnostic dataset for compositional language and elementary visual reasoning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Justin Johnson", "Bharath Hariharan", "Laurens Van Der Maaten", "Judy Hoffman", "Li Fei-Fei", "C Lawrence Zitnick", "Ross Girshick" ], "title": "Inferring and executing programs for visual reasoning", "venue": "In Proceedings of the IEEE International Conference on Computer Vision, pp. 2989–2998,", "year": 2017 }, { "authors": [ "Emilio Jorge", "Mikael Kågebäck", "Fredrik D Johansson", "Emil Gustavsson" ], "title": "Learning to play guess who? 
and inventing a grounded language as a consequence", "venue": "arXiv preprint arXiv:1611.03218,", "year": 2016 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In 3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Simon Kirby" ], "title": "Spontaneous evolution of linguistic structure-an iterated learning model of the emergence of regularity and irregularity", "venue": "IEEE Transactions on Evolutionary Computation,", "year": 2001 }, { "authors": [ "Simon Kirby", "Hannah Cornish", "Kenny Smith" ], "title": "Cumulative cultural evolution in the laboratory: An experimental approach to the origins of structure in human language", "venue": "Proceedings of the National Academy of Sciences,", "year": 2008 }, { "authors": [ "Simon Kirby", "Tom Griffiths", "Kenny Smith" ], "title": "Iterated learning and the evolution of language", "venue": "Current opinion in neurobiology,", "year": 2014 }, { "authors": [ "Satwik Kottur", "José Moura", "Stefan Lee", "Dhruv Batra" ], "title": "Natural language does not emerge ‘naturally’ in multi-agent dialog", "venue": "In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing,", "year": 2017 }, { "authors": [ "Angeliki Lazaridou", "Alexander Peysakhovich", "Marco Baroni" ], "title": "Multi-agent cooperation and the emergence of (natural) language", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Angeliki Lazaridou", "Karl Moritz Hermann", "Karl Tuyls", "Stephen Clark" ], "title": "Emergence of linguistic communication from referential games with symbolic and pixel input", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "David Lewis" ], "title": "Convention: A philosophical study", "venue": null, "year": 2008 }, { "authors": [ "Fushan Li", "Michael Bowling" ], "title": "Ease-of-teaching and language structure from emergent communication", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Ryan Lowe", "Abhinav Gupta", "Jakob Foerster", "Douwe Kiela", "Joelle Pineau" ], "title": "On the interaction between supervision and self-play in emergent communication", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Yuchen Lu", "Soumye Singhal", "Florian Strub", "Aaron Courville", "Olivier Pietquin" ], "title": "Countering language drift with seeded iterated learning", "venue": "In Proceedings of the 37th International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Gary F Marcus" ], "title": "Rethinking eliminative connectionism", "venue": "Cognitive psychology,", "year": 1998 }, { "authors": [ "Gary F Marcus" ], "title": "The algebraic mind: Integrating connectionism and cognitive science", "venue": "MIT press,", "year": 2018 }, { "authors": [ "Takeru Miyato", "Toshiki Kataoka", "Masanori Koyama", "Yuichi Yoshida" ], "title": "Spectral normalization for generative adversarial networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Ethan Perez", "Florian Strub", "Harm De Vries", "Vincent Dumoulin", "Aaron Courville" ], "title": "Film: Visual reasoning with a general conditioning layer", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Steven Phillips" ], "title": "Are feedforward and recurrent networks 
systematic? analysis and implications for a connectionist cognitive architecture", "venue": "Connection Science,", "year": 1998 }, { "authors": [ "Yi Ren", "Shangmin Guo", "Matthieu Labeau", "Shay B Cohen", "Simon Kirby" ], "title": "Compositional languages emerge in a neural iterated learning model", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Adam Santoro", "David Raposo", "David G Barrett", "Mateusz Malinowski", "Razvan Pascanu", "Peter Battaglia", "Timothy Lillicrap" ], "title": "A simple neural network module for relational reasoning", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Catriona Silvey", "Simon Kirby", "Kenny Smith" ], "title": "Word meanings evolve to selectively preserve distinctions on salient dimensions", "venue": "Cognitive science,", "year": 2015 }, { "authors": [ "Amanpreet Singh", "Tushar Jain", "Sainbayar Sukhbaatar" ], "title": "Individualized controlled continuous communication model for multiagent cooperative and competitive tasks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Brian Skyrms" ], "title": "Signals: Evolution, learning, and information", "venue": null, "year": 2010 }, { "authors": [ "Luc Steels", "Martin Loetzsch" ], "title": "The grounded naming game", "venue": "Experiments in cultural language evolution,", "year": 2012 }, { "authors": [ "Sainbayar Sukhbaatar", "Rob Fergus" ], "title": "Learning multiagent communication with backpropagation", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Frank van der Velde", "Marc de Kamps" ], "title": "Lack of combinatorial productivity in language processing with simple recurrent networks", "venue": "Connection Science,", "year": 2004 }, { "authors": [ "Ramakrishna Vedantam", "Karan Desai", "Stefan Lee", "Marcus Rohrbach", "Dhruv Batra", "Devi Parikh" ], "title": "Probabilistic neural symbolic models for interpretable visual question answering", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Willem H Zuidema" ], "title": "How the poverty of the stimulus solves the poverty of the stimulus", "venue": "In Advances in neural information processing systems,", "year": 2003 } ]
[ { "heading": "1 INTRODUCTION", "text": "Although great progress has been made in visual question-answering (VQA), recent methods still struggle to generalize systematically to inputs coming from a distribution different from that seen during training (Bahdanau et al., 2019b;a). Neural module networks (NMNs) present a natural solution to improve generalization in VQA, using a symbolic layout or program to arrange neural computational modules into computation graphs. If these modules are learned to be specialized, they can be composed in arbitrary legal layouts to produce different processing flows. However, for modules to learn specialized roles, programs must support this type of compositionality; if programs reuse modules in non-compositional ways, modules are unlikely to become layout-invariant.\nThis poses a substantial challenge for the training of NMNs. Although Bahdanau et al. (2019b) and Bahdanau et al. (2019a) both observe that NMNs can systematically generalize if given human-designed ground-truth programs, creating these programs imposes substantial practical costs. It becomes natural to jointly learn a program generator alongside the modules (Johnson et al., 2017b; Hu et al., 2017; Vedantam et al., 2019), but the generated programs often fail to generalize systematically and lead to worse performance (Bahdanau et al., 2019b).\nIterated learning (IL) offers one way to address this problem. Originating in cognitive science, IL explains how language evolves to become more compositional and easier to acquire in a repeated transmission process, where each new generation acquires the previous generation’s language through a limited number of samples (Kirby et al., 2014). Early works with human participants (Kirby et al., 2008) as well as agent-based simulations (Zuidema, 2003) support this hypothesis. The machine learning community has also recently shown an increasing interest in applying IL towards emergent communication (Guo et al., 2019; Li & Bowling, 2019; Cogswell et al., 2019; Dagan et al., 2020; Ren et al., 2020). Unlike previous works, we believe that IL is an algorithmic principle that is equally applicable to recovering compositional structure in more general tasks. We thus propose treating NMN programs as samples from a “layout language” and applying IL to the challenging problem of VQA. Our efforts highlight the potential of IL for broader machine learning applications beyond the previously-explored scope of language emergence and preservation (Lu et al., 2020).\nTo demonstrate our method, we introduce a lightweight benchmark for systematic generalization research based on the popular SHAPES dataset (Andreas et al., 2016), called SHAPES-SyGeT (SHAPES Systematic Generalization Test). Our experiments on SHAPES-SyGeT, CLEVR (Johnson et al., 2017a), and CLOSURE (Bahdanau et al., 2019a) show that our IL algorithm accelerates the learning of compositional program structure, leading to better generalization both to unseen questions from the training question templates and to unseen question templates. Using only 100 ground-truth programs for supervision, our method achieves CLEVR performance comparable to Johnson et al. (2017b) and Vedantam et al. (2019), which use 18000 and 1000 programs for supervision respectively.\n∗Correspondence at: ankit.vani@umontreal.ca." }, { "heading": "2 RELATED WORK", "text": "Systematic generalization. 
Systematicity was first proposed as a topic of research in neural networks by Fodor & Pylyshyn (1988), who argue that cognitive capabilities exhibit certain symmetries, and that representations of mental states have combinatorial syntactic and semantic structure. Whether or not neural networks can exhibit systematic compositionality has been a subject of much debate in the research community (Fodor & Pylyshyn, 1988; Christiansen & Chater, 1994; Marcus, 1998; Phillips, 1998; Chang, 2002; van der Velde et al., 2004; Botvinick & Plaut, 2009; Bowers et al., 2009; Brakel & Frank, 2009; Fodor & Lepore, 2002; Marcus, 2018; Calvo & Symons, 2014).\nBahdanau et al. (2019b) investigate various VQA architectures such as neural module networks (NMNs) (Andreas et al., 2016), MAC (Hudson & Manning, 2018), FiLM (Perez et al., 2018), and relation networks (Santoro et al., 2017) on their ability to systematically generalize on a new synthetic dataset called SQOOP. They show that only NMNs are able to robustly solve test problems, but succeed only when a fixed tree-structured layout is provided. When learning to infer the module network layout, robust tree-structured layouts only emerged if given a strong prior to do so. The authors conclude that explicit regularization and stronger priors are required for the development of the right layout structure.\nCLEVR (Johnson et al., 2017a) is a popular VQA dataset, and various benchmarks achieve almost-perfect CLEVR validation scores (Hu et al., 2017; Hudson & Manning, 2018; Perez et al., 2018; Santoro et al., 2017). Bahdanau et al. (2019a) proposed an extension of CLEVR with a new evaluation dataset called CLOSURE, containing novel combinations of linguistic concepts found in CLEVR. The authors found that many of the existing models in the literature fail to systematically generalize to CLOSURE. Moreover, there is a significant gap between the performance achieved with ground-truth layouts and learned layouts on CLOSURE.\nLanguage emergence and compositionality. Agents interacting in a cooperative environment can learn a language to communicate to solve a particular task. The emergence of such a communication protocol has been studied extensively in multi-agent referential games. In these games, one agent must describe what it saw to another agent, which is tasked with figuring out what the first agent saw (Lewis, 2008; Skyrms, 2010; Steels & Loetzsch, 2012). To encourage a dialogue between agents, several multi-stage variants of such a game have also been proposed (Kottur et al., 2017; Evtimova et al., 2018). Most approaches to learning a discrete communication protocol between agents use reinforcement learning (Foerster et al., 2016; Lazaridou et al., 2017; Kottur et al., 2017; Jorge et al., 2016; Havrylov & Titov, 2017). However, the Gumbel straight-through estimator (Jang et al., 2017) can also be used (Havrylov & Titov, 2017), as can backpropagation when the language in question is continuous (Foerster et al., 2016; Sukhbaatar & Fergus, 2016; Singh et al., 2019).\nSeveral works in the literature have found that compositionality only arises in emergent languages if appropriate environmental pressures are present (Kottur et al., 2017; Choi et al., 2018; Lazaridou et al., 2018; Chaabouni et al., 2020). While generalization pressure is not sufficient to guarantee compositionality, compositional languages tend to exhibit better systematic generalization (Bahdanau et al., 2019b; Chaabouni et al., 2020). 
The community still lacks strong research indicating what general conditions are necessary or sufficient for compositional language emergence.\nIterated learning. The origins of the compositionality of human language, which leads to an astounding open-ended expressive power, have attracted much interest over the years. Kirby (2001) suggests that this phenomenon is a result of a learning bottleneck arising from the need to learn a highly expressive language with only a limited set of supervised learning data. The iterated application of this bottleneck, as instantiated by IL, has been demonstrated to cause artificial languages to develop structure in experiments with human participants (Kirby et al., 2008; Silvey et al., 2015).\nRen et al. (2020) present neural IL following the principles of Kirby (2001), where neural agents play a referential game and evolve a communication protocol through IL. They use topographic similarity (Brighton & Kirby, 2006) to quantify compositionality, and find that high topographic similarity improves the learning speed of neural agents, allows the listener to recognize more objects using less data, and increases validation performance. However, these experiments are limited to domains with extremely simple object and message structure.\nSeveral ablation studies (Li & Bowling, 2019; Ren et al., 2020) have found that re-initializing the speaker and the listener between generations is necessary to reap the benefits of compositionality from IL. However, seeded IL (Lu et al., 2020) proposes to seed a new agent with the previous generation’s parameters at the end of the learning phase of a new generation. Since self-play has not yet fine-tuned this initialization, it has not had the opportunity to develop a non-compositional language to fit the training data. The authors find that seeded IL helps counter language drift in a translation game, and hypothesize that IL maintains the compositionality of natural language." }, { "heading": "3 METHOD", "text": "We are interested in solving the task of visual question-answering (VQA). Let X be the space of images about which our model will be required to answer questions. Next, let Q be the space of natural-language questions and Y the space of all possible answers to the questions. Additionally, we consider a space Z of programs, which represent computation graphs of operations that can be performed on an image in X to produce an output in Y . We consider a question template T to be\na set of tuples (q, z), where q ∈ Q and z ∈ Z . Each question template contains questions with the same structure but varying primitive values. For example, the questions “Is a triangle blue” and “Is a square red” belong to a template “Is a SHAPE COLOR.” The program z corresponding to the question q in a template defines a computation graph of operations that would produce the correct answer in Y to q for any input image x ∈ X . Finally, let T be a finite set of question templates. The dataset for training our model and evaluating VQA performance constitutes tuples of the form (q, z,x, y). First, a template T ∈ T is sampled and a tuple (q, z) is sampled from T . Then, an image x ∈ X is sampled and the answer y is produced by passing x through the program z. These collected variables (q, z,x, y) form a single example in the task dataset. To evaluate our model’s performance on unseen question templates, we define Ttrain ⊂ T to be a subset of training templates and Ttest = T − Ttrain to be the subset of test templates. 
The training dataset D is prepared from templates in Ttrain and the out-of-distribution test dataset Dtest from templates in Ttest. We allow a program z to be absent in D, in which case it is not used for auxiliary supervision during training. Our goal of systematic generalization is to learn a model p(Y | X,Q) that performs well on the dataset Dtest created using unseen question templates, where Y , X , and Q are random variables taking values in Y , X , and Q. We define our model to be a composition of a program generator PGθ(Z | Q) and an execution engine EEφ(Y | X,Z), parameterized by θ and φ respectively." }, { "heading": "3.1 MODEL ARCHITECTURE", "text": "Our overall model architecture is illustrated in Figure 1. We use the attention-based sequence-to-sequence model from Bahdanau et al. (2019a) as the program generator, which translates an input question q into a sampled program ẑ. The execution engine assembles modules into a layout dictated by ẑ to instantiate an NMN that takes the input image x to predict an answer ŷ. In this work, we explore three execution engine architecture choices, including Tensor-NMN, which is the module architecture from Johnson et al. (2017b), and Vector-NMN from Bahdanau et al. (2019a). We also experiment with a novel hybrid of these architectures that performs better on SHAPES-SyGeT, which we dub Tensor-FiLM-NMN. Appendix B elaborates the details of our model architecture.\nThe goal of the Tensor-FiLM-NMN architecture is to combine the minimalist, FiLM-based definition of modules in Vector-NMN with the spatial representational power of Tensor-NMN. As such, the modules in Tensor-FiLM-NMN use FiLM to condition global operations on module-specific embeddings as in Vector-NMN, but the module inputs and outputs are tensor-valued feature maps like in Tensor-NMN. The following equations illustrate the working of a Tensor-FiLM-NMN module:\nh̃1 = γh ⊙ h1 ⊕ βh,  h̃2 = γh ⊙ h2 ⊕ βh (1)\ne = [(γx ⊙ x ⊕ βx); max(h̃1, h̃2); (h̃1 − h̃2)] (2)\ng = W1 ∗ e ⊕ b1 (3)\ny = ReLU(W2 ∗ (γg ⊙ ReLU([g; cumsum_left(g); cumsum_down(g)]) ⊕ βg) ⊕ b2 + e) (4)\nHere, x is the input image, h1 and h2 are the module inputs, and y is the module output; ∗ denotes convolution, ⊙ channel-wise scaling, and ⊕ broadcast addition. γ• and β• are FiLM parameters computed using different 2-layer MLPs per layer on the module-specific embeddings. The weights W• and biases b• are shared across all modules. We further strengthen our model's ability for spatial reasoning by using cumulative sums of an intermediate representation across locations in left-to-right and top-to-bottom directions. This allows our model, for example, to select regions to the ‘left of’ or ‘below’ objects through appropriate scaling and thresholding.
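To make Equations (1)–(4) concrete, the following is a minimal PyTorch-style sketch of a Tensor-FiLM-NMN module for the binary-arity case. The channel bookkeeping (in particular, the 1×1 projection back to C channels at the end) and the exact FiLM MLP shapes are our assumptions; the paper specifies the operations but leaves these implementation details implicit.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TensorFiLMModule(nn.Module):
    """One Tensor-FiLM-NMN module (Equations 1-4), binary-arity case."""
    def __init__(self, C, embed_dim):
        super().__init__()
        # Shared convolutions (W1, b1) and (W2, b2); e has 3*C channels.
        self.conv1 = nn.Conv2d(3 * C, C, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(3 * C, 3 * C, kernel_size=3, padding=1)
        # ASSUMPTION: project the 3*C-channel residual output back to C channels.
        self.proj = nn.Conv2d(3 * C, C, kernel_size=1)
        # One 2-layer MLP per FiLM site, conditioned on the module embedding.
        def film_mlp(ch):
            return nn.Sequential(nn.Linear(embed_dim, embed_dim), nn.ReLU(),
                                 nn.Linear(embed_dim, 2 * ch))
        self.film_h, self.film_x, self.film_g = film_mlp(C), film_mlp(C), film_mlp(3 * C)

    @staticmethod
    def film(t, params):
        # gamma * t + beta with per-channel parameters, broadcast over H and W.
        gamma, beta = params.chunk(2, dim=-1)
        return gamma[..., None, None] * t + beta[..., None, None]

    def forward(self, x, h1, h2, emb):
        p_h, p_x, p_g = self.film_h(emb), self.film_x(emb), self.film_g(emb)
        h1t, h2t = self.film(h1, p_h), self.film(h2, p_h)                          # Eq. (1)
        e = torch.cat([self.film(x, p_x), torch.max(h1t, h2t), h1t - h2t], dim=1)  # Eq. (2)
        g = self.conv1(e)                                                          # Eq. (3)
        # Cumulative sums over width (left-to-right) and height (top-to-bottom).
        cat = torch.cat([g, g.cumsum(dim=3), g.cumsum(dim=2)], dim=1)
        y = F.relu(self.conv2(self.film(F.relu(cat), p_g)) + e)                    # Eq. (4)
        return self.proj(y)
```

Modules of other arities would drop or duplicate the h inputs accordingly, and the ‘scene’ module (arity 0, Appendix B) simply represents the CNN image features.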
" }, { "heading": "3.2 ITERATED LEARNING FOR NMNS", "text": "We can view the program generator and the execution engine as communicating agents in a cooperative VQA game where the programs passed between agents are messages drawn from an emergent language. Introducing more compositional structure to this language, such as reusing low-level concepts using the same tokens and high-level concepts using the same sequence of tokens, helps address the combinatorial complexity of the question space, allowing agents to perform better on new question templates containing previously-unseen combinations of known concepts.\nWe use IL to encourage the emergence of structure in the generated programs. We iteratively spawn new program generator and execution engine agents, train them on the VQA task, and transfer their knowledge to the next generation of agents. Limiting this transmission of knowledge between generations imposes a learning bottleneck, where only the easy-to-learn linguistic concepts survive. Since the central hypothesis of neural IL is that structure is easier for neural networks to learn than idiomatic non-compositional rules (Li & Bowling, 2019; Ren et al., 2020), our IL algorithm pushes the language of the programs to be more compositional. The combination of learning a compositional structure and performing well on the VQA training task engenders systematic generalization.\nWe follow Ren et al. (2020) in dividing our method into three stages, an interacting phase, a transmitting phase, and a learning phase, which are cycled through iteratively until the end of training. Figure 2 illustrates the phases of our method for training module networks with compositional emergent layouts, and Appendix C presents a formal algorithm for the same.\nInteracting phase. The program generator and the execution engine must work together. Without programs that consistently use the same tokens to mean the same operations, the execution engine cannot assign the correct semantics to modules. Simultaneously, the program generator depends on the language grounding provided by the execution engine in the form of a reinforcement signal for beneficial layouts. To make the problem more tractable, it is common to provide the model with a small set of ground-truth programs (Johnson et al., 2017b; Vedantam et al., 2019). We find that providing program supervision with a small number of ground-truth programs throughout the interacting phase helps generalization. This can be seen as a form of supervised self-play (Lowe et al., 2020), and has the effect of simulating an inductive bias towards the desired program language.\nThe agents are jointly trained for Ti steps to minimize the answer prediction error on data sampled from the training dataset D, with predictions given by ẑ ∼ PGθ(q); ŷ = EEφ(x, ẑ). We train EEφ by minimizing the cross-entropy loss L between the true answer label y and the predicted distribution ŷ. Additionally, we estimate the gradients for the parameters θ using REINFORCE:\n∇θ LPG(θ) = E_{ẑ∼pθ(·|q)} [clip(−L, −5, 5) ∇θ log pθ(ẑ | q)]. (5)\nHere, pθ is the distribution over programs returned by PGθ(q). We use the negative cross-entropy loss −L, clipped between −5 and 5, as the reward for the program generator. When a ground-truth program z is available, we additionally minimize a weighted cross-entropy loss between z and the generated program ẑ using teacher forcing. In practice, the model operates on minibatches of data, and a subset of every minibatch is subsampled from the examples with ground-truth programs; a code sketch of this update is given below.
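A minimal sketch of one interacting-phase update (Equation 5), assuming a PyTorch-style implementation; `sample_with_log_prob` and `teacher_forcing_loss` are hypothetical helper methods, not the authors' API:

```python
import torch
import torch.nn.functional as F

def interacting_step(pg, ee, q, x, y, z=None, sup_weight=1.0):
    # Sample a program and its total log-probability under the generator;
    # sample_with_log_prob is a hypothetical helper, not the authors' API.
    z_hat, log_prob = pg.sample_with_log_prob(q)               # log_prob: [batch]
    answer_logits = ee(x, z_hat)
    ce = F.cross_entropy(answer_logits, y, reduction="none")   # per-example loss L
    # REINFORCE surrogate for Equation (5): the reward is -L clipped to [-5, 5];
    # detach() treats the reward as a constant when differentiating w.r.t. theta.
    reward = torch.clamp(-ce.detach(), -5.0, 5.0)
    loss = ce.mean() - (reward * log_prob).mean()
    if z is not None:  # weighted, teacher-forced program supervision
        loss = loss + sup_weight * pg.teacher_forcing_loss(q, z)
    return loss
```

Minimizing the `-(reward * log_prob)` term yields exactly the gradient estimator of Equation (5), since the reward is held constant with respect to θ.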
Transmitting phase. During the transmitting phase, a new dataset D′ with Tt samples is prepared for use during the learning phase. Questions q, as well as ground-truth programs z if available, are sampled from D. For data examples without ground-truth programs, a program ẑ is sampled from the program generator using q. Finally, the question–program pairs (q, z), if z is available, or (q, ẑ) are added to D′. By transmitting the ground-truth programs when available, we continue simulating the inductive bias towards desirable programs during the learning phase.\nLearning phase. Program generators and execution engines are initialized in the learning phase, forming a new generation of agents to play the VQA game. The new program generator then acquires the previous agents' language from the transmitted data D′. This acquisition is imperfect, however, due to a learning bottleneck that exerts pressure towards compositionality (Ren et al., 2020).\nWe implement this learning bottleneck primarily by limiting the number of gradient updates in the learning phase, effectively performing early stopping. A smaller number of program generator gradient updates Tp leads to an underfit program generator that could waste computation during the interacting phase re-learning global linguistic rules. On the other hand, setting Tp to a high value can lead to overfitting and learning of the previous generation's idiosyncrasies, losing the benefits of the learning bottleneck. In addition to limiting the number of training iterations, we optionally further regularize the model by applying spectral normalization (Miyato et al., 2018) to the program generator's decoder LSTM parameters.\nWe train the new program generator PGθ̃ by minimizing the cross-entropy loss between model samples z̃ ∼ PGθ̃(q) and transmitted programs ẑ corresponding to q in D′ for Tp gradient steps. Then, the new execution engine EEφ̃ is trained with programs from the new program generator PGθ̃, using the entire dataset D for Te steps. The forward pass in the execution engine learning phase is similar to that in the interacting phase, but the backward pass only updates the execution engine parameters φ̃. Adapting the new execution engine to the new program generator ensures stable training in the upcoming interacting phase." }, { "heading": "4 SYSTEMATIC GENERALIZATION TEST FOR SHAPES", "text": "The SHAPES dataset (Andreas et al., 2016) is a popular yet simple VQA dataset. The lower image and question complexities of SHAPES relative to CLEVR make it an attractive choice for experimentation, as training can be faster and require fewer resources. Each of the 15616 data points in SHAPES contains a unique image. However, there are only 244 unique questions. Although the validation and test splits of SHAPES contain unique questions not present in any of the other splits, the questions are of the same form as the ones in the training split.\nTo the best of our knowledge, there is no published systematic generalization evaluation setup for SHAPES. Thus, we present SHAPES-SyGeT, a SHAPES Systematic Generalization Test (available at https://github.com/ankitkv/SHAPES-SyGeT). To understand the distribution of question types in SHAPES, we categorized questions into 12 standard templates, of which 7 are used for training and 5 to test systematic generalization.\nTo evaluate the in-distribution and out-of-distribution generalization performance of our models, we prepare the SHAPES-SyGeT training set with only a subset of the questions under each training template and use the rest as an in-distribution validation set (Val-IID). Questions belonging to the evaluation templates are used as an out-of-distribution validation set (Val-OOD). Please see Appendix D for further details about the question templates and the dataset splits." }, { "heading": "5 EXPERIMENTS", "text": "In this section, we present results on SHAPES-SyGeT, CLEVR, and CLOSURE. For our preliminary experiments on GQA (Hudson & Manning, 2019), please see Appendix G. The SHAPES-SyGeT experiments are run with 3 different seeds and the CLEVR experiments use 8 seeds. We report the mean and standard deviation of metrics over these trials. CLEVR experiments use representations of images from a pre-trained ResNet-101 (He et al., 2016) as input, whereas the SHAPES-SyGeT runs use standardized raw images. Please see Appendix A for the hyperparameters we use.
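Since the IL experiments below rely on the learning-phase spectral normalization described in Section 3.2, the following is a minimal sketch of one way to apply it to the program generator's decoder LSTM. It assumes a PyTorch implementation; applying spectral normalization per named recurrent weight matrix is our reading, not a detail the paper specifies.

```python
import torch.nn as nn
from torch.nn.utils import spectral_norm

decoder = nn.LSTM(input_size=256, hidden_size=256, batch_first=True)  # sizes are illustrative
# Constrain each weight matrix (e.g. weight_ih_l0, weight_hh_l0); snapshot the
# parameter names first because spectral_norm re-registers them.
for name, _ in list(decoder.named_parameters()):
    if name.startswith("weight"):
        spectral_norm(decoder, name=name)
```

Spectral normalization bounds the largest singular value of each weight matrix, limiting how sharply the new program generator can fit the transmitted data within its Tp learning-phase updates.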
For our NMN baselines without IL, we still utilize the multi-task objective of using REINFORCE to backpropagate gradients to the program generator and using a cross-entropy loss to provide program supervision when available. We find this method of training NMNs to generalize better than pretraining on the available programs and then interacting without supervision, as is done in Johnson et al. (2017b). One can also view our baselines as executing one long interacting phase without any other phases or resetting of the model." }, { "heading": "5.1 SHAPES-SYGET", "text": "Due to its lightweight nature, we use SHAPES-SyGeT for a series of experiments designed to illuminate the essential components of our method. For all SHAPES-SyGeT experiments, we record the in-distribution Val-IID and the out-of-distribution Val-OOD accuracies. However, model selection and early stopping are done based on Val-IID. For our IL experiments, we use spectral normalization of the program generator during the learning phase, and we study its utility in Appendix E. We do not report results on Vector-NMN, which performs very poorly. We believe that this is because SHAPES-SyGeT requires intermediate module outputs to contain detailed spatial information, which Vector-NMN has an inductive bias against, since its module outputs are vectors.\nWe note that the reported results on SHAPES in the literature make use of NMN architectures with specially designed modules for each operation (Andreas et al., 2016; Hu et al., 2017; Vedantam et al., 2019). In contrast, we use generic modules in order to study compositional layout emergence with minimal semantic priors, making our results hard to compare directly to prior work using SHAPES.\nIn-distribution and out-of-distribution accuracies. Table 1 illustrates the difference in the performance of various configurations of our models. All the reported models achieve perfect accuracy on the training data but generalize differently to Val-IID and Val-OOD. We find that Tensor-FiLM-NMN generalizes better than Tensor-NMN trained with supervision of both 20 and 135 ground-truth programs. Moreover, we find that IL improves generalization across all configurations. We further evaluate the relationship between program supervision and performance in Figure 3 and find that IL improves performance at all levels of supervision, effectively improving data efficiency.\nLearning bottleneck. Although all models achieve perfect training accuracy in under 5000 steps, we notice that only IL leads to gradually increasing systematic generalization as training progresses. Figure 4 shows the effect of the learning bottleneck of IL, comparing two module architectures with and without IL. In the presence of some ground-truth program supervision that the emergent language should abide by, there is a stricter notion of correct programs (it is unlikely that a diverse set of programs provided for supervision could fit into a compositional structure significantly different from the ground truth, and small errors in the serialized programs could result in drastic differences in the parsed computation graph structure and semantics). Thus, in calculating the program accuracy, we consider a program to be correct if it matches the ground-truth program exactly and incorrect otherwise.
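This exact-match criterion is straightforward to compute; a minimal sketch, with programs represented as token lists (names are illustrative):

```python
def program_accuracy(predicted, ground_truth):
    """Fraction of predicted programs whose token sequences match exactly."""
    correct = sum(p == g for p, g in zip(predicted, ground_truth))
    return correct / len(ground_truth)

# e.g., using the serialized program from Figure 1:
# prog = ["and", "color[green]", "scene", "transform[left of]", "shape[square]", "scene"]
# program_accuracy([prog], [prog])  # -> 1.0
```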
We find that the program accuracy increases steadily under the influence of the learning bottleneck as training progresses with IL, indicating increasing structure in the language of the programs.\nAblation tests. We use SHAPES-SyGeT to perform a series of ablation tests and report the full results for these experiments in Appendix E. We first examine the importance of reinitializing the execution engine. We consider two alternatives to reinitializing it from scratch at every generation: seeded IL (Lu et al., 2020), which uses the parameters at the end of the previous execution engine learning phase for re-initialization, and simply retaining the execution engine without any re-initialization. We find that both of these methods harm generalization performance compared to the standard setting.\nNext, we study the effect of spectral normalization applied to the program generator decoder. Without IL, spectral normalization has a marginal effect on generalization, indicating that it alone is not sufficient to achieve the generalization improvements we observe. However, it improves the Val-IID and Val-OOD accuracies substantially with IL in the standard execution engine setting. Finally, we observe that retaining the previous program generator in a new generation without re-training greatly harms the performance of our best model, indicating that the learning phase is crucial.\nCuriously, we note that the effects of spectral normalization and not resetting the program generator are small or reversed when the execution engine is not re-initialized from scratch. This can be explained by an empirical observation: on a small dataset like SHAPES-SyGeT, a partially trained execution engine finds it easier to overfit to the input images when the programs are noisy, instead of waiting for the program generator to catch up to the existing module semantics." }, { "heading": "5.2 CLEVR AND CLOSURE", "text": "CLEVR is significantly larger and more complex than SHAPES-SyGeT. It takes an execution engine over a day to reach 95% task accuracy on the CLEVR validation set on an Nvidia RTX-8000 GPU, even when trained with ground-truth programs without a program generator. Re-training the execution engine from scratch for every generation of IL is thus computationally infeasible. Between using seeded IL and not resetting the execution engine at all, we find that not resetting the execution engine generalizes better. CLEVR contains 699989 training questions, and we provide 100 ground-truth programs for program supervision, one-tenth of the number used by Vedantam et al. (2019). We find that Tensor-FiLM-NMN does not improve over Vector-NMN for CLEVR. Thus, we report results only on Tensor-NMN and Vector-NMN for conciseness.\nFigure 5 illustrates the training dynamics with and without IL on CLEVR, and Table 2 reports model performance on the CLEVR validation set and the out-of-distribution CLOSURE categories. The CLEVR validation curves through training are presented in Appendix F. Similar to Bahdanau et al. (2019a), we find that Vector-NMN generalizes better than Tensor-NMN on CLEVR without IL.
}, { "heading": "6 CONCLUSION", "text": "We establish IL as a practical tool for the emergence of structure in machine learning by demonstrating its utility in challenging VQA tasks using NMNs. Our work shows that IL leads to improved performance with less supervision and facilitates systematic generalization. As the study of IL in neural agents is relatively young and has largely been explored in the narrow domain of language emergence in referential games, our setting presents several important new challenges for IL methods. Unlike in simple games (Ren et al., 2020), the emergence of compositional language without any supervision is thus far infeasible in VQA due to the difficult joint optimization of the program generator and the execution engine. However, by exploring learning phase regularization, suitable model architectures, and the learning dynamics of IL, we are able to dramatically reduce the amount of ground-truth supervision necessary; the surprising success of spectral normalization in particular should motivate further research into the role of regularization during the learning phase. We hope that this progress will spur future work to improve IL methods and their theoretical understanding, as well as extend them to other challenging tasks in machine learning." }, { "heading": "ACKNOWLEDGMENTS", "text": "We acknowledge the financial support of Samsung, Microsoft Research, and CIFAR." }, { "heading": "A HYPERPARAMETERS", "text": "Table 3 details the hyperparameters used in our experiments in Section 5 for both SHAPES-SyGeT and CLEVR." }, { "heading": "B MODEL ARCHITECTURE DETAILS", "text": "B.1 PROGRAM GENERATOR\nThe program generator is a seq2seq model that translates input questions to programs. We use the Seq2SeqAtt model of Bahdanau et al. (2019a), where unlike Johnson et al. (2017b), the decoder uses an attention mechanism (Bahdanau et al., 2015) over the encoder hidden states. We found this model to generalize better on held-out in-distribution questions in preliminary experiments. Ideally, the program generator’s target must be tree-structured since the output program represents a computation graph. Since a seq2seq model cannot directly model graphs, we instead model a serialized representation of the programs. Like Johnson et al. (2017b), we chose to use the Polish (or prefix) notation to serialize the syntax tree.\nConsidering the program generator parameterized by θ, the probability distribution over layouts ẑ is given by\npθ(ẑ | q) = t∏ i=1 pθ(ẑi | ẑ1:i−1, q). (6)\nDuring training, we sample programs from this distribution but take the argmax at each timestep when evaluating our model. We use an LSTM (Hochreiter & Schmidhuber, 1997) trained using teacher forcing to represent the distribution pθ(ẑi | ẑ1:i−1, q) at every timestep i. To condition on q for each decoder step, we perform attention on the hidden states of a bidirectional LSTM encoder over the question q.\nB.2 EXECUTION ENGINE\nTo run the program ẑ produced by the program generator, the execution engine assembles neural modules according to ẑ into a neural network that takes x as input to produce an answer ŷ. We consider this execution engine EEφ(x, ẑ) to be parameterized by φ. To illustrate a forward pass through the module network, consider the program ‘and color[green] scene transform[left of] shape[square] scene’ from Figure 1. ‘scene’ is a special module which represents the input image features from a CNN feature extractor. 
According to the annotators, the ‘color[green]’ module should filter green shapes from the scene, and the ‘shape[square]’ module should filter squares. ‘transform[left of]’ should select regions to the left of the filtered squares. In Figure 1, the final ‘and’ module should then compute the intersection of green shapes and the region to the left of squares. A classifier takes this output from the top-level module to produce a distribution over answers, and we consider the final answer ŷ to be the argmax of this distribution.\nEach module corresponds to one token of the program and can be reused in different configurations. Like Johnson et al. (2017b), our modules are not hand-designed to represent certain functions but have a similar architecture for all operators. Other than the ‘scene’ module, which has arity 0, all other modules are implemented as a CNN on the module inputs to produce a module output. In this work, we explore three module architecture choices to make up the execution engine: Tensor-NMN, Vector-NMN, and Tensor-FiLM-NMN. Here, the Tensor-NMN module architecture is the module network proposed by Johnson et al. (2017b), Vector-NMN is the architecture from Bahdanau et al. (2019a), and Tensor-FiLM-NMN is described in Section 3.1.\nC ITERATED LEARNING ALGORITHM\nWe present the full IL algorithm described in Section 3.2 in Algorithm 1." }, { "heading": "D SHAPES-SYGET TEMPLATES AND SPLITS", "text": "The images in SHAPES are arranged in a 3 × 3 grid, where each grid cell can contain a triangle, square, or circle that is either red, green, or blue, or the cell can be empty. Table 4a shows the standard splits for training and evaluating SHAPES used in the literature (Andreas et al., 2016; Hu et al., 2017; Vedantam et al., 2019). We categorize SHAPES questions into 12 standard templates, listed in Table 5. We then derive new splits for the SHAPES data, such that each split has the same primitives, but different ways in which the primitives are combined. The final split of the SHAPES-SyGeT dataset is presented in Table 4b." }, { "heading": "E ABLATION EXPERIMENTS", "text": "We perform ablation experiments on SHAPES-SyGeT rather than CLEVR/CLOSURE, as the computational requirements of experiments on this dataset are an order of magnitude lighter. Results are presented in Table 6. We focus primarily on Tensor-FiLM-NMN, which we find benefits far more from IL than Tensor-NMN. In all cases, we use 20 ground-truth programs for supervision, a choice which we find clearly reflects the differences between the various settings considered. We describe the important observations from our ablation study in Section 5.1." }, { "heading": "F CLEVR VALIDATION CURVES", "text": "The validation curves for CLEVR, illustrated in Figure 6, are similar to the training curves in Figure 5. Although CLEVR training exhibits less overfitting than SHAPES-SyGeT, we observe higher systematic generalization performance on CLOSURE with models trained using IL." }, { "heading": "G PRELIMINARY EXPERIMENTS WITH GQA", "text": "To study whether IL can offer benefits for larger datasets without synthetic images and questions, we choose to evaluate our IL method on the GQA dataset (Hudson & Manning, 2019). As a preliminary setup, we remain as close as possible to the architectural setup described in Section 3. We only use question text, spatial image features from a pre-trained ResNet-101 (He et al., 2016), and programs, all without object annotations. 
We serialize the programs according to the Polish notation, as we do in the case of SHAPES-SyGeT and CLEVR; this can result in duplicated token sequences, as GQA programs do not always form strict directed acyclic graphs. Although these limitations allow us to easily apply the presented method to GQA, they also prevent us from obtaining competitive results on the dataset. However, our goal is not to pursue state-of-the-art performance on GQA, but instead to study whether we can still observe the advantage of IL in a limited ground-truth program setting for our NMN setup. Still, due to the significantly higher complexity of GQA, a few modifications and special considerations become necessary.\n
Algorithm 1: Iterated learning for compositional NMN layout emergence.
Data: Nepochs: number of epochs; (Ti, Tt, Tp, Te): length of each phase; D: training dataset.

Initialize PG parameters θ1 and EE parameters φ1
for n = 1 to Nepochs do
    /* Interacting phase */
    for i = 1 to Ti do
        Sample new tuple (q, z, x, y) from D
        ẑ ∼ PGθn(q)
        L ← cross-entropy-loss(y, EEφn(x, ẑ))
        if z is available then L ← L + cross-entropy-loss(z, ẑ)
        Update θn and φn to minimize L
    end
    /* Transmitting phase */
    D′ ← ∅
    for i = 1 to Tt do
        Sample new tuple (q, z, x, y) from D
        if z is available then ẑ ← z else ẑ ∼ PGθn(q)
        Add (q, ẑ) to D′
    end
    /* Program generator learning phase */
    Initialize PG parameters θn+1
    for i = 1 to Tp do
        Sample new tuple (q, ẑ) from D′
        z̃ ∼ PGθn+1(q)
        L ← cross-entropy-loss(ẑ, z̃)
        Update θn+1 to minimize L, with spectral normalization
    end
    /* Execution engine learning phase */
    Initialize EE parameters φn+1
    for i = 1 to Te do
        Sample new tuple (q, z, x, y) from D
        z̃ ∼ PGθn+1(q)
        L ← cross-entropy-loss(y, EEφn+1(x, z̃))
        Update φn+1 to minimize L
    end
end
\nG.1 CHANGES TO THE MODEL ARCHITECTURE\nIn both SHAPES-SyGeT and CLEVR, the programs are sequences of tokens where each token represents an operation, corresponding to one module of the assembled NMN. Each possible operation is carried out on the module input, the input image, or a combination of both, and has a fixed arity. However, the structure of programs in GQA has additional levels of granularity. Each timestep has an operation (e.g. exist, query), an optional sub-operation (e.g. material, weather), an integer arity, and a list of argument tuples. Each argument tuple in this list contains an argument (e.g. horse, water) as well as a Boolean value indicating if the argument is negated. We ignore additional details in the programs provided in the GQA dataset. As an illustration, the question “does the lady to the left of the player wear a skirt?” translates to the program “verify.rel(skirt,wearing,o){1} relate(lady,to the left of,s){1} select(player){0}”, containing three tokens with arities 1, 1, and 0 respectively. The operations and the corresponding sub-operations are separated by a period (.), and the parentheses contain the list of arguments, none of which are negated in this example.\nSince every timestep of the program sequence is given by a tuple (Op, Subop, Args, ArgNegs, Arity), we modify the program generator to produce this tuple instead of a single operation. For Args and ArgNegs, we consider a fixed argument list length of 3, padding with the argument <NULL> and no negation when the argument list is shorter than 3; a parsing sketch for this representation is given below.
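The sketch below parses the serialized textual format shown above, "op[.subop](arg1,arg2,...){arity}", into per-timestep (Op, Subop, Args, ArgNegs, Arity) tuples. The regex-based string handling and the textual marking of negated arguments are our assumptions; the fixed argument list of length 3 follows the description.

```python
import re

# op[.subop](arg1,arg2,...){arity}; the exact string handling is an assumption.
TOKEN = re.compile(r"([\w ]+?)(?:\.([\w ]+))?\(([^)]*)\)\{(\d+)\}")

def parse_program(program):
    steps = []
    for op, subop, args, arity in TOKEN.findall(program):
        arg_list = [a for a in args.split(",") if a]
        # How negation is marked textually is not shown in the paper; this
        # placeholder treats a hypothetical "not(...)" wrapper as negation.
        negs = [a.startswith("not(") for a in arg_list]
        arg_list += ["<NULL>"] * (3 - len(arg_list))   # pad Args to length 3
        negs += [False] * (3 - len(negs))              # pad ArgNegs to length 3
        steps.append((op.strip(), subop or None, arg_list[:3], negs[:3], int(arity)))
    return steps

prog = "verify.rel(skirt,wearing,o){1} relate(lady,to the left of,s){1} select(player){0}"
# parse_program(prog)[0] -> ('verify', 'rel', ['skirt', 'wearing', 'o'], [False, False, False], 1)
```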
This allows us to represent the distribution over the tokens at various levels of granularity for every timestep using a fixed number of logits.\nFinally, it is no longer feasible to have dedicated modules for every possible unique program step, as the number of possible (Op, Subop, Args, ArgNegs) combinations is very large (Arity is consumed during the assembly of the NMN). Thus, we restrict ourselves to using the Vector-NMN architecture, where, instead of using a FiLM embedding for the operation the module represents, we concatenate embeddings for the Op, Subop, Args, and ArgNegs to generate timestep-specific FiLM embeddings.\nG.2 CHANGES TO THE METHOD\nUnlike the question and program structures of SHAPES-SyGeT and CLEVR, the structures in GQA are significantly more diverse and include larger vocabularies, as illustrated in Table 7. This makes training the program generator to a decent level of performance prohibitively slow, even when training on ground-truth programs directly. As a result, it becomes impractical to train the program generator from scratch in every generation of IL. We thus need to explore strategies for resetting the program generator between generations while maintaining the benefits of the learning bottleneck of IL.\nWe hypothesize that, due to the larger vocabulary sizes, learning good embeddings for question tokens and the various program tokens at all levels of granularity constitutes a large portion of the program generator training time. Following this intuition, we find it helpful to retain the input and output embeddings of the questions and the programs while resetting the other parameters between generations. Experimentally, we verified that this method of resetting generalizes better than not resetting the program generator at all during IL. It also outperforms an alternative strategy of choosing new parameters by interpolating between a freshly initialized program generator and the final program generator of the previous generation.\nFinally, despite a search over the Vector-NMN hyperparameters, we find that the optimal hyperparameters still lead to overfitting on the GQA images. The strategy of retaining embeddings and resetting the rest of the parameters also works well to combat this overfitting for the execution engine, where the embeddings we retain are the various FiLM embeddings. In Section G.3, we compare our IL method with baselines that only implement this partial reset of the execution engine as a regularizer.\nG.3 RESULTS\nFor our GQA experiments, we use the balanced train and validation splits for training and reporting results, respectively. For each run, we report the mean and standard deviation across 3 trials. To study the effect of IL with limited ground-truth programs, we run our IL experiments with 4000 ground-truth programs, which constitutes only 0.4% of all available training programs.\nTable 8 presents the results for different configurations of our models. A Vector-NMN execution engine that assembles modules using ground-truth programs during both training and validation provides an upper bound on the performance of a learned program generator. When learning the
While we find this method to generalize better than standard training, we still find it necessary to employ the learning bottleneck through the learning phase of the program generator to achieve the best accuracy in the limited-program-supervision setting.\nWe see a clear advantage of IL in learning the correct program structure in Figure 7, which reports program accuracy at various levels of granularity. We notice that this increase in the program accuracy correlates with an improved generalization performance in Figure 8, even though the training accuracy of all the iterated models is similar. These preliminary experiments on GQA indicate that a learning bottleneck can be beneficial even in larger non-synthetic datasets, but may require additional considerations such as partial resetting to make the training time tractable." } ]
2021
ITERATED LEARNING FOR EMERGENT SYSTEMATICITY IN VQA
SP:ea7daa9dbbcba08e7c094630ef2bb55badc4fde5
[ "This paper describes a new method for normalizing few-shot learning episodes. The authors point out that the statistics of an episode are unreliable when the size of the episode is small or when the data distribution changes from episode to episode. To remedy this, the authors propose a method called ‘MetaNorm’ which uses a meta-learning approach to infer the means and variances to be used in the batch normalization layers that are employed in the feature extractor component. In particular, they meta-learn the parameters for a set of hypernetworks in an amortized fashion that learn to generate the means and variances of the batch normalization layers conditioned on the contents of the episode. The paper focuses entirely on the few-shot image classification scenario where MetaNorm is evaluated in various settings including standard few-shot classification and domain generalization (including a novel few-shot domain generalization setting).", "This paper proposes to replace batch normalization statistics, which are typically computed as the batch moments during training or a fixed training average during testing, with the outputs of learned neural networks. These networks are trained to minimize the KL divergence between their output and the expected or desired batch statistics. In this way, the statistics computation is amortized and can hopefully generalize in the face of small batches and distribution shift." ]
Batch normalization plays a crucial role when training deep neural networks. However, batch statistics become unstable with small batch sizes and are unreliable in the presence of distribution shifts. We propose MetaNorm, a simple yet effective meta-learning normalization. It tackles the aforementioned issues in a unified way by leveraging the meta-learning setting and learns to infer adaptive statistics for batch normalization. MetaNorm is generic, flexible and model-agnostic, making it a simple plug-and-play module that is seamlessly embedded into existing meta-learning approaches. It can be efficiently implemented by lightweight hypernetworks with low computational cost. We verify its effectiveness by extensive evaluation on representative tasks suffering from the small batch and domain shift problems: few-shot learning and domain generalization. We further introduce an even more challenging setting: few-shot domain generalization. Results demonstrate that MetaNorm consistently achieves better, or at least competitive, accuracy compared to existing batch normalization methods.
[ { "affiliations": [], "name": "Yingjun Du" }, { "affiliations": [], "name": "Xiantong Zhen" }, { "affiliations": [], "name": "Ling Shao" }, { "affiliations": [], "name": "Cees G. M. Snoek" } ]
[ { "authors": [ "Kelsey Allen", "Evan Shelhamer", "Hanul Shin", "Joshua Tenenbaum" ], "title": "Infinite mixture prototypes for few-shot learning", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Yogesh Balaji", "Swami Sankaranarayanan", "Rama Chellappa" ], "title": "MetaReg: Towards domain generalization using meta-regularization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Hakan Bilen", "Andrea Vedaldi" ], "title": "Universal representations: The missing link between faces, text, planktons, and cat breeds", "venue": "arXiv preprint arXiv:1701.07275,", "year": 2017 }, { "authors": [ "Nils Bjorck", "Carla P Gomes", "Bart Selman", "Kilian Q Weinberger" ], "title": "Understanding batch normalization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "John Bronskill", "Jonathan Gordon", "James Requeima", "Sebastian Nowozin", "Richard Turner" ], "title": "TaskNorm: Rethinking batch normalization for meta-learning", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Fabio Maria Carlucci", "Antonio D’Innocente", "Silvia Bucci", "Barbara Caputo", "Tatiana Tommasi" ], "title": "Domain generalization by solving jigsaw puzzles", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Woong-Gi Chang", "Tackgeun You", "Seonguk Seo", "Suha Kwak", "Bohyung Han" ], "title": "Domain-specific batch normalization for unsupervised domain adaptation", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Mircea Cimpoi", "Subhransu Maji", "Iasonas Kokkinos", "Sammy Mohamed", "Andrea Vedaldi" ], "title": "Describing textures in the wild", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2014 }, { "authors": [ "Harm de Vries", "Florian Strub", "Jérémie Mary", "Hugo Larochelle", "Olivier Pietquin", "Aaron C Courville" ], "title": "Modulating early visual processing by language", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Yingjun Du", "Jun Xu", "Huan Xiong", "Qiang Qiu", "Xiantong Zhen", "Cees G.M. 
Snoek", "Ling Shao" ], "title": "Learning to learn with variational information bottleneck for domain generalization", "venue": "In European Conference on Computer Vision,", "year": 2020 }, { "authors": [ "Abdul Fatir" ], "title": "A re-implementation of “prototypical networks for few-shot learning", "venue": "https: //github.com/cyvius96/prototypical-network-pytorch,", "year": 2018 }, { "authors": [ "Chelsea Finn" ], "title": "Code for “Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "https://github.com/cbfinn/maml,", "year": 2017 }, { "authors": [ "Chelsea Finn", "Sergey Levine" ], "title": "Meta-learning and universality: Deep representations and gradient descent can approximate any learning algorithm", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Jonathan Gordon" ], "title": "Code for “Meta-learning probabilistic inference for prediction", "venue": "https:// github.com/Gordonjo/versa,", "year": 2019 }, { "authors": [ "Jonathan Gordon", "John Bronskill", "Matthias Bauer", "Sebastian Nowozin", "Richard Turner" ], "title": "Metalearning probabilistic inference for prediction", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Yunhui Guo", "Noel C Codella", "Leonid Karlinsky", "James V Codella", "John R Smith", "Kate Saenko", "Tajana Rosing", "Rogerio Feris" ], "title": "A broader study of cross-domain few-shot learning", "venue": "In European Conference on Computer Vision,", "year": 2020 }, { "authors": [ "Sebastian Houben", "Johannes Stallkamp", "Jan Salmen", "Marc Schlipsing", "Christian Igel" ], "title": "Detection of traffic signs in real-world images: The German traffic sign detection benchmark", "venue": "In International joint conference on neural networks,", "year": 2013 }, { "authors": [ "Sergey Ioffe" ], "title": "Batch renormalization: Towards reducing minibatch dependence in batch-normalized models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Songhao Jia", "Ding-Jie Chen", "Hwann-Tzong Chen" ], "title": "Instance-level meta normalization", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Aakash Kaku", "Sreyas Mohan", "Avinash Parnandi", "Heidi Schambra", "Carlos Fernandez-Granda" ], "title": "Be like water: Robustness to extraneous variables via adaptive feature normalization", "venue": "arXiv preprint arXiv:2002.04019,", "year": 2020 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Brenden M Lake", "Ruslan Salakhutdinov", "Joshua B Tenenbaum" ], "title": "Human-level concept learning through probabilistic program induction", "venue": "Science,", "year": 2015 }, { "authors": [ "Da Li", "Yongxin Yang", "Yi-Zhe Song", "Timothy M Hospedales" ], "title": "Deeper, broader and artier domain generalization", "venue": "In IEEE International Conference on Computer 
Vision, pp. 5542–5550,", "year": 2017 }, { "authors": [ "Da Li", "Yongxin Yang", "Yi-Zhe Song", "Timothy M Hospedales" ], "title": "Learning to generalize: Metalearning for domain generalization", "venue": "arXiv preprint arXiv:1710.03463,", "year": 2017 }, { "authors": [ "Da Li", "Yongxin Yang", "Yi-Zhe Song", "Timothy M Hospedales" ], "title": "Learning to generalize: Metalearning for domain generalization", "venue": "In AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Haoliang Li", "Sinno Jialin Pan", "Shiqi Wang", "Alex C Kot" ], "title": "Domain generalization with adversarial feature learning", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition, pp. 5400–5409,", "year": 2018 }, { "authors": [ "Yanghao Li", "Naiyan Wang", "Jianping Shi", "Jiaying Liu", "Xiaodi Hou" ], "title": "Revisiting batch normalization for practical domain adaptation", "venue": "arXiv preprint arXiv:1603.04779,", "year": 2016 }, { "authors": [ "Tsung-Yi Lin", "Michael Maire", "Serge Belongie", "James Hays", "Pietro Perona", "Deva Ramanan", "Piotr Dollár", "C Lawrence Zitnick" ], "title": "Microsoft COCO: Common objects in context", "venue": "In European Conference on Computer Vision,", "year": 2014 }, { "authors": [ "Ping Luo", "Jiamin Ren", "Zhanglin Peng", "Ruimao Zhang", "Jingyu Li" ], "title": "Differentiable learning-tonormalize via switchable normalization", "venue": "arXiv preprint arXiv:1806.10779,", "year": 2018 }, { "authors": [ "Ping Luo", "Xinjiang Wang", "Wenqi Shao", "Zhanglin Peng" ], "title": "Towards understanding regularization in batch normalization", "venue": "arXiv preprint arXiv:1809.00846,", "year": 2018 }, { "authors": [ "Subhransu Maji", "Esa Rahtu", "Juho Kannala", "Matthew Blaschko", "Andrea Vedaldi" ], "title": "Fine-grained visual classification of aircraft", "venue": "Technical report,", "year": 2013 }, { "authors": [ "Krikamol Muandet", "David Balduzzi", "Bernhard Schölkopf" ], "title": "Domain generalization via invariant feature representation", "venue": "In International Conference on Machine Learning, pp", "year": 2013 }, { "authors": [ "Zachary Nado", "Shreyas Padhy", "D Sculley", "Alexander D’Amour", "Balaji Lakshminarayanan", "Jasper Snoek" ], "title": "Evaluating prediction-time batch normalization for robustness under covariate shift", "venue": "arXiv preprint arXiv:2006.10963,", "year": 2020 }, { "authors": [ "Alex Nichol", "Joshua Achiam", "John Schulman" ], "title": "On first-order meta-learning algorithms", "venue": "arXiv preprint arXiv:1803.02999,", "year": 2018 }, { "authors": [ "Maria-Elena Nilsback", "Andrew Zisserman" ], "title": "Automated flower classification over a large number of classes", "venue": "In Indian Conference on Computer Vision, Graphics & Image Processing,", "year": 2008 }, { "authors": [ "Boris Oreshkin", "Pau Rodríguez López", "Alexandre Lacoste" ], "title": "Tadam: Task dependent adaptive metric for improved few-shot learning", "venue": "Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Xingang Pan", "Ping Luo", "Jianping Shi", "Xiaoou Tang" ], "title": "Two at once: Enhancing learning and generalization capacities via ibn-net", "venue": "In European Conference on Computer Vision,", "year": 2018 }, { "authors": [ "Xingchao Peng", "Qinxun Bai", "Xide Xia", "Zijun Huang", "Kate Saenko", "Bo Wang" ], "title": "Moment matching for multi-source domain adaptation", "venue": "In IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ 
"Sachin Ravi", "Hugo Larochelle" ], "title": "Optimization as a model for few-shot learning", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "James Requeima", "Jonathan Gordon", "John Bronskill", "Sebastian Nowozin", "Richard E Turner" ], "title": "Code for “Fast and flexible multi-task classification using conditional neural adaptive processes”, 2019", "venue": null, "year": 2019 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein", "Alexander Berg", "Li Fei-Fei" ], "title": "ImageNet large scale visual recognition challenge", "venue": "International Journal of Computer Vision,", "year": 2015 }, { "authors": [ "Steffen Schneider", "Evgenia Rusak", "Luisa Eck", "Oliver Bringmann", "Wieland Brendel", "Matthias Bethge" ], "title": "Improving robustness against common corruptions by covariate shift adaptation", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Seonguk Seo", "Yumin Suh", "Dongwan Kim", "Jongwoo Han", "Bohyung Han" ], "title": "Learning to optimize domain specific normalization for domain generalization", "venue": "arXiv preprint arXiv:1907.04275,", "year": 2019 }, { "authors": [ "Jake Snell", "Kevin Swersky", "Richard Zeme" ], "title": "Prototypical networks for few-shot learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Cecilia Summers", "Michael J Dinneen" ], "title": "Four things everyone should know to improve batch normalization", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Eleni Triantafillou", "Tyler Zhu", "Vincent Dumoulin", "Pascal Lamblin", "Utku Evci", "Kelvin Xu", "Ross Goroshin", "Carles Gelada", "Kevin Swersky", "Pierre-Antoine Manzagol", "Hugo Larochelle" ], "title": "Metadataset: A dataset of datasets for learning to learn from few examples", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Hung-Yu Tseng", "Hsin-Ying Lee", "Jia-Bin Huang", "Ming-Hsuan Yang" ], "title": "Cross-domain few-shot classification via learned feature-wise transformation", "venue": "arXiv preprint arXiv:2001.08735,", "year": 2001 }, { "authors": [ "Dmitry Ulyanov", "Andrea Vedaldi", "Victor Lempitsky" ], "title": "Instance normalization: The missing ingredient for fast stylization", "venue": "arXiv preprint arXiv:1607.08022,", "year": 2016 }, { "authors": [ "Hemanth Venkateswara", "Jose Eusebio", "Shayok Chakraborty", "Sethuraman Panchanathan" ], "title": "Deep hashing network for unsupervised domain adaptation", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Oriol Vinyals", "Charles Blundell", "Timothy Lillicrap", "Koray Kavukcuoglu", "Daan Wierstra" ], "title": "Matching networks for one shot learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Catherine Wah", "Steve Branson", "Peter Welinder", "Pietro Perona", "Serge Belongie" ], "title": "The CaltechUCSD Birds-200-2011 Dataset", "venue": "Technical report,", "year": 2011 }, { "authors": [ "Ximei Wang", "Ying Jin", "Mingsheng Long", "Jianmin Wang", "Michael I Jordan" ], "title": "Transferable normalization: Towards improving transferability of deep neural networks", "venue": "In Advances in Neural Information Processing Systems,", 
"year": 2019 }, { "authors": [ "Greg Yang", "Jeffrey Pennington", "Vinay Rao", "Jascha Sohl-Dickstein", "Samuel S" ], "title": "Schoenholz. A mean field theory of batch normalization", "venue": "arXiv preprint arXiv:1902.08129,", "year": 1902 }, { "authors": [ "Xiantong Zhen", "Yingjun Du", "Huan Xiong", "Qiang Qiu", "Cees G.M. Snoek", "Ling Shao" ], "title": "Learning to learn variational semantic memory", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Xiantong Zhen", "Haoliang Sun", "Yingjun Du", "Jun Xu", "Yilong Yin", "Ling Shao", "Cees Snoek" ], "title": "Learning to learn kernels with variational random features", "venue": "In International Conference on Machine Learning,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Batch normalization (Ioffe & Szegedy, 2015) is crucial for training neural networks, and with its variants, e.g., layer normalization (Ba et al., 2016), group normalization (Wu & He, 2018) and instance normalization (Ulyanov et al., 2016), has thus become an essential part of the deep learning toolkit (Bjorck et al., 2018; Luo et al., 2018a; Yang et al., 2019; Jia et al., 2019; Luo et al., 2018b; Summers & Dinneen, 2020). Batch normalization helps stabilize the distribution of internal activations when a model is being trained. Given a mini-batch B, the normalization is conducted along each individual feature channel for 2D convolutional neural networks. During training, the batch normalization moments are calculated as follows:\nµB = 1\nM M∑ i=1 ai, σ 2 B = 1 M M∑ i=1 (ai − µB)2, (1)\nwhere ai indicates the i-th element of the M activations in the batch, M = |B| ×H ×W , in which H and W are the height and width of the feature map in each channel. We can now apply the normalization statistics to each activation:\na′i ← BN(ai) ≡ γâi + β, where, âi = ai − µB√ σ2B + , (2)\nwhere γ and β are parameters learned during training, is a small scalar to prevent division by 0, and operations between vectors are element-wise. At test time, the standard practice is to normalize activations using the moving average over mini-batch means µB and variance σ2B. Batch normalization is based on an implicit assumption that the samples in the dataset are independent and identically distributed. However, this assumption does not hold in challenging settings like few-shot learning and domain generalization. In this paper, we strive for batch normalization when batches are of small size and suffer from distributions shifts between source and target domains.\nBatch normalization for few-shot learning and domain generalization problems have so far been considered separately, predominantly in a meta-learning setting. For few-shot meta-learning (Finn\net al., 2017; Gordon et al., 2019), most existing methods rely critically on transductive batch normalization, except those based on prototypes (Snell et al., 2017; Allen et al., 2019; Zhen et al., 2020a). However, the nature of transductive learning restricts its application due to the requirement to sample from the test set. To address this issue, Bronskill et al. (2020) proposes TaskNorm, which leverages other statistics from both layer and instance normalization. As a non-transductive normalization approach, it achieves impressive performance and outperforms conventional batch normalization (Ioffe & Szegedy, 2015). However, its performance is not always performing better than transductive batch normalization. Meanwhile, domain generalization (Muandet et al., 2013; Balaji et al., 2018; Li et al., 2017a;b) suffers from distribution shifts from training to test, which makes it problematic to directly apply statistics calculated from a seen domain to test data from unseen domains (Wang et al., 2019; Seo et al., 2019). Recent works deal with this problem by learning a domain specific normalization (Chang et al., 2019; Seo et al., 2019) or a transferable normalization in place of existing normalization techniques (Wang et al., 2019). We address the batch normalization challenges for few-shot classification and domain generalization in a unified way by learning a new batch normalization under the meta-learning setting.\nWe propose MetaNorm, a simple but effective meta-learning normalization. 
We leverage the meta-learning setting and learn to infer normalization statistics from data, instead of applying direct calculations or blending various normalization statistics. MetaNorm is a general batch normalization approach, which is model-agnostic and serves as a plug-and-play module that can be seamlessly embedded into existing meta-learning approaches. We demonstrate its effectiveness for few-shot classification and domain generalization, where it learns task-specific statistics from limited data samples in the support set for each few-shot task; and it can also learn to generate domain-specific statistics from the seen source domains for unseen target domains. We verify the effectiveness of MetaNorm by extensive evaluation on few-shot classification and domain generalization tasks. For few-shot classification, we experiment with representative gradient, metric and model-based meta-learning approaches on fourteen benchmark datasets. For domain generalization, we evaluate the model on three widely-used benchmarks for cross-domain visual object classification. Last but not least, we introduce the challenging new task of few-shot domain generalization, which combines the challenges of both few-shot learning and domain generalization. The experimental results demonstrate the benefit of MetaNorm compared to existing batch normalizations." }, { "heading": "2 RELATED WORKS", "text": "Transductive Batch Normalization For conventional batch normalization under supervised settings, i.i.d. assumptions about the data distribution imply that estimating moments from the training set will provide appropriate normalization statistics for test data. However, in the meta-learning scenario data points are only assumed to be i.i.d. within a specific task. Therefore, it is critical to select the moments when batch normalization is applied to support and query set data points during meta-training and meta-testing. Hence, in the recent meta-learning literature the running moments are no longer used for normalization at meta-test time, but instead replaced with support/query set statistics. These statistics are used for normalization, both at meta-train and meta-test time. This approach is referred to as transductive batch normalization (TBN) (Bronskill et al., 2020). Competitive meta-learning methods (e.g., Gordon et al., 2019; Finn et al., 2017; Zhen et al., 2020b) rely on TBN to achieve state-of-the-art performance. However, there are two critical problems with TBN. First, TBN is sensitive to the distribution over the query set used during meta-training, and as such is less generally applicable than non-transductive learning. Second, TBN uses extra information for multiple test samples, compared to non-transductive batch normalization at prediction time, which could be problematic as we are not guaranteed to have a set of test samples available during training in practical applications. In contrast, MetaNorm is a non-transductive normalization. It generates statistics from the support set only, without relying on query samples, making it more practical.\nMeta Batch Normalization To address the problem of transductive batch normalization and improve conventional batch normalization, meta-batch normalization (MetaBN) was introduced (Triantafillou et al., 2020; Bronskill et al., 2020). In MetaBN, the support set alone is used to compute the normalization statistics for both the support and query sets at both meta-training and meta-test time.
MetaBN is non-transductive since the normalization of a test input does not depend on other test inputs in the query set. However, Bronskill et al. (2020) observe that MetaBN performs less well for small-sized support sets. This leads to high variance in moment estimates, which is similar to the difficulty of using batch normalization with small-batch training (Wu & He, 2018). To address this issue, Bronskill et al. (2020) proposed TaskNorm, which learns to combine statistics from both layer normalization and instance normalization, with a blending parameter to be learned at meta-train time. As a non-transductive normalization, TaskNorm achieves impressive performance, outperforming conventional batch normalization. However, it cannot always perform better than transductive batch normalization. TaskNorm indicates that non-transductive batch normalization can estimate proper normalization statistics by involving learning in the normalization process. We also propose to learn batch normalization within the meta-learning framework, but instead of employing a learnable combination of existing normalization statistics, we directly learn to infer statistics from data. At meta-train time, the model learns to acquire the ability to generate statistics only from the support set, and at meta-test time we directly apply the model to infer statistics for new tasks.\nBatch Normalization for Domain Adaptation and Domain Generalization Domain adaptation suffers from a distribution shift between source and target domains, which makes it sub-optimal to directly apply batch normalization (Bilen & Vedaldi, 2017). Li et al. (2016) proposed adaptive batch normalization to increase the generalization ability of a deep neural network. By modulating the statistical information of all batch normalization layers in the neural network, it achieves deep adaptation effects for domain-adaptive tasks. Nado et al. (2020) noted the possibility of accessing small unlabeled batches of the shifted data just before prediction time. To improve model accuracy and calibration under covariate shift, they proposed prediction-time batch normalization. Since the activation statistics obtained during training do not reflect statistics of the test distribution when testing in an out-of-distribution environment, Schneider et al. (2020) proposed estimating the batch statistics on the corrupted images. Kaku et al. (2020) demonstrated that standard non-adaptive feature normalization fails to correctly normalize the features of convolutional neural networks on held-out data where extraneous variables take values not seen during training. Learning domain-specific batch normalization has been explored (Chang et al., 2019; Wang et al., 2019). Wang et al. (2019) introduced transferable normalization, TransNorm, which normalizes the feature representations from source and target domain separately using domain-specific statistics. In a similar vein, Chang et al. (2019) proposed a domain-specific batch normalization layer, which consists of two branches, each in charge of a single domain exclusively. The hope is that, through the normalization, the feature representation will become domain invariant. Nevertheless, these normalization methods are specifically designed for domain adaptation tasks, where data from target domains are available, though often unlabelled. This makes them inapplicable to domain generalization tasks where data from target domains are inaccessible at training time. Seo et al.
(2019) proposed learning to optimize domain-specific normalization for domain generalization tasks. Under the meta-learning setting, a mixture of different normalization techniques is optimized for each domain, where the mixture weights are learned specifically for different domains. Instead of combining different normalization statistics, MetaNorm learns from data to generate adaptive statistics specific to each domain. Moreover, we introduce an even more challenging setting, i.e., few-shot domain generalization, which combines the challenges of few-shot classification and domain generalization.\nConditional Batch Normalization de Vries et al. (2017) proposed conditional batch normalization to modulate visual processing by predicting the scalars γ and β of the batch normalization conditioned on the language from an early processing stage. Conditional batch normalization has also been applied to align different data distributions for domain adaptation (Li et al., 2016). Oreshkin et al. (2018) apply conditional batch normalization to metric-based models for the few-shot classification task. Tseng et al. (2020) proposed a learning-to-learn method to optimize the hyper-parameters of the feature-wise transformation layers by conditional batch normalization for cross-domain classification. Unlike conditional batch normalization, we use extra data (the query set) to generate normalization statistics under the meta-learning setting, rather than the scalars." }, { "heading": "3 METHODOLOGY", "text": "We view finding appropriate statistics for batch normalization as a density estimation problem. We need to infer the distribution parameters, such as µ and σ when a Gaussian distribution is presumed, as in existing batch normalization approaches. The motivation behind MetaNorm is to leverage the meta-learning setting and learn from data to generate adaptive normalization statistics. MetaNorm is generic and model-agnostic, addressing batch normalization in a unified way for different settings by minimizing the KL divergence, which is a common metric to measure the difference between two probability distributions:\nD_{KL}[q_\phi(m) \| p_\theta(m)], \quad (3)\nwhere m is a random variable that represents the distribution of activations, and p_\theta(m) and q_\phi(m) are defined as Gaussian distributions with different implementations depending on the task of interest, e.g., few-shot classification or domain generalization. We leverage the amortized inference technique (Kingma & Welling, 2013) and implement this by inference networks. To be more specific, for each individual channel in each convolutional layer \ell, we infer the moments µ and σ by f^\ell_\mu(\cdot) and f^\ell_\sigma(\cdot), respectively, which are realized as multi-layer perceptrons that we call hypernetworks (Ha et al., 2016). Hypernetworks use one network to generate the weights for another network. Our hypernetworks generate the statistics from data by using amortization techniques.\nWe simply incorporate the D_{KL} term into the optimization of the existing model with the cross-entropy loss L_{CE}, resulting in a general loss function as follows:\nL = L_{CE} + \lambda D_{KL}[q_\phi(m) \| p_\theta(m)], \quad (4)\nwhere \lambda > 0 is a regularization hyper-parameter weighting the KL penalty that is minimized jointly with the cross-entropy loss.\nMetaNorm for Few-Shot Classification In the few-shot classification scenario, we define the C-way K-shot problem using the episodic formulation from (Vinyals et al., 2016). Each task T_i is a classification problem sampled from a task distribution p(T).
The tasks are divided into a training meta-set T^{tr}, validation meta-set T^{val}, and test meta-set T^{test}, each with a disjoint set of target classes (i.e., a class seen during testing is not seen during training). The validation meta-set is used for model selection, and the testing meta-set is used only for final evaluation. Each task instance T_i \sim p(T) is composed of a support set S and a query set Q, and only contains N classes randomly selected from the appropriate meta-set.\nWe aim to infer statistics from the support set that better match the query set. Therefore, we adopt a straightforward criterion for the inference:\nD_{KL}[q_\phi(m|S) \| p_\theta(m|Q)], \quad (5)\nwhere we define q(m|S) = \mathcal{N}(\mu_S, \sigma_S) and p(m|Q) = \mathcal{N}(\mu_Q, \sigma_Q), which are the distributions inferred from the support and query sets in a few-shot learning task. By minimizing the KL term in conjunction with the prime objective of a meta-learning algorithm, we are able to find the appropriate statistics from limited data samples for batch normalization. The KL term admits a closed form, which makes it easy to implement and computationally efficient. p(m|Q) can also be estimated by directly calculating statistics using the query set, which, however, performs worse than inference by optimization. We note that inference from the query set only happens at meta-training time; at meta-test time we use the learned inference network to generate normalization statistics for a test task using its support set.\nTo infer \mu_S, we deploy an inference function f^\ell_\mu(\cdot) that takes the activations of a sample as input, and the outputs from all samples are then averaged as the final \mu_S:\n\mu_S = \frac{1}{|S|}\sum_{i=1}^{|S|} f^\ell_\mu(a_i), \quad (6)\nwhere a_i \in \mathbb{R}^{w \times h} is the flattened vector of the activation map of the i-th sample in the support set, w is the width and h is the height of the activation map. To infer \sigma_S, we use the obtained \mu_S and deploy a separate inference function f^\ell_\sigma(\cdot):\n\sigma_S = \frac{1}{|S|}\sum_{i=1}^{|S|} f^\ell_\sigma((a_i - \mu_S)^2). \quad (7)\nIt is worth mentioning that we use each individual sample to infer the statistics and take the average of all inferred statistics as the final normalization statistics. This enables us to fully exploit the samples to generate more accurate statistics.\nNote that the inference functions f^\ell_\mu(\cdot) and f^\ell_\sigma(\cdot) are shared by different channels in the same layer, and we learn L pairs of those functions if we have L convolutional layers in the meta-learning model. They are parameterized by feed-forward multi-layer perceptron networks, which we call hypernetworks. Using these hypernetworks, we generate support moments (\mu_S, \sigma_S) and query moments (\mu_Q, \sigma_Q) from the support and query sets, which are used for calculating the KL term in Eq. (5) for optimization at meta-training time. At meta-training time, we apply the statistics inferred from the support set for normalization of both support and query samples:\na' = \gamma \left( \frac{a - \mu_S}{\sqrt{\sigma_S^2 + \epsilon}} \right) + \beta, \quad (8)\nwhere \gamma and \beta are jointly learned with the parameters of the hypernetworks at meta-training time and directly applied at meta-test time, as in conventional batch normalization.
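To make Eqs. (5)-(8) concrete, below is a minimal, self-contained sketch of a single-channel MetaNorm layer in PyTorch. The hypernetwork size follows the 3-layer, 128-unit MLP mentioned in the implementation details of Appendix D; everything else, including the class and function names, the softplus used to keep σ positive, and the univariate treatment, is our illustrative assumption rather than the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def mlp(in_dim):  # 3-layer MLP with 128 units and rectifier nonlinearities
    return nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                         nn.Linear(128, 128), nn.ReLU(),
                         nn.Linear(128, 1))

class MetaNorm1d(nn.Module):
    """Per-channel MetaNorm for flattened w*h activation maps."""
    def __init__(self, map_dim):
        super().__init__()
        self.f_mu, self.f_sigma = mlp(map_dim), mlp(map_dim)
        self.gamma = nn.Parameter(torch.ones(1))
        self.beta = nn.Parameter(torch.zeros(1))

    def infer_stats(self, a):  # a: [set_size, w*h]; Eqs. (6)-(7)
        mu = self.f_mu(a).mean()
        sigma = F.softplus(self.f_sigma((a - mu) ** 2)).mean()  # keep sigma > 0 (our choice)
        return mu, sigma

    def forward(self, a_support, a_query, eps=1e-5):
        mu_s, sig_s = self.infer_stats(a_support)
        mu_q, sig_q = self.infer_stats(a_query)
        # closed-form KL between the two univariate Gaussians, Eq. (5)
        kl = (torch.log(sig_q / sig_s)
              + (sig_s ** 2 + (mu_s - mu_q) ** 2) / (2 * sig_q ** 2) - 0.5)
        # Eq. (8): normalize both sets with the support statistics
        normalize = lambda a: self.gamma * (a - mu_s) / torch.sqrt(sig_s ** 2 + eps) + self.beta
        return normalize(a_support), normalize(a_query), kl

support, query = torch.randn(5, 25), torch.randn(75, 25)  # e.g. 5-shot, |Q| = 75
layer = MetaNorm1d(map_dim=25)
a_s, a_q, kl = layer(support, query)
print(a_s.shape, a_q.shape, kl.item())
```

During meta-training, the returned `kl` would be added to the task's cross-entropy objective weighted by λ, matching the loss in Eq. (4).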
At meta-test time, given a test task, we use the hypernetworks, which take the support set as input, to generate normalization statistics directly used for the query set.\nMetaNorm for Domain Generalization In the domain generalization scenario, we adopt the meta-learning setting from (Li et al., 2018a; Balaji et al., 2018; Du et al., 2020), and divide a dataset into the source domains used for training and the target domains held out for testing. At meta-training time, data in the source domains is episodically divided into sets of meta-source D^s and meta-target D^t domains. In a similar vein to few-shot classification, we would like to learn to acquire the ability to generate domain-specific statistics from a single example, which can then be applied to unseen domains. We assume we can generate reasonable normalization statistics by using only one sample from the new domain, because, intuitively, a single sample already carries sufficient domain information. We use a single example and all the examples in the same domain to infer the domain-specific statistics and minimize the KL term:\nD_{KL}[q_\phi(m|a_i) \| p_\theta(m|D^s \setminus a_i)], \quad (9)\nwhere we define q(m|a_i) = \mathcal{N}(\mu_a, \sigma_a) and p(m|D^s \setminus a_i) = \mathcal{N}(\mu_D, \sigma_D), which are implemented in a similar way as Eq. (6) and Eq. (7), and a_i is an example from its own domain D^s. In both the meta-source and meta-target domains, each example is normalized using the statistics generated by itself, as in Eq. (8), in which we make \gamma and \beta shared across all domains. The minimization of the KL term in Eq. (9) encourages the model to generate domain-specific statistics for normalization from only a single example. This enables us to generate domain-specific statistics on target domains that are never seen at meta-training time.\nIn practice, we take the sum over all samples in all source domains as follows:\n\sum_{i}^{|D^s|} \sum_{j}^{J} D_{KL}[q_\phi(m|a_i) \| p_\theta(m|D^s_j \setminus a_i)], \quad (10)\nwhere D^s_j denotes the j-th of J meta-source domains. The inference networks are first learned at meta-training time and then directly applied to examples from the target domain at meta-test time. Note that on the meta-target domain we do not apply the KL term; instead, we simply rely on each example to generate its own statistics for normalization.\nMetaNorm for Few-Shot Domain Generalization We introduce an even more challenging setting, i.e., few-shot domain generalization, which combines the challenges of both few-shot classification and domain generalization. Specifically, we aim to learn a model from a set of classification tasks, each of which has only a few samples in a support set for training, and to test the model on tasks whose query set is in a different domain from the support set. Like few-shot classification, the label space is not shared between training and testing. Cross-domain few-shot learning has been explored recently by Tseng et al. (2020) and Guo et al. (2020). However, the setting of our few-shot domain generalization is different and considered to be more challenging, as the support and query sets are from different domains at the meta-test stage and the target domain is also unseen throughout the training stage. An example of the few-shot domain generalization setting is provided in Figure 1.\nWe divide a dataset into the source domains S used for training and the target domains T held out for testing. During training, data in the source domains S is episodically divided into sets of meta-train D^s and meta-test D^t domains.
We sample C-way k-shot data as the support set from each meta-source domain D^s, where k is the number of labelled examples for each of the C classes. We sample C classes from the meta-test domain D^t as the query set. At test time, we sample C-way k-shot data as the support set from each of the source domains S. The model learned at meta-training time is then fine-tuned on few-shot tasks sampled from the source domains and tested on the target domain T. To learn the normalization statistics, we minimize the following KL term:\n\sum_{i}^{|D^s|} D_{KL}[q_\phi(m|a_i) \| p_\theta(m|D^s)], \quad (11)\nwhere a_i is the activation associated with each sample from the meta-source domain D^s. Likewise, q(m|a_i) and p(m|D^s) are also defined as factorized Gaussian distributions. We again adopt \gamma and \beta, which are shared across tasks and jointly learned. MetaNorm learns to acquire the ability to generate proper statistics for each sample, and applies it to the samples in the meta-target domain." }, { "heading": "4 EXPERIMENTAL RESULTS", "text": "We conduct an extensive set of experiments on a total of 17 datasets containing more than 15 million images. We use three representative approaches to meta-learning as our base models, i.e., MAML (Finn et al., 2017), ProtoNets (Snell et al., 2017), and VERSA (Gordon et al., 2019), which verifies that MetaNorm is generic, flexible and model-agnostic, making it a simple plug-and-play module that is seamlessly embedded into existing meta-learning approaches. We further compare different normalization methods: transductive batch normalization (TBN); “example”, which denotes testing with one example at a time using TBN; “class”, which denotes testing with one class at a time using TBN; w/o BN, which does not use batch normalization; CBN, which uses conventional batch normalization; RN (Nichol et al., 2018); MetaBN (Bronskill et al., 2020); TaskNorm-L (Bronskill et al., 2020); and TaskNorm-I (Bronskill et al., 2020). All details about datasets and implementation settings are provided in the appendix. More experimental results, including convergence analysis, are also provided in the appendix. Our code will be publicly released.1\n1 https://github.com/YDU-AI/MetaNorm.\nEffect of KL Term We first conduct ablation studies that measure the effectiveness of MetaNorm. The key to MetaNorm is the introduced KL term for learning to learn statistics. We test the performance of MetaNorm without the KL term by directly using the statistics generated from data. In this case, we still use the hypernetworks to generate the moments µ and σ, but simply remove the KL term from the objective function. In Table 1 we present results for few-shot classification on miniImageNet (Vinyals et al., 2016) and for domain generalization on PACS (Li et al., 2017a). The performance of MetaNorm without KL degrades significantly. This is expected, as without the KL term the generation process of normalization statistics lacks direct supervision from the target distribution, resulting in improper statistics.\nImpact of Target Set Size The other key parameter in MetaNorm is the size of the target set, that is, the number |Q| of samples in the query set (in few-shot classification) and the number |D^s| of samples in each domain (in domain generalization). This parameter is important when learning normalization statistics because we use the statistics generated by the target set as the ‘ground truth’. We evaluate its impact on the performance of MetaNorm in Figure 2.
The experimental results show that TBN is not affected by the target size, in both the 5-way 1-shot and 5-way 5-shot tasks. MetaNorm performance rises as the size of the target set increases and plateaus at a reasonable size. In the few-shot setting, the performance reaches its peak at a size of about 125, which is slightly larger than the standard size of 75, while in the domain generalization setting, the performance plateaus at a size of about 128. This demonstrates that we are able to generate proper statistics with the mini-batch gradient descent optimization. In scenarios demanding a very small target set size, we could leverage image synthesis techniques to generate more samples for the target sets.\nSensitivity to Algorithm We evaluate MetaNorm using the MAML (Finn et al., 2017), ProtoNets (Snell et al., 2017) and VERSA (Gordon et al., 2019) algorithms, which are representative gradient-, metric- and model-based meta-learning approaches for few-shot classification. These experiments are conducted on the Omniglot and miniImageNet datasets under different settings. The comparison results on miniImageNet are summarized in Table 2 and the results on Omniglot are provided in the appendix. For all three meta-learning approaches under all settings, MetaNorm consistently achieves comparable performance to both the non-transductive and transductive normalization methods. Being non-transductive, TaskNorm can achieve impressive performance on all the tasks, but its performance is not always better than transductive batch normalization. MetaNorm achieves comparable performance to transductive batch normalization, especially under the 5-way 1-shot setting, which is challenging since only a few examples are available to generate statistics. Notice that MetaNorm performs well with the standard query set size |Q| of 75 (15 per category). It is slightly better than non-transductive TaskNorm and comparable with TBN. MetaNorm achieves its best performance with a query size |Q| of 125 (25 per category), only slightly larger than the standard size of 75. This demonstrates the benefit of leveraging meta-learning by MetaNorm for batch normalization.\nTable 2: Few-shot classification accuracy (%) on miniImageNet.\nMethod | MAML 1-shot | MAML 5-shot | ProtoNets 1-shot | ProtoNets 5-shot | VERSA 1-shot | VERSA 5-shot\nTBN | 45.9 ± 0.6 | 65.5 ± 0.9 | 45.5 ± 1.8 | 59.7 ± 0.9 | 53.4 ± 1.8 | 67.3 ± 0.9\nexample | 43.9 ± 1.9 | 60.1 ± 0.8 | 26.9 ± 1.5 | 30.3 ± 0.7 | 44.1 ± 1.7 | 60.3 ± 0.7\nclass | 43.1 ± 1.8 | 59.8 ± 0.8 | 26.9 ± 1.5 | 27.2 ± 0.6 | 43.8 ± 1.8 | 59.7 ± 0.6\nw/o BN | 44.1 ± 0.5 | 60.1 ± 0.6 | 34.7 ± 1.5 | 51.3 ± 0.8 | 48.1 ± 1.5 | 63.8 ± 0.6\nCBN (Ioffe & Szegedy, 2015) | 47.8 ± 0.6 | 66.7 ± 0.5 | 20.1 ± 0.0 | 20.2 ± 0.2 | 45.7 ± 1.4 | 60.7 ± 0.8\nRN (Nichol et al., 2018) | 39.7 ± 0.5 | 63.1 ± 0.5 | 40.7 ± 1.7 | 57.6 ± 0.9 | - | -\nMetaBN (Bronskill et al., 2020) | 42.6 ± 0.6 | 64.6 ± 0.5 | 41.6 ± 1.6 | 58.6 ± 0.9 | 50.1 ± 1.7 | 65.8 ± 0.9\nTaskNorm-L (Bronskill et al., 2020) | 47.5 ± 0.6 | 65.3 ± 0.5 | 42.0 ± 1.7 | 58.1 ± 0.9 | 52.1 ± 1.6 | 66.1 ± 0.7\nTaskNorm-I (Bronskill et al., 2020) | 43.2 ± 0.6 | 63.9 ± 0.5 | 42.4 ± 1.7 | 58.7 ± 0.9 | 52.9 ± 1.7 | 66.5 ± 0.8\nMetaNorm (|Q| = 75) | 47.3 ± 0.6 | 65.4 ± 0.5 | 44.7 ± 1.5 | 59.6 ± 0.8 | 52.7 ± 1.6 | 67.5 ± 0.8\nMetaNorm (|Q| = 125) | 48.1 ± 0.6 | 65.9 ± 0.9 | 46.8 ± 1.6 | 60.1 ± 0.8 | 53.7 ± 1.6 | 68.1 ± 0.8\n† Results for MAML and ProtoNets (except w/o BN) provided by (Bronskill et al., 2020), and VERSA with TBN provided by (Gordon et al., 2019). All other results based on our re-implementations.
We conclude that MetaNorm is general and serves as a plug-and-play module for existing meta-learning models to improve their performance.\nSensitivity to Dataset We evaluate MetaNorm on a demanding few-shot classification challenge called Meta-Dataset (Triantafillou et al., 2020), which is composed of thirteen image classification datasets (eight for training, five for testing). To compare with previous work, we perform experiments with ProtoNets and report the results in Table 3. All thirteen per-dataset results can be found in the appendix. MetaNorm achieves high performance in terms of average rank, with the highest accuracy on eight of the thirteen datasets. MetaNorm outperforms transductive batch normalization on eleven datasets. It achieves comparable performance with transductive batch normalization on Omniglot and MNIST, which are relatively less challenging. Moreover, MetaNorm performs better than TaskNorm on seven of the thirteen datasets. We conclude that MetaNorm is effective, outperforming alternative normalizations for most datasets.\nSensitivity to Domains For this experiment we adopt two widely-used benchmarks for domain generalization of visual object recognition, i.e., PACS (Li et al., 2017a) and Office-Home (Venkateswara et al., 2017). Detailed descriptions of the experimental settings and implementations are provided in the appendix. For fair comparison with prior methods (Balaji et al., 2018; Li et al., 2018b; Seo et al., 2019), we employ ResNet-18 as the backbone network in all experiments. As shown in Table 4, MetaNorm achieves the best performance on PACS and Office-Home in terms of average accuracy. On PACS, MetaNorm consistently outperforms other normalization approaches, including domain-specific normalization (Seo et al., 2019), on all four domains. It is worth mentioning that the baseline normalization uses the statistics from the source domains for the batch normalization of the target domain. As expected, the baseline method produces relatively poor performance on most domains, since the source domains cannot provide proper statistics for target domains due to the distribution shift. We have also done an experiment using standard batch normalization, where, in the training stage, we compute the ground-truth statistics using all the test data on the meta-target domain D^t instead of using the inferred statistics p(m|D^s \setminus a_i). MetaNorm is still better on most domains and on average. This is reasonable because ground-truth statistics from the test data do not necessarily reflect the true data distribution. The experimental results demonstrate that MetaNorm can generate reasonable normalization statistics from only one sample in its domain. We conclude that MetaNorm is effective for domain generalization.\nFew-Shot Domain Generalization In our final experiment, we adopt the DomainNet dataset (Peng et al., 2019) and introduce a new, more challenging setting to evaluate the performance of few-shot domain generalization. Detailed descriptions of the dataset and experimental settings are provided in the appendix. We conduct the experiments with the MAML and ProtoNets algorithms under both 5-way 1-shot and 5-way 5-shot settings, and the results are reported in Table 5. We implement transductive batch normalization, MetaBN and the variants of TaskNorm for direct comparison. Under both settings, our MetaNorm produces the best performance and surpasses transductive batch normalization by large margins of up to 4.0% on the challenging 5-way 1-shot setting with MAML.
MetaNorm also achieves better results than the non-transductive TaskNorm approaches. At the same time, with ProtoNets our MetaNorm again consistently delivers the best performance and surpasses both transductive and non-transductive normalizations. The performance on the challenging few-shot domain generalization scenario with different meta-learning algorithms again demonstrates the effectiveness of MetaNorm in handling the challenges of batch normalization for small batches and across domains." }, { "heading": "5 CONCLUSION", "text": "In this paper we present MetaNorm, a meta-learning based batch normalization. MetaNorm tackles the challenging scenarios where the batch size is too small to produce sufficient statistics or where training statistics are not directly applicable to test data due to a domain shift. MetaNorm learns to learn adaptive statistics that are specific to tasks or domains. It is generic and model-agnostic, which enables it to be used with various meta-learning algorithms for different applications. We evaluate MetaNorm on two well-known existing tasks, i.e., few-shot classification and domain generalization, and we also introduce the challenging evaluation scenario of few-shot domain generalization that addresses the small batch and distribution shift problems simultaneously. An extensive evaluation on 17 datasets reveals that MetaNorm consistently achieves results that are better, or at least competitive, compared to other normalization approaches, verifying its effectiveness as a new meta-learning based batch normalization approach." }, { "heading": "A ALGORITHMS DESCRIPTIONS", "text": "In this Appendix we provide the detailed MetaNorm algorithm descriptions to conduct batch normalization for few-shot classification (Algorithm 1), domain generalization (Algorithm 2) and few-shot domain generalization (Algorithm 3). The dataflow of the implementation is shown in Figure 3.\nAlgorithm 1 MetaNorm for Few-Shot Classification\nMeta-train: Input activations a over the support set a_{S,i} and query set a_{Q,i}; initialize parameters \gamma, \beta.\n\mu_S = \frac{1}{|S|}\sum_{i=1}^{|S|} f^\ell_\mu(a_{S,i}); \quad \mu_Q = \frac{1}{|Q|}\sum_{i=1}^{|Q|} f^\ell_\mu(a_{Q,i});\n\sigma_S = \frac{1}{|S|}\sum_{i=1}^{|S|} f^\ell_\sigma((a_{S,i} - \mu_S)^2); \quad \sigma_Q = \frac{1}{|Q|}\sum_{i=1}^{|Q|} f^\ell_\sigma((a_{Q,i} - \mu_Q)^2);\na'_{S,i} = \gamma \frac{a_{S,i} - \mu_S}{\sqrt{\sigma_S^2 + \epsilon}} + \beta; \quad a'_{Q,i} = \gamma \frac{a_{Q,i} - \mu_S}{\sqrt{\sigma_S^2 + \epsilon}} + \beta;\nL_{KL} = D_{KL}[\mathcal{N}(\mu_S, \sigma_S) \| \mathcal{N}(\mu_Q, \sigma_Q)];\nreturn a'_{S,i} = \mathrm{MetaNorm}(a_{S,i}); a'_{Q,i} = \mathrm{MetaNorm}(a_{Q,i}); L_{KL}\nMeta-test: Input activations a over the support set a_{S,i} and query set a_{Q,i};\n\mu_S = \frac{1}{|S|}\sum_{i=1}^{|S|} f^\ell_\mu(a_{S,i}); \quad \sigma_S = \frac{1}{|S|}\sum_{i=1}^{|S|} f^\ell_\sigma((a_{S,i} - \mu_S)^2);\na'_{Q,i} = \gamma \frac{a_{Q,i} - \mu_S}{\sqrt{\sigma_S^2 + \epsilon}} + \beta;\nreturn a'_{Q,i} = \mathrm{MetaNorm}(a_{Q,i})\nAlgorithm 2 MetaNorm for Domain Generalization\nTrain: Input activations a over the meta-source domain a_{S,i} and meta-target domain a_{T,i}; initialize parameters \gamma, \beta.\n\mu_{S,i} = f^\ell_\mu(a_{S,i}); \quad \mu_S = \frac{1}{|S|}\sum_{i=1}^{|S|} f^\ell_\mu(a_{S,i});\n\sigma_{S,i} = f^\ell_\sigma((a_{S,i} - \mu_{S,i})^2); \quad \sigma_S = \frac{1}{|S|}\sum_{i=1}^{|S|} f^\ell_\sigma((a_{S,i} - \mu_S)^2);\n\mu_{T,i} = f^\ell_\mu(a_{T,i}); \quad \sigma_{T,i} = f^\ell_\sigma((a_{T,i} - \mu_{T,i})^2);\na'_{T,i} = \gamma \frac{a_{T,i} - \mu_{T,i}}{\sqrt{\sigma_{T,i}^2 + \epsilon}} + \beta;\nL_{KL} = D_{KL}[\mathcal{N}(\mu_{S,i}, \sigma_{S,i}) \| \mathcal{N}(\mu_S, \sigma_S)];\nreturn a'_{T,i} = \mathrm{MetaNorm}(a_{T,i}); L_{KL}\nTest: Input activations a over the test domain a_i;\n\mu_i = f^\ell_\mu(a_i); \quad \sigma_i = f^\ell_\sigma((a_i - \mu_i)^2);\na'_i = \gamma \frac{a_i - \mu_i}{\sqrt{\sigma_i^2 + \epsilon}} + \beta;\nreturn a'_i = \mathrm{MetaNorm}(a_i)\nAlgorithm 3 MetaNorm for Few-Shot Domain Generalization\nMeta-train: Input activations a over the meta-source domain set a_{S,i} and meta-target domain set a_{Q,i}; initialize parameters \gamma, \beta.\n\mu_S = \frac{1}{|S|}\sum_{i=1}^{|S|} f^\ell_\mu(a_{S,i}); \quad \mu_Q = \frac{1}{|Q|}\sum_{i=1}^{|Q|} f^\ell_\mu(a_{Q,i});\n\sigma_S = \frac{1}{|S|}\sum_{i=1}^{|S|} f^\ell_\sigma((a_{S,i} - \mu_S)^2); \quad \sigma_Q = \frac{1}{|Q|}\sum_{i=1}^{|Q|} f^\ell_\sigma((a_{Q,i} - \mu_Q)^2);\na'_{S,i} = \gamma \frac{a_{S,i} - \mu_S}{\sqrt{\sigma_S^2 + \epsilon}} + \beta; \quad a'_{Q,i} = \gamma \frac{a_{Q,i} - \mu_S}{\sqrt{\sigma_S^2 + \epsilon}} + \beta;\nL_{KL} = D_{KL}[\mathcal{N}(\mu_S, \sigma_S) \| \mathcal{N}(\mu_Q, \sigma_Q)];\nreturn a'_{S,i} = \mathrm{MetaNorm}(a_{S,i}); a'_{Q,i} = \mathrm{MetaNorm}(a_{Q,i}); L_{KL}" }, { "heading": "B DATASETS", "text": "We conduct an extensive set of experiments on a total of 17 datasets containing more than 15 million images. All dataset details and settings are provided in this Appendix.\nminiImageNet.
The miniImageNet dataset was originally proposed in (Vinyals et al., 2016) and has been widely used for evaluating few-shot learning algorithms. It consists of 60,000 color images from 100 classes, with 600 examples per class. The images have dimensions of 84×84 pixels. We follow the train/val/test split introduced in (Ravi & Larochelle, 2017), which uses 64 classes for meta-training, 16 classes for meta-validation, and the remaining 20 classes for meta-testing.\nOmniglot. Omniglot (Lake et al., 2015) is a few-shot learning dataset consisting of 1,623 handwritten characters (each with 20 instances) derived from 50 alphabets. We follow the pre-processing and training procedure defined in (Vinyals et al., 2016). We resize images to 28×28. The training, validation and test sets consist of a random split of 1,100, 100, and 423 characters.\nPACS (Li et al., 2017a) contains a total of 9,991 images of size 224×224 from 4 domains, i.e., photo, art-painting, cartoon and sketch, which demonstrate large domain gaps. Images are from 7 object classes, i.e., dog, elephant, giraffe, guitar, horse, house, and person. We follow the “leave-one-out” protocol in (Li et al., 2017a; 2018b; Carlucci et al., 2019), where the model is trained on any three of the four domains, which we call source domains, and tested on the last (target) domain. The train-val-test splits are the same as in (Li et al., 2017a).\nOffice-Home (Venkateswara et al., 2017) also has 4 domains: art, product, clipart and real-world. For each domain, the dataset contains images of 65 object categories found typically in office and home settings. We use the same experimental protocol as for PACS.\nDomainNet (Peng et al., 2019) contains 6 distinct domains, i.e., clipart, infograph, painting, quickdraw, real, and sketch, for 345 categories. The categories are from 24 divisions, which are: Furniture, Mammal, Tool, Cloth, Electricity, Building, Office, Human Baby, Road Transportation, Food, Nature, Cold Blooded, Music, Fruit, Sport, Tree, Bird, Vegetable, Shape, Kitchen, Water Transportation, Sky Transportation, Insect, Others.\nMeta-Dataset (Triantafillou et al., 2020) is composed of ten (eight train, two test) existing image classification datasets. These are: ILSVRC-2012 (ImageNet, (Russakovsky et al., 2015)), Omniglot (Lake et al., 2015), Aircraft (Maji et al., 2013), CUB-200-2011 (Birds, (Wah et al., 2011)), Describable Textures (Cimpoi et al., 2014), Quick Draw, Fungi, VGG Flower (Nilsback & Zisserman, 2008), Traffic Signs (Houben et al., 2013) and MSCOCO (Lin et al., 2014). Each episode generated in Meta-Dataset uses classes from a single dataset. Two of these datasets, Traffic Signs and MSCOCO, are fully reserved for evaluation, meaning no classes from these sets participate in the training set. Except for Traffic Signs and MSCOCO, the remaining datasets contribute some classes to each of the training, validation and test splits of classes. There are about 14 million images in total in Meta-Dataset."
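To illustrate the episodic C-way K-shot protocol these datasets are used with, here is a small, self-contained sampler sketch. The data layout (a dict from class name to example identifiers) and the function name are our assumptions; the numbers mirror the miniImageNet split above.

```python
import random

def sample_episode(class_to_images, n_way=5, k_shot=1, q_queries=15):
    """Sample one C-way K-shot episode: a support set and a query set."""
    classes = random.sample(sorted(class_to_images), n_way)
    support, query = [], []
    for label, cls in enumerate(classes):
        images = random.sample(class_to_images[cls], k_shot + q_queries)
        support += [(img, label) for img in images[:k_shot]]
        query += [(img, label) for img in images[k_shot:]]
    return support, query

# toy stand-in for the 64 meta-training classes of miniImageNet,
# each with 600 example identifiers
train_classes = {f"class_{c}": [f"img_{c}_{i}" for i in range(600)]
                 for c in range(64)}
support, query = sample_episode(train_classes, n_way=5, k_shot=1, q_queries=15)
print(len(support), len(query))  # 5 and 75: the standard |Q| = 75 query set
```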
}, { "heading": "C FEW-SHOT DOMAINNET", "text": "To construct Few-shot DomainNet, we chose 200 random classes from DomainNet and used 140 for training, 20 for validation and the last 40 for testing. Note that the last 40 object classes were never seen during training. The dataset consists of 200,000 colour images of size 84×84 with each of the 200 classes having 1,000 examples. Please see Table 6, Table 7 and Table 8 for training, validation, and test classes.\nD IMPLEMENTATION DETAILS.\nIn the few-shot learning task, MAML and ProtoNets use a simple CNN containing 4 convolutional layers, each of which is a 3×3 convolution with 32 filters, followed by MetaNorm, a ReLU nonlinearity, and finally a 2×2 max-pooling. VERSA uses a CNN containing 5 convolutional layers, each of which is a 3×3 convolution with 64 filters, followed by MetaNorm, a ReLU non-linearity, and finally a 2×2 max-pooling. In the domain generalization task, we rely on ResNet-18 as backbone for fair comparison with previous work. Each convolutional layer is followed by MetaNorm. The hypernetwork is a 3-layer MLP with 128 units per layer and rectifier nonlinearities. We implemented all models in the Tensorflow framework and tested on an NVIDIA Tesla V100. All code will be available at: https://github.com/YDU-AI/MetaNorm.\nD.1 MAML EXPERIMENTS\nFor MAML experiments, we used the codebase by Finn (Finn, 2017). We use the Adam optimizer with default parameters, and a meta batch size of 4 tasks. The number of test episodes is set as 600. The number of training iterations is 60,000. We set λ=0.001. The other hyper-parameters we use are the default MAML parameters. No early stopping was used. We used the first-order approximation of MAML for the experiments.\nD.2 PROTONETS EXPERIMENTS\nFor ProtoNets, we used the codebase by Fatir (Fatir, 2018). For miniImageNet, we used the following ProtoNets options: a learning rate of 0.001, 60,000 training iterations, 200 validation episodes, 600 test episodes and λ=0.0001. We choose the units of hidden layers and λ by cross-validation. For Meta-Dataset, we reproduce the code provided by CNAPS (Requeima et al., 2019) with TensorFlow. We simply replace its normalization method with our MetaNorm method and add the KL term to the final loss. We are consistent with the dataset configuration and follow the training process as specified in (Triantafillou et al., 2020). The number of training iterations is 80,000. We use a constant learning rate of 0.0001. We set λ=0.001. We follow TaskNorm’s (Bronskill et al., 2020) options: they do not use feature adaptation, and allow updates pre-trained feature extractor weights during meta-training stage.\nD.3 VERSA EXPERIMENTS\nFor VERSA, we used the codebase by Gordon (Gordon, 2019). For the 5-way 5-shot model, we train using the setting of 8 tasks per batch for 100,000 iterations and use a constant learning rate of 0.0001, λ= 0.001. For the 5-way 1-shot model, we train with the setting of 8 tasks per batch for 150,000 iterations and use a constant learning rate of 0.00025, λ=0.01. We set validation episodes as 200, and test episodes as 600. The units of hidden layers and λ were chosen by cross-validation." }, { "heading": "E EXTRA RESULTS FOR EFFECT OF KL TERM", "text": "In this Appendix we consider extra results for the ablation on measuring the effect of the KL term. We report results for few-shot classification on miniImageNet with ProtoNets (Snell et al., 2017) and VERSA (Gordon et al., 2019) in Table 11. 
We also report domain generalization results on Office-Home in Table 12. In all cases the KL term is crucial." }, { "heading": "F SENSITIVITY TO ALGORITHM ON OMNIGLOT", "text": "The experiments on Omniglot for few-shot classification under the meta-learning settings of MAML, VERSA and ProtoNets are reported in Tables 13, 14 and 15. MetaNorm consistently outperforms both transductive and non-transductive normalization approaches." }, { "heading": "G SENSITIVITY TO DATASET", "text": "The complete set of results for each of the thirteen datasets in Meta-Dataset is provided in Table 16." }, { "heading": "H TRAINING SPEED", "text": "We plot the training loss versus training iterations using the ProtoNets algorithm in Figure 4. MetaNorm achieves the fastest training convergence. From Table 2 and Figure 4, MetaNorm achieves the best classification accuracy and training efficiency, which demonstrates the benefit of leveraging meta-learning for batch normalization.

Table 16: Sensitivity to Dataset. Few-shot classification results on Meta-Dataset using ProtoNets. The ± sign indicates the 95% confidence interval over tasks. Results of other methods provided by Bronskill et al. (2020).

Method | ILSVRC | Omniglot | Aircraft | Birds | Textures | Quick Draw | Fungi | VGG Flower | Traffic Signs | MSCOCO | MNIST | CIFAR10 | CIFAR100 | Rank
TBN | 44.7±1.0 | 90.7±0.6 | 83.3±0.6 | 69.6±0.9 | 61.2±0.7 | 75.0±0.8 | 46.4±1.0 | 83.1±0.6 | 64.0±0.8 | 38.2±1.0 | 93.4±0.4 | 64.7±0.8 | 48.0±1.1 | 4.81
CBN (Ioffe & Szegedy, 2015) | 43.6±1.0 | 77.5±1.1 | 77.0±0.7 | 67.5±0.9 | 57.7±0.7 | 62.1±1.0 | 43.6±1.0 | 82.3±0.6 | 59.5±0.8 | 36.6±1.0 | 86.5±0.6 | 57.3±0.8 | 43.1±1.0 | 9.11
BRN (Ioffe, 2017) | 43.0±1.0 | 89.1±0.7 | 84.4±0.5 | 69.0±0.9 | 58.0±0.7 | 74.3±0.8 | 46.5±1.0 | 84.5±0.6 | 65.7±0.8 | 38.4±1.0 | 91.9±0.4 | 60.1±0.8 | 43.9±1.0 | 6.23
LN (Ba et al., 2016) | 33.9±0.9 | 90.8±0.6 | 73.9±0.7 | 54.1±1.0 | 55.8±0.7 | 72.5±0.8 | 33.2±1.1 | 78.3±0.8 | 69.1±0.7 | 30.1±0.9 | 94.0±0.4 | 51.5±0.8 | 34.0±0.9 | 8.19
IN (Ulyanov et al., 2016) | 32.5±0.9 | 83.4±0.8 | 75.0±0.6 | 50.2±1.0 | 45.3±0.7 | 70.8±0.8 | 29.8±1.0 | 69.4±0.8 | 60.7±0.8 | 27.7±0.9 | 87.4±0.5 | 50.5±0.8 | 32.1±1.0 | 10.61
RN (Nichol et al., 2018) | 45.1±1.0 | 90.8±0.6 | 80.9±0.6 | 68.6±0.9 | 64.1±0.7 | 75.4±0.7 | 46.7±1.0 | 84.4±0.7 | 66.0±0.8 | 37.3±1.0 | 93.9±0.4 | 62.3±0.8 | 47.2±1.1 | 4.73
MetaBN (Bronskill et al., 2020) | 44.2±1.0 | 90.4±0.6 | 82.3±0.6 | 68.6±0.8 | 60.5±0.7 | 74.2±0.7 | 46.5±1.0 | 86.0±0.6 | 63.2±0.8 | 38.6±1.1 | 93.9±0.4 | 63.0±0.8 | 47.0±1.0 | 4.78
TaskNorm-r (Bronskill et al., 2020) | 42.7±1.0 | 88.6±0.7 | 79.6±0.6 | 64.2±0.9 | 60.8±0.7 | 73.2±0.8 | 42.3±1.1 | 81.1±0.7 | 64.9±0.8 | 35.4±1.0 | 92.5±0.4 | 61.4±0.8 | 45.2±1.0 | 7.58
TaskNorm-L (Bronskill et al., 2020) | 45.1±1.1 | 90.2±0.6 | 81.2±0.6 | 68.8±0.9 | 63.4±0.8 | 75.4±0.7 | 46.5±1.0 | 82.9±0.7 | 67.0±0.7 | 39.2±1.0 | 91.9±0.4 | 66.9±0.8 | 51.3±1.1 | 4.09
TaskNorm-I (Bronskill et al., 2020) | 44.9±1.0 | 90.6±0.6 | 84.7±0.5 | 71.0±0.9 | 65.9±0.7 | 77.5±0.7 | 49.6±1.1 | 83.2±0.6 | 65.8±0.7 | 38.5±1.0 | 93.3±0.4 | 67.6±0.8 | 50.0±1.0 | 3.07
MetaNorm | 45.3±1.0 | 90.8±0.5 | 83.3±0.6 | 70.6±0.8 | 66.7±0.6 | 77.6±0.6 | 51.1±0.6 | 86.3±0.7 | 68.1±0.6 | 39.7±0.9 | 92.1±0.5 | 67.1±0.7 | 52.7±0.9 | 2.35" } ]
2021
METANORM: LEARNING TO NORMALIZE FEW-SHOT BATCHES ACROSS DOMAINS
SP:a81ee1b76201649dc0d0653db304c7297befee33
[ "The paper extends an existing proof for the sufficiency of polylogarithmic width for sharp learning guarantees of ReLU networks trained by (stochastic) gradient descent from shallow networks to deep networks. The theoretical analysis links the convergence of GD and SGD to the width of the network. The paper shows that polylogarithmic width is enough to give reasonable guarantees also for deep neural networks. It furthermore provides a generalisation bound in terms of network width.", "The paper studies optimization and generalization properties of deep relu networks trained with (stochastic) gradient descent on the logistic loss in the neural tangent kernel (NTK) regime. By using a new analysis that makes the \"linearized\" approximation as well as the L2 norm of the model in the approximate \"random feature\" kernel more explicit, the authors obtain results where the width only depends poly-logarithmically on the number of samples and 1/epsilon, for a test 0-1 loss of epsilon. This improves on previous analysis for deep networks, although it is similar to the two-layer result of Ji & Telgarsky." ]
A recent line of research on deep learning focuses on the extremely over-parameterized setting, and shows that when the network width is larger than a high-degree polynomial of the training sample size $n$ and the inverse of the target error $\epsilon^{-1}$, deep neural networks learned by (stochastic) gradient descent enjoy nice optimization and generalization guarantees. Very recently, it was shown that under certain margin assumptions on the training data, a polylogarithmic width condition suffices for two-layer ReLU networks to converge and generalize (Ji and Telgarsky, 2020). However, whether deep neural networks can be learned with such a mild over-parameterization is still an open question. In this work, we answer this question affirmatively and establish sharper learning guarantees for deep ReLU networks trained by (stochastic) gradient descent. Specifically, under certain assumptions made in previous work, our optimization and generalization guarantees hold with network width polylogarithmic in $n$ and $\epsilon^{-1}$. Our results push the study of over-parameterized deep neural networks towards more practical settings.
[ { "affiliations": [], "name": "CIENT TO" }, { "affiliations": [], "name": "LEARN DEEP" }, { "affiliations": [], "name": "RELU NETWORKS" }, { "affiliations": [], "name": "Zixiang Chen" }, { "affiliations": [], "name": "Yuan Cao" }, { "affiliations": [], "name": "Difan Zou" }, { "affiliations": [], "name": "Quanquan Gu" } ]
[ { "authors": [ "Z. ALLEN-ZHU", "Y. LI", "Y. LIANG" ], "title": "Learning and generalization in overparameterized neural networks, going beyond two layers", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Z. ALLEN-ZHU", "Y. LI", "Z. SONG" ], "title": "A convergence theory for deep learning via over-parameterization", "venue": "In International Conference on Machine Learning", "year": 2019 }, { "authors": [ "Z. ALLEN-ZHU", "Y. LI", "Z. SONG" ], "title": "On the convergence rate of training recurrent neural networks", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "ARORA S", "DU S", "HU W", "LI Z", "WANG R" ], "title": "Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks", "venue": "In International Conference on Machine Learning", "year": 2019 }, { "authors": [ "S. ARORA", "S.S. DU", "W. HU", "Z. LI", "R. SALAKHUTDINOV", "R. WANG" ], "title": "On exact computation with an infinitely wide neural net", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "BAI Y", "LEE J. D" ], "title": "Beyond linearization: On quadratic and higher-order approximation of wide neural networks", "venue": "In International Conference on Learning Representations", "year": 2019 }, { "authors": [ "P.L. BARTLETT", "D.J. FOSTER", "M.J. TELGARSKY" ], "title": "Spectrally-normalized margin bounds for neural networks", "venue": "In Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "P.L. BARTLETT", "S. MENDELSON" ], "title": "Rademacher and Gaussian complexities: Risk bounds and structural results", "venue": "Journal of Machine Learning Research", "year": 2002 }, { "authors": [ "CAO Y", "GU Q" ], "title": "Generalization bounds of stochastic gradient descent for wide and deep neural networks", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "CAO Y", "GU Q" ], "title": "Generalization error bounds of gradient descent for learning overparameterized deep relu networks", "venue": "In the Thirty-Fourth AAAI Conference on Artificial Intelligence", "year": 2020 }, { "authors": [ "N. CESA-BIANCHI", "A. CONCONI", "C. GENTILE" ], "title": "On the generalization ability of on-line learning algorithms", "venue": "IEEE Transactions on Information Theory", "year": 2004 }, { "authors": [ "L. CHIZAT", "E. OYALLON", "F. BACH" ], "title": "On lazy training in differentiable programming", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "DU S", "LEE J", "LI H", "WANG L", "ZHAI X" ], "title": "Gradient descent finds global minima of deep neural networks", "venue": "In International Conference on Machine Learning", "year": 2019 }, { "authors": [ "S.S. DU", "X. ZHAI", "B. POCZOS", "A. 
SINGH" ], "title": "Gradient descent provably optimizes over-parameterized neural networks", "venue": "In International Conference on Learning Representations", "year": 2019 }, { "authors": [ "FREI S", "CAO Y", "GU Q" ], "title": "Algorithm-dependent generalization bounds for overparameterized deep residual networks", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "HE K", "ZHANG X", "REN S", "SUN J" ], "title": "Delving deep into rectifiers: Surpassing humanlevel performance on imagenet classification", "venue": "In Proceedings of the IEEE international conference on computer vision", "year": 2015 }, { "authors": [ "A. JACOT", "F. GABRIEL", "C. HONGLER" ], "title": "Neural tangent kernel: Convergence and generalization in neural networks. In Advances in neural information processing systems", "venue": null, "year": 2018 }, { "authors": [ "Z. JI", "M. TELGARSKY" ], "title": "Risk and parameter convergence of logistic regression", "venue": null, "year": 2018 }, { "authors": [ "Z. JI", "M. TELGARSKY" ], "title": "Polylogarithmic width suffices for gradient descent to achieve arbitrarily small test error with shallow relu networks", "venue": "In International Conference on Learning Representations", "year": 2020 }, { "authors": [ "K. KAWAGUCHI", "J. HUANG" ], "title": "Gradient descent finds global minima for generalizable deep neural networks of practical sizes", "venue": "57th Annual Allerton Conference on Communication,", "year": 2019 }, { "authors": [ "A KRIZHEVSKY" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "J. LEE", "L. XIAO", "S.S. SCHOENHOLZ", "Y. BAHRI", "J. SOHL-DICKSTEIN", "J. PENNINGTON" ], "title": "Wide neural networks of any depth evolve as linear models under gradient descent", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "M. MOHRI", "A. ROSTAMIZADEH", "A. TALWALKAR" ], "title": "Foundations of machine learning", "venue": null, "year": 2018 }, { "authors": [ "A. NITANDA", "T. SUZUKI" ], "title": "Refined generalization analysis of gradient descent for over-parameterized two-layer neural networks with smooth activations on classification problems", "venue": null, "year": 2019 }, { "authors": [ "S. OYMAK", "M. SOLTANOLKOTABI" ], "title": "Towards moderate overparameterization: global convergence guarantees for training shallow neural networks. arXiv preprint arXiv:1902.04674", "venue": null, "year": 2019 }, { "authors": [ "S. SHALEV-SHWARTZ", "S. BEN-DAVID" ], "title": "Understanding machine learning: From theory to algorithms", "venue": null, "year": 2014 }, { "authors": [ "O. SHAMIR" ], "title": "Gradient methods never overfit on separable data", "venue": "arXiv preprint arXiv:2007.00028", "year": 2020 }, { "authors": [ "R. VERSHYNIN" ], "title": "Introduction to the non-asymptotic analysis of random matrices", "venue": null, "year": 2010 }, { "authors": [ "ZOU D", "CAO Y", "ZHOU D", "GU Q" ], "title": "Gradient descent optimizes over-parameterized deep ReLU networks. 
Machine Learning", "venue": null, "year": 2019 }, { "authors": [ "ZOU D", "GU Q" ], "title": "An improved analysis of training over-parameterized deep neural networks", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Cao", "Gu" ], "title": "There exists an absolute constant κ such that, with probability at least 1 ́ OpnL2q expr ́Ωpmτ2{3Lqs, for any τ ď κL ́6rlogpmqs ́3{2", "venue": null, "year": 2019 }, { "authors": [ "Cao", "Gu" ], "title": "Under Assumption 3.1, if m ě C̄L logpnL{δq with some absolute constant C̄, with probability at least 1 ́ δ, we have |fWp0qpxiq| ď C", "venue": null, "year": 2019 }, { "authors": [ "¶Bartlett" ], "title": "Rademacher complexity bound for the composition of the ramp loss and the neural network function. In our setting essentially the ramp loss is replaced with the ́`1p ̈q function, which is bounded and 1-Lipschitz continuous. The proof in our setting is therefore exactly the same as the proof given in (Bartlett et al., 2017)", "venue": "(Bartlett et al.,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep neural networks have become one of the most important and prevalent machine learning models due to their remarkable power in many real-world applications. However, the success of deep learning has not been well-explained in theory. It remains mysterious why standard optimization algorithms tend to find a globally optimal solution, despite the highly non-convex landscape of the training loss function. Moreover, despite the extremely large amount of parameters, deep neural networks rarely over-fit, and can often generalize well to unseen data and achieve good test accuracy. Understanding these mysterious phenomena on the optimization and generalization of deep neural networks is one of the most fundamental problems in deep learning theory.\nRecent breakthroughs have shed light on the optimization and generalization of deep neural networks (DNNs) under the over-parameterized setting, where the hidden layer width is extremely large (much larger than the number of training examples). It has been shown that with the standard random initialization, the training of over-parameterized deep neural networks can be characterized by a kernel function called neural tangent kernel (NTK) (Jacot et al., 2018; Arora et al., 2019b). In the neural tangent kernel regime (or lazy training regime (Chizat et al., 2019)), the neural network function behaves similarly as its first-order Taylor expansion at initialization (Jacot et al., 2018; Lee et al., 2019; Arora et al., 2019b; Cao and Gu, 2019), which enables feasible optimization and generalization analysis. In terms of optimization, a line of work (Du et al., 2019b; Allen-Zhu et al., 2019b; Zou et al., 2019; Zou and Gu, 2019) proved that for sufficiently wide neural networks, (stochastic) gradient descent (GD/SGD) can successfully find a global optimum of the training loss function. For generalization, Allen-Zhu et al. (2019a); Arora et al. (2019a); Cao and Gu (2019) established generalization bounds of neural networks trained with (stochastic) gradient descent, and showed that the neural networks can learn target functions in certain reproducing kernel Hilbert space (RKHS) or the corresponding random feature function class.\nAlthough existing results in the neural tangent kernel regime have provided important insights into the learning of deep neural networks, they require the neural network to be extremely wide. ∗Equal contribution.\nThe typical requirement on the network width is a high degree polynomial of the training sample size n and the inverse of the target error ´1. As there still remains a huge gap between such network width requirement and the practice, many attempts have been made to improve the overparameterization condition under various conditions on the training data and model initialization (Oymak and Soltanolkotabi, 2019; Zou and Gu, 2019; Kawaguchi and Huang, 2019; Bai and Lee, 2019). For two-layer ReLU networks, a recent work (Ji and Telgarsky, 2020) showed that when the training data are well separated, polylogarithmic width is sufficient to guarantee good optimization and generalization performances. However, their results cannot be extended to deep ReLU networks since their proof technique largely relies on the fact that the network model is 1-homogeneous, which cannot be satisfied by DNNs. 
Therefore, whether deep neural networks can be learned with such a mild over-parameterization is still an open problem.

In this paper, we resolve this open problem by showing that polylogarithmic network width is sufficient to learn DNNs. In particular, unlike existing works that require the DNNs to behave very closely to a linear model (up to some small approximation error), we show that a constant linear approximation error is sufficient to establish nice optimization and generalization guarantees for DNNs. Thanks to the relaxed requirement on the linear approximation error, a milder condition on the network width and tighter bounds on the convergence rate and generalization error can be proved. We summarize our contributions as follows:

• We establish the global convergence guarantee of GD for training deep ReLU networks based on the so-called NTRF function class (Cao and Gu, 2019), a set of linear functions over random features. Specifically, we prove that GD can learn deep ReLU networks with width $m = \mathrm{poly}(R)$ to compete with the best function in the NTRF function class, where $R$ is the radius of the NTRF function class.

• We also establish generalization guarantees for both GD and SGD in the same setting. Specifically, we prove a diminishing statistical error for a wide range of network widths $m \in [\tilde{\Omega}(1), \infty)$, while most previous generalization bounds in the NTK regime only hold in the setting where the network width $m$ is much greater than the sample size $n$. Moreover, we establish $\tilde{O}(\epsilon^{-2})$ and $\tilde{O}(\epsilon^{-1})$ sample complexities for GD and SGD respectively, which are tighter than existing bounds for learning deep ReLU networks (Cao and Gu, 2019), and match the best results when reduced to the two-layer case (Arora et al., 2019b; Ji and Telgarsky, 2020).

• We further generalize our theoretical analysis to scenarios with different data separability assumptions from the literature. We show that if a large fraction of the training data is well separated, the best function in the NTRF function class with radius $R = \tilde{O}(1)$ can learn the training data up to error $\epsilon$. Together with our optimization and generalization guarantees, this immediately implies that deep ReLU networks can be learned with network width $m = \tilde{\Omega}(1)$, which depends only logarithmically on the target error $\epsilon^{-1}$ and the sample size $n$. Compared with existing results (Cao and Gu, 2020; Ji and Telgarsky, 2020), which require all training data points to be separated in the NTK regime, our result is stronger since it allows the NTRF function class to misclassify a small proportion of the training data.

For ease of comparison, we summarize our results along with the most related previous results in Table 1, in terms of the data assumption, the over-parameterization condition and the sample complexity. It can be seen that under the data separation assumptions (see Sections 4.1, 4.2), our result improves existing results for learning deep neural networks by only requiring a $\mathrm{polylog}(n, \epsilon^{-1})$ network width.

Notation. For two scalars $a$ and $b$, we denote $a \wedge b = \min\{a, b\}$. For a vector $x \in \mathbb{R}^d$ we use $\|x\|_2$ to denote its Euclidean norm. For a matrix $X$, we use $\|X\|_2$ and $\|X\|_F$ to denote its spectral norm and Frobenius norm respectively, and denote by $X_{ij}$ the entry of $X$ at the $i$-th row and $j$-th column.
Given two matrices $X$ and $Y$ with the same dimension, we denote $\langle X, Y \rangle = \sum_{i,j} X_{ij} Y_{ij}$. Given a collection of matrices $\mathbf{W} = \{W_1, \dots, W_L\} \in \bigotimes_{l=1}^{L} \mathbb{R}^{m_l \times m'_l}$ and a function $f(\mathbf{W})$ over $\bigotimes_{l=1}^{L} \mathbb{R}^{m_l \times m'_l}$, we define by $\nabla_{W_l} f(\mathbf{W})$ the partial gradient of $f(\mathbf{W})$ with respect to $W_l$ and denote $\nabla_{\mathbf{W}} f(\mathbf{W}) = \{\nabla_{W_l} f(\mathbf{W})\}_{l=1}^{L}$. We also denote $\mathcal{B}(\mathbf{W}, \tau) = \{\mathbf{W}' : \max_{l \in [L]} \|W'_l - W_l\|_F \le \tau\}$ for $\tau \ge 0$. For two collections of matrices $\mathbf{A} = \{A_1, \dots, A_n\}$ and $\mathbf{B} = \{B_1, \dots, B_n\}$, we denote $\langle \mathbf{A}, \mathbf{B} \rangle = \sum_{i=1}^{n} \langle A_i, B_i \rangle$ and $\|\mathbf{A}\|_F^2 = \sum_{i=1}^{n} \|A_i\|_F^2$.

Algorithm 1 Gradient descent (GD) with random initialization
Input: Number of iterations $T$, step size $\eta$, training set $S = \{(x_i, y_i)\}_{i=1}^{n}$, initialization $\mathbf{W}^{(0)}$
for $t = 1, 2, \dots, T$ do
  Update $\mathbf{W}^{(t)} = \mathbf{W}^{(t-1)} - \eta \cdot \nabla_{\mathbf{W}} L_S(\mathbf{W}^{(t-1)})$.
end for
Output: $\mathbf{W}^{(0)}, \dots, \mathbf{W}^{(T)}$.

Given two sequences $\{x_n\}$ and $\{y_n\}$, we denote $x_n = O(y_n)$ if $|x_n| \le C_1 |y_n|$ for some absolute positive constant $C_1$, $x_n = \Omega(y_n)$ if $|x_n| \ge C_2 |y_n|$ for some absolute positive constant $C_2$, and $x_n = \Theta(y_n)$ if $C_3 |y_n| \le |x_n| \le C_4 |y_n|$ for some absolute constants $C_3, C_4 > 0$. We also use $\tilde{O}(\cdot)$ and $\tilde{\Omega}(\cdot)$ to hide logarithmic factors in $O(\cdot)$ and $\Omega(\cdot)$ respectively. Additionally, we denote $x_n = \mathrm{poly}(y_n)$ if $x_n = O(y_n^D)$ for some positive constant $D$, and $x_n = \mathrm{polylog}(y_n)$ if $x_n = \mathrm{poly}(\log(y_n))$." }, { "heading": "2 PRELIMINARIES ON LEARNING NEURAL NETWORKS", "text": "In this section, we introduce the problem setting of this paper, including the definitions of the neural network and loss functions, and the training algorithms, i.e., GD and SGD with random initialization.

Neural network function. Given an input $x \in \mathbb{R}^d$, the output of a deep fully connected ReLU network is defined as
$$f_{\mathbf{W}}(x) = m^{1/2} W_L \sigma(W_{L-1} \cdots \sigma(W_1 x) \cdots),$$
where $W_1 \in \mathbb{R}^{m \times d}$, $W_2, \dots, W_{L-1} \in \mathbb{R}^{m \times m}$, $W_L \in \mathbb{R}^{1 \times m}$, and $\sigma(x) = \max\{0, x\}$ is the ReLU activation function. Here, without loss of generality, we assume the width of each layer equals $m$; our theoretical results can easily be generalized to layers of unequal width, as long as the smallest width satisfies our over-parameterization condition. We denote the collection of all weight matrices as $\mathbf{W} = \{W_1, \dots, W_L\}$.

Loss function. Given a training dataset $\{(x_i, y_i)\}_{i=1}^{n}$ with inputs $x_i \in \mathbb{R}^d$ and labels $y_i \in \{-1, +1\}$, we define the training loss function as
$$L_S(\mathbf{W}) = \frac{1}{n} \sum_{i=1}^{n} L_i(\mathbf{W}),$$
where $L_i(\mathbf{W}) = \ell(y_i f_{\mathbf{W}}(x_i)) = \log(1 + \exp(-y_i f_{\mathbf{W}}(x_i)))$ is the cross-entropy loss.

Algorithms. We consider both GD and SGD with Gaussian random initialization. These two algorithms are displayed in Algorithms 1 and 2 respectively. Specifically, the entries of $W^{(0)}_1, \dots, W^{(0)}_{L-1}$ are generated independently from the univariate Gaussian distribution $N(0, 2/m)$, and the entries of $W^{(0)}_L$ are generated independently from $N(0, 1/m)$. For GD, we use the full gradient to update the model parameters. For SGD, we use a new training data point in each iteration.

Note that our initialization method in Algorithms 1 and 2 is the same as the widely used He initialization (He et al., 2015). Our neural network parameterization is also consistent with the parameterization used in prior work on the NTK (Jacot et al., 2018; Allen-Zhu et al., 2019b; Du et al., 2019a; Arora et al., 2019b; Cao and Gu, 2019).

Algorithm 2 Stochastic gradient descent (SGD) with random initialization
Input: Number of iterations $n$, step size $\eta$, initialization $\mathbf{W}^{(0)}$
for $i = 1, 2, \dots, n$ do
  Draw $(x_i, y_i)$ from $\mathcal{D}$ and compute the corresponding gradient $\nabla_{\mathbf{W}} L_i(\mathbf{W}^{(i-1)})$.
  Update $\mathbf{W}^{(i)} = \mathbf{W}^{(i-1)} - \eta \cdot \nabla_{\mathbf{W}} L_i(\mathbf{W}^{(i-1)})$.
end for
Output: Randomly choose $\widehat{\mathbf{W}}$ uniformly from $\{\mathbf{W}^{(0)}, \dots, \mathbf{W}^{(n-1)}\}$." 
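As a companion to the definitions and Algorithms 1-2 above, here is a self-contained NumPy sketch, an illustration under toy sizes rather than code from the paper, of the network $f_{\mathbf{W}}(x) = m^{1/2} W_L \sigma(W_{L-1} \cdots \sigma(W_1 x) \cdots)$, its Gaussian initialization, and full-batch gradient descent on the cross-entropy loss via manual backpropagation.

```python
import numpy as np

def init_weights(d, m, L, rng):
    # Entries of W_1, ..., W_{L-1} ~ N(0, 2/m); entries of W_L ~ N(0, 1/m).
    Ws = [rng.normal(0.0, np.sqrt(2.0 / m), size=(m, d))]
    Ws += [rng.normal(0.0, np.sqrt(2.0 / m), size=(m, m)) for _ in range(L - 2)]
    Ws += [rng.normal(0.0, np.sqrt(1.0 / m), size=(1, m))]
    return Ws

def forward(Ws, x):
    # f_W(x) = sqrt(m) * W_L relu(W_{L-1} ... relu(W_1 x)); also returns hidden states.
    hs = [x]
    for W in Ws[:-1]:
        hs.append(np.maximum(W @ hs[-1], 0.0))
    m = Ws[0].shape[0]
    return np.sqrt(m) * float(Ws[-1] @ hs[-1]), hs

def loss_grads(Ws, x, y):
    # Gradient of L_i(W) = log(1 + exp(-y f_W(x))) by backpropagation.
    f, hs = forward(Ws, x)
    m = Ws[0].shape[0]
    g = -y / (1.0 + np.exp(y * f))               # dL/df = l'(y f) * y
    grads = [None] * len(Ws)
    grads[-1] = g * np.sqrt(m) * hs[-1][None, :]
    delta = g * np.sqrt(m) * Ws[-1].ravel()       # dL/dh_{L-1}
    for l in range(len(Ws) - 2, -1, -1):
        dz = delta * (hs[l + 1] > 0)              # pass through the ReLU
        grads[l] = np.outer(dz, hs[l])
        delta = Ws[l].T @ dz
    return f, grads

rng = np.random.default_rng(0)
d, m, L, n = 4, 64, 3, 8
X = rng.normal(size=(n, d)); X /= np.linalg.norm(X, axis=1, keepdims=True)  # ||x_i||_2 = 1
Y = rng.choice([-1.0, 1.0], size=n)
Ws = init_weights(d, m, L, rng)

eta = 1.0 / (L * m)                # mirrors the Theta(1/(L m)) step size used later
for _ in range(100):               # full-batch GD, as in Algorithm 1
    total = [np.zeros_like(W) for W in Ws]
    for x, y in zip(X, Y):
        _, grads = loss_grads(Ws, x, y)
        for acc, g_ in zip(total, grads):
            acc += g_ / n
    Ws = [W - eta * g_ for W, g_ in zip(Ws, total)]
```

The step size $\eta = 1/(Lm)$ matches the $\Theta(L^{-1}m^{-1})$ choice in Theorem 3.3 below; at toy dimensions it is deliberately conservative, so many iterations are needed before the loss visibly decreases.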
}, { "heading": "3 MAIN THEORY", "text": "In this section, we present the optimization and generalization guarantees of GD and SGD for learning deep ReLU networks. We first make the following assumption on the training data points. Assumption 3.1. All training data points satisfy }xi}2 “ 1, i “ 1, . . . , n.\nThis assumption has been widely made in many previous works (Allen-Zhu et al., 2019b;c; Du et al., 2019b;a; Zou et al., 2019) in order to simplify the theoretical analysis. This assumption can be relaxed to be upper bounded and lower bounded by some constant.\nIn the following, we give the definition of Neural Tangent Random Feature (NTRF) (Cao and Gu, 2019), which characterizes the functions learnable by over-parameterized ReLU networks.\nDefinition 3.2 (Neural Tangent Random Feature, (Cao and Gu, 2019)). Let Wp0q be the initialization weights, and FWp0q,Wpxq “ fWp0qpxq ` x∇fWp0qpxq,W ´Wp0qy be a function with respect to the input x. Then the NTRF function class is defined as follows\nFpWp0q, Rq “ FWp0q,Wp¨q : W P BpWp0q, R ¨m´1{2q ( .\nThe function class FWp0q,Wpxq consists of linear models over random features defined based on the network gradients at the initialization. Therefore it captures the key “almost linear” property of wide neural networks in the NTK regime (Lee et al., 2019; Cao and Gu, 2019). In this paper, we use the NTRF function class as a reference class to measure the difficulty of a learning problem. In what follows, we deliver our main theoretical results regarding the optimization and generalization guarantees of learning deep ReLU networks. We study both GD and SGD with random initialization (presented in Algorithms 1 and 2)." }, { "heading": "3.1 GRADIENT DESCENT", "text": "The following theorem establishes the optimization guarantee of GD for training deep ReLU networks for binary classification. Theorem 3.3. For δ,R ą 0, let NTRF “ infFPFpWp0q,Rq n´1 řn i“1 `ryiF pxiqs be the minimum training loss achievable by functions in FpWp0q, Rq. Then there exists\nm˚pδ,R, Lq “ rO ` polypR,Lq ¨ log4{3pn{δq ˘ ,\nsuch that if m ě m˚pδ,R, Lq, with probability at least 1 ´ δ over the initialization, GD with step size η “ ΘpL´1m´1q can train a neural network to achieve at most 3 NTRF training loss within T “ O `\nL2R2 ´1NTRF ˘ iterations.\nTheorem 3.3 shows that the deep ReLU network trained by GD can compete with the best function in the NTRF function class FpWp0q, Rq if the network width has a polynomial dependency in R and L and a logarithmic dependency in n and 1{δ. Moreover, if the NTRF function class with R “ rOp1q can learn the training data well (i.e., NTRF is less than a small target error ), a polylogarithmic (in terms of n and ´1) network width suffices to guarantee the global convergence of GD, which directly improves over-paramterization condition in the most related work (Cao and Gu, 2019). Besides, we remark here that this assumption on the NTRF function class can be easily satisfied when the training data admits certain separability conditions, which we discuss in detail in Section 4.\nCompared with the results in (Ji and Telgarsky, 2020) which give similar network width requirements for two-layer networks, our result works for deep networks. 
Moreover, while Ji and Telgarsky (2020) essentially required all training data to be separable by a function in the NTRF function class with a constant margin, our result does not require such a data separation assumption, and allows the NTRF function class to misclassify a small proportion of the training data points.∗

We now characterize the generalization performance of neural networks trained by GD. We denote by $L^{0\text{-}1}_{\mathcal{D}}(\mathbf{W}) = E_{(x,y) \sim \mathcal{D}}[\mathbf{1}\{f_{\mathbf{W}}(x) \cdot y < 0\}]$ the expected 0-1 loss (i.e., expected error) of $f_{\mathbf{W}}(x)$.

Theorem 3.4. Under the same assumptions as in Theorem 3.3, with probability at least $1 - \delta$, the iterates $\mathbf{W}^{(t)}$ of Algorithm 1 satisfy
$$L^{0\text{-}1}_{\mathcal{D}}(\mathbf{W}^{(t)}) \le 2 L_S(\mathbf{W}^{(t)}) + \tilde{O}\bigg( 4^L L^2 R \sqrt{\frac{m}{n}} \wedge \bigg( \frac{L^{3/2} R}{\sqrt{n}} + \frac{L^{11/3} R^{4/3}}{m^{1/6}} \bigg) \bigg) + O\bigg( \sqrt{\frac{\log(1/\delta)}{n}} \bigg)$$
for all $t = 0, \dots, T$.

Theorem 3.4 shows that the test error of the trained neural network is bounded by its training error plus statistical error terms. Note that the statistical error is the minimum of the two terms $4^L L^2 R \sqrt{m/n}$ and $L^{3/2} R/\sqrt{n} + L^{11/3} R^{4/3}/m^{1/6}$. Depending on the network width $m$, one of these two terms dominates and diminishes for large $n$: (1) if $m = o(n)$, the statistical error is $4^L L^2 R \sqrt{m/n}$, which diminishes as $n$ increases; and (2) if $m = \Omega(n)$, the statistical error is $L^{3/2} R/\sqrt{n} + L^{11/3} R^{4/3}/m^{1/6}$, which again goes to zero as $n$ increases. Moreover, in this paper we focus in particular on the setting $m = \tilde{O}(1)$, under which Theorem 3.4 gives a statistical error of order $\tilde{O}(n^{-1/2})$. This distinguishes our result from previous generalization bounds for deep networks (Cao and Gu, 2020; 2019), which cannot be applied to the setting $m = \tilde{O}(1)$. We note that for two-layer ReLU networks (i.e., $L = 2$), Ji and Telgarsky (2020) prove a tighter $\tilde{O}(1/n^{1/2})$ generalization error bound regardless of the network width $m$, while our result (Theorem 3.4), in the two-layer case, gives an $\tilde{O}(1/n^{1/2})$ generalization error bound only when $m = \tilde{O}(1)$ or $m = \tilde{\Omega}(n^3)$. However, different from our proof technique, which is essentially based on the (approximate) linearity of the neural network function, their proof technique relies largely on the 1-homogeneity of the neural network, which restricts their theory to the two-layer case. An interesting research direction is to explore whether an $\tilde{O}(1/n^{1/2})$ generalization error bound can also be established for deep networks (regardless of the network width); we leave this as future work." }, { "heading": "3.2 STOCHASTIC GRADIENT DESCENT", "text": "Here we study the performance of SGD for training deep ReLU networks. The following theorem establishes a generalization error bound for the output of SGD.

Theorem 3.5. For $\delta, R > 0$, let $\epsilon_{\mathrm{NTRF}} = \inf_{F \in \mathcal{F}(\mathbf{W}^{(0)}, R)} n^{-1} \sum_{i=1}^{n} \ell[y_i F(x_i)]$ be the minimum training loss achievable by functions in $\mathcal{F}(\mathbf{W}^{(0)}, R)$. Then there exists
$$m^*(\delta, R, L) = \tilde{O}\big(\mathrm{poly}(R, L) \cdot \log^{4/3}(n/\delta)\big),$$
such that if $m \ge m^*(\delta, R, L)$, with probability at least $1 - \delta$, SGD with step size $\eta = \Theta\big(m^{-1} \cdot (L R^2 n^{-1} \epsilon_{\mathrm{NTRF}}^{-1} \wedge L^{-1})\big)$ achieves
$$E[L^{0\text{-}1}_{\mathcal{D}}(\widehat{\mathbf{W}})] \le \frac{8 L^2 R^2}{n} + \frac{8 \log(2/\delta)}{n} + 24 \epsilon_{\mathrm{NTRF}},$$
where the expectation is taken over the uniform draw of $\widehat{\mathbf{W}}$ from $\{\mathbf{W}^{(0)}, \dots, \mathbf{W}^{(n-1)}\}$.

For any $\epsilon > 0$, Theorem 3.5 gives an $\tilde{O}(\epsilon^{-1})$ sample complexity for deep ReLU networks trained with SGD to achieve $O(\epsilon_{\mathrm{NTRF}} + \epsilon)$ test error. Our result extends the result for two-layer networks proved in (Ji and Telgarsky, 2020) to multi-layer networks. Theorem 3.5 also provides sharper results compared with Allen-Zhu et al. 
(2019a); Cao and Gu (2019) in two aspects: (1) the sample complexity is improved from $n = \tilde{O}(\epsilon^{-2})$ to $n = \tilde{O}(\epsilon^{-1})$; and (2) the over-parameterization condition is improved from $m \ge \mathrm{poly}(\epsilon^{-1})$ to $m = \tilde{\Omega}(1)$. (∗A detailed discussion is given in Section 4.2.)" }, { "heading": "4 DISCUSSION ON THE NTRF CLASS", "text": "Our theoretical results in Section 3 rely on the radius $R$ of the NTRF function class $\mathcal{F}(\mathbf{W}^{(0)}, R)$ and on $\epsilon_{\mathrm{NTRF}}$, the minimum training loss achievable by functions in $\mathcal{F}(\mathbf{W}^{(0)}, R)$. Note that a larger $R$ naturally implies a smaller $\epsilon_{\mathrm{NTRF}}$, but also leads to a worse condition on $m$. In this section, for any (arbitrarily small) target error rate $\epsilon > 0$, we discuss various data assumptions studied in the literature under which our results lead to $O(\epsilon)$ training/test errors, and we specify the corresponding network width requirement." }, { "heading": "4.1 DATA SEPARABILITY BY NEURAL TANGENT RANDOM FEATURE", "text": "In this subsection, we consider the setting where a large fraction of the training data can be linearly separated by the neural tangent random features. The assumption is stated as follows.

Assumption 4.1. There exists a collection of matrices $\mathbf{U}^* = \{U^*_1, \dots, U^*_L\}$ satisfying $\sum_{l=1}^{L} \|U^*_l\|_F^2 = 1$, such that for at least a $(1 - \rho)$ fraction of the training data we have
$$y_i \langle \nabla f_{\mathbf{W}^{(0)}}(x_i), \mathbf{U}^* \rangle \ge m^{1/2} \gamma,$$
where $\gamma$ is an absolute positive constant† and $\rho \in [0, 1)$.

The following proposition provides an upper bound on $\epsilon_{\mathrm{NTRF}}$ under Assumption 4.1 for a suitable $R$.

Proposition 4.2. Under Assumption 4.1, for any $\epsilon, \delta > 0$, if $R \ge C[\log^{1/2}(n/\delta) + \log(1/\epsilon)]/\gamma$ for some absolute constant $C$, then with probability at least $1 - \delta$,
$$\epsilon_{\mathrm{NTRF}} := \inf_{F \in \mathcal{F}(\mathbf{W}^{(0)}, R)} n^{-1} \sum_{i=1}^{n} \ell(y_i F(x_i)) \le \epsilon + \rho \cdot O(R).$$

Proposition 4.2 covers the setting where the NTRF function class is allowed to misclassify training data, while most existing work typically assumes that all training data can be perfectly separated with a constant margin (i.e., $\rho = 0$) (Ji and Telgarsky, 2020; Shamir, 2020). Our results show that for a sufficiently small misclassification ratio $\rho = O(\epsilon)$, we have $\epsilon_{\mathrm{NTRF}} = \tilde{O}(\epsilon)$ by choosing the radius parameter $R$ logarithmic in $n$, $\delta^{-1}$, and $\epsilon^{-1}$. Substituting this into Theorems 3.3, 3.4 and 3.5 shows that a neural network of width $m = \mathrm{poly}(L, \log(n/\delta), \log(1/\epsilon))$ suffices to guarantee good optimization and generalization performance for both GD and SGD. Consequently, the bounds on the test error for GD and SGD are $\tilde{O}(n^{-1/2})$ and $\tilde{O}(n^{-1})$ respectively." }, { "heading": "4.2 DATA SEPARABILITY BY SHALLOW NEURAL TANGENT MODEL", "text": "In this subsection, we study the data separation assumption made in Ji and Telgarsky (2020) and show that our results cover this particular setting. We first restate the assumption as follows.

Assumption 4.3. There exist $u(\cdot): \mathbb{R}^d \to \mathbb{R}^d$ and $\gamma \ge 0$ such that $\|u(z)\|_2 \le 1$ for all $z \in \mathbb{R}^d$, and
$$y_i \int_{\mathbb{R}^d} \sigma'(\langle z, x_i \rangle) \cdot \langle u(z), x_i \rangle \, d\mu_N(z) \ge \gamma$$
for all $i \in [n]$, where $\mu_N(\cdot)$ denotes the standard normal distribution.

Assumption 4.3 concerns the linear separability of the gradients with respect to the first-layer parameters at random initialization, where the randomness is replaced with an integral by taking the infinite-width limit. Note that similar assumptions have also been studied in (Cao and Gu, 2020; Nitanda and Suzuki, 2019; Frei et al., 2019). The assumption made in (Cao and Gu, 2020; Frei et al., 2019) uses gradients with respect to the second-layer weights instead of the first-layer ones. 
In the following, we mainly focus on Assumption 4.3, while our result can also be generalized to cover the setting in (Cao and Gu, 2020; Frei et al., 2019). (†The factor $m^{1/2}$ is introduced here since $\|\nabla_{\mathbf{W}^{(0)}} f(x_i)\|_F$ is typically of order $O(m^{1/2})$.)

In order to make a fair comparison, we reduce our results for multilayer networks to the two-layer setting. In this case, the neural network function takes the form
$$f_{\mathbf{W}}(x) = m^{1/2} W_2 \sigma(W_1 x).$$
We then provide the following proposition, which states that Assumption 4.3 implies a certain choice of $R = \tilde{O}(1)$ such that the minimum training loss achieved by functions in the NTRF function class $\mathcal{F}(\mathbf{W}^{(0)}, R)$ satisfies $\epsilon_{\mathrm{NTRF}} = O(\epsilon)$, where $\epsilon$ is the target error.

Proposition 4.4. Suppose the training data satisfy Assumption 4.3. For any $\epsilon, \delta > 0$, let $R = C[\log(n/\delta) + \log(1/\epsilon)]/\gamma$ for some large enough absolute constant $C$. If the network width satisfies $m = \Omega(\log(n/\delta)/\gamma^2)$, then with probability at least $1 - \delta$, there exist $F_{\mathbf{W}^{(0)},\mathbf{W}}(x_i) \in \mathcal{F}(\mathbf{W}^{(0)}, R)$ such that $\ell(y_i \cdot F_{\mathbf{W}^{(0)},\mathbf{W}}(x_i)) \le \epsilon$ for all $i \in [n]$.

Proposition 4.4 shows that under Assumption 4.3 there exists $F_{\mathbf{W}^{(0)},\mathbf{W}}(\cdot) \in \mathcal{F}(\mathbf{W}^{(0)}, R)$ with $R = \tilde{O}(1/\gamma)$ such that the cross-entropy loss of $F_{\mathbf{W}^{(0)},\mathbf{W}}(\cdot)$ at each training data point is bounded by $\epsilon$. This implies that $\epsilon_{\mathrm{NTRF}} \le \epsilon$. Moreover, applying Theorem 3.3 with $L = 2$, the condition on the network width becomes $m = \tilde{\Omega}(1/\gamma^8)$‡, which matches the results proved in Ji and Telgarsky (2020). Plugging these results on $m$ and $\epsilon_{\mathrm{NTRF}}$ into Theorems 3.4 and 3.5, we conclude that the bounds on the test error for GD and SGD are $\tilde{O}(n^{-1/2})$ and $\tilde{O}(n^{-1})$ respectively." }, { "heading": "4.3 CLASS-DEPENDENT DATA NONDEGENERATION", "text": "In the previous subsections, we have shown that under certain data separation conditions $\epsilon_{\mathrm{NTRF}}$ can be sufficiently small while the corresponding NTRF function class has radius $R$ of order $\tilde{O}(1)$; thus neural networks with polylogarithmic width enjoy nice optimization and generalization guarantees. In this part, we consider the following much milder data separability assumption made in Zou et al. (2019).

Assumption 4.5. For all $i \ne i'$, if $y_i \ne y_{i'}$, then $\|x_i - x_{i'}\|_2 \ge \phi$ for some absolute constant $\phi$.

In contrast to the conventional data nondegeneration assumption (i.e., no duplicate data points) made in Allen-Zhu et al. (2019b); Du et al. (2019b;a); Zou and Gu (2019)§, Assumption 4.5 only requires that data points from different classes are nondegenerate; we therefore call it class-dependent data nondegeneration.

We have the following proposition, which shows that Assumption 4.5 also implies the existence of a function in the NTRF function class, with a certain choice of $R$, that achieves $\epsilon$ training error.

Proposition 4.6. Under Assumption 4.5, if
$$R = \Omega\big(n^{3/2} \phi^{-1/2} \log(n \delta^{-1} \epsilon^{-1})\big), \qquad m = \tilde{\Omega}\big(L^{22} n^{12} \phi^{-4}\big),$$
then $\epsilon_{\mathrm{NTRF}} \le \epsilon$ with probability at least $1 - \delta$.

Proposition 4.6 suggests that under Assumption 4.5, in order to guarantee $\epsilon_{\mathrm{NTRF}} \le \epsilon$, the radius of the NTRF function class needs to be as large as $\Omega(n^{3/2})$. Plugging this into Theorems 3.4 and 3.5 leads to vacuous bounds on the test error. This makes sense, since Assumption 4.5 essentially covers the “random label” setting, which is impossible to learn with small generalization error. Moreover, we point out that our theoretical analysis leads to a sharper over-parameterization condition than that proved in Zou et al. (2019), i.e., $m = \tilde{\Omega}\big(n^{14} L^{16} \phi^{-4} + n^{12} L^{16} \phi^{-4} \epsilon^{-1}\big)$, if the network depth satisfies $L \le \tilde{O}(n^{1/3} \vee \epsilon^{-1/6})$."
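Before moving on to the proof sketch, the margin condition in Assumption 4.3 (Section 4.2) can be probed numerically, since the integral over $\mu_N$ is an expectation over $z \sim N(0, I_d)$ that is estimable by Monte Carlo. The sketch below is a toy illustration of ours, not from the paper: it uses linearly separable data and the simplest candidate mapping $u(z) \equiv \bar{u}$ for a fixed unit vector $\bar{u}$, in which case the margin reduces to $y_i \cdot P(\langle z, x_i \rangle > 0) \cdot \langle \bar{u}, x_i \rangle$.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, n_mc = 10, 50, 200_000

# Linearly separable toy data on the unit sphere, labeled by a ground-truth direction.
u_bar = np.zeros(d); u_bar[0] = 1.0
X = rng.normal(size=(n, d)); X /= np.linalg.norm(X, axis=1, keepdims=True)
Y = np.sign(X @ u_bar); Y[Y == 0] = 1.0

# Monte Carlo estimate of  y_i * E_z[ sigma'(<z, x_i>) * <u(z), x_i> ]  with u(z) = u_bar.
Z = rng.normal(size=(n_mc, d))                  # z ~ N(0, I_d)
margins = np.empty(n)
for i in range(n):
    active = (Z @ X[i]) > 0                     # sigma'(<z, x_i>) for the ReLU
    margins[i] = Y[i] * active.mean() * (X[i] @ u_bar)
gamma_hat = margins.min()
print(f"estimated NTK margin gamma ~= {gamma_hat:.4f}")
```

For genuinely nonlinear separability a non-constant $u(\cdot)$ is needed; the resulting $\gamma$ then enters the radius and width requirements through $R = O(\log(n/(\delta\epsilon))/\gamma)$ as in Proposition 4.4.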
}, { "heading": "5 PROOF SKETCH OF THE MAIN THEORY", "text": "In this section, we introduce a key technical lemma in Section 5.1, based on which we provide a proof sketch of Theorems 3.3. The full proof of all our results can be found in the appendix. ‡We have shown in the proof of Theorem 3.3 that m “ rΩpR8q (see (A.1) for more detail). §Specifically, Allen-Zhu et al. (2019b); Zou and Gu (2019) require that any two data points (rather than data points from different classes) are separated by a positive distance. Zou and Gu (2019) shows that this assumption is equivalent to those made in Du et al. (2019b;a), which require that the composite kernel matrix is strictly positive definite." }, { "heading": "5.1 A KEY TECHNICAL LEMMA", "text": "Here we introduce a key technical lemma used in the proof of Theorem 3.3.\nOur proof is based on the key observation that near initialization, the neural network function can be approximated by its first-order Taylor expansion. In the following, we first give the definition of the linear approximation error in a τ -neighborhood around initialization.\napppτq :“ sup i“1,...,n sup W1,WPBpWp0q,τq\nˇ ˇfW1pxiq ´ fWpxiq ´ x∇fWpxiq,W1 ´Wy ˇ ˇ.\nIf all the iterates of GD stay inside a neighborhood around initialization with small linear approximation error, then we may expect that the training of neural networks should be similar to the training of the corresponding linear model, where standard optimization techniques can be applied. Motivated by this, we also give the following definition on the gradient upper bound of neural networks around initialization, which is related to the Lipschitz constant of the optimization objective function.\nMpτq :“ sup i“1,...,n sup l“1,...,L sup WPBpWp0q,τq }∇WlfWpxiq}F .\nBy definition, we can choose W˚ P BpWp0q, Rm´1{2q such that n´1 řn i“1 ` ` yiFWp0q,W˚pxiq ˘ “ NTRF. Then we have the following lemma. Lemma 5.1. Set η “ OpL´1Mpτq´2q. Suppose that W˚ P BpWp0q, τq and Wptq P BpWp0q, τq for all 0 ď t ď t1 ´ 1. Then it holds that\n1\nt1\nt1´1 ÿ t“0 LSpWptqq ď }Wp0q ´W˚}2F ´ }Wpt 1q ´W˚}2F ` 2t1η NTRF t1η ` 3 2 ´ 4 apppτq ˘ .\nLemma 5.1 plays a central role in our proof. In specific, if Wptq P BpWp0q, τq for all t ď t1, then Lemma 5.1 implies that the average training loss is in the same order of NTRF as long as the linear approximation error apppτq is bounded by a positive constant. This is in contrast to the proof in Cao and Gu (2019), where apppτq appears as an additive term in the upper bound of the training loss, thus requiring apppτq “ Op NTRFq to achieve the same error bound as in Lemma 5.1. Since we can show that app “ rOpm´1{6q (See Section A.1), this suggests that m “ rΩp1q is sufficient to make the average training loss in the same order of NTRF.\nCompared with the recent results for two-layer networks by Ji and Telgarsky (2020), Lemma 5.1 is proved with different techniques. In specific, the proof by Ji and Telgarsky (2020) relies on the 1-homogeneous property of the ReLU activation function, which limits their analysis to two-layer networks with fixed second layer weights. In comparison, our proof does not rely on homogeneity, and is purely based on the linear approximation property of neural networks and some specific properties of the loss function. Therefore, our proof technique can handle deep networks, and is potentially applicable to non-ReLU activation functions and other network architectures (e.g, Convolutional neural networks and Residual networks)." 
}, { "heading": "5.2 PROOF SKETCH OF THEOREM 3.3", "text": "Here we provide a proof sketch of Theorem 3.3. The proof consists of two steps: (i) showing that all T iterates stay close to initialization, and (ii) bounding the empirical loss achieved by gradient descent. Both of these steps are proved based on Lemma 5.1.\nProof sketch of Theorem 3.3. Recall that we choose W˚ P BpWp0q, Rm´1{2q such that n´1\nřn i“1 ` ` yiFWp0q,W˚pxiq ˘ “ NTRF. We set τ “ rOpL1{2m´1{2Rq, which is chosen slightly larger than m´1{2R since Lemma 5.1 requires the region BpWp0q, τq to include both W˚ and tWptqut“0,...,t1 . Then by Lemmas 4.1 and B.3 in Cao and Gu (2019) we know that apppτq “ rOpτ4{3m1{2L3q “ rOpR4{3L11{3m´1{6q. Therefore, we can set m “ rΩpR8L22q to ensure that apppτq ď 1{8.\nThen we proceed to show that all iterates stay inside the region BpWp0q, τq. Since the L.H.S. of Lemma 5.1 is strictly positive and apppτq ď 1{8, we have for all t ď T ,\n}Wp0q ´W˚}2F ´ }Wptq ´W˚}2F ě ´2tη NTRF,\nwhich gives an upper bound of }Wptq ´W˚}F . Then by the choice of η, T , triangle inequality, and a simple induction argument, we see that }Wptq ´Wp0q}F ď m´1{2R ` ? 2Tη NTRF “ rOpL1{2m´1{2Rq, which verifies that Wptq P BpWp0q, τq for t “ 0, . . . , T ´ 1. The second step is to show that GD can find a neural network with at most 3 NTRF training loss within T iterations. To show this, by the bound given in Lemma 5.1 with app ď 1{8, we drop the terms }Wptq ´W˚}2F and rearrange the inequality to obtain\n1\nT\nT´1 ÿ t“0 LSpWptqq ď 1 ηT }Wp0q ´W˚}2F ` 2 NTRF.\nWe see that T is large enough to ensure that the first term in the bound above is smaller than NTRF. This implies that the best iterate among Wp0q, . . . ,WpT´1q achieves an empirical loss at most 3 NTRF." }, { "heading": "6 CONCLUSION", "text": "In this paper, we established the global convergence and generalization error bounds of GD and SGD for training deep ReLU networks for the binary classification problem. We show that a network width condition that is polylogarithmic in the sample size n and the inverse of target error ´1 is sufficient to guarantee the learning of deep ReLU networks. Our results resolve an open question raised in Ji and Telgarsky (2020)." }, { "heading": "ACKNOWLEDGEMENT", "text": "We would like to thank the anonymous reviewers for their helpful comments. ZC, YC and QG are partially supported by the National Science Foundation CAREER Award 1906169, IIS-2008981 and Salesforce Deep Learning Research Award. DZ is supported by the Bloomberg Data Science Ph.D. Fellowship. The views and conclusions contained in this paper are those of the authors and should not be interpreted as representing any funding agencies." }, { "heading": "A PROOF OF MAIN THEOREMS", "text": "In this section we provide the full proof of Theorems 3.3, 3.4 and 3.5." }, { "heading": "A.1 PROOF OF THEOREM 3.3", "text": "We first provide the following lemma which is useful in the subsequent proof. Lemma A.1 (Lemmas 4.1 and B.3 in Cao and Gu (2019)). There exists an absolute constant κ such that, with probability at least 1 ´ OpnL2q expr´Ωpmτ2{3Lqs, for any τ ď κL´6rlogpmqs´3{2, it holds that\napppτq ď rO ` τ4{3L3m1{2 ˘ , Mpτq ď rOp ? mq.\nProof of Theorem 3.3. Recall that W˚ is chosen such that\n1\nn\nn ÿ i“1 ` ` yiFWp0q,W˚pxiq ˘ “ NTRF\nand W˚ P BpWp0q, Rm´1{2q. Note that to apply Lemma 5.1, we need the region BpWp0q, τq to include both W˚ and tWptqut“0,...,t1 . This motivates us to set τ “ rOpL1{2m´1{2Rq, which is slightly larger than m´1{2R. 
With this choice of τ , by Lemma A.1 we have apppτq “ rOpτ4{3m1{2L3q “ rOpR4{3L11{3m´1{6q. Therefore, we can set\nm “ rΩpR8L22q (A.1)\nto ensure that apppτq ď 1{8, where rΩp¨q hides polylogarithmic dependencies on network depth L, NTRF function class size R, and failure probability parameter δ. Then by Lemma 5.1, we have with probability at least 1´ δ, we have\n}Wp0q ´W˚}2F ´ }Wpt 1q ´W˚}2F ě η\nt1´1 ÿ t“0 LSpWptqq ´ 2t1η NTRF (A.2)\nas long as Wp0q, . . . ,Wpt 1´1q P BpWp0q, τq. In the following proof we choose η “ ΘpL´1m´1q and T “ rLR2m´1η´1 ´1NTRFs.\nWe prove the theorem by two steps: 1) we show that all iterates tWp0q, ¨ ¨ ¨ ,WpT qu will stay inside the region BpWp0q, τq; and 2) we show that GD can find a neural network with at most 3 NTRF training loss within T iterations.\nAll iterates stay inside BpWp0q, τq. We prove this part by induction. Specifically, given t1 ď T , we assume the hypothesis Wptq P BpWp0q, τq holds for all t ă t1 and prove that Wpt1q P BpWp0q, τq. First, it is clear that Wp0q P BpWp0q, τq. Then by (A.2) and the fact that LSpWq ě 0, we have\n}Wpt 1q ´W˚}2F ď }Wp0q ´W˚}2F ` 2ηt1 NTRF\nNote that T “ rLR2m´1η´1 ´1NTRFs and W˚ P BpWp0q, R ¨m´1{2q, we have L ÿ\nl“1 }Wpt 1q l ´W ˚ l }2F “ }Wpt 1q ´W˚}2F ď CLR2m´1,\nwhere C ě 4 is an absolute constant. Therefore, by triangle inequality, we further have the following for all l P rLs,\n}Wpt 1q l ´W p0q l }F ď }W pt1q l ´W ˚ l }F ` }W p0q l ´W ˚ l }F\nď ? CLRm´1{2 `Rm´1{2 ď 2 ? CLRm´1{2. (A.3)\nTherefore, it is clear that }Wpt 1q l ´W p0q l }F ď 2\n? CLRm´1{2 ď τ based on our choice of τ\npreviously. This completes the proof of the first part.\nConvergence of gradient descent. (A.2) implies\n}Wp0q ´W˚}2F ´ }WpT q ´W˚}2F ě η ˆ T´1 ÿ\nt“0 LSpWptqq ´ 2T NTRF\n˙\n.\nDividing by ηT on the both sides, we get\n1\nT\nT´1 ÿ t“0 LSpWptqq ď }Wp0q ´W˚}2F ηT ` 2 NTRF ď LR2m´1 ηT ` 2 NTRF ď 3 NTRF,\nwhere the second inequality is by the fact that W˚ P BpWp0q, R ¨ m´1{2q and the last inequality is by our choices of T and η which ensure that Tη ě LR2m´1 ´1NTRF. Notice that T “ rLR2m´1η´1 ´1NTRFs “ OpL2R2 ´1 NTRFq. This completes the proof of the second part, and we are able to complete the proof." }, { "heading": "A.2 PROOF OF THEOREM 3.4", "text": "Following Cao and Gu (2020), we first introduce the definition of surrogate loss of the network, which is defined by the derivative of the loss function.\nDefinition A.2. We define the empirical surrogate error ESpWq and population surrogate error EDpWq as follows:\nESpWq :“ ´ 1\nn\nn ÿ i“1 `1 “ yi ¨ fWpxiq ‰ , EDpWq :“ Epx,yq„D ´ `1 “ y ¨ fWpxq ‰( .\nThe following lemma gives uniform-convergence type of results for ESpWq utilizing the fact that ´`1p¨q is bounded and Lipschitz continuous.\nLemma A.3. For any rR, δ ą 0, suppose that m “ rΩpL12 rR2q ¨ rlogp1{δqs3{2. Then with probability at least 1´ δ, it holds that\n|EDpWq ´ ESpWq| ď rO ˜ min # 4LL3{2 rR c m\nn , L rR? n ` L\n3 rR4{3\nm1{6\n+¸ `O ˜ c\nlogp1{δq n\n¸\nfor all W P BpWp0q, rR ¨m´1{2q\nWe are now ready to prove Theorem 3.4, which combines the trajectory distance analysis in the proof of Theorem 3.3 with Lemma A.3.\nProof of Theorem 3.4. With exactly the same proof as Theorem 3.3, by (A.3) and induction we have Wp0q,Wp1q, . . . ,WpT q P BpWp0q, rRm´1{2q with rR “ Op ? LRq. Therefore by Lemma A.3, we have\n|EDpWptqq ´ ESpWptqq| ď rO ˜ min # 4LL2R c m n , L3{2R? n ` L 11{3R4{3 m1{6 +¸ `O ˜ c logp1{δq n ¸\nfor all t “ 0, 1, . . . , T . Note that we have 1tz ă 0u ď ´2`1pzq. 
Therefore,\nEL0´1D pW ptqq ď 2EDpWptqq\nď 2LSpWptqq ` rO ˜ min # 4LL2R c m n , L3{2R? n ` L 11{3R4{3 m1{6 +¸ `O ˜ c logp1{δq n ¸\nfor t “ 0, 1, . . . , T , where the last inequality is by ESpWq ď LSpWq because ´`1pzq ď `pzq for all z P R. This finishes the proof." }, { "heading": "A.3 PROOF OF THEOREM 3.5", "text": "In this section we provide the full proof of Theorem 3.5. We first give the following result, which is the counterpart of Lemma 5.1 for SGD. Again we pick W˚ P BpWp0q, Rm´1{2q such that the loss of the corresponding NTRF model FWp0q,W˚pxq achieves NTRF. Lemma A.4. Set η “ OpL´1Mpτq´2q. Suppose that W˚ P BpWp0q, τq and Wpn1q P BpWp0q, τq for all 0 ď n1 ď n´ 1. Then it holds that\n}Wp0q ´W˚}2F ´ }Wpn 1q ´W˚}2F ě\n´3\n2 ´ 4 apppτq\n¯ η n1 ÿ\ni“1 LipWpi´1qq ´ 2nη NTRF.\nWe introduce a surrogate loss EipWq “ ´`1ryi ¨ fWpxiqs and its population version EDpWq “ Epx,yq„Dr´`1ry ¨ fWpxqss, which have been used in (Ji and Telgarsky, 2018; Cao and Gu, 2019; Ji and Telgarsky, 2020). Our proof is based on the application of Lemma A.4 and an online-tobatch conversion argument (Cesa-Bianchi et al., 2004; Cao and Gu, 2019; Ji and Telgarsky, 2020). We introduce a surrogate loss EipWq “ ´`1ryi ¨ fWpxiqs and its population version EDpWq “ Epx,yq„Dr´`1py ¨ fWpxqqs, which have been used in (Ji and Telgarsky, 2018; Cao and Gu, 2019; Nitanda and Suzuki, 2019; Ji and Telgarsky, 2020).\nProof of Theorem 3.5. Recall that W˚ is chosen such that\n1\nn\nn ÿ i“1 ` ` yiFWp0q,W˚pxiq ˘ “ NTRF\nand W˚ P BpWp0q, Rm´1{2q. To apply Lemma A.4, we need the region BpWp0q, τq to include both W˚ and tWptqut“0,...,t1 . This motivates us to set τ “ rOpL1{2m´1{2Rq, which is slightly larger than m´1{2R. With this choice of τ , by Lemma A.1 we have apppτq “ rOpτ4{3m1{2L3q “ rOpR4{3L11{3m´1{6q. Therefore, we can set\nm “ rΩpR8L22q\nto ensure that apppτq ď 1{8, where rΩp¨q hides polylogarithmic dependencies on network depth L, NTRF function class size R, and failure probability parameter δ.\nThen by Lemma A.4, we have with probability at least 1´ δ,\n}Wp0q ´W˚}2F ´ }Wpn 1q ´W˚}2F ě η\nn1 ÿ i“1 LipWpi´1qq ´ 2nη NTRF (A.4)\nas long as Wp0q, . . . ,Wpn 1´1q P BpWp0q, τq.\nWe then prove Theorem 3.5 in two steps: 1) all iterates stay inside BpWp0q, τq; and 2) convergence of online SGD.\nAll iterates stay inside BpWp0q, τq. Similar to the proof of Theorem 3.3, we prove this part by induction. Assuming Wpiq satisfies Wpiq P BpWp0q, τq for all i ď n1 ´ 1, by (A.4), we have\n}Wpn 1q ´W˚}2F ď }Wp0q ´W˚}2F ` 2nη NTRF\nď LR2 ¨m´1 ` 2nη NTRF,\nwhere the last inequality is by W˚ P BpWp0q, Rm´1{2q. Then by triangle inequality, we further get\n}Wpn 1q l ´W p0q l }F ď }W pn1q l ´W ˚ l }F ` }W˚l ´W p0q l }F\nď }Wpn 1q ´W˚}F ` }W˚l ´W p0q l }F ď Op ? LRm´1{2 `?nη NTRFq.\nThen by our choices of η “ Θ ` m´1 ¨ pLR2n´1 ´1NTRF ^ L´1q ˘ , we have }Wpn1q ´Wp0q}F ď 2 ? LRm´1{2 ď τ . This completes the proof of the first part.\nConvergence of online SGD. By (A.4), we have\n}Wp0q ´W˚}2F ´ }Wpnq ´W˚}2F ě η ˆ n ÿ\ni“1 LipWpi´1qq ´ 2n NTRF\n˙\n.\nDividing by ηn on the both sides and rearranging terms, we get\n1\nn\nn ÿ i“1 LipWpi´1qq ď }Wp0q ´W˚}2F ´ }Wpnq ´W˚}2F ηn ` 2 NTRF ď L2R2 n ` 3 NTRF,\nwhere the second inequality follows from facts that W˚ P BpWp0q, R ¨m´1{2q and η “ Θ ` m´1 ¨ pLR2n´1 ´1NTRF ^L´1q ˘\n. 
By Lemma 4.3 in (Ji and Telgarsky, 2020) and the fact that EipWpi´1qq ď LipWpi´1qq, we have\n1\nn\nn ÿ i“1 L0´1D pW pi´1qq ď 2 n n ÿ i“1 EDpWpi´1qq\nď 8 n\nn ÿ i“1 EipWpi´1qq ` 8 logp1{δq n\nď 8L 2R2 n ` 8 logp1{δq n ` 24 NTRF.\nThis completes the proof of the second part." }, { "heading": "B PROOF OF RESULTS IN SECTION 4", "text": "" }, { "heading": "B.1 PROOF OF PROPOSITION 4.2", "text": "We first provide the following lemma which gives an upper bound of the neural network output at the initialization. Lemma B.1 (Lemma 4.4 in Cao and Gu (2019)). Under Assumption 3.1, if m ě C̄L logpnL{δq with some absolute constant C̄, with probability at least 1´ δ, we have\n|fWp0qpxiq| ď C a logpn{δq\nfor some absolute constant C.\nProof of Proposition 4.2. Under Assumption 4.1, we can find a collection of matrices U˚ “ tU˚1 , ¨ ¨ ¨ ,U˚Lu with řL l“1 }U˚l }2F “ 1 such that yix∇fWp0qpxiq,U˚y ě m1{2γ for at least 1 ´ ρ fraction of the training data. By Lemma B.1, for all i P rns we have |fWp0qpxiq| ď C a\nlogpn{δq for some absolute constant C. Then for any positive constant λ, we have for at least 1´ ρ portion of the data,\nyi ` fWp0qpxiq ` x∇fWp0q , λU˚y ˘ ě m1{2λγ ´ C a logpn{δq.\nFor this fraction of data, we can set\nλ “ C 1 “ log1{2pn{δq ` logp1{ q ‰\nm1{2γ ,\nwhere C 1 is an absolute constant, and get\nm1{2λγ ´ C a logpn{δq ě logp1{ q.\nNow we let W˚ “ Wp0q ` λU˚. By the choice of R in Proposition 4.2, we have W˚ P BpWp0q, R ¨ m´1{2q. The above inequality implies that for at least 1 ´ ρ fraction of data, we have ` `\nyiFWp0q,W˚pxiq ˘ ď . For the rest data, we have\nyi ` fWp0qpxiq ` x∇fWp0q , λU˚y ˘ ě ´C a logpn{δq ´ λ}∇fWp0q}22 ě ´C1R\nfor some absolute positive constant C1, where the last inequality follows from fact that }∇fWp0q}2 “ rOpm1{2q (see Lemma A.1 for detail). Then note that we use cross-entropy loss, it follows that for this fraction of training data, we have ` `\nyiFWp0q,W˚pxiq ˘ ď C2R for some constant C2. Combining the results of these two fractions of training data, we can conclude\nNTRF ď n´1 n ÿ\ni“1 ` ` yiFWp0q,W˚pxiq ˘ ď p1´ ρq ` ρ ¨OpRq\nThis completes the proof." }, { "heading": "B.2 PROOF OF PROPOSITION 4.4", "text": "Proof of Proposition 4.4. We are going to prove that Assumption 4.3 implies the existence of a good function in the NTRF function class.\nBy Definition 3.2 and the definition of cross-entropy loss, our goal is to prove that there exists a collection of matrices W “ tW1,W2u satisfying maxt}W1 ´Wp0q1 }F , }W2 ´W p0q 2 }2u ď R ¨m´1{2 such that yi ¨ “ fWp0qpxiq ` x∇W1fWp0q ,W1 ´W p0q 1 y ` x∇W2fWp0q ,W2 ´W p0q 2 y ‰\ně logp2{ q. We first consider∇W1fWp0qpxiq, which has the form\np∇W1fWp0qpxiq ˘ j “ m1{2 ¨ wp0q2,j ¨ σ 1pxwp0q1,j ,xiyq ¨ xi.\nNote that wp0q2,j and w p0q 1,j are independently generated from N p0, 1{mq and N p0, 2I{mq respectively, thus we have Pp|wp0q2,j | ě 0.47m´1{2q ě 1{2. By Hoeffeding’s inequality, we know that with probability at least 1 ´ expp´m{8q, there are at least m{4 nodes, whose union is denoted by S, satisfying |wp0q2,j | ě 0.47m´1{2. Then we only focus on the nodes in the set S. Note that W p0q 1 and W p0q 2 are independently generated. Then by Assumption 4.3 and Hoeffeding’s inequality, there exists a function up¨q : Rd Ñ Rd such that with probability at least 1´ δ1,\n1\n|S| ÿ jPS yi ¨ xupwp0q1,j q,xiy ¨ σ 1pxwp0q1,j ,xiyq ě γ ´\nd\n2 logp1{δ1q |S| .\nDefine vj “ upwp0q1,j q{w2,j if |w2,j | ě 0.47m´1{2 and vj “ 0 otherwise. 
Then we have m ÿ\nj“1 yi ¨ wp0q2,j ¨ xvj ,xiy ¨ σ 1pxwp0q1,j ,xiyq “ ÿ jPS yi ¨ xupwp0q1,j q,xiy ¨ σ 1pxwp0q1,j ,xiyq\ně |S|γ ´ a 2|S| logp1{δ1q. Set δ “ 2nδ1 and apply union bound, we have with probability at least 1´ δ{2,\nm ÿ j“1 yi ¨ wp0q2,j ¨ xvj ,xiy ¨ σ 1pxwp0q1,j ,xiyq ě |S|γ ´ a 2|S| logp2n{δq.\nTherefore, note that with probability at least 1 ´ expp´m{8q, we have |S| ě m{4. Moreover, in Assumption 4.3, by yi P t˘1u and |σ1p¨q|, }up¨q}2, }xi}2 ď 1 for i “ 1, . . . , n, we see that γ ď 1. Then if m ě 32 logpn{δq{γ2, with probability at least 1´ δ{2´ exp ` ´ 4 logpn{δq{γ2 ˘\ně 1´ δ, m ÿ j“1 yi ¨ wp0q2,j ¨ xvj ,xiy ¨ σ 1pxwp0q1,j ,xiyq ě |S|γ{2.\nLet U “ pv1,v2, ¨ ¨ ¨ ,vmqJ{ a m|S|, we have\nyix∇W1fWp0qpxiq,Uy “ 1 a\n|S|\nm ÿ j“1 yi ¨ wp0q2,j ¨ xvj ,xiy ¨ σ 1pxwp0q1,j ,xiyq ě a |S|γ 2 ě m 1{2γ 4 ,\nwhere the last inequality is by the fact that |S| ě m{4. Besides, note that by concentration and Gaussian tail bound, we have |fWp0qpxiq| ď C logpn{δq for some absolute constant C. Therefore, let W1 “Wp0q1 ` 4 ` logp2{ q ` C logpn{δq ˘ m´1{2U{γ and W2 “Wp0q2 , we have\nyi ¨ “ fWp0qpxiq ` x∇W1fWp0q ,W1 ´W p0q 1 y ` x∇W2fWp0q ,W2 ´W p0q 2 y ‰ ě logp2{ q. (B.1)\nNote that }up¨q}2 ď 1, we have }U}F ď 1{0.47 ď 2.2. Therefore, we further have }W1 ´ W p0q 1 }F ď 8.8γ´1 ` logp2{ q ` C logpn{δq ˘\n¨ m´1{2. This implies that W P BpWp0q, Rq with R “ O ` log ` n{pδ q ˘ {γ ˘\n. Applying the inequality `plogp2{ qq ď on (B.1) gives `pyi ¨ FWp0q,Wpxiqq ď\nfor all i “ 1, . . . , n. This completes the proof." }, { "heading": "B.3 PROOF OF PROPOSITION 4.6", "text": "Based on our theoretical analysis, the major goal is to show that there exist certain choices of R and m such that the best NTRF model in the function class FpWp0q, Rq can achieve training error. In this proof, we will prove a stronger results by showing that given the quantities of R and m specificed in Proposition 4.6, there exists a NTRF model with parameter W˚ that satisfies n´1\nřn i“1 ` ` yiFWp0q,W˚pxiq ˘ ď . In order to do so, we consider training the NTRF model via a different surrogate loss function. Specifically, we consider squared hinge loss r`pxq “ ` maxtλ´ x, 0u ˘2\n, where λ denotes the target margin. In the later proof, we choose λ “ logp1{ q ` 1 such that the condition r`pxq ď 1 can guarantee that x ě logp q. Moreover, we consider using gradient flow, i.e., gradient descent with infinitesimal step size, to train the NTRF model. Therefore, in the remaining part of the proof, we consider optimizing the NTRF parameter W with the loss function\nrLSpWq “ 1\nn\nn ÿ\ni“1\nr` ` yiFWp0q,Wpxiq ˘ .\nMoreover, for simplicity, we only consider optimizing parameter in the last hidden layer (i.e., WL´1). Then the gradient flow can be formulated as\ndWL´1ptq dt “ ´∇WL´1 rLSpWptqq, dWlptq dt “ 0 for any l ‰ L´ 1.\nNote that the NTRF model is a linear model, thus by Definition 3.2, we have\n∇WL´1 rLSpWptqq “ yir`1 ` yiFWp0q,Wptqpxiq ˘ ¨∇WL´1FWp0q,Wptqpxiq\n“ yir`1 ` yiFWp0q,Wptqpxiq ˘ ¨∇ W p0q L´1 fWp0qpxiq. (B.2)\nThen it is clear that∇WL´1 rLSpWptqq has fixed direction throughout the optimization. In order to prove the convergence of gradient flow and characterize the quantity of R, We first provide the following lemma which gives an upper bound of the NTRF model output at the initialization.\nThen we provide the following lemma which characterizes a lower bound of the Frobenius norm of the partial gradient∇WL´1 rLSpWq.\nLemma B.2 (Lemma B.5 in Zou et al. (2019)). 
Under Assumptions 3.1 and 4.5, if $m = \tilde\Omega(n^2\phi^{-1})$, then for all $t \ge 0$, with probability at least $1-\exp\big(-O(m\phi/n)\big)$, there exists a positive constant $C$ such that
\[ \|\nabla_{\mathbf{W}_{L-1}}\tilde{L}_S(\mathbf{W}(t))\|_F^2 \ge \frac{Cm\phi}{n^5}\bigg[\sum_{i=1}^n \tilde\ell'\big(y_iF_{\mathbf{W}^{(0)},\mathbf{W}(t)}(x_i)\big)\bigg]^2. \]
We slightly modified the original version of this lemma since we use different models (we consider the NTRF model while Zou et al. (2019) consider the neural network model). However, by (B.2), the gradient $\nabla\tilde{L}_S(\mathbf{W})$ can be regarded as a gradient of the neural network model at initialization (i.e., $\nabla_{\mathbf{W}_{L-1}}L_S(\mathbf{W}^{(0)})$), so the lemma remains valid. Now we are ready to present the proof.
Proof of Proposition 4.6. Recall that we only consider training the last-hidden-layer weights, i.e., $\mathbf{W}_{L-1}$, via gradient flow with the squared hinge loss, and our goal is to prove that gradient flow is able to find an NTRF model within the function class $\mathcal{F}(\mathbf{W}^{(0)}, R)$ around the initialization achieving $n^{-1}\sum_{i=1}^n\ell\big(y_iF_{\mathbf{W}^{(0)},\mathbf{W}^*}(x_i)\big) \le \epsilon$. Let $\mathbf{W}(t)$ be the weights at time $t$; gradient flow implies that
\[ \frac{\mathrm{d}\tilde{L}_S(\mathbf{W}(t))}{\mathrm{d}t} = -\|\nabla_{\mathbf{W}_{L-1}}\tilde{L}_S(\mathbf{W}(t))\|_F^2 \le -\frac{Cm\phi}{n^5}\bigg(\sum_{i=1}^n \tilde\ell'\big(y_iF_{\mathbf{W}^{(0)},\mathbf{W}(t)}(x_i)\big)\bigg)^2 = -\frac{4Cm\phi\,\tilde{L}_S(\mathbf{W}(t))}{n^3}, \]
where the first equality is due to the fact that we only train the last hidden layer, the inequality is by Lemma B.2, and the second equality follows from the fact that $\tilde\ell'(\cdot) = -2\sqrt{\tilde\ell(\cdot)}$. Solving the above inequality gives
\[ \tilde{L}_S(\mathbf{W}(t)) \le \tilde{L}_S(\mathbf{W}(0))\cdot\exp\Big(-\frac{4Cm\phi t}{n^3}\Big). \quad \text{(B.3)} \]
Then, setting $T = O\big(n^3m^{-1}\phi^{-1}\cdot\log\big(\tilde{L}_S(\mathbf{W}(0))/\epsilon'\big)\big)$ and $\epsilon' = 1/n$, we have $\tilde{L}_S(\mathbf{W}(T)) \le \epsilon'$. It then follows that $\tilde\ell\big(y_iF_{\mathbf{W}^{(0)},\mathbf{W}(T)}(x_i)\big) \le 1$, which implies $y_iF_{\mathbf{W}^{(0)},\mathbf{W}(T)}(x_i) \ge \log(1/\epsilon)$ and thus $n^{-1}\sum_{i=1}^n\ell\big(y_iF_{\mathbf{W}^{(0)},\mathbf{W}(T)}(x_i)\big) \le \epsilon$. Therefore $\mathbf{W}(T)$ is exactly the NTRF model we are looking for.
The next step is to characterize the distance between $\mathbf{W}(T)$ and $\mathbf{W}(0)$ in order to characterize the quantity of $R$. Since $\|\nabla_{\mathbf{W}_{L-1}}\tilde{L}_S(\mathbf{W}(t))\|_F^2 \ge 4Cm\phi\,\tilde{L}_S(\mathbf{W}(t))/n^3$, we have
\[ \frac{\mathrm{d}\sqrt{\tilde{L}_S(\mathbf{W}(t))}}{\mathrm{d}t} = -\frac{\|\nabla_{\mathbf{W}_{L-1}}\tilde{L}_S(\mathbf{W}(t))\|_F^2}{2\sqrt{\tilde{L}_S(\mathbf{W}(t))}} \le -\|\nabla_{\mathbf{W}_{L-1}}\tilde{L}_S(\mathbf{W}(t))\|_F\cdot\frac{C^{1/2}m^{1/2}\phi^{1/2}}{n^{3/2}}. \]
Taking the integral on both sides and rearranging terms, we have
\[ \int_0^T \|\nabla_{\mathbf{W}_{L-1}}\tilde{L}_S(\mathbf{W}(t))\|_F\,\mathrm{d}t \le \frac{n^{3/2}}{C^{1/2}m^{1/2}\phi^{1/2}}\cdot\Big(\sqrt{\tilde{L}_S(\mathbf{W}(0))} - \sqrt{\tilde{L}_S(\mathbf{W}(T))}\Big). \]
Since the L.H.S. of the above inequality is an upper bound of $\|\mathbf{W}(t)-\mathbf{W}(0)\|_F$, we have for any $t \ge 0$
\[ \|\mathbf{W}(t)-\mathbf{W}(0)\|_F \le \frac{n^{3/2}}{C^{1/2}m^{1/2}\phi^{1/2}}\cdot\sqrt{\tilde{L}_S(\mathbf{W}(0))} = O\bigg(\frac{n^{3/2}\log\big(n/(\delta\epsilon)\big)}{m^{1/2}\phi^{1/2}}\bigg), \]
where the second bound is by Lemma B.1 and our choice of $\lambda = \log(1/\epsilon)+1$. This implies that there exists a point $\mathbf{W}^*$ within the class $\mathcal{F}(\mathbf{W}^{(0)}, R)$ with
\[ R = O\bigg(\frac{n^{3/2}\log\big(n/(\delta\epsilon)\big)}{\phi^{1/2}}\bigg) \]
such that
\[ \epsilon_{\mathrm{NTRF}} := n^{-1}\sum_{i=1}^n \ell\big(y_iF_{\mathbf{W}^{(0)},\mathbf{W}^*}(x_i)\big) \le \epsilon. \]
Then, by Theorem 3.3 and, more specifically, (A.1), we can compute the minimal required neural network width as
\[ m = \tilde\Omega(R^8L^{22}) = \tilde\Omega\bigg(\frac{L^{22}n^{12}}{\phi^4}\bigg). \]
This completes the proof." }, { "heading": "C PROOF OF TECHNICAL LEMMAS", "text": "Here we provide the proofs of Lemmas 5.1, A.3 and A.4." }, { "heading": "C.1 PROOF OF LEMMA 5.1", "text": "The detailed proof of Lemma 5.1 is given as follows.
Proof of Lemma 5.1. Based on the update rule of gradient descent, i.e., $\mathbf{W}^{(t+1)} = \mathbf{W}^{(t)} - \eta\nabla_{\mathbf{W}}L_S(\mathbf{W}^{(t)})$, we have the following calculation:
\[ \|\mathbf{W}^{(t)}-\mathbf{W}^*\|_F^2 - \|\mathbf{W}^{(t+1)}-\mathbf{W}^*\|_F^2 = \underbrace{\frac{2\eta}{n}\sum_{i=1}^n\langle\mathbf{W}^{(t)}-\mathbf{W}^*, \nabla_{\mathbf{W}}L_i(\mathbf{W}^{(t)})\rangle}_{I_1} - \underbrace{\eta^2\sum_{l=1}^L\|\nabla_{\mathbf{W}_l}L_S(\mathbf{W}^{(t)})\|_F^2}_{I_2}, \quad \text{(C.1)} \]
where the equation follows from the fact that $L_S(\mathbf{W}^{(t)}) = n^{-1}\sum_{i=1}^n L_i(\mathbf{W}^{(t)})$. In what follows, we first bound the term $I_1$ on the R.H.S. of (C.1) by approximating the neural network functions with linear models.
By assumption, for $t = 0,\dots,t'-1$, $\mathbf{W}^{(t)}, \mathbf{W}^* \in \mathcal{B}(\mathbf{W}^{(0)}, \tau)$. Therefore, by the definition of $\epsilon_{\mathrm{app}}(\tau)$,
\[ y_i\cdot\langle\nabla f_{\mathbf{W}^{(t)}}(x_i), \mathbf{W}^{(t)}-\mathbf{W}^*\rangle \le y_i\cdot\big(f_{\mathbf{W}^{(t)}}(x_i) - f_{\mathbf{W}^*}(x_i)\big) + \epsilon_{\mathrm{app}}(\tau). \quad \text{(C.2)} \]
Moreover, we also have
\[ 0 \le y_i\cdot\big(f_{\mathbf{W}^*}(x_i) - f_{\mathbf{W}^{(0)}}(x_i) - \langle\nabla f_{\mathbf{W}^{(0)}}(x_i), \mathbf{W}^*-\mathbf{W}^{(0)}\rangle\big) + \epsilon_{\mathrm{app}}(\tau) = y_i\cdot\big(f_{\mathbf{W}^*}(x_i) - F_{\mathbf{W}^{(0)},\mathbf{W}^*}(x_i)\big) + \epsilon_{\mathrm{app}}(\tau), \quad \text{(C.3)} \]
where the equation follows from the definition of $F_{\mathbf{W}^{(0)},\mathbf{W}^*}(x)$. Adding (C.3) to (C.2) and canceling the terms $y_i\cdot f_{\mathbf{W}^*}(x_i)$, we obtain
\[ y_i\cdot\langle\nabla f_{\mathbf{W}^{(t)}}(x_i), \mathbf{W}^{(t)}-\mathbf{W}^*\rangle \le y_i\cdot\big(f_{\mathbf{W}^{(t)}}(x_i) - F_{\mathbf{W}^{(0)},\mathbf{W}^*}(x_i)\big) + 2\epsilon_{\mathrm{app}}(\tau). \quad \text{(C.4)} \]
We can now give a lower bound on the first term on the R.H.S. of (C.1). For $i = 1,\dots,n$, applying the chain rule to the loss gradients and utilizing (C.4), we have
\[ \langle\mathbf{W}^{(t)}-\mathbf{W}^*, \nabla_{\mathbf{W}}L_i(\mathbf{W}^{(t)})\rangle = \ell'\big(y_if_{\mathbf{W}^{(t)}}(x_i)\big)\cdot y_i\cdot\langle\mathbf{W}^{(t)}-\mathbf{W}^*, \nabla_{\mathbf{W}}f_{\mathbf{W}^{(t)}}(x_i)\rangle \]
\[ \ge \ell'\big(y_if_{\mathbf{W}^{(t)}}(x_i)\big)\cdot\big(y_if_{\mathbf{W}^{(t)}}(x_i) - y_iF_{\mathbf{W}^{(0)},\mathbf{W}^*}(x_i) + 2\epsilon_{\mathrm{app}}(\tau)\big) \ge \big(1-2\epsilon_{\mathrm{app}}(\tau)\big)\ell\big(y_if_{\mathbf{W}^{(t)}}(x_i)\big) - \ell\big(y_iF_{\mathbf{W}^{(0)},\mathbf{W}^*}(x_i)\big), \quad \text{(C.5)} \]
where the first inequality is by the fact that $\ell'\big(y_if_{\mathbf{W}^{(t)}}(x_i)\big) < 0$ together with (C.4), and the second inequality is by convexity of $\ell(\cdot)$ and the fact that $-\ell'\big(y_if_{\mathbf{W}^{(t)}}(x_i)\big) \le \ell\big(y_if_{\mathbf{W}^{(t)}}(x_i)\big)$.
We now proceed to bound the term $I_2$ on the R.H.S. of (C.1). Since $\ell'(\cdot) < 0$, the Frobenius norm of the gradient $\nabla_{\mathbf{W}_l}L_S(\mathbf{W}^{(t)})$ can be upper bounded as follows:
\[ \|\nabla_{\mathbf{W}_l}L_S(\mathbf{W}^{(t)})\|_F = \bigg\|\frac{1}{n}\sum_{i=1}^n \ell'\big(y_if_{\mathbf{W}^{(t)}}(x_i)\big)\nabla_{\mathbf{W}_l}f_{\mathbf{W}^{(t)}}(x_i)\bigg\|_F \le \frac{1}{n}\sum_{i=1}^n -\ell'\big(y_if_{\mathbf{W}^{(t)}}(x_i)\big)\cdot\|\nabla_{\mathbf{W}_l}f_{\mathbf{W}^{(t)}}(x_i)\|_F, \]
where the inequality follows from the triangle inequality. We now use the fact that the cross-entropy loss satisfies the inequalities $-\ell'(\cdot) \le \ell(\cdot)$ and $-\ell'(\cdot) \le 1$. Therefore, by the definition of $M(\tau)$, we have
\[ \sum_{l=1}^L \|\nabla_{\mathbf{W}_l}L_S(\mathbf{W}^{(t)})\|_F^2 \le O\big(LM(\tau)^2\big)\cdot\bigg(\frac{1}{n}\sum_{i=1}^n -\ell'\big(y_if_{\mathbf{W}^{(t)}}(x_i)\big)\bigg)^2 \le O\big(LM(\tau)^2\big)\cdot L_S(\mathbf{W}^{(t)}). \quad \text{(C.6)} \]
We can then plug (C.5) and (C.6) into (C.1) and obtain
\[ \|\mathbf{W}^{(t)}-\mathbf{W}^*\|_F^2 - \|\mathbf{W}^{(t+1)}-\mathbf{W}^*\|_F^2 \ge \frac{2\eta}{n}\sum_{i=1}^n\Big[\big(1-2\epsilon_{\mathrm{app}}(\tau)\big)\ell\big(y_if_{\mathbf{W}^{(t)}}(x_i)\big) - \ell\big(y_iF_{\mathbf{W}^{(0)},\mathbf{W}^*}(x_i)\big)\Big] - O\big(\eta^2LM(\tau)^2\big)\cdot L_S(\mathbf{W}^{(t)}) \]
\[ \ge \Big[\frac{3}{2}-4\epsilon_{\mathrm{app}}(\tau)\Big]\eta L_S(\mathbf{W}^{(t)}) - \frac{2\eta}{n}\sum_{i=1}^n\ell\big(y_iF_{\mathbf{W}^{(0)},\mathbf{W}^*}(x_i)\big), \]
where the last inequality is by $\eta = O(L^{-1}M(\tau)^{-2})$ and merging the $O\big(\eta^2LM(\tau)^2\big)$ term into the first term. Taking the telescope sum from $t = 0$ to $t = t'-1$ and plugging in the definition $\frac{1}{n}\sum_{i=1}^n\ell\big(y_iF_{\mathbf{W}^{(0)},\mathbf{W}^*}(x_i)\big) = \epsilon_{\mathrm{NTRF}}$ completes the proof." }, { "heading": "C.2 PROOF OF LEMMA A.3", "text": "Proof of Lemma A.3. We first denote $\mathcal{W} = \mathcal{B}(\mathbf{W}^{(0)}, \tilde{R}\cdot m^{-1/2})$, and define the corresponding neural network function class and surrogate loss function class as $\mathcal{F} = \{f_{\mathbf{W}}(x): \mathbf{W}\in\mathcal{W}\}$ and $\mathcal{G} = \{-\ell'[y\cdot f_{\mathbf{W}}(x)]: \mathbf{W}\in\mathcal{W}\}$, respectively. By standard uniform convergence results in terms of empirical Rademacher complexity (Bartlett and Mendelson, 2002; Mohri et al., 2018; Shalev-Shwartz and Ben-David, 2014), with probability at least $1-\delta$ we have
\[ \sup_{\mathbf{W}\in\mathcal{W}}|\mathcal{E}_S(\mathbf{W})-\mathcal{E}_{\mathcal{D}}(\mathbf{W})| = \sup_{\mathbf{W}\in\mathcal{W}}\bigg|-\frac{1}{n}\sum_{i=1}^n\ell'\big[y_i\cdot f_{\mathbf{W}}(x_i)\big] + \mathbb{E}_{(x,y)\sim\mathcal{D}}\,\ell'\big[y\cdot f_{\mathbf{W}}(x)\big]\bigg| \le 2\hat{\mathfrak{R}}_n(\mathcal{G}) + C_1\sqrt{\frac{\log(1/\delta)}{n}}, \]
where $C_1$ is an absolute constant, and
\[ \hat{\mathfrak{R}}_n(\mathcal{G}) = \mathbb{E}_{\xi_i\sim\mathrm{Unif}(\{\pm 1\})}\bigg\{\sup_{\mathbf{W}\in\mathcal{W}}\frac{1}{n}\sum_{i=1}^n\xi_i\ell'\big[y_i\cdot f_{\mathbf{W}}(x_i)\big]\bigg\} \]
is the empirical Rademacher complexity of the function class $\mathcal{G}$. We now provide two bounds on $\hat{\mathfrak{R}}_n(\mathcal{G})$, whose combination gives the final result of Lemma A.3. First, by Corollary 5.35 in (Vershynin, 2010), with probability at least $1-L\cdot\exp(-\Omega(m))$, $\|\mathbf{W}^{(0)}_l\|_2 \le 3$ for all $l \in [L]$. Therefore, for all $\mathbf{W}\in\mathcal{W}$, we have $\|\mathbf{W}_l\|_2 \le 4$. Moreover, standard concentration inequalities on the norm of the first row of $\mathbf{W}^{(0)}_l$ also imply that $\|\mathbf{W}_l\|_2 \ge 0.5$ for all $\mathbf{W}\in\mathcal{W}$ and $l \in [L]$.
Therefore, an adaptation of the bound in (Bartlett et al., 2017)¶ gives
\[ \hat{\mathfrak{R}}_n(\mathcal{F}) \le \tilde{O}\Bigg(\sup_{\mathbf{W}\in\mathcal{W}}\Bigg\{\frac{m^{1/2}}{\sqrt{n}}\cdot\Bigg[\prod_{l=1}^L\|\mathbf{W}_l\|_2\Bigg]\cdot\Bigg[\sum_{l=1}^L\frac{\|\mathbf{W}_l^\top-\mathbf{W}_l^{(0)\top}\|_{2,1}^{2/3}}{\|\mathbf{W}_l\|_2^{2/3}}\Bigg]^{3/2}\Bigg\}\Bigg) \]
\[ \le \tilde{O}\Bigg(\sup_{\mathbf{W}\in\mathcal{W}}\Bigg\{\frac{4^Lm^{1/2}}{\sqrt{n}}\cdot\Bigg[\sum_{l=1}^L\big(\sqrt{m}\cdot\|\mathbf{W}_l^\top-\mathbf{W}_l^{(0)\top}\|_F\big)^{2/3}\Bigg]^{3/2}\Bigg\}\Bigg) \le \tilde{O}\bigg(4^LL^{3/2}\tilde{R}\cdot\sqrt{\frac{m}{n}}\bigg). \quad \text{(C.7)} \]
We now derive the second bound on $\hat{\mathfrak{R}}_n(\mathcal{G})$, which is inspired by the proof provided in (Cao and Gu, 2020). Since $y \in \{+1,-1\}$, $|\ell'(z)| \le 1$ and $\ell'(z)$ is 1-Lipschitz continuous, by standard empirical Rademacher complexity bounds (Bartlett and Mendelson, 2002; Mohri et al., 2018; Shalev-Shwartz and Ben-David, 2014), we have
\[ \hat{\mathfrak{R}}_n(\mathcal{G}) \le \hat{\mathfrak{R}}_n(\mathcal{F}) = \mathbb{E}_{\xi_i\sim\mathrm{Unif}(\{\pm 1\})}\bigg[\sup_{\mathbf{W}\in\mathcal{W}}\frac{1}{n}\sum_{i=1}^n\xi_if_{\mathbf{W}}(x_i)\bigg], \]
¶Bartlett et al. (2017) only proved the Rademacher complexity bound for the composition of the ramp loss and the neural network function. In our setting, the ramp loss is essentially replaced with the $-\ell'(\cdot)$ function, which is bounded and 1-Lipschitz continuous. The proof in our setting is therefore exactly the same as the proof given in (Bartlett et al., 2017), and we can apply Theorem 3.3 and Lemma A.5 in (Bartlett et al., 2017) to obtain the desired bound we present here.
where $\hat{\mathfrak{R}}_n(\mathcal{F})$ is the empirical Rademacher complexity of the function class $\mathcal{F}$. We have
\[ \hat{\mathfrak{R}}_n[\mathcal{F}] \le \underbrace{\mathbb{E}_\xi\bigg\{\sup_{\mathbf{W}\in\mathcal{W}}\frac{1}{n}\sum_{i=1}^n\xi_i\big[f_{\mathbf{W}}(x_i)-F_{\mathbf{W}^{(0)},\mathbf{W}}(x_i)\big]\bigg\}}_{I_1} + \underbrace{\mathbb{E}_\xi\bigg\{\sup_{\mathbf{W}\in\mathcal{W}}\frac{1}{n}\sum_{i=1}^n\xi_iF_{\mathbf{W}^{(0)},\mathbf{W}}(x_i)\bigg\}}_{I_2}, \quad \text{(C.8)} \]
where $F_{\mathbf{W}^{(0)},\mathbf{W}}(x) = f_{\mathbf{W}^{(0)}}(x) + \langle\nabla_{\mathbf{W}}f_{\mathbf{W}^{(0)}}(x), \mathbf{W}-\mathbf{W}^{(0)}\rangle$. For $I_1$, by Lemma 4.1 in (Cao and Gu, 2019), with probability at least $1-\delta/2$ we have
\[ I_1 \le \max_{i\in[n]}\big|f_{\mathbf{W}}(x_i)-F_{\mathbf{W}^{(0)},\mathbf{W}}(x_i)\big| \le O\big(L^3\tilde{R}^{4/3}m^{-1/6}\sqrt{\log(m)}\big). \]
For $I_2$, note that $\mathbb{E}_\xi\big[\sup_{\mathbf{W}\in\mathcal{W}}\sum_{i=1}^n\xi_if_{\mathbf{W}^{(0)}}(x_i)\big] = 0$. By the Cauchy-Schwarz inequality, we have
\[ I_2 = \frac{1}{n}\sum_{l=1}^L\mathbb{E}_\xi\bigg\{\sup_{\|\tilde{\mathbf{W}}_l\|_F\le\tilde{R}m^{-1/2}}\mathrm{Tr}\bigg[\tilde{\mathbf{W}}_l^\top\sum_{i=1}^n\xi_i\nabla_{\mathbf{W}_l}f_{\mathbf{W}^{(0)}}(x_i)\bigg]\bigg\} \le \frac{\tilde{R}m^{-1/2}}{n}\sum_{l=1}^L\mathbb{E}_\xi\bigg[\bigg\|\sum_{i=1}^n\xi_i\nabla_{\mathbf{W}_l}f_{\mathbf{W}^{(0)}}(x_i)\bigg\|_F\bigg]. \]
Therefore
\[ I_2 \le \frac{\tilde{R}m^{-1/2}}{n}\sum_{l=1}^L\sqrt{\mathbb{E}_\xi\bigg[\bigg\|\sum_{i=1}^n\xi_i\nabla_{\mathbf{W}_l}f_{\mathbf{W}^{(0)}}(x_i)\bigg\|_F^2\bigg]} = \frac{\tilde{R}m^{-1/2}}{n}\sum_{l=1}^L\sqrt{\sum_{i=1}^n\big\|\nabla_{\mathbf{W}_l}f_{\mathbf{W}^{(0)}}(x_i)\big\|_F^2} \le O\bigg(\frac{L\cdot\tilde{R}}{\sqrt{n}}\bigg), \]
where we apply Jensen's inequality to obtain the first inequality, and the last inequality follows from Lemma B.3 in (Cao and Gu, 2019). Combining the bounds on $I_1$ and $I_2$ gives
\[ \hat{\mathfrak{R}}_n[\mathcal{F}] \le \tilde{O}\bigg(\frac{L\tilde{R}}{\sqrt{n}} + \frac{L^3\tilde{R}^{4/3}}{m^{1/6}}\bigg). \]
Further combining this bound with (C.7) and rescaling $\delta$ completes the proof." }, { "heading": "C.3 PROOF OF LEMMA A.4", "text": "Proof of Lemma A.4. Different from the proof of Lemma 5.1, online SGD only queries one data point to update the model parameters in each iteration, i.e., $\mathbf{W}^{(i+1)} = \mathbf{W}^{(i)} - \eta\nabla L_{i+1}(\mathbf{W}^{(i)})$. By this update rule, we have
\[ \|\mathbf{W}^{(i)}-\mathbf{W}^*\|_F^2 - \|\mathbf{W}^{(i+1)}-\mathbf{W}^*\|_F^2 = 2\eta\langle\mathbf{W}^{(i)}-\mathbf{W}^*, \nabla_{\mathbf{W}}L_{i+1}(\mathbf{W}^{(i)})\rangle - \eta^2\sum_{l=1}^L\|\nabla_{\mathbf{W}_l}L_{i+1}(\mathbf{W}^{(i)})\|_F^2. \quad \text{(C.9)} \]
With exactly the same proof as (C.5) in the proof of Lemma 5.1, we have
\[ \langle\mathbf{W}^{(t)}-\mathbf{W}^*, \nabla_{\mathbf{W}}L_i(\mathbf{W}^{(t)})\rangle \ge \big(1-2\epsilon_{\mathrm{app}}(\tau)\big)\ell\big(y_if_{\mathbf{W}^{(t)}}(x_i)\big) - \ell\big(y_iF_{\mathbf{W}^{(0)},\mathbf{W}^*}(x_i)\big) \quad \text{(C.10)} \]
for all $i = 0,\dots,n'-1$. By the facts that $-\ell'(\cdot) \le \ell(\cdot)$ and $-\ell'(\cdot) \le 1$, we have
\[ \sum_{l=1}^L\|\nabla_{\mathbf{W}_l}L_{i+1}(\mathbf{W}^{(i)})\|_F^2 \le \sum_{l=1}^L\ell\big(y_{i+1}f_{\mathbf{W}^{(i)}}(x_{i+1})\big)\cdot\|\nabla_{\mathbf{W}_l}f_{\mathbf{W}^{(i)}}(x_{i+1})\|_F^2 \le O\big(LM(\tau)^2\big)\cdot L_{i+1}(\mathbf{W}^{(i)}). \quad \text{(C.11)} \]
Then plugging (C.10) and (C.11) into (C.9) gives
\[ \|\mathbf{W}^{(i)}-\mathbf{W}^*\|_F^2 - \|\mathbf{W}^{(i+1)}-\mathbf{W}^*\|_F^2 \ge \big(2-4\epsilon_{\mathrm{app}}(\tau)\big)\eta L_{i+1}(\mathbf{W}^{(i)}) - 2\eta\ell\big(y_iF_{\mathbf{W}^{(0)},\mathbf{W}^*}(x_i)\big) - O\big(\eta^2LM(\tau)^2\big)L_{i+1}(\mathbf{W}^{(i)}) \]
\[ \ge \Big(\frac{3}{2}-4\epsilon_{\mathrm{app}}(\tau)\Big)\eta L_{i+1}(\mathbf{W}^{(i)}) - 2\eta\ell\big(y_iF_{\mathbf{W}^{(0)},\mathbf{W}^*}(x_i)\big), \]
where the last inequality is by $\eta = O(L^{-1}M(\tau)^{-2})$ and merging the $O\big(\eta^2LM(\tau)^2\big)$ term into the first term. Taking the telescope sum over $i = 0,\dots,n'-1$, we obtain
\[ \|\mathbf{W}^{(0)}-\mathbf{W}^*\|_F^2 - \|\mathbf{W}^{(n')}-\mathbf{W}^*\|_F^2 \ge \Big(\frac{3}{2}-4\epsilon_{\mathrm{app}}(\tau)\Big)\eta\sum_{i=1}^{n'}L_i(\mathbf{W}^{(i-1)}) - 2\eta\sum_{i=1}^{n'}\ell\big(y_iF_{\mathbf{W}^{(0)},\mathbf{W}^*}(x_i)\big) \]
\[ \ge \Big(\frac{3}{2}-4\epsilon_{\mathrm{app}}(\tau)\Big)\eta\sum_{i=1}^{n'}L_i(\mathbf{W}^{(i-1)}) - 2\eta\sum_{i=1}^{n}\ell\big(y_iF_{\mathbf{W}^{(0)},\mathbf{W}^*}(x_i)\big) \ge \Big(\frac{3}{2}-4\epsilon_{\mathrm{app}}(\tau)\Big)\eta\sum_{i=1}^{n'}L_i(\mathbf{W}^{(i-1)}) - 2n\eta\,\epsilon_{\mathrm{NTRF}}. \]
This finishes the proof." }, { "heading": "D EXPERIMENTS", "text": "In this section, we conduct some simple experiments to validate our theory. Since our paper mainly focuses on binary classification, we use a subset of the original CIFAR10 dataset (Krizhevsky et al., 2009) which only has two classes of images. We train a 5-layer fully-connected ReLU network on this binary classification dataset with different sample sizes ($n \in \{100, 200, 500, 1000, 2000, 5000, 10000\}$), and plot the minimal neural network width required to achieve zero training error in Figure 1 (solid line). We also plot $O(n)$, $O(\log^3(n))$, $O(\log^2(n))$ and $O(\log(n))$ in dashed lines for reference. It is evident that the required network width to achieve zero training error is polylogarithmic in the sample size $n$, which is consistent with our theory." } ]
2021
HOW MUCH OVER-PARAMETERIZATION IS SUFFICIENT TO LEARN DEEP RELU NETWORKS?
SP:e7caebe84a63ae1f2e8eda175eec514684a7a2ee
[ "This paper introduces a method for pruning during the training process in order to filter out unimportant/redundant components of the network continuously to speed up training and perform gradual pruning over the training process. The proposed approach is novel in the sense that the vast amount of prior work on pruning has focused on either (i) pruning on network initialization (e.g., SNIP, etc.) or (ii) pruning after the network has been fully trained (e.g., Magnitude Pruning, among many others). The introduced method uses the Taylor-series based saliency criterion (of Molchanov et al., 2017) and uses a multi-output Gaussian process to predict future saliencies and to determine whether a parameter can be safely removed early on during training.", "This paper introduces a new method to accelerate training by saliency-based pruning. The method predicts future saliency for neurons based on observed saliency with a multi-output Gaussian process (MOGP), then greedily prunes neurons with least saliency at fixed intervals during training. The authors provide extensive mathematical analysis to show that the algorithm produces pruning mask solutions that are close to the optimum of the formulated optimization (the reviewer is unable to verify). The experimental results showed improvements in task accuracies of trained models but with longer training times. " ]
Pruning is an approach to alleviating the overparameterization of a deep neural network (DNN) by zeroing out, or pruning, DNN elements with little to no efficacy at a given task. In contrast to related works that prune before or after training, this paper presents a novel method to perform early pruning of DNN elements (e.g., neurons or convolutional filters) during the training process while preserving performance upon convergence. To achieve this, we model the future efficacy of DNN elements in a Bayesian manner, conditioned upon efficacy data collected during training, and prune DNN elements which are predicted to have low efficacy after training completion. Empirical evaluations show that the proposed Bayesian early pruning improves the computational efficiency of DNN training. Using our approach, we achieve a 48.6% faster training time for ResNet-50 on ImageNet while reaching a validation accuracy of 72.5%.
[]
[ { "authors": [ "Zeyuan Allen-Zhu", "Yuanzhi Li", "Yingyu Liang" ], "title": "Learning and generalization in overparameterized neural networks, going beyond two layers", "venue": "In Proc. NeurIPS,", "year": 2019 }, { "authors": [ "Mauricio A. Álvarez", "Neil D. Lawrence" ], "title": "Computationally efficient convolved multiple output", "venue": "Gaussian processes. JMLR,", "year": 2011 }, { "authors": [ "Karl Johan Åström", "Tore Hägglund", "Chang C Hang", "Weng K Ho" ], "title": "Automatic tuning and adaptation for pid controllers-a survey", "venue": "Control Engineering Practice,", "year": 1993 }, { "authors": [ "Guillaume Bellec", "David Kappel", "Wolfgang Maass", "Robert A. Legenstein" ], "title": "Deep rewiring: Training very sparse deep networks", "venue": "In Proc. ICLR,", "year": 2018 }, { "authors": [ "Richard E Bellman" ], "title": "Adaptive control processes: a guided tour", "venue": "Princeton university press,", "year": 2015 }, { "authors": [ "Aydin Buluç", "John R. Gilbert" ], "title": "Challenges and advances in parallel sparse matrix-matrix multiplication", "venue": "In Proc. ICCP, pp", "year": 2008 }, { "authors": [ "Matthieu Courbariaux", "Yoshua Bengio", "Jean-Pierre David" ], "title": "Binaryconnect: Training deep neural networks with binary weights during propagations", "venue": null, "year": 2015 }, { "authors": [ "Xiaoliang Dai", "Hongxu Yin", "Niraj K. Jha" ], "title": "Nest: A neural network synthesis tool based on a grow-and-prune paradigm", "venue": "IEEE Trans. Computers,", "year": 2019 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Fei-Fei Li" ], "title": "ImageNet: A large-scale hierarchical image database", "venue": "In Proc. CVPR,", "year": 2009 }, { "authors": [ "Emily L. Denton", "Wojciech Zaremba", "Joan Bruna", "Yann LeCun", "Rob Fergus" ], "title": "Exploiting linear structure within convolutional networks for efficient evaluation", "venue": "In Proc. NeurIPS,", "year": 2014 }, { "authors": [ "Tim Dettmers", "Luke Zettlemoyer" ], "title": "Sparse networks from scratch: Faster training without losing performance", "venue": null, "year": 1907 }, { "authors": [ "Xin Dong", "Shangyu Chen", "Sinno Jialin Pan" ], "title": "Learning to prune deep neural networks via layer-wise optimal brain surgeon", "venue": "In Proc. NeurIPS,", "year": 2017 }, { "authors": [ "Jonathan Frankle", "Michael Carbin" ], "title": "The lottery ticket hypothesis: Finding sparse, trainable neural networks", "venue": "In Proc. ICLR,", "year": 2019 }, { "authors": [ "Trevor Gale", "Erich Elsen", "Sara Hooker" ], "title": "The state of sparsity in deep neural networks", "venue": null, "year": 1902 }, { "authors": [ "Yiwen Guo", "Anbang Yao", "Yurong Chen" ], "title": "Dynamic network surgery for efficient DNNs", "venue": "In Proc. NeurIPS,", "year": 2016 }, { "authors": [ "Song Han", "Jeff Pool", "John Tran", "William Dally" ], "title": "Learning both weights and connections for efficient neural networks", "venue": "In Proc. NeurIPS,", "year": 2015 }, { "authors": [ "Babak Hassibi", "David G. Stork" ], "title": "Second order derivatives for network pruning: Optimal brain surgeon", "venue": "In Proc. NeurIPS, pp", "year": 1992 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proc. 
CVPR, pp", "year": 2016 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Identity mappings in deep residual networks", "venue": "In Proc. ECCV, pp", "year": 2016 }, { "authors": [ "James Hensman", "Alexander Matthews", "Zoubin Ghahramani" ], "title": "Scalable variational Gaussian process classification", "venue": "In Proc. AISTATS, pp", "year": 2015 }, { "authors": [ "Geoffrey E. Hinton", "Nitish Srivastava", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan R. Salakhutdinov" ], "title": "Improving neural networks by preventing co-adaptation of feature detectors", "venue": null, "year": 2012 }, { "authors": [ "Geoffrey E. Hinton", "Oriol Vinyals", "Jeffrey Dean" ], "title": "Distilling the knowledge in a neural network", "venue": null, "year": 2015 }, { "authors": [ "Itay Hubara", "Matthieu Courbariaux", "Daniel Soudry", "Ran El-Yaniv", "Yoshua Bengio" ], "title": "Quantized neural networks: Training neural networks with low precision weights and activations", "venue": null, "year": 2017 }, { "authors": [ "Max Jaderberg", "Andrea Vedaldi", "Andrew Zisserman" ], "title": "Speeding up convolutional neural networks with low rank expansions", "venue": "In Proc. BMVC,", "year": 2014 }, { "authors": [ "Ehud D. Karnin" ], "title": "A simple procedure for pruning back-propagation trained neural networks", "venue": "IEEE Trans. Neural Networks,", "year": 1990 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In Proc. ICLR,", "year": 2015 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": "Master’s thesis, Univ. Toronto,", "year": 2009 }, { "authors": [ "Yann LeCun", "John S. Denker", "Sara A. Solla" ], "title": "Optimal brain damage", "venue": "In Proc. NeurIPS, pp", "year": 1989 }, { "authors": [ "Namhoon Lee", "Thalaiyasingam Ajanthan", "Philip H.S. Torr" ], "title": "Snip: Single-shot network pruning based on connection sensitivity", "venue": "In Proc. ICLR,", "year": 2019 }, { "authors": [ "Hao Li", "Asim Kadav", "Igor Durdanovic", "Hanan Samet", "Hans Peter Graf" ], "title": "Pruning filters for efficient convnets", "venue": "In Proc. ICLR,", "year": 2017 }, { "authors": [ "Liang Lu", "Michelle Guo", "Steve Renals" ], "title": "Knowledge distillation for small-footprint highway networks", "venue": "In Proc. ICASSP,", "year": 2017 }, { "authors": [ "Sangkug Lym", "Esha Choukse", "Siavash Zangeneh", "Wei Wen", "Sujay Sanghavi", "Mattan Erez" ], "title": "Prunetrain: fast neural network training by dynamic sparse model reconfiguration", "venue": "In Proc. SC,", "year": 2019 }, { "authors": [ "Alexander Matthews", "Mark van der Wilk", "Tom Nickson", "Keisuke Fujii", "Alexis Boukouvalas", "Pablo León-Villagrá", "Zoubin Ghahramani", "James Hensman" ], "title": "GPflow: A Gaussian process library using tensorflow", "venue": null, "year": 2017 }, { "authors": [ "Paulius Micikevicius", "Sharan Narang", "Jonah Alben", "Gregory F. Diamos", "Erich Elsen", "David Garcı́a", "Boris Ginsburg", "Michael Houston", "Oleksii Kuchaiev", "Ganesh Venkatesh", "Hao Wu" ], "title": "Mixed precision training", "venue": "In Proc. ICLR,", "year": 2018 }, { "authors": [ "Decebal Constantin Mocanu", "Elena Mocanu", "Peter Stone", "Phuong H. 
Nguyen", "Madeleine Gibescu", "Antonio Liotta" ], "title": "Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science", "venue": null, "year": 2018 }, { "authors": [ "Pavlo Molchanov", "Stephen Tyree", "Tero Karras", "Timo Aila", "Jan Kautz" ], "title": "Pruning convolutional neural networks for resource efficient inference", "venue": "In Proc. ICLR,", "year": 2017 }, { "authors": [ "Hesham Mostafa", "Xin Wang" ], "title": "Parameter efficient training of deep convolutional neural networks by dynamic sparse reparameterization", "venue": "In Proc. ICML,", "year": 2019 }, { "authors": [ "Michael Mozer", "Paul Smolensky" ], "title": "Skeletonization: A technique for trimming the fat from a network via relevance assessment", "venue": "In Proc. NeurIPS,", "year": 1988 }, { "authors": [ "Saralees Nadarajah", "Samuel Kotz" ], "title": "Exact distribution of the max/min of two gaussian random variables", "venue": "Trans. VLSI,", "year": 2008 }, { "authors": [ "Sharan Narang", "Greg Diamos", "Shubho Sengupta", "Erich Elsen" ], "title": "Exploring sparsity in recurrent neural networks", "venue": "In Proc. ICLR,", "year": 2017 }, { "authors": [ "Steven J. Nowlan", "Geoffrey E. Hinton" ], "title": "Simplifying neural networks by soft weight-sharing", "venue": "Neural Computation,", "year": 1992 }, { "authors": [ "Adam Polyak", "Lior Wolf" ], "title": "Channel-level acceleration of deep face representations", "venue": "IEEE Access,", "year": 2015 }, { "authors": [ "Nitish Srivastava", "Geoffrey E. Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: A simple way to prevent neural networks from overfitting", "venue": null, "year": 1929 }, { "authors": [ "Frederick Tung", "Greg Mori" ], "title": "Similarity-preserving knowledge distillation", "venue": "In Proc. ICCV,", "year": 2019 }, { "authors": [ "Karen Ullrich", "Edward Meeds", "Max Welling" ], "title": "Soft weight-sharing for neural network compression", "venue": "In Proc. ICLR,", "year": 2017 }, { "authors": [ "Chaoqi Wang", "Guodong Zhang", "Roger B. Grosse" ], "title": "Picking winning tickets before training by preserving gradient flow", "venue": "In Proc. ICLR,", "year": 2020 }, { "authors": [ "Wei Wen", "Chunpeng Wu", "Yandan Wang", "Yiran Chen", "Hai Li" ], "title": "Learning structured sparsity in deep neural networks", "venue": "In Proc. NeurIPS,", "year": 2016 }, { "authors": [ "Carl Yang", "Aydin Buluç", "John D. Owens" ], "title": "Design principles for sparse matrix multiplication on the GPU", "venue": "In Proc. Euro-Par,", "year": 2018 }, { "authors": [ "Junho Yim", "Donggyu Joo", "Ji-Hoon Bae", "Junmo Kim" ], "title": "A gift from knowledge distillation: Fast optimization, network minimization and transfer learning", "venue": "In Proc. CVPR,", "year": 2017 }, { "authors": [], "title": "s̃1:t] does not constrain the trained network size (3b), we train ResNet-50 under equivalent inference cost as a network trained by PruneTrain. To compute train/inference cost (FLOPs) for a convolutional layer, we used a formula defined in Molchanov et al", "venue": null, "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep neural networks (DNNs) are known to be overparameterized (Allen-Zhu et al., 2019) as they usually have more learnable parameters than needed for a given learning task. So, a trained DNN contains many ineffectual parameters that can be safely pruned or zeroed out with little/no effect on its predictive accuracy. Pruning (LeCun et al., 1989) is an approach to alleviating overparameterization of a DNN by identifying and removing its ineffectual parameters while preserving its predictive accuracy on the validation/test dataset. Pruning is typically applied to the DNN after training to speed up testing-time evaluation. For standard image classification tasks with MNIST, CIFAR-10, and ImageNet datasets, it can reduce the number of learnable parameters by up to 50% or more while maintaining test accuracy (Han et al., 2015; Li et al., 2017; Molchanov et al., 2017).\nIn particular, the overparameterization of a DNN also leads to considerable training time being wasted on those DNN elements (e.g., connection weights, neurons, or convolutional filters) which are eventually ineffectual after training and can thus be safely pruned. Our work in this paper considers early pruning of such DNN elements by identifying and removing them throughout the training process instead of after training.1 As a result, this can significantly reduce the time incurred by the training process without compromising the final test accuracy (upon convergence) much.\nRecent work (Section 5) in foresight pruning (Lee et al., 2019; Wang et al., 2020) show that pruning heuristics applied at initialization work well to prune connection weights without significantly degrading performance. In contrast to these work, we prune throughout the training procedure, which improves performance after convergence of DNNs, albeit with somewhat longer training times.\nIn this work, we pose early pruning as a constrained optimization problem (Section 3.1). A key challenge in the optimization is accurately modeling the future efficacy of DNN elements. We achieve this through the use of multi-output Gaussian process which models the belief of future efficacy conditioned upon efficacy measurements collected during training (Section 3.2). Although the posed optimization problem is NP-hard, we derive an efficient Bayesian early pruning (BEP) approximation algorithm, which appropriately balances the inherent training time vs. performance tradeoff in pruning prior to convergence (Section 3.3). Our algorithm relies on a measure of network element efficacy, termed saliency (LeCun et al., 1989). The development of saliency functions is an active area of research with no clear optimal choice. To accomodate this, our algorithm is agnostic, and therefore\n1In contrast, foresight pruning (Wang et al., 2020) removes DNN elements prior to the training process.\nflexible, to changes in saliency function. We use BEP to prune neurons and convolutional filters to achieve practical speedup during training (Section 4).2 Our approach also compares favorably to state-of-the-art works such as SNIP (Lee et al., 2019), GraSP (Wang et al., 2020), and momentum based dynamic sparse reparameterization (Dettmers & Zettlemoyer, 2019)." }, { "heading": "2 PRUNING", "text": "Consider a dataset of D training examples X = {x1, . . . ,xD},Y = {y1, . . . , yD} and a neural network Nvt parameterized by a vector of M pruneable network elements (e.g. 
" }, { "heading": "2 PRUNING", "text": "Consider a dataset of $D$ training examples $\mathcal{X} = \{x_1,\dots,x_D\}$, $\mathcal{Y} = \{y_1,\dots,y_D\}$ and a neural network $\mathcal{N}_{\mathbf{v}_t}$ parameterized by a vector of $M$ pruneable network elements (e.g., weight parameters, neurons, or convolutional filters) $\mathbf{v}_t \triangleq [v^a_t]_{a=1,\dots,M}$, where $\mathbf{v}_t$ represents the network elements after $t$ iterations of stochastic gradient descent (SGD) for $t = 1,\dots,T$. Let $\mathcal{L}(\mathcal{X},\mathcal{Y};\mathcal{N}_{\mathbf{v}_t})$ be the loss function for the neural network $\mathcal{N}_{\mathbf{v}_t}$. Pruning aims at refining the network elements $\mathbf{v}_t$ given some sparsity budget $B$ while preserving the accuracy of the neural network after convergence (i.e., $\mathcal{N}_{\mathbf{v}_T}$), which can be stated as a constrained optimization problem (Molchanov et al., 2017):
\[ \min_{\mathbf{m}\in\{0,1\}^M} \big|\mathcal{L}(\mathcal{X},\mathcal{Y};\mathcal{N}_{\mathbf{m}\odot\mathbf{v}_T}) - \mathcal{L}(\mathcal{X},\mathcal{Y};\mathcal{N}_{\mathbf{v}_T})\big| \quad \text{s.t.}\ \|\mathbf{m}\|_0 \le B \quad \text{(1)} \]
where $\odot$ is the Hadamard product and $\mathbf{m}$ is a pruning mask. Note that we abuse the Hadamard product for notational simplicity: for $a = 1,\dots,M$, $m^a \times v^a_T$ corresponds to pruning $v^a_T$ if $m^a = 0$, and keeping $v^a_T$ otherwise. Pruning a network element refers to zeroing the network element or the weight parameters which compute the network element. Any weight parameters which reference the output of the pruned network element are also zeroed, since the element outputs a constant 0.
The above optimization problem is difficult due to the NP-hardness of combinatorial optimization. This leads to the approach of using a saliency function $s$ which measures the efficacy of network elements at minimizing the loss function. A network element with small saliency can be pruned since it is not salient in minimizing the loss function. Consequently, pruning can be done by maximizing the saliency of the network elements given the sparsity budget $B$:
\[ \max_{\mathbf{m}\in\{0,1\}^M} \sum_{a=1}^M m^a\,s(a;\mathcal{X},\mathcal{Y},\mathcal{N}_{\mathbf{v}_T},\mathcal{L}) \quad \text{s.t.}\ \|\mathbf{m}\|_0 \le B \quad \text{(2)} \]
where $s(a;\mathcal{X},\mathcal{Y},\mathcal{N}_{\mathbf{v}_T},\mathcal{L})$ measures the saliency of $v^a_T$ at minimizing $\mathcal{L}$ after convergence through $T$ iterations of SGD. This optimization problem can be solved efficiently by selecting the $B$ most salient network elements in $\mathbf{v}_T$.
The construction of the saliency function has been discussed in many existing works: some approaches derive the saliency function from first-order (LeCun et al., 1989; Molchanov et al., 2017) and second-order (Hassibi & Stork, 1992; Wang et al., 2020) Taylor series approximations of $\mathcal{L}$. Other common saliency functions include the L1 (Li et al., 2017) or L2 (Wen et al., 2016) norm of the network element weights, as well as mean activation (Polyak & Wolf, 2015). In this work, we use a first-order Taylor series approximation saliency function defined for neurons and convolutional filters³ (Molchanov et al., 2017); however, our approach remains flexible to an arbitrary choice of saliency function on a plug-and-play basis.
²Popular deep learning libraries do not accelerate sparse matrix operations over dense matrix operations. Thus, pruning network connections cannot be easily capitalized upon with performance improvements. It is also unclear whether moderately sparse matrix operations (i.e., operations on matrices generated by connection pruning) can be significantly accelerated on massively parallel architectures such as GPUs (see Yang et al. (2018) Fig. 7). See Section 5 in Buluç & Gilbert (2008) for challenges in parallel sparse matrix multiplication.
³Implementation details of this saliency function can be found in Appendix A.1.
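To make the greedy solution of (2) concrete, the following is a minimal NumPy sketch with hypothetical saliency values (the saliency function itself is defined in Appendix A.1):

import numpy as np

def greedy_prune_mask(saliency: np.ndarray, budget: int) -> np.ndarray:
    # Solve (2): keep the `budget` most salient elements of v_T.
    # saliency: length-M vector of scores s(a; ...) for a = 1..M.
    # Returns a binary mask m with ||m||_0 <= budget maximizing m . s.
    m = np.zeros_like(saliency, dtype=np.int8)
    keep = np.argsort(-saliency)[:budget]  # indices of the top-`budget` saliencies
    m[keep] = 1
    return m

# Example with M = 6 hypothetical saliency measurements and budget B = 3.
s = np.array([0.9, 0.05, 0.4, 0.7, 0.01, 0.2])
print(greedy_prune_mask(s, budget=3))  # -> [1 0 1 1 0 0]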
" }, { "heading": "3 BAYESIAN EARLY PRUNING", "text": "" }, { "heading": "3.1 PROBLEM STATEMENT", "text": "As mentioned above, existing pruning works based on the saliency function typically operate after training convergence (i.e., (2)) to speed up testing-time evaluation, which wastes considerable time on training those network elements which will eventually be pruned. To resolve this issue, we extend the pruning problem definition (2) along the temporal dimension, allowing network elements to be pruned during the training process consisting of $T$ iterations of SGD.
Let $s^a_t \triangleq s(a;\mathcal{X},\mathcal{Y},\mathcal{N}_{\mathbf{v}_t},\mathcal{L})$ be a random variable which denotes the saliency of network element $v^a_t$ after $t$ iterations of SGD, $\mathbf{s}_t \triangleq [s^a_t]_{a=1,\dots,M}$ for $t = 1,\dots,T$, and $\mathbf{s}_{\tau_1:\tau_2} \triangleq [\mathbf{s}_t]_{t=\tau_1,\dots,\tau_2}$ be a vector of saliency of all the network elements between iterations $\tau_1$ and $\tau_2$. Our early pruning algorithm is designed with the goal of maximizing the saliency of the unpruned elements after iteration $T$, yet allowing for pruning at each iteration $t$ given some computational budget $B_{t,c}$ for $t = 1,\dots,T$:
\[ \rho_T(\mathbf{m}_{T-1}, B_{T,c}, B_s) \triangleq \max_{\mathbf{m}_T} \mathbf{m}_T\cdot\mathbf{s}_T \quad \text{(3a)} \]
\[ \text{s.t.}\ \|\mathbf{m}_T\|_0 \le B_s \ \text{(3b)}, \qquad \mathbf{m}_T\,\dot\le\,\mathbf{m}_{T-1} \ \text{(3c)}, \qquad B_{T,c} \ge 0 \ \text{(3d)} \]
\[ \rho_t(\mathbf{m}_{t-1}, B_{t,c}, B_s) \triangleq \max_{\mathbf{m}_t} \mathbb{E}_{p(\mathbf{s}_{t+1}|\tilde{\mathbf{s}}_{1:t})}\big[\rho_{t+1}(\mathbf{m}_t, B_{t,c}-\|\mathbf{m}_t\|_0, B_s)\big] \quad \text{(4a)} \]
\[ \text{s.t.}\ \mathbf{m}_t\,\dot\le\,\mathbf{m}_{t-1} \ \text{(4b)} \]
where $B_s$ is the trained network sparsity budget, $\tilde{\mathbf{s}}_{1:t}$ is a vector of observed values for $\mathbf{s}_{1:t}$, $\mathbf{m}_0$ is an $M$-dimensional vector of ones, and $\mathbf{m}_t\,\dot\le\,\mathbf{m}_{t-1}$ represents an element-wise comparison between $\mathbf{m}_t$ and $\mathbf{m}_{t-1}$: $m^a_t \le m^a_{t-1}$ for $a = 1,\dots,M$. At each iteration $t$, the saliency $\mathbf{s}_t$ is observed, and $\mathbf{m}_t \in \{0,1\}^M$ in $\rho_t$ represents a pruning decision performed to maximize the expectation of $\rho_{t+1}$ conditioned upon the saliency measurements $\mathbf{s}_{1:t}$ collected up to and including iteration $t$. This recursive structure terminates with base case $\rho_T$, where the saliency of the unpruned elements is maximized after $T$ iterations of training.
In the above early pruning formulation⁴, constraints (3c) and (4b) ensure pruning is performed in a practical manner whereby once a network element is pruned, it can no longer be recovered at a later training iteration. We define a trained network sparsity budget $B_s$ (3b), which may differ significantly from the initial network size $\|\mathbf{m}_0\|_0$ (e.g., when the network is trained on GPUs but deployed on resource-constrained edge or mobile devices). We also constrain a total computational effort budget $B_{t,c}$, which is reduced per training iteration $t$ by the number of unpruned network elements $\|\mathbf{m}_t\|_0$. We constrain $B_{T,c} \ge 0$ (3d) to ensure training completion within the specified computational budget. Here we assume that a sparser pruning mask $\mathbf{m}_t$ corresponds to lower computational effort during training iteration $t$ due to updating fewer network elements. Finally, (3a) maximizes the saliency with a pruning mask $\mathbf{m}_T$ constrained by a sparsity budget $B_s$ (3b). Our early pruning formulation balances the saliency of network elements after convergence against the total computational effort to train such a network (i.e., $\mathbf{m}_T\cdot\mathbf{s}_T$ vs. $\sum_{t=1}^T\|\mathbf{m}_t\|_0$). This appropriately captures the balancing act of training-time early pruning, whereby computational effort is saved by early pruning network elements while preserving the saliency of the remaining network elements after convergence.
⁴In contrast to PruneTrain (Lym et al., 2019), our problem definition balances training time vs. performance under an additional constraint on the trained network size (3b). We discuss this further in Section 5.
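To make the recursion and its budgets concrete, here is a minimal NumPy sketch (toy numbers) that charges $\|\mathbf{m}_t\|_0$ of compute per iteration for a candidate pruning schedule, checks constraints (3b) and (3d), and evaluates the terminal objective (3a) for the schedule's final mask:

import numpy as np

def evaluate_schedule(masks, s_T, B_1c, B_s):
    # masks: list of T binary masks m_1, ..., m_T (monotone non-increasing per (3c)/(4b)).
    # s_T:   saliency vector observed at iteration T.
    # B_1c:  total computational budget; each iteration t consumes ||m_t||_0.
    # B_s:   trained network sparsity budget (3b).
    B_tc = B_1c
    for m in masks:                       # charge compute per training iteration
        B_tc -= int(m.sum())
    m_T = masks[-1]
    feasible = (B_tc >= 0) and (m_T.sum() <= B_s)   # (3d) and (3b)
    return feasible, float(m_T @ s_T)                # terminal objective (3a)

# Toy example: M = 4 elements, T = 3 iterations.
masks = [np.array([1, 1, 1, 1]), np.array([1, 0, 1, 1]), np.array([1, 0, 1, 0])]
s_T = np.array([0.8, 0.1, 0.5, 0.3])
print(evaluate_schedule(masks, s_T, B_1c=10, B_s=2))  # (True, 1.3)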
" }, { "heading": "3.2 MODELING THE SALIENCY WITH MULTI-OUTPUT GAUSSIAN PROCESS", "text": "To solve the above early pruning problem, we need to model the belief $p(\mathbf{s}_{1:T})$ of the saliency for computing the predictive belief $p(\mathbf{s}_{t+1:T}|\tilde{\mathbf{s}}_{1:t})$ of the future saliency in (4a). At first glance, one may consider decomposing the belief $p(\mathbf{s}_{1:T}) \triangleq \prod_{a=1}^M p(\mathbf{s}^a_{1:T})$ and modeling the saliency $\mathbf{s}^a_{1:T} \triangleq [s^a_t]_{t=1,\dots,T}$ of each network element independently. Such independent models, however, ignore the co-adaptation and co-evolution of the network elements, which have been shown to be a common occurrence in DNNs (Hinton et al., 2012; Srivastava et al., 2014; Wang et al., 2020). Also, modeling the correlations between the saliency of different network elements explicitly is non-trivial, since considerable feature engineering is needed to represent diverse network elements such as neurons, connections, or convolutional filters.
To resolve such issues, we use a multi-output Gaussian process (MOGP) to jointly model the belief $p(\mathbf{s}_{1:T})$ of all saliency measurements. To be specific, we assume that the saliency $s^a_t$ of the $a$-th network element at iteration $t$ is a linear mixture⁵ of $Q$ independent latent functions $\{u_q(t)\}_{q=1}^Q$: $s^a_t \triangleq \sum_{q=1}^Q \gamma^a_q u_q(t)$. As shown in (Álvarez & Lawrence, 2011), if each $u_q(t)$ is an independent GP with prior zero mean and covariance $k_q(t,t')$, then the resulting distribution over $p(\mathbf{s}_{1:T})$ is a multivariate Gaussian distribution with prior zero mean and covariance determined by the mixing weights: $\mathrm{cov}[s^a_t, s^{a'}_{t'}] = \sum_{q=1}^Q \gamma^a_q\gamma^{a'}_q k_q(t,t')$. This explicit covariance between $s^a_t$ and $s^{a'}_{t'}$ helps to exploit the co-evolution and co-adaptation of network elements within the neural network.
To capture the horizontal asymptote trend of $s^a_1,\dots,s^a_T$ as visualized in Appendix A.2, we turn to a kernel used for modeling decaying exponential curves, known as the "exponential kernel" (Swersky et al., 2014), and set $k_q(t,t') \triangleq \beta_q^{\alpha_q}/(t+t'+\beta_q)^{\alpha_q}$, where $\alpha_q$ and $\beta_q$ are hyperparameters of the MOGP which can be learned via maximum likelihood estimation (Álvarez & Lawrence, 2011). Then, given a vector of observed saliency $\tilde{\mathbf{s}}_{1:t}$, the MOGP regression model provides a Gaussian predictive distribution for any future saliency $\mathbf{s}_{t'}$. Thus, the predictive mean $\mu^a_{t'|1:t} \triangleq \mathbb{E}[s^a_{t'}\,|\,\tilde{\mathbf{s}}_{1:t}]$ of the saliency $s^a_{t'}$ and the predictive (co)variance $\sigma^{aa'}_{t'|1:t} \triangleq \mathrm{cov}[s^a_{t'}, s^{a'}_{t'}\,|\,\tilde{\mathbf{s}}_{1:t}]$ between the saliencies $s^a_{t'}$ and $s^{a'}_{t'}$ can be computed analytically, as detailed in Appendix A.3 (a self-contained sketch of this computation follows below).
⁵Among the various types of MOGPs (see Álvarez & Lawrence (2011) for a detailed review), we choose this linear model such that the correlations between $s^a_t$ and $s^{a'}_{t'}$ can be computed analytically.
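The following is a minimal NumPy sketch of this model: the joint prior covariance induced by the mixing weights and the exact Gaussian conditioning of Appendix A.3. The mixing weights, kernel hyperparameters, and toy saliency curves are hypothetical; in our pipeline they are fit by maximum likelihood, and a variational approximation is used at scale (see Section 4):

import numpy as np

def exp_kernel(t, tp, alpha, beta):
    # "Exponential" kernel of Swersky et al. (2014): beta^alpha / (t + t' + beta)^alpha.
    return beta**alpha / (t + tp + beta)**alpha

def mogp_prior_cov(times, Gamma, alphas, betas):
    # Joint prior covariance over [s_t^a]; rows/cols ordered (t_1, a_1..a_M), (t_2, a_1..a_M), ...
    # Entry ((t, a), (t', a')) = sum_q Gamma[a, q] * Gamma[a', q] * k_q(t, t').
    times = np.asarray(times, dtype=float)
    M, Q = Gamma.shape
    K = np.zeros((len(times) * M, len(times) * M))
    for q in range(Q):
        Kq = exp_kernel(times[:, None], times[None, :], alphas[q], betas[q])
        K += np.kron(Kq, np.outer(Gamma[:, q], Gamma[:, q]))  # couple elements via mixing weights
    return K

def posterior_at_T(obs_times, s_obs, T, Gamma, alphas, betas, jitter=1e-8):
    # Exact Gaussian conditioning (Appendix A.3): mu = K21 K11^-1 s, Sigma = K22 - K21 K11^-1 K21^T.
    M = Gamma.shape[0]
    K = mogp_prior_cov(list(obs_times) + [T], Gamma, alphas, betas)
    n = K.shape[0] - M
    K11, K21, K22 = K[:n, :n], K[n:, :n], K[n:, n:]
    A = K21 @ np.linalg.inv(K11 + jitter * np.eye(n))
    mu = A @ np.asarray(s_obs, dtype=float).ravel()  # predictive means mu^a_{T|1:t}
    Sigma = K22 - A @ K21.T                          # predictive covariances sigma^{aa'}_{T|1:t}
    return mu, Sigma

# Toy example: M = 2 co-evolving elements, Q = 1 latent function, observations at t = 1..5.
Gamma = np.array([[1.0], [0.8]])
alphas, betas = np.array([1.0]), np.array([2.0])
obs_t = list(range(1, 6))
s_obs = np.array([[1.0 / (ti + 1.0), 0.8 / (ti + 1.0)] for ti in obs_t])  # decaying curves
mu_T, Sigma_T = posterior_at_T(obs_t, s_obs, T=50, Gamma=Gamma, alphas=alphas, betas=betas)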
" }, { "heading": "3.3 EARLY PRUNING ALGORITHM", "text": "Solving the above optimization problem (3) and (4) is difficult due to the interplay between $[\mathbf{m}_{t'}]_{t'=t,\dots,T}$, $[B_{t',c}]_{t'=t,\dots,T}$, and $\mathbf{m}_T\cdot\mathbf{s}_T$. Instead, we consider a simplification of the above problem by only considering solutions of the form $\mathbf{m}_{T-1} = \mathbf{m}_{T-2} = \dots = \mathbf{m}_t$, which yields⁶:
\[ \hat\rho_t(\mathbf{m}_{t-1}, B_{t,c}, B_s) \triangleq \max_{\mathbf{m}_t} \mathbb{E}_{p(\mathbf{s}_T|\tilde{\mathbf{s}}_{1:t})}\big[\rho_T(\mathbf{m}_t, B_{t,c}-(T-t)\|\mathbf{m}_t\|_0, B_s)\big] \quad \text{(5)} \]
This approach allows us to lift (3d) from (3), to which we add a Lagrange multiplier and achieve:
\[ \hat\rho_t(\mathbf{m}_{t-1}, B_{t,c}, B_s) \triangleq \max_{\mathbf{m}_t} \mathbb{E}_{p(\mathbf{s}_T|\tilde{\mathbf{s}}_{1:t})}\big[\hat\rho_T(\mathbf{m}_t, B_s)\big] + \lambda\big(B_{t,c}-(T-t)\|\mathbf{m}_t\|_0\big) \quad \text{(6)} \]
for $t = 1,\dots,T-1$, where $\hat\rho_T$ is defined as $\rho_T$ without constraint (3d). Consequently, such a $\hat\rho_T$ can be solved in a greedy manner as in (2). Hereafter, we omit $B_{t,c}$ as a parameter of $\hat\rho_T$, as it no longer constrains the solution of $\hat\rho_T$. Note that the presence of an additive penalty in a maximization problem is due to the constraint $B_{T,c} \ge 0 \Leftrightarrow -B_{T,c} \le 0$, which is typically expected prior to Lagrangian reformulation.
The above optimization problem remains NP-hard, as $\mathbb{E}_{p(\mathbf{s}_T|\tilde{\mathbf{s}}_{1:t})}[\hat\rho_T(\mathbf{m}_t, B_s)]$ is submodular in $\mathbf{m}_t$ (see Appendix B). Although greedy approximations exist for submodular optimization, their running time of $O(\|\mathbf{m}_{t-1}\|_0^2)$ remains far too slow due to the large number of network elements in DNNs. Fortunately, the problem can be significantly simplified by exploiting the following lemma (its proof is in Appendix C):
Lemma 1. Let $\mathbf{e}^{(i)}$ be an $M$-dimensional one-hot vector with the $i$-th element being 1. For all $1 \le a,b \le M$ and $\mathbf{m} \in \{0,1\}^M$ s.t. $\mathbf{m}\wedge(\mathbf{e}^{(a)}\vee\mathbf{e}^{(b)}) = \mathbf{0}$: given a vector of observed saliency $\tilde{\mathbf{s}}_{1:t}$, if $\mu^a_{T|1:t} \ge \mu^b_{T|1:t}$ and $\mu^a_{T|1:t} \ge 0$, then
\[ \mathbb{E}_{p(\mathbf{s}_T|\tilde{\mathbf{s}}_{1:t})}\big[\rho_T(\mathbf{m}\vee\mathbf{e}^{(b)})\big] - \mathbb{E}_{p(\mathbf{s}_T|\tilde{\mathbf{s}}_{1:t})}\big[\rho_T(\mathbf{m}\vee\mathbf{e}^{(a)})\big] \le \mu^b_{T|1:t}\,\Phi(\nu/\theta) + \theta\,\phi(\nu/\theta) \]
where $\theta \triangleq \sqrt{\sigma^{aa}_{T|1:t}+\sigma^{bb}_{T|1:t}-2\sigma^{ab}_{T|1:t}}$, $\nu \triangleq \mu^b_{T|1:t}-\mu^a_{T|1:t}$, and $\Phi$ and $\phi$ are the standard normal CDF and PDF, respectively.
Here, '$\vee$' and '$\wedge$' represent bitwise OR and AND operations, respectively. The bitwise OR operation is used to denote the inclusion of $\mathbf{e}^{(a)}$ or $\mathbf{e}^{(b)}$ in $\mathbf{m}_t$. Due to the strong tail decay⁷ of $\phi$ and $\Phi$, Lemma 1 indicates at most a marginal possible improvement provided by opting for $\mathbf{m}_t = \mathbf{m}\vee\mathbf{e}^{(b)}$ as opposed to $\mathbf{m}_t = \mathbf{m}\vee\mathbf{e}^{(a)}$ given $\mu^a_{T|1:t} \ge \mu^b_{T|1:t}$.
Lemma 1 admits the following approach to optimizing $\hat\rho_t$: starting with $\mathbf{m}_t = \mathbf{0}_M$, we consider the inclusion of network elements in $\mathbf{m}_t$ in descending order of $\{\mu^a_{T|1:t}\}_{a=1}^M$, which can be computed analytically using the MOGP. A network element denoted by $\mathbf{e}^{(a)}$ is included in $\mathbf{m}_t$ if it improves the objective in (5). The algorithm terminates once the highest not-yet-included element does not improve the objective function, as a consequence of the penalty term outweighing the improvement in $\mathbb{E}_{p(\mathbf{s}_T|\tilde{\mathbf{s}}_{1:t})}[\rho_T]$. The remaining excluded elements are then pruned.
Following the algorithm sketch above, we define the utility of network element $v^a_t$ with respect to candidate pruning mask $\mathbf{m}_t\,\dot\le\,\mathbf{m}_{t-1}$, which measures the improvement in $\mathbb{E}_{p(\mathbf{s}_T|\tilde{\mathbf{s}}_{1:t})}[\rho_T]$ as a consequence of the inclusion of $\mathbf{e}^{(a)}$ in $\mathbf{m}_t$:
\[ \Delta(a, \mathbf{m}_t, \tilde{\mathbf{s}}_{1:t}, B_s) \triangleq \mathbb{E}_{p(\mathbf{s}_T|\tilde{\mathbf{s}}_{1:t})}\big[\rho_T(\mathbf{e}^{(a)}\vee\mathbf{m}_t, B_s) - \rho_T(\mathbf{m}_t, B_s)\big]. \quad \text{(7)} \]
We can now take a Lagrangian approach to pruning decisions at iteration $t$ by balancing the utility of network element $v^a_t$ against the change of the penalty (i.e., $\lambda(T-t)$) in Algorithm 1. Due to the relatively expensive cost of performing early pruning, we chose to early prune every $T_{step}$ iterations of SGD. Typically, $T_{step}$ was chosen to correspond to 10-20 epochs of training. To compute $\Delta(\cdot)$ we sampled from $p(\mathbf{s}_T|\tilde{\mathbf{s}}_{1:t})$ and used a greedy selection algorithm per sample as in (2); a sketch of this estimator is given below, before Algorithm 1. During implementation, we also enforced an additional hard constraint $\|\mathbf{m}_t\|_0 \ge B_s$, which we believe is desirable for practicality reasons. We used a fixed value of $B_{1,c} = \|\mathbf{m}_0\|_0 T_0 + B_s(T-T_0)$ in all our experiments.
⁶We omit (4b) as it is automatically satisfied due to our simplification.
⁷Note that as $\mu^a_{T|1:t} \ge \mu^b_{T|1:t}$, $\Phi(\nu/\theta) \le 0.5$ and experiences tail decay proportional to $\mu^a_{T|1:t}-\mu^b_{T|1:t}$.
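A minimal NumPy sketch of this Monte Carlo estimator of (7), used on line 10 of Algorithm 1 below (shapes and sample count are illustrative):

import numpy as np

def rho_T(s, mask, B_s):
    # Greedy terminal objective: sum of the B_s largest saliencies among unpruned elements.
    vals = s[mask.astype(bool)]
    return float(np.sort(vals)[-B_s:].sum()) if vals.size else 0.0

def delta(a, mask, mu, Sigma, B_s, n_samples=64, rng=None):
    # Monte Carlo estimate of Delta(a, m_t, s~_{1:t}, B_s) in (7).
    rng = rng if rng is not None else np.random.default_rng(0)
    samples = rng.multivariate_normal(mu, Sigma, size=n_samples)  # draws of s_T
    mask_a = mask.copy()
    mask_a[a] = 1  # m_t OR e^(a)
    gains = [rho_T(s, mask_a, B_s) - rho_T(s, mask, B_s) for s in samples]
    return float(np.mean(gains))

In Algorithm 1, this estimate is compared against the penalty change $\lambda(T-t)$ on line 10.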
Algorithm 1 Bayesian Early Pruning
Require: $\mathcal{N}$, $\mathbf{v}_1$, $T_0$, $T_{step}$, $T$, $B_{1,c}$, $B_s$, $\lambda$ ▷ DNN $\mathcal{N}$, Lagrangian penalty $\lambda$
1: $S_{1:T_0} \leftarrow$ train($\mathcal{N}_{\mathbf{v}_1}$, $T_0$) ▷ Train for $T_0$ iterations to create seed dataset.
2: $B_{T_0,c} \leftarrow B_{1,c} - T_0\,\mathrm{dim}(\mathbf{v}_1)$ ▷ Track computational effort expenditure.
3: for $k \leftarrow 0,\dots,(T-T_0)/T_{step}$; $t \leftarrow T_0 + kT_{step}$ do ▷ Early prune every $T_{step}$ iterations from $T_0$.
4:   $\mu_{T|1:t}, \sigma_{T|1:t} \leftarrow$ MOGP($S_{1:t}$) ▷ Train and perform inference.
5:   $\mathbf{s}_T \leftarrow$ argsort($-\mu_{T|1:t}$) ▷ Sort descending.
6:   $\mathbf{m}_t \leftarrow \mathbf{0}_{\mathrm{dim}(\mathbf{v}_t)}$ ▷ Initial pruning mask.
7:   for $a \leftarrow s^1_T,\dots,s^{\mathrm{dim}(\mathbf{v}_t)}_T$ do ▷ Consider each network element.
8:     if $B_{t,c} - (T-t)\|\mathbf{m}_t\|_0 > 0$ then ▷ Remaining $B_{t,c}$ budget can support training $v^a_t$.
9:       $\mathbf{m}_t = \mathbf{m}_t \vee \mathbf{e}^{(a)}$
10:    else if $\Delta(a, \mathbf{m}_t, \tilde{\mathbf{s}}_{1:t}, B_s) \ge \lambda(T-t)$ then ▷ Balance utility against change of penalty.
11:      $\mathbf{m}_t = \mathbf{m}_t \vee \mathbf{e}^{(a)}$
12:    else
13:      break
14:  prune($\mathbf{v}_t$, $\mathbf{m}_t$) ▷ $\mathrm{dim}(\mathbf{v}_t)$ is reduced here.
15:  $B_{t+T_{step},c} \leftarrow B_{t,c} - T_{step}\|\mathbf{m}_t\|_0$
16:  $S_{t+1:t+T_{step}} \leftarrow$ train($\mathcal{N}_{\mathbf{v}_t}$, $T_{step}$) ▷ Continue training with pruned network.
17: return $\mathcal{N}$" }, { "heading": "4 EXPERIMENTS AND DISCUSSION", "text": "We evaluate our modeling approach as well as our BEP algorithm on the CIFAR-10, CIFAR-100 (Krizhevsky, 2009), and ImageNet (Deng et al., 2009) datasets. For CIFAR-10/CIFAR-100, we used a benchmark convolutional neural network (CNN) with 4 convolutional layers and 1 dense layer.⁸ For ImageNet, we validated on the ResNet-50 architecture (He et al., 2016a).
Due to the cubic time complexity of MOGPs, we used a variational approximation (Hensman et al., 2015). In all of our models, we used 60 variational inducing points per latent function. We used the GPflow library (Matthews et al., 2017) to build our models." }, { "heading": "4.1 MODELING EVALUATION", "text": "A key assertion in our approach is the importance of capturing co-adaptation and co-evolution effects in network elements. To verify that our MOGP approach captures these effects, we compare MOGP vs. GP belief modeling, where GP assumes independence of saliency measurements across network elements (i.e., $p(\mathbf{s}_{1:T}) \triangleq \prod_{a=1}^M p(\mathbf{s}^a_{1:T})$).
A dataset of saliency measurements of convolutional filters and neurons was constructed by instrumenting the training process of our 5-layer CNN on the CIFAR-10/CIFAR-100 dataset. Keras (Chollet, 2015) was used to train this model over 150 epochs.⁹
We trained belief models with small ($t = [0, 26]$ epochs), medium ($t = [0, 40]$ epochs), and large ($t = [0, 75]$ epochs) training datasets of saliency measurements. For GPs, a separate model was trained per network element (convolutional filter or neuron). For MOGPs, all network elements in a single layer¹⁰ shared one MOGP model. We evaluated these models using the log likelihood of the remainder of the saliency measurements. We present the performance of the models in Table 1 for CIFAR-100,¹¹ whose rows are:
GP: 0.75(0.06) | 5.7(5.7)e4 | 5.6(5.6)e4 | 0.64(0.04) | 0.70(0.04) | 2.13(0.05) | 3.4(3.4)e3 | 0.31(0.02) | 1.06(0.02)
4-MOGP: 0.79(0.05) | 0.98(0.12) | 3.13(0.10) | 0.44(0.04) | 0.60(0.10) | 2.29(0.06) | 0.12(0.01) | 0.24(0.03) | 1.07(0.03)
8-MOGP: 0.65(0.05) | 0.89(0.11) | 3.00(0.09) | 0.38(0.04) | 0.60(0.10) | 2.20(0.06) | 0.10(0.01) | 0.18(0.01) | 1.02(0.03)
18-MOGP: 0.62(0.05) | 0.84(0.11) | 2.93(0.10) | 0.36(0.03) | 0.56(0.10) | 2.22(0.07) | 0.09(0.01) | 0.18(0.01) | 1.01(0.03)
32-MOGP: 0.65(0.05) | 0.85(0.09) | 2.89(0.10) | 0.36(0.03) | 0.59(0.10) | 2.16(0.06) | 0.09(0.02) | 0.18(0.01) | 1.00(0.03)
Our MOGP approach better captures the saliency of network elements than a GP approach. Furthermore, using additional latent functions improves MOGP modeling with diminishing returns. We visualize the qualitative differences between GP and MOGP prediction in Figure 1. We observe that MOGP is able to capture the long-term trend of saliency curves with significantly less data than GP.
⁸Code available at https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py
⁹Complete experimental setup details are found in Appendix G.2.
¹⁰In our observations, jointly modeling the belief of multiple layers' saliency measurements using MOGP yielded no measurable improvement in log-likelihood.
¹¹For CIFAR-10, see Appendix G.1.
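For reference, a minimal sketch of the held-out log-likelihood computation used in this evaluation (SciPy; the predictive mean and covariance come from the belief model, as in the sketch of Section 3.2):

import numpy as np
from scipy.stats import multivariate_normal

def heldout_log_likelihood(mu, Sigma, s_future, jitter=1e-8):
    # Log-likelihood of held-out saliency measurements s_future (flattened to
    # match the ordering of mu) under the Gaussian predictive belief N(mu, Sigma).
    Sigma = Sigma + jitter * np.eye(len(mu))  # numerical stabilization
    return float(multivariate_normal(mean=mu, cov=Sigma).logpdf(np.ravel(s_future)))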
" }, { "heading": "4.2 SMALL-SCALE EXPERIMENTS", "text": "We applied the early pruning algorithm to the aforementioned architecture and training regimen, and investigated the behavior of the penalty parameter $\lambda$. We observed that the penalty parameter was difficult to tune properly, either being too aggressive at pruning or too passive. To rectify this issue, we used a feedback loop to determine the penalty at iteration $t$, $\lambda_t$, dynamically. Dynamic penalty scaling¹² uses feedback from earlier pruning iterations to increase or decrease the iteration penalty at time $t$: $\lambda_t = \lambda\,(1/\lambda)^{(T-t)\|\mathbf{m}_t\|_0/B_{t,c}-1}$. The dynamic penalty is increased if the anticipated compute required to complete training, $(T-t)\|\mathbf{m}_t\|_0$, begins to exceed the amount of compute budget remaining, $B_{t,c}$. In such a case, a higher penalty is needed to satisfy the computational budget constraint as per (6). We compare dynamic penalty scaling and penalty without scaling in Fig. 2, using $T_0 = 20$ epochs and $T_{step} = 10$ epochs for the first convolutional layer of our CNN. Going forward, we use dynamic penalty scaling in our experiments (a code sketch appears at the end of this subsection).
We compare our work with SNIP (Lee et al., 2019), GraSP (Wang et al., 2020), and momentum-based dynamic sparse reparameterization (DSR) (Dettmers & Zettlemoyer, 2019). To compare against DSR, we instantiate a smaller network of the size BEP yields after training has completed, as it is a prune-and-regrow method. The SNIP and GraSP approaches are extended to neurons/filters by averaging the saliencies of the constituent weight parameters. We experimented with various degrees of sparsity, using BEP to prune a portion of filters/neurons of each layer.¹³ We present the results in Table 2, whose rows (test accuracy, mean(std)) are:
DSR: 74.1(0.2)% | 65.3(0.4)% | 52.8(0.5)% | 22.1(5.0)% | 37.9(0.7)% | 29.7(0.1)% | 17.5(0.3)% | 4.4(1.4)%
SNIP: 75.4(4.7)% | 67.7(0.7)% | 50.8(0.8)% | 29.4(4.9)% | 22.9(9.0)% | 15.7(6.1)% | 9.9(3.7)% | 2.2(1.2)%
GraSP: 74.6(0.6)% | 66.5(0.9)% | 50.7(0.6)% | 32.9(1.0)% | 28.4(7.0)% | 22.6(5.4)% | 13.9(3.2)% | 1.0(0.0)%
BEP 1e−2: 75.9(0.3)% | 69.7(0.4)% | 54.8(1.0)% | 18.9(5.4)% | 40.6(0.2)% | 32.2(0.6)% | 19.1(0.5)% | 7.1(1.6)%
BEP 1e−4: 75.4(1.7)% | 70.5(3.2)% | 55.7(0.9)% | 36.1(1.1)% | 41.3(0.3)% | 32.4(0.3)% | 19.7(0.8)% | 8.5(0.8)%
BEP 1e−7: 76.0(0.1)% | 70.6(0.2)% | 56.2(0.4)% | 30.4(5.1)% | 40.6(0.2)% | 33.0(0.5)% | 19.5(0.5)% | 6.6(1.5)%
Our approach better preserves performance at equivalent sparsity. A lower penalty yields higher performing results, showing that $\lambda$ serves well at balancing performance vs. computational budget.
We investigate the robustness of the BEP and MOGP hyperparameters. We vary the number of MOGP variational inducing points, the number of MOGP latent functions, and $T_{step}$, and observe the performance of BEP 1e−4 on CIFAR-10/CIFAR-100 at 80%, 90%, and 95% sparsity. We present these results in Table 3. We observe that, in general, all hyperparameters are robust to changes. Mild degradation is observed in the extremal hyperparameter settings.
¹²Further details can be found in Appendix D.
¹³In our observations, saliency measurements do not capture network element efficacy well when comparing across layers. Thus, pruning whole networks using network element saliency yields poorly performing networks with bottlenecks. This limitation of saliency functions is well known (see Molchanov et al. (2017) Appendix A.1 and A.2; Wang et al. (2020) Section 3, last paragraph). Development of saliency functions which overcome this shortcoming while remaining performant is a difficult open problem outside the scope of this work.
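A minimal sketch of the dynamic penalty update (cf. Appendix D, equations (15)-(16)):

def dynamic_penalty(lam, T, t, n_unpruned, B_tc):
    # Feedback-scaled penalty: lambda_t = lambda^(1 - e(t)) with
    # e(t) = (T - t) * ||m_t||_0 / B_{t,c} - 1.
    e_t = (T - t) * n_unpruned / B_tc - 1.0
    return lam ** (1.0 - e_t)

# If anticipated compute exceeds the remaining budget (e(t) > 0), the penalty grows:
print(dynamic_penalty(1e-4, T=100, t=60, n_unpruned=300, B_tc=10000))  # e(t) = 0.2 -> (1e-4)^0.8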
" }, { "heading": "4.3 SPEEDING UP RESNET TRAINING ON IMAGENET", "text": "Our chief goal in this work is to speed up training of large-scale DNNs such as ResNet (He et al., 2016a;b) on the ImageNet dataset. Pruning ResNet requires a careful definition of network element saliency to allow pruning of all layers. ResNet contains long sequences of residual units with matching numbers of input/output channels. The inputs of residual units are connected with shortcut connections (i.e., through addition) to the outputs of the residual units.¹⁴ Due to shortcut connections, this structure requires that, within a sequence of residual units, the number of input/output channels of all residual units match exactly. This requires group pruning of residual unit channels for a sequence of residual units, where group pruning an output channel of a residual unit sequence requires pruning it from the inputs/outputs of all residual units within the sequence¹⁵ (an illustrative sketch of this grouping follows at the end of this subsection).
We trained ResNet-50 with BEP as well as SNIP and GraSP.¹⁶ We group pruned less aggressively, as residual unit channels feed into a large number of residual units, making aggressive pruning likely to degrade performance. We ran BEP iterations at $t = [15, 20, 25, 35, 45, 55, 75]$ epochs. We trained for 100 epochs on 4× Nvidia GeForce GTX 1080 Ti GPUs. More experimental details are found in Appendix G.2. We present our results in Table 4, whose rows are:
SNIP: 72.0% | 90.6% | 27.9h | 0.7h | 62.0% | 83.8% | 15.8h | 0.7h | 50.9% | 74.2% | 21.7h | 0.7h
GraSP: 72.1% | 90.6% | 27.1h | 2.7h | 61.6% | 83.6% | 16.5h | 2.7h | 52.2% | 75.4% | 21.7h | 2.7h
BEP 1e−1: 72.2% | 90.6% | 31.6h | 2.9h | 62.0% | 83.8% | 22.8h | 1.5h | 53.7% | 76.8% | 27.8h | 2.5h
BEP 1e−4: 72.5% | 91.0% | 34.8h | 2.2h | 62.3% | 84.5% | 22.0h | 1.8h | 53.5% | 76.6% | 27.8h | 2.6h
We achieve higher performance than related techniques, albeit at longer wall time. Our approach captures the training time vs. performance tradeoff present in DNNs, unlike competing approaches.
¹⁴Precise details of the ResNet architecture may be found in He et al. (2016a) Section 3.
¹⁵We formally define saliency on residual unit sequences in Appendix G.3.
¹⁶We omit comparison to DSR due to the differing underlying deep learning library, which makes wall-time comparisons inaccurate.
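As an illustration of group pruning, a minimal sketch under the assumption that a shared channel's group saliency is the sum of its per-unit saliencies (the formal definition is in Appendix G.3, not reproduced here, so this aggregation rule is hypothetical):

import numpy as np

def group_channel_saliency(unit_saliencies):
    # unit_saliencies: (n_units, n_channels) saliency of each channel in each
    # residual unit of a sequence sharing the same input/output channels.
    # Pruning channel c then means removing it from all units simultaneously.
    # Assumed aggregation: group saliency = sum over units.
    return np.asarray(unit_saliencies).sum(axis=0)

# A sequence of 3 residual units with 4 shared channels:
s = [[0.2, 0.9, 0.1, 0.4], [0.3, 0.7, 0.2, 0.5], [0.1, 0.8, 0.1, 0.3]]
print(group_channel_saliency(s))  # channel 1 is clearly the most salient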
" }, { "heading": "5 RELATED WORK", "text": "Pruning and related techniques. Initial works in DNN pruning center around saliency-based pruning after training, including Skeletonization (Mozer & Smolensky, 1988), Optimal Brain Damage and follow-up work (Hassibi & Stork, 1992; LeCun et al., 1989), as well as sensitivity-based pruning (Karnin, 1990). In recent years, saliency functions have been adapted to pruning neurons or convolutional filters. Li et al. (2017) define a saliency function on convolutional filters using the L1 norm. Molchanov et al. (2017) propose using a first-order Taylor-series approximation of the objective function as a saliency measure. Dong et al. (2017) propose layer-wise pruning of weight parameters using a Hessian-based saliency measure. Several variants of pruning after training exist. Han et al. (2015) propose iterative pruning, where pruning is performed in stages alternating with fine-tune training. Guo et al. (2016) suggest dynamic network surgery, where pruning is performed on-the-fly during evaluation time. Li et al. (2017) and He et al. propose reinforcement learning for pruning decisions. A comprehensive overview may be found in Gale et al. (2019).
Knowledge distillation (Hinton et al., 2015; Lu et al., 2017; Tung & Mori, 2019; Yim et al., 2017) aims to transfer the capabilities of a trained network into a smaller network. Weight sharing (Nowlan & Hinton, 1992; Ullrich et al., 2017) and low-rank matrix factorization (Denton et al., 2014; Jaderberg et al., 2014) aim to compress the parameterization of neural networks. Network quantization (Courbariaux et al., 2015; Hubara et al., 2017; Micikevicius et al., 2018) uses lower-fidelity representations of network elements (e.g., 16-bit) to speed up training and evaluation. Although speedup during training is achievable through network quantization, this technique requires hardware support and only provides coarse granularity in trading off computational effort vs. performance. Current GPUs only extend native support to 16-bit floating point operations. Furthermore, our approach is orthogonal to quantization, allowing the techniques to be combined for further speedup.
Initialization time or training time pruning. Frankle & Carbin (2019) show that a randomly initialized DNN contains a small subnetwork which, if trained by itself, yields equivalent performance to the original network. SNIP (Lee et al., 2019) and GraSP (Wang et al., 2020) propose pruning connection weights prior to the training process through first-order and second-order saliency functions, respectively. Sparse Evolutionary Training (Mocanu et al., 2018) proposes initializing networks with a sparse topology prior to training. Narang et al. (2017) consider connection weight pruning during training for recurrent neural networks using a heuristic approach.
Dynamic sparse reparameterization considers pruning and regrowing parameter weights during the training process (Bellec et al., 2018; Dettmers & Zettlemoyer, 2019; Mostafa & Wang, 2019). Dai et al. (2019) propose a grow-and-prune approach to learning network architecture and connection layout. We differ from existing work in that our focus is on speeding up neural network training, while other works in training-time pruning aim to achieve sparse network layouts. To the best of our knowledge, except for the small speedups presented in (Dettmers & Zettlemoyer, 2019), the above works do not demonstrate speedup during training time using popular deep learning libraries run on modern GPUs.
PruneTrain (Lym et al., 2019) also proposes pruning filters during training to achieve speedup while minimizing degradation of performance with periodic pruning iterations. In contrast to our approach, PruneTrain does not allow specification of the desired network size after training. A specified network size may be useful when training for resource-constrained devices such as mobile phones or edge devices. We compare with PruneTrain under the early pruning problem definition in Appendix E.
" }, { "heading": "6 CONCLUSION", "text": "This paper presents a novel, efficient algorithm to perform pruning of DNN elements such as neurons or convolutional filters during the training process. To achieve early pruning before training converges while preserving the performance of the DNN upon convergence, a Bayesian model (i.e., MOGP) is used to predict the saliency of DNN elements at future (unseen) training iterations by exploiting the exponentially decaying behavior of the saliency and the correlations between the saliency of different network elements. We then exploit a property (Lemma 1) of the objective function and propose an efficient Bayesian early pruning algorithm. Empirical evaluations on benchmark datasets show that our algorithm performs favorably compared to related works for pruning convolutional filters and neurons. Our approach remains flexible to changes in saliency function, and appropriately balances the training time vs. performance tradeoff in training DNNs. We are able to train an early pruned ResNet-50 model achieving a 48.6% speedup (37h vs. 55h) while maintaining a validation accuracy of 72.5%." }, { "heading": "A MODELING DETAILS", "text": "" }, { "heading": "A.1 SALIENCY FUNCTION", "text": "In this work, we use the first-order Taylor-series saliency function proposed by Molchanov et al. (2017). Our design (Section 3) remains flexible to allow usage of arbitrary saliency functions on a plug-and-play basis. We partition a DNN of $L$ layers, where each layer $\ell$ contains $C_\ell$ convolutional filters, into a sequence of convolutional filters $[z_{\ell,c}]^{c=1,\dots,C_\ell}_{\ell=1,\dots,L}$. Each filter $z_{\ell,c}: \mathbb{R}^{C_{\ell-1}\times W_{\ell-1}\times H_{\ell-1}} \to \mathbb{R}^{W_\ell\times H_\ell}$ can be considered as one network element in $\mathbf{v}_T$, and $z_{\ell,c}(\mathbf{P}_{\ell-1}) \triangleq R(\mathbf{W}_{\ell,c} * \mathbf{P}_{\ell-1} + b_{\ell,c})$, where $\mathbf{W}_{\ell,c} \in \mathbb{R}^{C_{\ell-1}\times O_\ell\times O'_\ell}$ and $b_{\ell,c}$ are the kernel weights and bias with receptive field $O_\ell \times O'_\ell$, '$*$' represents the convolution operation, $R$ is the activation function, $\mathbf{P}_{\ell-1}$ represents the output of $\mathbf{z}_{\ell-1} \triangleq [z_{\ell-1,c'}]_{c'=1,\dots,C_{\ell-1}}$ with $\mathbf{P}_0$ corresponding to an input $x_d \in \mathcal{X}$, and $W_\ell$, $H_\ell$ are the width and height dimensions of layer $\ell$ for $\ell = 1,\dots,L$. Let $\mathcal{N}_{\mathbf{z}_\ell:\mathbf{z}_{\ell'}} \triangleq \mathbf{z}_{\ell'}\circ\dots\circ\mathbf{z}_\ell$ denote a partial neural network of layers $[\ell,\dots,\ell']_{1\le\ell\le\ell'\le L}$. The Taylor-series saliency function on the convolutional filter $z_{\ell,c}$, denoted as $s([\ell,c])$, is defined¹⁷:
\[ s([\ell,c]) \triangleq \frac{1}{D}\sum_{d=1}^D\Bigg|\frac{1}{W_\ell\times H_\ell}\sum_{j=1}^{W_\ell\times H_\ell}\frac{\partial\mathcal{L}\big(\mathbf{P}^{(x_d)}_\ell, y_d; \mathcal{N}_{\mathbf{z}_{\ell+1}:\mathbf{z}_L}\big)}{\partial P^{(x_d)}_{\ell,c,j}}\,P^{(x_d)}_{\ell,c,j}\Bigg| \quad \text{(8)} \]
where $\mathbf{P}^{(x_d)}_\ell$ is the output of the partial neural network $\mathcal{N}_{\mathbf{z}_1:\mathbf{z}_\ell}$ with $x_d$ as the input, and $[P^{(x_d)}_{\ell,c,j}]_{j=1,\dots,W_\ell\times H_\ell}$ interprets the output of the $c$-th filter in vectorized form. This function uses the first-order Taylor-series approximation of $\mathcal{L}$ to approximate the change in loss if $z_{\ell,c}$ were changed to a constant-0 function. Using the above saliency definition, pruning filter $z_{\ell,c}$ corresponds to collectively zeroing $\mathbf{W}_{\ell,c}$ and $b_{\ell,c}$, as well as the weight parameters¹⁸ $[\mathbf{W}_{\ell+1,c',\{:,:,c\}}]_{c'=1,\dots,C_{\ell+1}}$ of $\mathbf{z}_{\ell+1}$ which utilize the output of $z_{\ell,c}$. This definition can be extended to elements (e.g., neurons) which output scalars by setting $W_\ell = H_\ell = 1$.
¹⁷For brevity, we omit the parameters $\mathcal{X}$, $\mathcal{Y}$, $\mathcal{N}_{\mathbf{z}_1:\mathbf{z}_L}$, $\mathcal{L}$.
¹⁸Here we use $\{\}$ to distinguish indexing into a tensor from indexing into the sequence of tensors $[\mathbf{W}_{\ell+1,c'}]$.
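A minimal autodiff sketch of (8) for a single layer; we use PyTorch-style code purely for illustration (the paper's experiments use Keras), and the hook-based capture of $\mathbf{P}_\ell$ is an assumed implementation detail:

import torch

def taylor_saliency(P, loss):
    # Eq. (8) for one layer: P is the layer output of shape (D, C, W, H) over a batch
    # of D examples and part of the autograd graph; `loss` is the scalar L computed from P.
    # Returns a length-C vector of filter saliencies.
    (grad,) = torch.autograd.grad(loss, P, retain_graph=True)
    per_example = (grad * P).mean(dim=(2, 3)).abs()  # | (1/WH) sum_j dL/dP_j * P_j |, shape (D, C)
    return per_example.mean(dim=0)                   # average over the D examples

# Usage sketch: register a forward hook on the layer to capture P, run the forward
# pass and loss, then call taylor_saliency(P, loss).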
A.2 ON THE CHOICE OF THE "EXPONENTIAL KERNEL"

We justify our choice of the exponential kernel as a modeling mechanism by presenting visualizations of saliency measurements collected during training and comparing them to samples drawn from the exponential kernel $k_q(t,t') \triangleq \frac{\beta^\alpha}{(t+t'+\beta)^\alpha}$, as shown in Figs. 3-4. Both the saliency and the function samples exhibit exponentially decaying behavior, which makes the exponential kernel a strong fit for modeling saliency evolution over time.

Furthermore, we note that the exponential kernel was used to great effect in Swersky et al. (2014) for modeling loss curves as a function of epochs. Loss curves also exhibit asymptotic behavior similar to saliency measurement curves, providing further evidence that the exponential kernel is an apt fit for our task.

A.3 PREDICTIVE DISTRIBUTION OF THE SALIENCY

Let the prior covariance matrix be $\mathbf{K}_{\tau_1:\tau_2} \triangleq [\mathrm{cov}[s^a_t, s^{a'}_{t'}]]^{a,a'=1,\dots,M}_{t,t'=\tau_1,\dots,\tau_2}$ for any $1 \le \tau_1 \le \tau_2 \le T$. Given a vector of observed saliency $\tilde{\mathbf{s}}_{1:t}$, the MOGP regression model can provide a Gaussian predictive distribution $p(\mathbf{s}_{t'}\,|\,\tilde{\mathbf{s}}_{1:t}) = \mathcal{N}(\boldsymbol{\mu}_{t'|1:t}, \mathbf{K}_{t'|1:t})$ for any future saliency $\mathbf{s}_{t'}$, with the following posterior mean vector and covariance matrix: $\boldsymbol{\mu}_{t'|1:t} \triangleq \mathbf{K}_{[t't]}\mathbf{K}^{-1}_{1:t}\tilde{\mathbf{s}}_{1:t}$ and $\mathbf{K}_{t'|1:t} \triangleq \mathbf{K}_{t':t'} - \mathbf{K}_{[t't]}\mathbf{K}^{-1}_{1:t}\mathbf{K}^{\top}_{[t't]}$, where $\mathbf{K}_{[t't]} \triangleq [\mathrm{cov}[s^a_{t'}, s^{a'}_{\tau}]]^{a,a'=1,\dots,M}_{\tau=1,\dots,t}$. Then, the $a$-th element $\mu^a_{t'|1:t}$ of $\boldsymbol{\mu}_{t'|1:t}$ is the predictive mean of the saliency $s^a_{t'}$, and the $[a,a']$-th element of $\mathbf{K}_{t'|1:t}$, denoted $\sigma^{aa'}_{t'|1:t}$, is the predictive (co)variance between the saliency $s^a_{t'}$ and $s^{a'}_{t'}$.
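As a sanity check of the posterior formulas above, here is a small single-output NumPy sketch under the exponential kernel of A.2. The paper's actual model is a multi-output GP with shared latent functions, so this is only the scalar special case, and all names are ours.

```python
import numpy as np

def exp_kernel(t, tp, alpha=1.0, beta=1.0):
    # Exponential kernel k_q(t, t') = beta^alpha / (t + t' + beta)^alpha from A.2.
    return beta**alpha / (t + tp + beta)**alpha

def gp_posterior(t_obs, s_obs, t_star, jitter=1e-8):
    # Prior covariance over observed times, with jitter for numerical stability.
    K = exp_kernel(t_obs[:, None], t_obs[None, :]) + jitter * np.eye(len(t_obs))
    k_star = exp_kernel(t_star, t_obs)                         # cross-covariances
    mu = k_star @ np.linalg.solve(K, s_obs)                    # posterior mean
    var = exp_kernel(t_star, t_star) - k_star @ np.linalg.solve(K, k_star)
    return mu, var                                             # predictive mean, variance
```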
B SUBMODULARITY OF $\mathbb{E}[\hat{\rho}_T]$

In (6), the problem of choosing $\mathbf{m}$ from $\{0,1\}^M$ can be considered as selecting a subset $A$ of indexes from $\{1,\dots,M\}$ such that $m^a_t = 1$ for $a \in A$, and $m^a_t = 0$ otherwise. Therefore, $P(\mathbf{m}) \triangleq \mathbb{E}_{p(\mathbf{s}_T|\tilde{\mathbf{s}}_{1:t})}[\hat{\rho}_T(\mathbf{m}, B_s)]$ can be considered a set function, which we will show to be submodular. For notational consistency, we keep using $P(\mathbf{m})$ instead of representing it as a function of the index subset $A$.

Lemma 2 (Submodularity). Let $\mathbf{m}', \mathbf{m}'' \in \{0,1\}^M$, and let $\mathbf{e}(a)$ be an arbitrary $M$-dimensional one-hot vector with $1 \le a \le M$. We have $P(\mathbf{m}' \vee \mathbf{e}(a)) - P(\mathbf{m}') \ge P(\mathbf{m}'' \vee \mathbf{e}(a)) - P(\mathbf{m}'')$ for any $\mathbf{m}' \,\dot{\le}\, \mathbf{m}''$, $\mathbf{m}' \wedge \mathbf{e}(a) = 0$, and $\mathbf{m}'' \wedge \mathbf{e}(a) = 0$.

Proof. According to (3),
$$\mathbb{E}_{p(\mathbf{s}_T|\tilde{\mathbf{s}}_{1:t})}[\hat{\rho}_T(\mathbf{m}, B_s)] = \mathbb{E}_{p(\mathbf{s}_T|\tilde{\mathbf{s}}_{1:t})}\Big[\max_{\mathbf{m}_T}\big[\mathbf{m}_T \cdot \tilde{\mathbf{s}}_T, \text{ s.t. } \|\mathbf{m}_T\|_0 \le B_s,\ \mathbf{m}_T \,\dot{\le}\, \mathbf{m}\big]\Big]$$
Let $\alpha(\mathbf{m}) \triangleq \arg\max_{\mathbf{m}_T}\big[\mathbf{m}_T \cdot \tilde{\mathbf{s}}_T, \text{ s.t. } \|\mathbf{m}_T\|_0 \le B_s, \mathbf{m}_T \,\dot{\le}\, \mathbf{m}\big]$ return the optimized mask $\mathbf{m}_T$ given any $\mathbf{m}$, and let $\Lambda_{\mathbf{m}} \triangleq \min(\alpha(\mathbf{m}) \odot \mathbf{s}_T)$ be the minimal saliency of the network elements selected at iteration $T$ for $P(\mathbf{m})$. Then, we have
$$P(\mathbf{m} \vee \mathbf{e}(a)) = \mathbb{E}_{p(\mathbf{s}_T|\tilde{\mathbf{s}}_{1:t})}\big[\hat{\rho}_T(\mathbf{m} \vee \mathbf{e}(a), B_s)\big] = \mathbb{E}_{p(\mathbf{s}_T|\tilde{\mathbf{s}}_{1:t})}\big[\hat{\rho}_T(\mathbf{m}, B_s) - \Lambda_{\mathbf{m}} + \max(s^a_T, \Lambda_{\mathbf{m}})\big]$$
The second equality is due to the fact that the network element $v^a_T$ would only replace the lowest included element in $\mathbf{m}_T$ in order to maximize the objective. Then,
$$P(\mathbf{m} \vee \mathbf{e}(a)) - P(\mathbf{m}) = \mathbb{E}\big[-\Lambda_{\mathbf{m}} + \max(s^a_T, \Lambda_{\mathbf{m}})\big] = \mathbb{E}\big[\max(s^a_T - \Lambda_{\mathbf{m}}, 0)\big] \quad (9)$$
Given $\mathbf{m}' \,\dot{\le}\, \mathbf{m}''$, we have $\Lambda_{\mathbf{m}'} \le \Lambda_{\mathbf{m}''}$, since $\mathbf{m}_T \,\dot{\le}\, \mathbf{m}$ in $\alpha(\mathbf{m}')$ is a tighter constraint than that in $\alpha(\mathbf{m}'')$. Consequently, $s^a_T - \Lambda_{\mathbf{m}'} \ge s^a_T - \Lambda_{\mathbf{m}''}$, and thus
$$[P(\mathbf{m}' \vee \mathbf{e}(a)) - P(\mathbf{m}')] \ge [P(\mathbf{m}'' \vee \mathbf{e}(a)) - P(\mathbf{m}'')].$$" }, { "heading": "C PROOF OF LEMMA 1", "text": "We restate Lemma 1 for clarity.

Lemma 1. Let $\mathbf{e}(i)$ be the $M$-dimensional one-hot vector whose $i$-th element is 1. Let $1 \le a, b \le M$ and $\mathbf{m} \in \{0,1\}^M$ s.t. $\mathbf{m} \wedge (\mathbf{e}(a) \vee \mathbf{e}(b)) = 0$. Given a vector of observed saliency $\tilde{\mathbf{s}}_{1:t}$, if $\mu^a_{T|1:t} \ge \mu^b_{T|1:t}$ and $\mu^a_{T|1:t} \ge 0$, then
$$\mathbb{E}_{p(\mathbf{s}_T|\tilde{\mathbf{s}}_{1:t})}[\rho_T(\mathbf{m} \vee \mathbf{e}(b))] - \mathbb{E}_{p(\mathbf{s}_T|\tilde{\mathbf{s}}_{1:t})}[\rho_T(\mathbf{m} \vee \mathbf{e}(a))] \le \mu^b_{T|1:t}\,\Phi(\nu/\theta) + \theta\,\phi(\nu/\theta)$$
where $\theta \triangleq \sqrt{\sigma^{aa}_{T|1:t} + \sigma^{bb}_{T|1:t} - 2\sigma^{ab}_{T|1:t}}$, $\nu \triangleq \mu^b_{T|1:t} - \mu^a_{T|1:t}$, and $\Phi$ and $\phi$ are the standard normal CDF and PDF, respectively.

To prove this lemma, we first prove the following:

Lemma 3. $\mathbb{E}_{p(\mathbf{s}_T|\tilde{\mathbf{s}}_{1:t})}\big[\rho_T(\mathbf{m} \vee \mathbf{e}(b))\big] - \mathbb{E}_{p(\mathbf{s}_T|\tilde{\mathbf{s}}_{1:t})}\big[\rho_T(\mathbf{m} \vee \mathbf{e}(a))\big] \le \mathbb{E}[\max(s^b_T - s^a_T, 0)]$.

Proof. Due to (9), we have
$$\mathbb{E}\big[\rho_T(\mathbf{m} \vee \mathbf{e}(b))\big] - \mathbb{E}\big[\rho_T(\mathbf{m} \vee \mathbf{e}(a))\big] = P(\mathbf{m} \vee \mathbf{e}(b)) - P(\mathbf{m}) - \big(P(\mathbf{m} \vee \mathbf{e}(a)) - P(\mathbf{m})\big)$$
$$= \mathbb{E}\big[\max(s^b_T - \Lambda_{\mathbf{m}}, 0)\big] - \mathbb{E}\big[\max(s^a_T - \Lambda_{\mathbf{m}}, 0)\big] = \mathbb{E}\big[\max(s^b_T - \Lambda_{\mathbf{m}}, 0) - \max(s^a_T - \Lambda_{\mathbf{m}}, 0)\big] \quad (10)$$
$$= \mathbb{E}\big[\max(s^b_T - s^a_T,\ \Lambda_{\mathbf{m}} - s^a_T) - \max(0,\ \Lambda_{\mathbf{m}} - s^a_T)\big] \quad (11)$$
$$\le \mathbb{E}\big[\max(s^b_T - s^a_T,\ 0)\big] \quad (12)$$
The equality (11) is achieved by adding $\Lambda_{\mathbf{m}} - s^a_T$ inside each term of the two max functions in (10). The inequality (12) can be proved by considering the following two cases:

If $\Lambda_{\mathbf{m}} - s^a_T \ge 0$, then
$$\max(s^b_T - s^a_T, \Lambda_{\mathbf{m}} - s^a_T) - \max(0, \Lambda_{\mathbf{m}} - s^a_T) = \max(s^b_T - s^a_T, \Lambda_{\mathbf{m}} - s^a_T) - (\Lambda_{\mathbf{m}} - s^a_T) = \max\big(s^b_T - s^a_T - (\Lambda_{\mathbf{m}} - s^a_T),\ 0\big) \le \max(s^b_T - s^a_T, 0)\,.$$
If $\Lambda_{\mathbf{m}} - s^a_T < 0$, then
$$\max(s^b_T - s^a_T, \Lambda_{\mathbf{m}} - s^a_T) - \max(0, \Lambda_{\mathbf{m}} - s^a_T) = \max(s^b_T - s^a_T, \Lambda_{\mathbf{m}} - s^a_T) \le \max(s^b_T - s^a_T, 0)\,.$$

Next we utilize a well-known bound on the maximum of two Gaussian random variables (Nadarajah & Kotz, 2008), which we restate:

Lemma 4. Let $s^a, s^b$ be Gaussian random variables with means $\mu^a, \mu^b$ and standard deviations $\sigma^a, \sigma^b$. Then $\mathbb{E}[\max(s^a, s^b)] \le \mu^a\Phi\big(\frac{\mu^b - \mu^a}{\theta}\big) + \mu^b\Phi\big(\frac{\mu^b - \mu^a}{\theta}\big) + \theta\,\phi\big(\frac{\mu^b - \mu^a}{\theta}\big)$, where $\theta \triangleq \sqrt{[\sigma^b]^2 + [\sigma^a]^2 - 2\,\mathrm{cov}(s^b, s^a)}$ and $\Phi, \phi$ are the standard normal CDF and PDF, respectively.

Then,
$$\mathbb{E}_{p(\mathbf{s}_T|\tilde{\mathbf{s}}_{1:t})}[\max(s^b_T - s^a_T, 0)] = \mathbb{E}[\max(s^b_T, s^a_T)] - \mathbb{E}[s^a_T]$$
$$\le (\mu^b_{T|1:t} + \mu^a_{T|1:t})\,\Phi\Big(\frac{\mu^b_{T|1:t} - \mu^a_{T|1:t}}{\theta}\Big) + \theta\,\phi\Big(\frac{\mu^b_{T|1:t} - \mu^a_{T|1:t}}{\theta}\Big) - \mu^a_{T|1:t}$$
$$= \mu^b_{T|1:t}\,\Phi\Big(\frac{\nu}{\theta}\Big) + \theta\,\phi\Big(\frac{\nu}{\theta}\Big) + \mu^a_{T|1:t}\Big(\Phi\Big(\frac{\nu}{\theta}\Big) - 1\Big) \le \mu^b_{T|1:t}\,\Phi\Big(\frac{\nu}{\theta}\Big) + \theta\,\phi\Big(\frac{\nu}{\theta}\Big)$$
The first inequality follows from Lemma 4. The second inequality is due to $\Phi(\nu/\theta) \le 1$ and $\mu^a_{T|1:t} \ge 0$." }, { "heading": "D DYNAMIC PENALTY SCALING AS A FEEDBACK LOOP", "text": "We designed a feedback loop to automatically determine $\lambda_t$ during early pruning. A proportional feedback loop can be defined as follows19:
$$\lambda_t \triangleq \lambda + K_p \times e(t) \quad (13)$$
where $K_p \ge 0$ is a proportional constant which modulates $\lambda_t$ according to a signed measure of error $e(\cdot)$ at time $t$. Note that $\lambda_t \ge \lambda$ if $e(t) \ge 0$, and the opposite holds if $e(t) \le 0$, which allows the error to serve as feedback to determine $\lambda_t$. Implicitly, $\lambda_t$ asserts some control over $e(t+1)$, thus closing the feedback loop.

Traditional PID approaches to determining $K_p$ do not work in our case, as $\lambda$ may vary over several orders of magnitude. Consequently, a natural choice for $K_p$ is $\lambda$ itself, which preserves the same order of magnitude between $K_p$ and $\lambda$:
$$\lambda_t = \lambda + \lambda \times e(t) = \lambda(1 + e(t)). \quad (14)$$
Here we make two decisions to adapt the above to our task. First, as $\lambda$ is likely to be extremely small, we use exponentiation as opposed to multiplication. Second, as $\lambda \le 1$ in practice, we use $1 - e(t)$ as the exponent:
$$\lambda_t = \lambda^{1 - e(t)} = \lambda\,(1/\lambda)^{e(t)}. \quad (15)$$
The above derivation is completed by our definition of $e(t)$:
$$e(t) \triangleq (T - t)\,\|\mathbf{m}_t\|_0 / B_{t,c} - 1. \quad (16)$$
This measures error as the discrepancy between the anticipated compute required to complete training, $(T - t)\|\mathbf{m}_t\|_0$, and the remaining budget $B_{t,c}$, with $e(t) = 0$ if the two are equal. This is a natural measure of feedback for $\lambda$, as we expect the two to be equal if $\lambda$ is serving well to early prune the network.

19This approach is inspired by Proportional-Integral-Derivative (PID) controllers (Bellman, 2015); see Åström et al. (1993) for an introductory survey.
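The penalty schedule of Eqs. (13)-(16) reduces to a few lines of code; the sketch below is a direct transcription, with function and argument names that are our own:

```python
def dynamic_penalty(lmbda, t, T, mask_size, remaining_budget):
    # e(t) = (T - t) * ||m_t||_0 / B_{t,c} - 1, Eq. (16)
    e = (T - t) * mask_size / remaining_budget - 1.0
    # lambda_t = lambda ** (1 - e(t)), Eq. (15)
    return lmbda ** (1.0 - e)
```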
" }, { "heading": "E COMPARISON WITH PRUNETRAIN", "text": "We compare with PruneTrain (Lym et al., 2019) in Table 5. PruneTrain uses an orthogonal technique of dynamically increasing the minibatch size to achieve further wall-time improvements, which prevents accurate wall-time comparisons between BEP and PruneTrain. To compare with PruneTrain, which does not constrain the trained network size (3b), we train ResNet-50 under equivalent inference cost as a network trained by PruneTrain. To compute the train/inference cost (FLOPs) of a convolutional layer, we used the formula defined in Molchanov et al. (2017), A.1:
$$\text{FLOPs} \triangleq 2HW(C_{in}K^2 + 1)\,C_{out} \quad (17)$$
where $H$, $W$, $C_{in}$ are the input height, width, and channels respectively, $K$ is the convolutional kernel size, and $C_{out}$ is the number of output channels of the layer.

Under equivalent inference cost, BEP 1e−1 outperforms PruneTrain in Top-1 performance. We also find that BEP 1e−1 and BEP 1e−4 consume fewer training FLOPs when compared to the baseline. It should be noted that PruneTrain does not provide a mechanism to constrain the trained network size, so it is unclear how to utilize it to solve the early pruning problem (3), (4).
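Eq. (17) translates directly into code; a one-function sketch:

```python
def conv_flops(h: int, w: int, c_in: int, c_out: int, k: int) -> int:
    # FLOPs = 2 * H * W * (C_in * K^2 + 1) * C_out, Eq. (17);
    # the +1 accounts for the bias term of each output channel.
    return 2 * h * w * (c_in * k * k + 1) * c_out
```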
" }, { "heading": "F TABLE OF NOTATIONS", "text": "We list a table of notations used elsewhere in the paper in Table 6." }, { "heading": "G MORE EXPERIMENTAL RESULTS AND EXPERIMENTAL DETAILS", "text": "G.1 GP VS. MOGP LOG-LIKELIHOOD ON CIFAR-10 DATASET

Table 7 presents the results of the experiment in Section 4.1 for the CIFAR-10 dataset.
[Flattened Table 7 residue; the original column headers were lost in extraction. Entries are mean (s.d.) values over three settings:]
GP: 1.19(0.5) 1.08(0.06) 1.07(1.07)e5 | 0.96(0.04) 0.93(0.03) 2.47(0.04) | 0.49(0.01) 0.48(0.01) 1.33(0.02)
4-MOGP: 1.15(0.05) 0.89(0.06) 2.44(0.05) | 0.91(0.02) 0.80(0.03) 2.20(0.03) | 0.38(0.02) 0.39(0.02) 1.25(0.02)
8-MOGP: 1.09(0.04) 0.86(0.05) 2.38(0.04) | 0.84(0.03) 0.78(0.03) 2.16(0.03) | 0.32(0.01) 0.35(0.02) 1.20(0.02)
18-MOGP: 0.97(0.04) 0.80(0.05) 2.33(0.04) | 0.89(0.03) 0.76(0.03) 2.13(0.03) | 0.31(0.01) 0.35(0.02) 1.20(0.02)
32-MOGP: 0.96(0.06) 0.81(0.06) 2.32(0.04) | 0.79(0.03) 0.74(0.03) 2.13(0.03) | 0.31(0.01) 0.34(0.02) 1.20(0.02)

G.2 EXPERIMENTAL DETAILS

To train our CIFAR-10 and CIFAR-100 models we used the Adam optimizer (Kingma & Ba, 2015) with an initial learning rate of 0.001, an exponential learning rate decay of k = 0.985, and a batch size of 32. Training was paused three times, evenly spaced, per epoch. During each pause, we collected saliency measurements using 40% of the training dataset. This instrumentation subset was randomly selected from the training dataset at initialization and remained constant throughout the training procedure. We preprocessed the saliency evaluations into a standardized [0, 10] range.20 We used (8) to measure the saliency of neurons/convolutional filters. For the convolutional layers we used 12 latent MOGP functions; for the dense layer we used 4 latent MOGP functions.

For our ResNet-50 model we used an SGD with Momentum optimizer with an initial learning rate of 0.1. The learning rate was divided by ten at t = [30, 60, 80] epochs. We collected saliency data every 5 iterations of SGD and averaged them into buckets corresponding to 625 iterations of SGD to form our dataset. We used a minimum of 4 latent functions per MOGP; however, this was dynamically increased, up to a maximum of 15, if the model could not fit the data.

We sampled 10K points from our MOGP model to estimate ∆(·) for CIFAR-10/CIFAR-100. For ResNet we sampled 15K points. We repeated experiments 5 times for reporting accuracy on CIFAR-10/CIFAR-100.

G.3 PRUNING ON RESNET

The ResNet architecture is composed of a sequence of residual units: $\mathcal{Z}_\ell \triangleq \mathcal{F}(P_{\ell-1}) + P_{\ell-1}$, where $P_{\ell-1}$ is the output of the previous residual unit $\mathcal{Z}_{\ell-1}$ and '+' denotes elementwise addition. Internally, $\mathcal{F}$ is typically implemented as three stacked convolutional layers: $\mathcal{F}(P_{\ell-1}) \triangleq [z_{\ell_3} \circ z_{\ell_2} \circ z_{\ell_1}](P_{\ell-1})$, where $z_{\ell_1}, z_{\ell_2}, z_{\ell_3}$ are convolutional layers. Within this setting we consider convolutional filter pruning. Although $z_{\ell_1}$ and $z_{\ell_2}$ may be pruned using the procedure described earlier, pruning $z_{\ell_3}$ requires a different procedure. Due to the direct addition of $P_{\ell-1}$ to $\mathcal{F}(P_{\ell-1})$, the output dimensions of $\mathcal{Z}_{\ell-1}$ and $z_{\ell_3}$ must match exactly. Thus a ResNet architecture consists of sequences of residual units of length $B$ with matching input/output dimensions: $\zeta \triangleq [\mathcal{Z}_\ell]_{\ell=1,\dots,B}$ s.t. $\dim(P_1) = \dim(P_2) = \dots = \dim(P_B)$. We propose group pruning of the layers $[z_{\ell_3}]_{\ell=1,\dots,B}$, where filters are removed from all $z_{\ell_3}$ in a residual unit sequence in tandem. We define $s([\zeta, c]) \triangleq \sum_{\ell=1}^{B} s([\ell_3, c])$, where $s(\cdot)$ is defined for convolutional layers as in (8). To prune channel $c$ from $\zeta$, we prune it from each layer in $[z_{\ell_3}]_{\ell=1,\dots,B}$. Typically we pruned sequence channels less aggressively than convolutional filters, as these channels feed into several convolutional layers.

20Generally, saliency evaluations are relatively small (≤ 0.01), which leads to poorly fitting models or positive log-likelihood. Precise details of our data preprocessing are given in Appendix G.4.

G.4 DATA PREPROCESSING

We followed the same data preprocessing procedure for both our small-scale and ImageNet experiments. To standardize the saliency measurements of a training dataset $\tilde{\mathbf{s}}_{1:t}$ in our modeling experiments, we clip them between 0 and an upper bound computed as $ub \triangleq \mathrm{percentile}(\tilde{\mathbf{s}}_{1:t}, 95) \times 1.3$. This procedure removes outliers. We used 1.3 as a multiplier because this upper bound is also used to transform the test dataset, which may have higher saliency evaluations.

After clipping the training data, we perform a trend check for each element $v^a$ by fitting a linear regression model to the data $\tilde{s}^a_{1:t}$. For $\tilde{s}^a_{1:t}$ with an increasing trend (i.e., the linear regression model has positive slope) we perform the transformation $\tilde{s}^a_{1:t} = ub - \tilde{s}^a_{1:t}$. The reasoning behind this is that the exponential kernel strongly prefers decaying curves. After this preprocessing, we scale up the saliency measurements to a [0, 10] range: $\tilde{\mathbf{s}}_{1:t} = \tilde{\mathbf{s}}_{1:t} \times 10$. We found that without scaling to larger values, the log-likelihood of our models took extremely high positive values due to the small values of the unscaled saliency measurements.

We transform the test data in our modeling experiments $\tilde{\mathbf{s}}_{t+1:T}$ with the same procedure, using the same $ub$ and per-element $v^a$ regression models as computed on the training data. We measure log-likelihood after this transformation for the test dataset in our small-scale experiments.

During the BEP algorithm, the same steps are followed; however, we invert the trend-check transformation ($\tilde{s}^a_{1:t} = ub - \tilde{s}^a_{1:t}$) on the predicted MOGP distribution of $\mathbf{s}_T$ prior to sampling for the estimation of ∆(·)." } ]
2020
null
SP:eb16e608d4bb9be2c7f2e358a5166c6c202272cc
[ "This paper proposes methods to induce diversity in the networks of ensemble-based Q-learning methods. This is achieved by maximizing a variety of measures of inequality based on the L2 parameter norms of individual networks in an ensemble. This is motivated by the benefit of having diversity in the learned features, which itself is motivated by observations on the CKA of some EnsembleDQN networks.", "Q-learning is known to have overestimation bias. Approaches like EnsembleDQN and MaxminDQN try to use different estimates from ensembles of learners to reduce the bias. The authors study a specific observation and try to tackle it with a regularization technique that maximizes the diversity of the representation space. Five different regularization functions are evaluated in the paper, and experiments show that the proposed regularization helps diversity and outperforms MaxminDQN and EnsembleDQN. Note that the reviewer is not very familiar with methods for introducing diversity in representations, but based on an educated guess, the proposed method looks interesting." ]
The first deep RL algorithm, DQN, was limited by the overestimation bias of the learned Q-function. Subsequent algorithms proposed techniques to reduce this problem, without fully eliminating it. Recently, the Maxmin and Ensemble Q-learning algorithms used the different estimates provided by ensembles of learners to reduce the bias. Unfortunately, these learners can converge to the same point in the parametric or representation space, falling back to the classic single neural network DQN. In this paper, we describe a regularization technique to maximize diversity in the representation space in these algorithms. We propose and compare five regularization functions inspired by economics theory and consensus optimization. We show that the resulting approach significantly outperforms the Maxmin and Ensemble Q-learning algorithms as well as non-ensemble baselines.
[]
[ { "authors": [ "Rishabh Agarwal", "Dale Schuurmans", "Mohammad Norouzi" ], "title": "Striving for simplicity in off-policy deep reinforcement learning", "venue": "arXiv preprint arXiv:1907.04543,", "year": 2019 }, { "authors": [ "Paul D. Allison" ], "title": "Measures of inequality", "venue": "American Sociological Review,", "year": 1978 }, { "authors": [ "Oron Anschel", "Nir Baram", "Nahum Shimkin" ], "title": "Averaged-DQN: Variance reduction and stabilization for deep reinforcement learning", "venue": "In Proceedings of the International Conference on Machine Learning (ICML-2017),", "year": 2017 }, { "authors": [ "Anthony B Atkinson" ], "title": "On the measurement of inequality", "venue": "Journal of economic theory,", "year": 1970 }, { "authors": [ "Stephen Boyd", "Neal Parikh", "Eric Chu", "Borja Peleato", "Jonathan Eckstein" ], "title": "Distributed optimization and statistical learning via the alternating direction method of multipliers", "venue": "Foundations and Trends in Machine Learning,", "year": 2011 }, { "authors": [ "Richard Y Chen", "Szymon Sidor", "Pieter Abbeel", "John Schulman" ], "title": "Ucb exploration via q-ensembles", "venue": "arXiv preprint arXiv:1706.01502,", "year": 2017 }, { "authors": [ "Richard Cheng", "Abhinav Verma", "Gabor Orosz", "Swarat Chaudhuri", "Yisong Yue", "Joel Burdick" ], "title": "Control regularization for reduced variance reinforcement learning", "venue": null, "year": 1905 }, { "authors": [ "Kurtland Chua", "Roberto Calandra", "Rowan McAllister", "Sergey Levine" ], "title": "Deep reinforcement learning in a handful of trials using probabilistic dynamics models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Corinna Cortes", "Mehryar Mohri", "Afshin Rostamizadeh" ], "title": "Algorithms for learning kernels based on centered alignment", "venue": "Journal of Machine Learning Research,", "year": 2012 }, { "authors": [ "Nello Cristianini", "John Shawe-Taylor", "Andre Elisseeff", "Jaz S Kandola" ], "title": "On kernel-target alignment", "venue": "In Proceedings of the Advances in Neural Information Processing Systems (NIPS-2002),", "year": 2002 }, { "authors": [ "Carlo D’Eramo", "Alessandro Nuara", "Matteo Pirotta", "Marcello Restelli" ], "title": "Estimating the maximum expected value in continuous reinforcement learning problems", "venue": "In Proceedings of the Conference on Artificial Intelligence", "year": 2017 }, { "authors": [ "Jesse Farebrother", "Marlos C. 
Machado", "Michael Bowling" ], "title": "Generalization and regularization in DQN, 2018", "venue": null, "year": 2018 }, { "authors": [ "Scott Fujimoto", "Herke Van Hoof", "David Meger" ], "title": "Addressing function approximation error in actor-critic methods", "venue": "arXiv preprint arXiv:1802.09477,", "year": 2018 }, { "authors": [ "Alexandre Galashov", "Siddhant Jayakumar", "Leonard Hasenclever", "Dhruva Tirumala", "Jonathan Schwarz", "Guillaume Desjardins", "Wojciech Czarnecki", "Yee Whye Teh", "Razvan Pascanu", "Nicolas Heess" ], "title": "Information asymmetry in KL-regularized RL", "venue": "In Proceedings of the International Conference on Learning Representation", "year": 2019 }, { "authors": [ "Jordi Grau-Moya", "Felix Leibfried", "Peter Vrancx" ], "title": "Soft Q-learning with mutual-information regularization", "venue": "In Proceedings of the International Conference on Learning Representations (ICLR-2019),", "year": 2019 }, { "authors": [ "Arthur Gretton", "Olivier Bousquet", "Alex Smola", "Bernhard Schölkopf" ], "title": "Measuring statistical dependence with hilbert-schmidt norms", "venue": "In Algorithmic Learning Theory (ALT),", "year": 2005 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Pieter Abbeel", "Sergey Levine" ], "title": "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "arXiv preprint arXiv:1801.01290,", "year": 2018 }, { "authors": [ "Hado Hado van Hasselt", "Arthur Guez", "David Silver" ], "title": "Deep Reinforcement Learning with Double Q-learning", "venue": "In Proceeding of Conference on Artificial Intelligence", "year": 2016 }, { "authors": [ "Simon Kornblith", "Mohammad Norouzi", "Honglak Lee", "Geoffrey Hinton" ], "title": "Similarity of neural network representations revisited", "venue": "In Proceedings of the International Conference on Machine Learning", "year": 2019 }, { "authors": [ "Aviral Kumar", "Justin Fu", "Matthew Soh", "George Tucker", "Sergey Levine" ], "title": "Stabilizing off-policy Q-learning via bootstrapping error reduction", "venue": "In Proceedings of the Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Thanard Kurutach", "Ignasi Clavera", "Yan Duan", "Aviv Tamar", "Pieter Abbeel" ], "title": "Model-ensemble trust-region policy optimization", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Qingfeng Lan", "Yangchen Pan", "Alona Fyshe", "Martha White" ], "title": "Maxmin Q-learning: Controlling the estimation bias of Q-learning", "venue": "In Proceeding of the International Conference on Learning Representations (ICLR-2020),", "year": 2020 }, { "authors": [ "Donghun Lee", "Boris Defourny", "Warren B. 
Powell" ], "title": "Bias-corrected Q-learning to Control Maxoperator Bias in Q-learning", "venue": "In IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning,", "year": 2013 }, { "authors": [ "Kimin Lee", "Laskin Michael", "Aravind Srinivas", "Pieter Abbeel" ], "title": "Sunrise: A simple unified framework for ensemble learning in deep reinforcement learning", "venue": "arXiv preprint arXiv:2007.04938,", "year": 2020 }, { "authors": [ "Yixuan Li", "Jason Yosinski", "Jeff Clune", "Hod Lipson", "John Hopcroft" ], "title": "Convergent learning: Do different neural networks learn the same representations", "venue": "In Proceedings of the International Workshop on Feature Extraction: Modern Questions and Challenges at NIPS 2015,", "year": 2015 }, { "authors": [ "Zhuang Liu", "Xuanlin Li", "Bingyi Kang", "Trevor Darrell" ], "title": "Regularization matters in policy optimization", "venue": "arXiv preprint arXiv:1910.09191,", "year": 2020 }, { "authors": [ "P. Lv", "X. Wang", "Y. Cheng", "Z. Duan" ], "title": "Stochastic double deep q-network", "venue": "IEEE Access,", "year": 2019 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A. Rusu", "Joel Veness", "Marc G. Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K. Fidjeland", "Georg Ostrovski", "Stig Petersen", "Charles Beattie", "Amir Sadik", "Ioannis Antonoglou", "Helen King", "Dharshan Kumaran", "Daan Wierstra", "Shane Legg", "Demis Hassabis" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "Youssef Mroueh", "Etienne Marcheret", "Vaibhava Goel" ], "title": "Multimodal retrieval with asymmetrically weighted truncated-svd canonical correlation analysis", "venue": "CoRR, abs/1511.06267,", "year": 2015 }, { "authors": [ "Efe A. Ok", "James Foster" ], "title": "Lorenz Dominance and the Variance of Logarithms", "venue": "Working Papers 97-22,", "year": 1997 }, { "authors": [ "Ian Osband", "Charles Blundell", "Alexander Pritzel", "Benjamin Van Roy" ], "title": "Deep exploration via bootstrapped dqn", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Anay Pattanaik", "Zhenyi Tang", "Shuijing Liu", "Gautham Bommannan", "Girish Chowdhary" ], "title": "Robust deep reinforcement learning with adversarial attacks", "venue": "In Proceedings of the International Conference on Autonomous Agents and MultiAgent Systems (AAMAS-2018),", "year": 2018 }, { "authors": [ "Lan Qingfeng" ], "title": "Gym compatible games for reinforcenment learning", "venue": "https://github.com/ qlan3/gym-games,", "year": 2019 }, { "authors": [ "Maithra Raghu", "Justin Gilmer", "Jason Yosinski", "Jascha Sohl-Dickstein" ], "title": "Svcca: Singular vector canonical correlation analysis for deep learning dynamics and interpretability", "venue": "In Proceedings of the Advances in Neural Information Processing Systems (NIPS-2017),", "year": 2017 }, { "authors": [ "John Schulman", "Philipp Moritz", "Sergey Levine", "Michael Jordan", "Pieter Abbeel" ], "title": "High-dimensional continuous control using generalized advantage estimation", "venue": "In Proceedings of the International Conference on Learning Representations", "year": 2016 }, { "authors": [ "James E Smith", "Robert L Winkler" ], "title": "The optimizer’s curse: Skepticism and postdecision surprise in decision analysis", "venue": "Management Science,", "year": 2006 }, { "authors": [ "Alexander L. Strehl", "Lihong Li", "Michael L. 
Littman" ], "title": "Reinforcement Learning in Finite MDPs: PAC Analysis", "venue": "Journal of Machine Learning Research (JMLR),", "year": 2009 }, { "authors": [ "István Szita", "András Lőrincz" ], "title": "The Many Faces of Optimism: A Unifying Approach", "venue": "In Proceedings of International Conference on Machine Learning", "year": 2008 }, { "authors": [ "Richard Thaler" ], "title": "The winner’s curse: Paradoxes and anomalies of economic life", "venue": null, "year": 2012 }, { "authors": [ "Sebastian Thrun", "A. Schwartz" ], "title": "Issues in using function approximation for reinforcement learning", "venue": "In Proceedings of the Connectionist Models Summer School (CMSS-1993),", "year": 1993 }, { "authors": [ "Josh Tobin", "Rachel Fong", "Alex Ray", "Jonas Schneider", "Wojciech Zaremba", "Pieter Abbeel" ], "title": "Domain randomization for transferring deep neural networks from simulation to the real world", "venue": "In Proceedings of the International Conference on Intelligent Robots and Systems (IROS-2017),", "year": 2017 }, { "authors": [ "Laurens van der Maaten", "Geoffrey Hinton" ], "title": "Visualizing data using t-SNE", "venue": "Journal of Machine Learning Research,", "year": 2008 }, { "authors": [ "Hado van Hasselt" ], "title": "Double Q-learning", "venue": "In Proceedings of the Advances in Neural Information Processing Systems", "year": 2010 }, { "authors": [ "Liwei Wang", "Lunjia Hu", "Jiayuan Gu", "Zhiqiang Hu", "Yue Wu", "Kun He", "John Hopcroft" ], "title": "Towards understanding learning representations: To what extent do different neural networks learn the same representation", "venue": "In Proceedings of the Advances in Neural Information Processing Systems (NIPS-2018),", "year": 2018 }, { "authors": [ "Chris Watkins" ], "title": "Learning from Delayed Rewards", "venue": "PhD thesis, King’s College,", "year": 1989 }, { "authors": [ "Kenny Young", "Tian Tian" ], "title": "Minatar: An atari-inspired testbed for thorough and reproducible reinforcement learning experiments", "venue": "arXiv preprint arXiv:1903.03176,", "year": 2019 }, { "authors": [ "Chiyuan Zhang", "Oriol Vinyals", "Remi Munos", "Samy Bengio" ], "title": "A study on overfitting in deep reinforcement learning", "venue": "arXiv preprint arXiv:1804.06893,", "year": 2018 }, { "authors": [ "Zongzhang Zhang", "Zhiyuan Pan", "Mykel J Kochenderfer" ], "title": "Weighted double Q-learning", "venue": "In Proceedings of the International Joint Conferences on Artificial Intelligence (IJCAI-2017),", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Q-learning (Watkins, 1989) and its deep learning based successors, inaugurated by DQN (Mnih et al., 2015), are model-free, value function based reinforcement learning algorithms. Their popularity stems from their intuitive, easy-to-implement update rule derived from the Bellman equation. At each time step, the agent updates its Q-value towards the expectation of the current reward plus the value corresponding to the maximal action in the next state. This state-action value represents the maximum sum of rewards the agent believes it could obtain from the current state by taking the current action. Unfortunately, (Thrun & Schwartz, 1993; van Hasselt, 2010) have shown that this simple rule suffers from overestimation bias: due to the maximization operator in the update rule, positive and negative errors do not cancel each other out, but positive errors accumulate. The overestimation bias is particularly problematic under function approximation and has contributed towards learning sub-optimal policies (Thrun & Schwartz, 1993; Szita & Lőrincz, 2008; Strehl et al., 2009).

A possible solution is to introduce underestimation bias in the estimation of the Q-value. Double Q-learning (van Hasselt, 2010) maintains two independent state-action value estimators (Q-functions). The state-action value of one estimator is calculated by adding the observed reward and the maximal state-action value from the other estimator. Double DQN (Hado van Hasselt et al., 2016) applied this idea using neural networks, and was shown to provide better performance than DQN. More recent actor-critic type deep RL algorithms such as TD3 (Fujimoto et al., 2018) and SAC (Haarnoja et al., 2018) also use two Q-function estimators (in combination with other techniques).

Other approaches such as EnsembleDQN (Anschel et al., 2017) and MaxminDQN (Lan et al., 2020) maintain ensembles of Q-functions to estimate an unbiased Q-function. EnsembleDQN estimates the state-action values by adding the current observed reward and the maximal state-action value from the average of the Q-functions in the ensemble. MaxminDQN creates a proxy Q-function by selecting the minimum Q-value for each action from all the Q-functions and using the maximal state-action value from the proxy Q-function to estimate an unbiased Q-function. Both EnsembleDQN and MaxminDQN have been shown to perform better than Double DQN. The primary insight of this paper is that the performance of ensemble based methods is contingent on maintaining sufficient diversity in the representation space between the Q-functions in the ensembles. If the Q-functions in the ensembles converge to a common representation (we will show that this is the case in many scenarios), the performance of these approaches significantly degrades.

In this paper we propose to use cross-learner regularizers to prevent the collapse of the representation space in ensemble-based Q-learning methods. Intuitively, these regularizers capture an inductive bias towards more diverse representations. We have investigated five different regularizers. The mathematical formulation of four of the regularizers corresponds to inequality measures borrowed from economics theory. While in economics, high inequality is seen as a negative, in this case we use the metrics to encourage inequality between the representations.
The fifth regularizer is inspired by consensus optimization.

There is a separate line of reinforcement learning literature where ensembles are used to address several different issues (Chen et al., 2017; Chua et al., 2018; Kurutach et al., 2018; Lee et al., 2020; Osband et al., 2016), such as exploration and error propagation, but we limit our solution to algorithms addressing the overestimation bias problem only.

To summarize, our contributions are the following:

1. We show that high representation similarity between neural network based Q-functions leads to a decline in performance in ensemble based Q-learning methods.

2. To mitigate this, we propose five regularizers, based on inequality measures from economics theory and consensus optimization, that maximize representation diversity between Q-functions in ensemble based Q-learning methods.

3. We show that applying the proposed regularizers to the MaxminDQN and EnsembleDQN methods can lead to significant improvement in performance over a variety of benchmarks." }, { "heading": "2 BACKGROUND", "text": "Reinforcement learning considers an agent interacting with an environment modeled as a Markov Decision Process (MDP), defined as a five-element tuple $(S, A, P, r, \gamma)$, where $S$ is the state space, $A$ is the action space, $P: S \times A \times S \to [0,1]$ are the state-action transition probabilities, $r: S \times A \times S \to \mathbb{R}$ is the reward mapping, and $\gamma \in [0,1]$ is the discount factor. At each time step $t$ the agent observes the state of the environment $s_t \in S$ and selects an action $a_t \in A$. The action triggers a transition to a new state $s_{t+1} \in S$ according to the transition probabilities $P$, while the agent receives a scalar reward $R_t = r(s_t, a_t, s_{t+1})$. The goal of the agent is to learn a policy $\pi$ that maximizes the expectation of the discounted sum of future rewards.

One way to implicitly learn the policy $\pi$ is the Q-learning algorithm, which estimates the expected sum of rewards of state $s_t$ if we take action $a_t$ by solving the Bellman equation
$$Q^*(s_t, a_t) = \mathbb{E}\Big[R_t + \gamma \max_{a' \in A} Q^*(s_{t+1}, a')\Big]$$
The implicit policy $\pi$ can be extracted by acting greedily with respect to the optimal Q-function: $\arg\max_{a \in A} Q^*(s, a)$. One possible way to estimate the optimal Q-value is by iteratively updating it for sampled states $s_t$ and actions $a_t$ using
$$Q^*(s_t, a_t) \leftarrow Q^*(s_t, a_t) + \alpha\,\big(Y_t - Q^*(s_t, a_t)\big), \quad \text{where } Y_t = R_t + \gamma \max_{a' \in A} Q^*(s_{t+1}, a')$$
where $\alpha$ is the step size and $Y_t$ is called the target value.
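As an illustration of the tabular update rule above, here is a minimal NumPy sketch; the function and variable names are our own:

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    # Q: (num_states, num_actions) table of state-action values.
    target = r + gamma * Q[s_next].max()      # Y_t = R_t + gamma * max_a' Q(s_{t+1}, a')
    Q[s, a] += alpha * (target - Q[s, a])     # move Q(s_t, a_t) toward the target
```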
While this algorithm was initially studied in the context of a tabular representation of Q for discrete states and actions, in many practical applications the Q-value is approximated by a learned function. Since the emergence of deep learning, the preferred approximation technique is based on a deep neural network. DQN (Mnih et al., 2015) demonstrated super-human performance in Atari games, but required a very large number of training iterations. From this baseline, subsequent algorithms improved both the learning speed and the achievable performance, with one of the main means for this being techniques to reduce the overestimation bias of the Q-function.

EnsembleDQN (Anschel et al., 2017) uses an ensemble of $N$ neural networks to estimate state-action values and uses their average to reduce both overestimation bias and estimation variance. Formally, the target value for EnsembleDQN is calculated using
$$Q^E(\cdot) = \frac{1}{N}\sum_{i=1}^{N} Q^i(\cdot), \qquad Y^E_t = R_t + \gamma \max_{a' \in A} Q^E(s_{t+1}, a') \quad (1)$$
More recently, MaxminDQN (Lan et al., 2020) addresses the overestimation bias using order statistics, with the ensemble size $N$ serving as a hyperparameter to tune between underestimation and overestimation bias. The target value for MaxminDQN is calculated using
$$Q^M(\cdot,\cdot) = \min_{i=1,\dots,N} Q^i(\cdot,\cdot), \qquad Y^M_t = R_t + \gamma \max_{a' \in A} Q^M(s_{t+1}, a') \quad (2)$$
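A short PyTorch sketch of the two targets in Eqs. (1)-(2); we assume `q_values` is a list of per-network Q-value tensors of shape (batch, num_actions) evaluated at the next states, and all names are our own:

```python
import torch

def ensemble_target(reward, q_values, gamma=0.99):
    q_ens = torch.stack(q_values).mean(dim=0)          # Q^E: average over the ensemble
    return reward + gamma * q_ens.max(dim=1).values    # Y^E, Eq. (1)

def maxmin_target(reward, q_values, gamma=0.99):
    q_min = torch.stack(q_values).min(dim=0).values    # Q^M: elementwise min over networks
    return reward + gamma * q_min.max(dim=1).values    # Y^M, Eq. (2)
```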
" }, { "heading": "3 RELATED WORK", "text": "Techniques to Address Overestimation Bias in RL: Addressing overestimation bias is a long-standing research topic not only in reinforcement learning but also in other fields of science such as economics and statistics. It is commonly known as max-operator bias in statistics (D'Eramo et al., 2017) and as the winner's curse in economics (Thaler, 2012; Smith & Winkler, 2006). To address this, (van Hasselt, 2010) proposed Double Q-learning, subsequently adapted to neural network based function approximators as Double DQN (Hado van Hasselt et al., 2016). Alternatively, (Zhang et al., 2017; Lv et al., 2019) proposed weighted estimators of Double Q-learning, and (Lee et al., 2013) introduced a bias correction term. Other approaches to address the overestimation are based on averaging and ensembling. Techniques include averaging Q-values from the previous N versions of the Q-network (Anschel et al., 2017), taking linear combinations of min and max over the pool of Q-values (Kumar et al., 2019), or using a random mixture from the pool (Agarwal et al., 2019).

Regularization in Reinforcement Learning: Regularization in reinforcement learning has been used to perform effective exploration and to learn generalized policies. For instance, (Grau-Moya et al., 2019) uses mutual-information regularization to optimize a prior action distribution for better performance and exploration, (Cheng et al., 2019) regularizes the policy π(a|s) using a control prior, and (Galashov et al., 2019) uses temporal difference error regularization to reduce variance in Generalized Advantage Estimation (Schulman et al., 2016). Generalization in reinforcement learning refers to the performance of the policy on environments different from the training environment. For example, (Farebrother et al., 2018) studied the effect of the L2 norm on DQN generalization, (Tobin et al., 2017) studied generalization between simulations and the real world, (Pattanaik et al., 2018) studied parameter variations, and (Zhang et al., 2018) studied the effect of different random seeds in environment generation.

Representation Similarity: Measuring similarity between the representations learned by different neural networks is an active area of research. For instance, (Raghu et al., 2017) used Canonical Correlation Analysis (CCA) to measure representation similarity. CCA finds two basis matrices such that, when the original matrices are projected onto these bases, the correlation is maximized. (Raghu et al., 2017; Mroueh et al., 2015) used truncated singular value decomposition on the activations to make the measure robust to perturbations. Other work such as (Li et al., 2015) and (Wang et al., 2018) studied the correlation between the neurons of the neural networks." }, { "heading": "4 MAXIMIZING REPRESENTATION DIVERSITY IN ENSEMBLE-BASED DEEP Q-LEARNING", "text": "The work described in this paper is based on the conjecture that while ensemble-based deep Q-learning approaches aim to reduce the overestimation bias, this only works to the degree that the neural networks in the ensemble use diverse representations. If, during training, these networks collapse to closely related representations, the learning performance decreases. From this idea, we propose to use regularization techniques to maximize representation diversity between the networks of the ensemble." }, { "heading": "4.1 REPRESENTATION SIMILARITY MEASURE", "text": "Let $X \in \mathbb{R}^{n\times p_1}$ denote a matrix of activations of $p_1$ neurons for $n$ examples and $Y \in \mathbb{R}^{n\times p_2}$ denote a matrix of activations of $p_2$ neurons for the same $n$ examples. Furthermore, we consider $K_{ij} = k(x_i, x_j)$ and $L_{ij} = l(y_i, y_j)$, where $k$ and $l$ are two kernels.

Centered Kernel Alignment (CKA) (Kornblith et al., 2019; Cortes et al., 2012; Cristianini et al., 2002) is a method for comparing representations of neural networks and identifying correspondences between layers, not only in the same network but also across different neural network architectures. CKA is a normalized form of the Hilbert-Schmidt Independence Criterion (HSIC) (Gretton et al., 2005). Formally, CKA is defined as:
$$\mathrm{CKA}(K, L) = \frac{\mathrm{HSIC}(K, L)}{\sqrt{\mathrm{HSIC}(K, K)\cdot \mathrm{HSIC}(L, L)}}$$
HSIC is a test statistic for determining whether two sets of variables are independent. The empirical estimator of HSIC is defined as:
$$\mathrm{HSIC}(K, L) = \frac{1}{(n-1)^2}\,\mathrm{tr}(KHLH)$$
where $H$ is the centering matrix $H_n = I_n - \frac{1}{n}\mathbf{1}\mathbf{1}^{\top}$.
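For reference, linear CKA (the variant used for the heatmaps in this paper) takes only a few lines of NumPy; this is a minimal sketch under the definitions above, not the authors' code:

```python
import numpy as np

def linear_cka(X, Y):
    # X: (n, p1), Y: (n, p2) activation matrices for the same n examples.
    K, L = X @ X.T, Y @ Y.T                       # linear kernels
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n           # centering matrix H_n
    hsic = lambda A, B: np.trace(A @ H @ B @ H) / (n - 1) ** 2
    return hsic(K, L) / np.sqrt(hsic(K, K) * hsic(L, L))
```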
" }, { "heading": "4.2 CORRELATION BETWEEN PERFORMANCE AND REPRESENTATION SIMILARITY", "text": "The work in this paper starts from the conjecture that high representation similarity between neural networks in an ensemble-based Q-learning technique correlates with poor performance. To empirically verify our hypothesis, we trained a MaxminDQN agent with two neural networks on the Catcher environment (Qingfeng, 2019) for about 3000 episodes ($5 \times 10^6$ training steps) and calculated the CKA similarity with a linear kernel after every 500 episodes. The training graph along with the CKA similarity heatmaps are shown in Figure 1. Notably, at episode 500 (heatmap A) and episode 2000 (heatmap C), the representation similarity between the neural networks is low but the average return is relatively high. In contrast, at episode 1000 (heatmap B) and episode 3000 (heatmap D) the representation similarity is highest but the average return is lowest.

Additionally, in Appendix A.1, we performed a regression experiment demonstrating that two neural networks trained on the same data can learn almost identical representations despite having different architectures, learning rates and batch sizes. This experiment also demonstrates that the belief that random initialization of neural networks enforces diversity is a misconception." }, { "heading": "4.3 REGULARIZATION FOR MAXIMIZING REPRESENTATION DIVERSITY", "text": "In order to maximize the representation diversity, we propose to regularize the training algorithm with an additional criterion that favors diversity in the representation space. In the following, $N$ is the number of neural networks in the ensemble, $\ell_i$ is the $L_2$ norm of the $i$-th neural network's parameters, $\bar\ell$ is the mean of all the $L_2$ norms, and $\boldsymbol\ell$ is the list of all the $L_2$ norms.

The first four metrics we consider are based on inequality measures from economic theory. While in economics, inequality is usually considered something to be avoided, in our case we aim to increase inequality (and thus, representation diversity).

The Atkinson index (Atkinson et al., 1970) measures income inequality and is useful in identifying the end of the distribution that contributes the most towards the observed inequality. Formally, it is defined as
$$A = \begin{cases} 1 - \dfrac{1}{\bar\ell}\left(\dfrac{1}{N}\displaystyle\sum_{i=1}^{N} \ell_i^{\,1-a_t}\right)^{\frac{1}{1-a_t}}, & \text{for } 0 \le a_t \neq 1,\\[6pt] 1 - \dfrac{1}{\bar\ell}\left(\displaystyle\prod_{i=1}^{N} \ell_i\right)^{\frac{1}{N}}, & \text{for } a_t = 1, \end{cases} \quad (3)$$
where $a_t$ is the inequality aversion parameter used to tune the sensitivity of the measured change. When $a_t = 0$, the index is more sensitive to changes at the upper end of the distribution, and it becomes more sensitive to changes at the lower end of the distribution as $a_t$ approaches 1.

The Gini coefficient (Allison, 1978) is a statistical measure of wealth distribution or income inequality among a population, defined as half of the relative mean absolute difference:
$$G = \frac{\sum_{i=1}^{N}\sum_{j=1}^{N} |\ell_i - \ell_j|}{2N^2\,\bar\ell} \quad (4)$$
The Gini coefficient is more sensitive to deviation around the middle of the distribution than at the upper or lower parts of the distribution.

The Theil index (Johnston, 1969) measures redundancy, lack of diversity, isolation, segregation and income inequality among a population. Using the Theil index is identical to measuring redundancy in information theory, defined as the maximum possible entropy of the data minus the observed entropy:
$$T_T = \frac{1}{N}\sum_{i=1}^{N} \frac{\ell_i}{\bar\ell}\,\ln\frac{\ell_i}{\bar\ell} \quad (5)$$

The variance of logarithms (Ok & Foster, 1997) is a widely used measure of dispersion with natural links to wage distribution models. Formally, it is defined as:
$$V_L(\boldsymbol\ell) = \frac{1}{N}\sum_{i=1}^{N} \big[\ln \ell_i - \ln g(\boldsymbol\ell)\big]^2 \quad (6)$$
where $g(\boldsymbol\ell)$ is the geometric mean of $\boldsymbol\ell$, defined as $\big(\prod_{i=1}^{N} \ell_i\big)^{1/N}$.

The final regularization method we use is inspired by consensus optimization. In a consensus method (Boyd et al., 2011), a number of models are independently optimized with their own task-specific parameters, and the tasks communicate via a penalty that encourages all the individual solutions to converge around a common value. Formally, it is defined as
$$M = \|\bar\ell - \ell_i\|_2 \quad (7)$$
We will refer to this regularizer as MeanVector throughout this paper." }, { "heading": "4.4 TRAINING ALGORITHM", "text": "Using the regularization functions defined above, we can develop diversity-regularized variants of the MaxminDQN and EnsembleDQN algorithms. The training technique is identical to the algorithms described in (Lan et al., 2020) and (Anschel et al., 2017), with a regularization term added to the loss of the Q-functions. The loss term for the $i$-th Q-function with parameters $\psi_i$ is:
$$L(\psi_i) = \mathbb{E}_{s,a,r,s'}\Big[\big(Q^i_{\psi}(s, a) - Y\big)^2\Big] - \lambda\, I(\ell_i, \boldsymbol\ell),$$
where $Y$ is the target value calculated using either Equation (1) or Equation (2) depending on the algorithm, $I$ is the regularizer of choice from the list above, and $\lambda$ is the regularization weight. Notice that the regularization term appears with a negative sign, as the regularizers are essentially inequality metrics that we want to maximize. For completeness, the algorithms are shown in Appendix B.
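The five regularizers above are simple functions of the vector of per-network $L_2$ norms; the following NumPy sketch is ours, not the authors' implementation:

```python
import numpy as np

def atkinson(ell, a_t=0.5):                      # Eq. (3)
    m = ell.mean()
    if a_t == 1.0:
        return 1.0 - np.prod(ell) ** (1.0 / len(ell)) / m
    return 1.0 - np.mean(ell ** (1.0 - a_t)) ** (1.0 / (1.0 - a_t)) / m

def gini(ell):                                   # Eq. (4)
    return np.abs(ell[:, None] - ell[None, :]).sum() / (2 * len(ell) ** 2 * ell.mean())

def theil(ell):                                  # Eq. (5)
    r = ell / ell.mean()
    return np.mean(r * np.log(r))

def var_of_logs(ell):                            # Eq. (6); log g(ell) = mean(log ell)
    return np.mean((np.log(ell) - np.log(ell).mean()) ** 2)

def mean_vector(ell, i):                         # Eq. (7); for scalars the L2 norm is |.|
    return abs(ell.mean() - ell[i])
```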
" }, { "heading": "5 EXPERIMENTS", "text": "" }, { "heading": "5.1 TRAINING CURVES", "text": "We chose three environments from PyGames (Qingfeng, 2019) and MinAtar (Young & Tian, 2019): Catcher, PixelCopter and Asterix. These environments were used by the authors of the MaxminDQN paper (Lan et al., 2020). We reused all the hyperparameter settings from (Lan et al., 2020) except the number of neural networks, which we limited to four, and trained each solution for five fixed seeds. For the regularization weight $\lambda$, we chose the best value from $\{10^{-5}, 10^{-6}, 10^{-7}, 10^{-8}\}$. The baselines were also fine-tuned. The complete list of training parameters can be found in Appendix E.

Figure 2 shows the training curves for the three environments. To avoid crowding the figures, for each environment and baseline algorithm (MaxminDQN and EnsembleDQN) we only plotted the regularized version which performed best. We also show as baselines the original MaxminDQN and EnsembleDQN, as well as the DQN and DDQN algorithms. For the Catcher environment, both Gini-MaxminDQN and VOL-EnsembleDQN were able to quickly reach the optimal performance and stabilized after $2\times 10^6$ training steps, while the baseline MaxminDQN reached its maximum performance after $3.5\times 10^6$ training steps but declined afterwards. Similarly, the baseline EnsembleDQN reached its maximum performance after $4\times 10^6$ training steps, with the performance fluctuating with continued training. For the PixelCopter environment, VOL-MaxminDQN and Theil-EnsembleDQN were slower in the initial part of the learning than some of the other approaches, but over time they achieved at least double the return of the other approaches. Similarly, for the Asterix environment, Atkinson-MaxminDQN and Theil-EnsembleDQN lagged in training for about $1\times 10^6$ training steps, but after that they achieved at least 50% higher return compared to the baselines. Full results together with CKA similarity heatmaps are shown in Appendix C." }, { "heading": "5.2 T-SNE VISUALIZATIONS", "text": "To visualize the impact of the regularization, Figure 3 shows t-SNE (van der Maaten & Hinton, 2008) visualizations of the activations of the last layer of the trained networks. Figure 3a shows the network trained for the Catcher environment, and Figure 3b the network trained for the PixelCopter environment. The upper row of the figure shows the original, unregularized models, while the lower row shows a regularized version. For all combinations, we find that the activations from the original MaxminDQN and EnsembleDQN versions do not show any obvious pattern, while the regularized ones show distinct clusters. An additional benefit of t-SNE visualizations over CKA similarity heatmaps is that CKA heatmaps, while useful for showing representation similarity between two neural networks, become counterintuitive as the number of neural networks increases. More t-SNE visualizations for the four neural network experiments are shown in Appendix C.3." }, { "heading": "5.3 STATISTICAL ANALYSIS", "text": "What is the impact of the regularization on the performance? Similar to the approach taken by (Liu et al., 2020), we performed a z-score test to rigorously evaluate the improvement of regularization over the baseline solutions. The z-score, also known as the "standard score", is the signed fractional number of standard deviations by which the value of a data point is above the mean value. A regularizer's z-score roughly measures its relative performance among the others. For each algorithm, environment and neural network setting, we calculated the z-score for each regularization method and the baseline by treating all the results as a population.
For example, to find out which EnsembleDQN with two neural networks is best for the Catcher environment, we took the average reward of 10 episodes for each experiment ($(5+1) \times 5$ seeds) and treated it as a population. Finally, we averaged the z-scores to generate the final result presented in Table 1. In terms of improved performance, all the regularizers achieved significant improvement over the baselines for all three environments. The z-scores for the four neural network experiments are shown in Appendix C.4.

Is the improvement statistically significant? We collected the z-scores from the previous section and performed Welch's t-test against the corresponding z-scores produced by the baseline. The resulting p-values are presented in Table 2. From the results, we observed that the improvement introduced by regularization is statistically significant (p < 0.05) in almost all the cases.
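A short sketch of this evaluation protocol: population z-scores followed by Welch's t-test via SciPy. The per-seed returns below are hypothetical placeholders, purely for illustration:

```python
import numpy as np
from scipy.stats import ttest_ind

returns = {                       # hypothetical per-seed average returns
    "baseline": np.array([30.1, 28.4, 31.0, 29.5, 30.2]),
    "Gini":     np.array([33.0, 34.2, 32.8, 33.5, 34.0]),
}
pop = np.concatenate(list(returns.values()))              # all results as one population
z = {k: (v - pop.mean()) / pop.std() for k, v in returns.items()}
tstat, p = ttest_ind(z["Gini"], z["baseline"], equal_var=False)  # Welch's t-test
```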
" }, { "heading": "6 IDENTICAL LAYERS EXPERIMENT", "text": "To test the limits of the regularizers, we initialized each layer of each neural network with the same fixed seed. This initialization enforces maximum representation similarity and is considered the worst-case scenario for ensemble based learning methods. We performed this experiment on all three environments and used the same seeds and hyperparameters that were used for the main experiments. The training curves are shown in Figure 4. Notably, the results of the baseline MaxminDQN and EnsembleDQN on both the Catcher and PixelCopter environments are similar to the main results. For the Catcher environment, both Gini-MaxminDQN and Theil-EnsembleDQN were slow in learning for about $2\times 10^6$ training steps, but both solutions were able to achieve the optimal performance by the end of training. Similarly, for the PixelCopter environment, VOL-MaxminDQN was slow in learning until $1.5\times 10^6$ training steps, but it was able to outperform the baseline results and achieved optimal performance. The complete training plots for these experiments are shown in Appendix D." }, { "heading": "7 CONCLUSION", "text": "In this paper we showed that high representation similarity between the Q-functions in ensemble based Q-learning algorithms such as MaxminDQN and EnsembleDQN leads to a decline in learning performance. To mitigate this, we proposed a regularization approach using five different metrics to maximize the diversity in the representation space of the Q-functions. Experiments have shown that our solution outperforms the baseline MaxminDQN and EnsembleDQN in standard training settings as well as in scenarios where the parameters of the neural layers were initialized using one fixed seed." }, { "heading": "A SUPPLEMENTARY MATERIAL", "text": "A.1 MOTIVATING EXAMPLE TO DEMONSTRATE SIMILARITY BETWEEN NEURAL NETWORKS

We performed a regression experiment in which we learned a sine wave function using two different three-layered fully connected neural networks with 64 and 32 neurons in each hidden layer and ReLU activations. The neural networks were initialized using different seeds and were trained using different batch sizes (512, 128) and learning rates ($10^{-4}$, $10^{-3}$). Figure 5a shows the learned functions, while Figure 5b shows their CKA similarity heatmap before and after training. The odd-numbered layers represent pre-ReLU activations while the even-numbered layers represent post-ReLU activations. It can be seen that before training, the CKA similarity between the two neural networks from layer 4 onward is relatively low, with the outputs being 0% similar, while after training the networks have learned highly similar representations, with outputs that are 98% similar.

This example shows that neural networks can learn similar representations even when trained on different batches. This observation is important because in MaxminDQN and EnsembleDQN training, each neural network is trained on a separate batch from the replay buffer but still learns similar representations (see Figure 8)." }, { "heading": "B ALGORITHMS", "text": "For completeness, the regularized MaxminDQN and EnsembleDQN algorithms are given below.

Algorithm 1: Regularized MaxminDQN (the differences between the baseline MaxminDQN and the regularized version are highlighted)
Initialize $N$ Q-functions $\{Q^1,\dots,Q^N\}$ parameterized by $\{\psi_1,\dots,\psi_N\}$
Initialize empty replay buffer $D$
Observe initial state $s$
while Agent is interacting with the Environment do
  $Q^{\min}(s,a) \leftarrow \min_{k\in\{1,\dots,N\}} Q^k(s,a),\ \forall a \in A$
  Choose action $a$ by $\epsilon$-greedy based on $Q^{\min}$
  Take action $a$, observe $r$, $s'$
  Store transition $(s, a, r, s')$ in $D$
  Select a subset $S$ from $\{1,\dots,N\}$ (e.g., randomly select one $i$ to update)
  for $i \in S$ do
    Sample random mini-batch of transitions $(s_D, a_D, r_D, s'_D)$ from $D$
    Get update target: $Y^M \leftarrow r_D + \gamma \max_{a'\in A} Q^{\min}(s'_D, a')$
    Generate the list of $L_2$ norms: $\boldsymbol\ell = [\|\psi_1\|_2, \dots, \|\psi_N\|_2]$
    Update $Q^i$ by minimizing $\mathbb{E}_{s_D,a_D,r_D,s'_D}\big(Q^i_{\psi_i}(s_D, a_D) - Y^M\big)^2 - \lambda I(\ell_i, \boldsymbol\ell)$
  end
  $s \leftarrow s'$
end

Algorithm 2: Regularized EnsembleDQN (the differences between the baseline EnsembleDQN and the regularized version are highlighted)
Initialize $N$ Q-functions $\{Q^1,\dots,Q^N\}$ parameterized by $\{\psi_1,\dots,\psi_N\}$
Initialize empty replay buffer $D$
Observe initial state $s$
while Agent is interacting with the Environment do
  $Q^{\mathrm{ens}}(s,a) \leftarrow \frac{1}{N}\sum_{i=1}^{N} Q^i(s,a)$
  Choose action $a$ by $\epsilon$-greedy based on $Q^{\mathrm{ens}}$
  Take action $a$, observe $r$, $s'$
  Store transition $(s, a, r, s')$ in $D$
  Select a subset $S$ from $\{1,\dots,N\}$ (e.g., randomly select one $i$ to update)
  for $i \in S$ do
    Sample random mini-batch of transitions $(s_D, a_D, r_D, s'_D)$ from $D$
    Get update target: $Y^E \leftarrow r_D + \gamma \max_{a'\in A} Q^{\mathrm{ens}}(s'_D, a')$
    Generate the list of $L_2$ norms: $\boldsymbol\ell = [\|\psi_1\|_2, \dots, \|\psi_N\|_2]$
    Update $Q^i$ by minimizing $\mathbb{E}_{s_D,a_D,r_D,s'_D}\big(Q^i_{\psi_i}(s_D, a_D) - Y^E\big)^2 - \lambda I(\ell_i, \boldsymbol\ell)$
  end
  $s \leftarrow s'$
end" }, { "heading": "C ALL TRAINING PLOTS", "text": "C.1 TRAINING PLOTS FOR MAXMINDQN

C.2 TRAINING PLOTS FOR ENSEMBLEDQN

C.3 HEATMAPS AND T-SNE VISUALIZATIONS

Figure 8 shows the CKA similarity heatmaps, using a linear kernel, of all the two neural network experiments after training, averaged over all five seeds. The point of interest is the right diagonal (bottom left to top right), which represents the representation similarity between corresponding layers. For the baseline experiments, the output layer has more than 96% similarity in almost all the scenarios, while the regularized versions have around 90% similarity in the output layer. This 10% difference provides enough variance in the Q-values to prevent the ensemble based Q-learning methods from converging to the standard DQN.

Figure 9 shows the t-SNE visualizations of the baseline and regularized solutions trained with four neural networks on the PixelCopter environment.
This visualization is consistent with the visualizations shown in Section 5.2: the baseline activations are cluttered without any pattern, while the Theil-MaxminDQN and Theil-EnsembleDQN activations have visible clusters.

C.4 Z-SCORE TABLE FOR FOUR NEURAL NETWORK EXPERIMENTS

D IDENTICAL LAYERS EXPERIMENT (ALL TRAINING PLOTS)

D.1 TRAINING PLOTS FOR MAXMINDQN

D.2 TRAINING PLOTS FOR ENSEMBLEDQN

E IMPLEMENTATION DETAILS AND HYPERPARAMETERS

For our implementation of MaxminDQN and EnsembleDQN we used the code provided by the MaxminDQN authors, which has implementations of different DQN based methods (github.com/qlan3/Explorer). For the baseline experiments, we used most of the hyperparameter settings provided in the configuration files by the authors, except the learning rates, which we limited to $\{10^{-3}, 10^{-4}, 3\times 10^{-5}\}$, and we limited the number of ensembles to four. The complete list of hyperparameters for each environment is shown in Table 5. The values in bold represent the values used for the reported results.

For the identical layers experiment, no hyperparameter tuning was performed and we reused the hyperparameters from the main results. In terms of the number of experiments, we ran 190 experiments per environment: (5 regularizers × 5 seeds × 3 ensemble settings × 2 algorithms) + 40 baseline experiments, totaling 570 runs for all environments after hyperparameter tuning. The same number of experiments was performed for the identical layers experiment, which sums up to 1140 runs, where each run took 11 hours of compute time on average." }, { "heading": "F PLOTTING THE GINI INEQUALITY", "text": "We measured the $L_2$ norm inequality of the baseline MaxminDQN and EnsembleDQN along with their regularized versions. We trained the baseline MaxminDQN and EnsembleDQN with two neural networks, along with their Gini index versions with a regularization weight of $10^{-8}$, on the PixelCopter environment with a fixed seed. Figure 12 shows the $L_2$ norm inequality of the experiments along with their average return during training. Notably, despite each neural network being trained on a different batch, the $L_2$ norms of the baseline MaxminDQN and EnsembleDQN are quite similar, while the $L_2$ norms of the regularized MaxminDQN and EnsembleDQN show high inequality." } ]
2020
PREVENTING VALUE FUNCTION COLLAPSE
SP:f746ca9d21491dd433de8667cb51e6a137f2898f
[ "This paper evaluated four unsupervised learning approaches (BCPNN, KH, RBM, AE) by training a supervised classification layer on top of the hidden representation. Specifically, the authors qualitatively compared the receptive fields and quantitatively compared the classification performance across four models. The authors emphasized the advantages of BCPNN since it applies biologically plausible local learning rules and requires fewer epochs for convergence.", "The Bayesian Confidence Propagating Neural Network has recently been extended to the case of unsupervised learning (Ravichandran et al., IJCNN, 2020). This paper compares this extension to restricted Boltzmann machines, autoencoders, and a biologically plausible model proposed by (Krotov & Hopfield, PNAS, 2019) on the MNIST dataset. For evaluation the authors consider the learned receptive fields and the classification performance of a linear classifier. The paper is very similar to (Ravichandran et al., IJCNN, 2020) but with an extended experimental section." ]
Unsupervised learning of hidden representations has been one of the most vibrant research directions in machine learning in recent years. In this work we study the brain-like Bayesian Confidence Propagating Neural Network (BCPNN) model, recently extended to extract sparse distributed high-dimensional representations. The saliency and separability of the hidden representations when trained on the MNIST dataset are studied using an external linear classifier and compared with other unsupervised learning methods, including restricted Boltzmann machines and autoencoders.
[]
[ { "authors": [ "Yoshua Bengio", "Aaron Courville", "Pascal Vincent" ], "title": "Representation learning: A review and new perspectives", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2013 }, { "authors": [ "Simon Benjaminsson", "Peter Fransson", "Anders Lansner" ], "title": "A novel model-free data analysis technique based on clustering in a mutual information space: application to resting-state fmri", "venue": "Frontiers in systems neuroscience,", "year": 2010 }, { "authors": [ "Pierre Berthet", "Jeanette Hellgren Kotaleski", "Anders Lansner" ], "title": "Action selection performance of a reconfigurable basal ganglia inspired model with hebbian–bayesian go-nogo connectivity", "venue": "Frontiers in behavioral neuroscience,", "year": 2012 }, { "authors": [ "Markus Butz", "Florentin Wörgötter", "Arjen van Ooyen" ], "title": "Activity-dependent structural plasticity", "venue": "Brain research reviews,", "year": 2009 }, { "authors": [ "Rodney J Douglas", "Kevan AC Martin" ], "title": "Recurrent neuronal circuits in the neocortex", "venue": "Current biology,", "year": 2007 }, { "authors": [ "Kenji Doya", "Shin Ishii", "Alexandre Pouget", "Rajesh PN Rao" ], "title": "Bayesian brain: Probabilistic approaches to neural coding", "venue": "MIT press,", "year": 2007 }, { "authors": [ "Dumitru Erhan", "Yoshua Bengio", "Aaron Courville", "Pierre-Antoine Manzagol", "Pascal Vincent", "Samy Bengio" ], "title": "Why does unsupervised pre-training help deep discriminant learning", "venue": null, "year": 2009 }, { "authors": [ "Florian Fiebig", "Anders Lansner" ], "title": "A spiking working memory model based on hebbian short-term potentiation", "venue": "Journal of Neuroscience,", "year": 2017 }, { "authors": [ "Geoffrey E Hinton" ], "title": "A practical guide to training restricted boltzmann machines", "venue": "In Neural networks: Tricks of the trade,", "year": 2012 }, { "authors": [ "Bernd Illing", "Wulfram Gerstner", "Johanni Brea" ], "title": "Biologically plausible deep learning—but how far can we go with shallow networks", "venue": "Neural Networks,", "year": 2019 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Dmitry Krotov", "John J Hopfield" ], "title": "Unsupervised learning by competing hidden units", "venue": "Proceedings of the National Academy of Sciences,", "year": 2019 }, { "authors": [ "Anders Lansner" ], "title": "Associative memory models: from the cell-assembly theory to biophysically detailed cortex simulations", "venue": "Trends in neurosciences,", "year": 2009 }, { "authors": [ "Anders Lansner", "Simon Benjaminsson", "Christopher Johansson" ], "title": "From ann to biomimetic information processing", "venue": "In Biologically Inspired Signal Processing for Chemical Sensing,", "year": 2009 }, { "authors": [ "Yann LeCun" ], "title": "The mnist database of handwritten digits. http://yann", "venue": "lecun. 
com/exdb/mnist/,", "year": 1998 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Timothy P Lillicrap", "Adam Santoro", "Luke Marris", "Colin J Akerman", "Geoffrey Hinton" ], "title": "Backpropagation and the brain", "venue": "Nature Reviews Neuroscience,", "year": 2020 }, { "authors": [ "Grace Lindsay" ], "title": "Convolutional neural networks as a model of the visual system: past, present, and future", "venue": "Journal of Cognitive Neuroscience,", "year": 2020 }, { "authors": [ "Mikael Lundqvist", "Pawel Herman", "Anders Lansner" ], "title": "Theta and gamma power increases and alpha/beta power decreases with memory load in an attractor network model", "venue": "Journal of cognitive neuroscience,", "year": 2011 }, { "authors": [ "Vernon B Mountcastle" ], "title": "The columnar organization of the neocortex. Brain: a journal of neurology", "venue": null, "year": 1997 }, { "authors": [ "Roland Orre", "Anders Lansner", "Andrew Bate", "Marie Lindquist" ], "title": "Bayesian neural networks with confidence estimations applied to data mining", "venue": "Computational Statistics & Data Analysis,", "year": 2000 }, { "authors": [ "Cengiz Pehlevan", "Dmitri B Chklovskii" ], "title": "Neuroscience-inspired online unsupervised learning algorithms: Artificial neural networks", "venue": "IEEE Signal Processing Magazine,", "year": 2019 }, { "authors": [ "Naresh Balaji Ravichandran", "Anders Lansner", "Pawel Herman" ], "title": "Learning representations in bayesian confidence propagation neural networks", "venue": "In International Joint Conference on Neural Networks (IJCNN),", "year": 2020 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed", "Daan Wierstra" ], "title": "Stochastic backpropagation and approximate inference in deep generative models", "venue": "arXiv preprint arXiv:1401.4082,", "year": 2014 }, { "authors": [ "Kathleen S Rockland" ], "title": "Five points on columns", "venue": "Frontiers in Neuroanatomy,", "year": 2010 }, { "authors": [ "David E Rumelhart", "David Zipser" ], "title": "Feature discovery by competitive learning", "venue": "Cognitive science,", "year": 1985 }, { "authors": [ "Anders Sandberg", "Anders Lansner", "Karl Magnus Petersson", "Ekeberg" ], "title": "A bayesian attractor network with incremental learning", "venue": "Network: Computation in neural systems,", "year": 2002 }, { "authors": [ "Martin Schain", "Simon Benjaminsson", "Katarina Varnäs", "Anton Forsberg", "Christer Halldin", "Anders Lansner", "Lars Farde", "Andrea Varrone" ], "title": "Arterial input function derived from pairwise correlations between pet-image voxels", "venue": "Journal of Cerebral Blood Flow & Metabolism,", "year": 2013 }, { "authors": [ "Dimitrios Stathis", "Chirag Sudarshan", "Yu Yang", "Matthias Jung", "Christian Weis", "Ahmed Hemani", "Anders Lansner", "Norbert Wehn" ], "title": "ebrainii: a 3 kw realtime custom 3d dram integrated asic implementation of a biologically plausible model of a human scale cortex", "venue": "Journal of Signal Processing Systems,", "year": 2020 }, { "authors": [ "Philip J Tully", "Matthias H Hennig", "Anders Lansner" ], "title": "Synaptic and nonsynaptic plasticity approximating probabilistic inference", "venue": "Frontiers in synaptic neuroscience,", "year": 2014 }, { "authors": [ "Gina G Turrigiano", "Sacha B Nelson" ], "title": "Homeostatic plasticity in the developing nervous 
system", "venue": "Nature reviews neuroscience,", "year": 2004 }, { "authors": [ "James CR Whittington", "Rafal Bogacz" ], "title": "Theories of error back-propagation in the brain", "venue": "Trends in cognitive sciences,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Artificial neural networks have made remarkable progress in supervised pattern recognition in recent years. In particular, deep neural networks have dominated the field largely due to their capability to discover hierarchies of salient data representations. However, most recent deep learning methods rely extensively on supervised learning from labelled samples for extracting and tuning data representations. Given the abundance of unlabeled data there is an urgent demand for unsupervised or semi-supervised approaches to learning of hidden representations (Bengio et al., 2013). Although early concepts of greedy layer-wise pretraining allow for exploiting unlabeled data, ultimately the application of deep pre-trained networks to pattern recognition problems rests on label dependent end-to-end weight fine tuning (Erhan et al., 2009). At the same time, we observe a surge of interest in more brain plausible networks for unsupervised and semi-supervised learning problems that build on some fundamental principles of neural information processing in the brain (Pehlevan & Chklovskii, 2019; Illing et al., 2019). Most importantly, these brain-like computing approaches rely on local learning rules and label independent biologically compatible mechanisms to build data representations whereas deep learning methods predominantly make use of error back-propagation (backprop) for learning the weights. Although efficient, backprop has several issues that make it an unlikely candidate model for synaptic plasticity in the brain. The most apparent issue is that the synaptic connection strength between two biological neurons is expected to comply with Hebb’s postulate, i.e. to depend only on the available local information provided by the activities of preand postsynaptic neurons. This is violated in backprop since synaptic weight updates need gradient signals to be communicated from distant output layers. Please refer to (Whittington & Bogacz, 2019; Lillicrap et al., 2020) for a detailed review of possible biologically plausible implementations of and alternatives to backprop.\nIn this work we utilize the MNIST dataset to compare two classical learning systems, the autoencoder (AE) and the restricted Boltzmann machine (RBM), with two brain-like approaches to unsupervised learning of hidden representations, i.e. the recently proposed model by Krotov and Hopfield (referred to as the KH model) (Krotov & Hopfield, 2019), and the BCPNN model (Ravichandran et al., 2020), which both rely on biologically plausible learning strategies. In particular, we qualitatively examine the extracted hidden representations and quantify their label dependent separability using a simple linear classifier on top of all the networks under investigation. This classification step is not part of the learning strategy, and we use it merely to evaluate the resulting representations.\nSpecial emphasis is on the feedforward BCPNN model with a single hidden layer, which frames the update and learning steps of the neural network as probabilistic computations. Probabilistic ap-\nproaches are widely used in both deep learning models (Goodfellow et al., 2016) and computational models of brain function (Doya et al., 2007). One disadvantage of probabilistic models is that exact inference and learning on distributed representations is often intractable and forces approximate approaches like sampling-based or variational methods (Rezende et al., 2014). 
In this work, we adopt a modular BCPNN architecture, previously used in abstract models of associative memory (Sandberg et al., 2002; Lansner et al., 2009), action selection (Berthet et al., 2012), and in application to brain imaging (Benjaminsson et al., 2010; Schain et al., 2013) and data mining (Orre et al., 2000). Spiking versions of BCPNN have also been used in biologically detailed models of different forms of cortical associative memory (Lundqvist et al., 2011; Fiebig & Lansner, 2017; Tully et al., 2014). The modules in BCPNN, referred to as hypercolumns (HCs), comprise a set of functional minicolumns (MCs) that compete in a soft-winner-take-all manner. The view of an HC in this abstract cortical-like network is that it represents some attribute, e.g. edge orientation, in a discrete-coded manner. A minicolumn comprises a unit that conceptually represents one discrete value (a realization of the given attribute) and, as a biological parallel, it accounts for a local subnetwork of around a hundred recurrently connected neurons with similar receptive field properties (Mountcastle, 1997). Such an architecture was initially generalized from the primary visual cortex, but today it has additional support from later experimental work and has been featured in spiking computational models of cortex (Rockland, 2010; Lansner, 2009).

Finally, in this work we highlight additional mechanisms of bias regulation and structural plasticity, introduced recently to the BCPNN framework (Ravichandran et al., 2020), which enable unsupervised learning of hidden representations. The bias regulation mechanism ensures that the activities of all units in the hidden layer are maintained near their target activity by regulating their bias parameter. Structural plasticity learns a set of sparse connections from the input layer to the hidden layer by maximizing a local greedy information-theoretic score." }, { "heading": "2 RELATED WORKS", "text": "A popular unsupervised learning approach is to train a hidden layer to reproduce the input data as, for example, in AE and RBM. The AE and RBM networks trained with a single hidden layer are relevant here since learning weights of the input-to-hidden-layer connections relies on local gradients, and the representations can be stacked on top of each other to extract hierarchical features. However, stacked autoencoders and deep belief nets (stacked RBMs) have typically been used for pre-training procedures followed by end-to-end supervised fine-tuning (using backprop) (Erhan et al., 2009). The recently proposed KH model (Krotov & Hopfield, 2019) addresses the problem of learning solely with local gradients by learning hidden representations using only unsupervised methods. In this network the input-to-hidden connections are trained, and additional (non-plastic) lateral inhibition provides competition within the hidden layer. For evaluating the representation, the weights are frozen, and a linear classifier trained with labels is used for the final classification. Our approach shares some common features with the KH model, e.g. learning hidden representations solely by unsupervised methods, and evaluating the representations by a separate classifier (Illing et al. (2019) provide an extensive review of methods with similar goals).

All the aforementioned models employ either competition within the hidden layer (KH) or feedback connections from hidden to input (RBM and AE). 
The BCPNN uses only the feedforward connections, along with an implicit competition via a local softmax operation, the neural implementation of which would be lateral inhibition.

It is also observed that, for unsupervised learning, having sparse connectivity in the feedforward connections performs better than full connectivity (Illing et al., 2019). In addition to the unsupervised methods, networks employing supervised learning, like convolutional neural networks (CNNs), force a fixed spatial filter to obtain this sparse connectivity (Lindsay, 2020). The BCPNN model takes an alternate approach where, along with learning the weights of the feedforward connections, which is regarded as biological synaptic plasticity, a sparse connectivity between the input and hidden layer is learnt simultaneously, in analogy with the structural plasticity in the brain (Butz et al., 2009)." }, { "heading": "3 BAYESIAN CONFIDENCE PROPAGATION NEURAL NETWORK", "text": "Here we describe the BCPNN network architecture and update rules (Sandberg et al., 2002; Lansner et al., 2009). The feedforward BCPNN architecture contains two layers, referred to as the input layer and hidden layer. A layer consists of a set of HCs, each of which represents a discrete random variable X_i (upper case). Each HC, in turn, is composed of a set of MCs representing a particular value x_i (lower case) of X_i. The probability of X_i is then a multinomial distribution, defined as p(X_i = x_i), such that Σ_{x_i} p(X_i = x_i) = 1. In the neural network, the activity of the MC is interpreted as p(X_i = x_i), and the activities of all the MCs inside an HC sum to one.

Since the network is a probabilistic graphical model (see Fig. 1), we can compute the posterior of a target HC in the hidden layer conditioned on all the source HCs in the input layer. We will use x’s and y’s to refer to the HCs in the input and hidden layers, respectively. Computing the exact posterior p(Y_j | X_1, ..., X_N) over the target HC is intractable, since it scales exponentially with the number of units. The naive Bayes assumption p(X_1, ..., X_N | Y_j) = Π_{i=1}^{N} p(X_i | Y_j) allows us to write the posterior as follows:

p(Y_j | X_1, ..., X_N) = p(Y_j) Π_{i=1}^{N} p(X_i | Y_j) / p(X_1, ..., X_N) ∝ p(Y_j) Π_{i=1}^{N} p(X_i | Y_j)    (1)

When the network is driven by input data {X_1, ..., X_N} = {x^D_1, ..., x^D_N}, we can write the posterior probabilities of a target MC in terms of the source MCs as:

p(y_j | x^D_1, ..., x^D_N) ∝ p(y_j) Π_{i=1}^{N} p(x^D_i | y_j) = p(y_j) Π_{i=1}^{N} Π_{x_i} p(x_i | y_j)^{I(x_i = x^D_i)}    (2)

where I(·) is the indicator function that equals 1 if its argument is true, and zero otherwise. We have written the posterior of the target MC as a function of all the source MCs (all x_i’s). The log posterior can be written as:

log p(y_j | x^D_1, ..., x^D_N) ∝ log p(y_j) + Σ_{i=1}^{N} Σ_{x_i} I(x_i = x^D_i) log p(x_i | y_j)    (3)

Since the posterior is linear in the indicator function of the data sample, I(x_i = x^D_i) can be approximated by its expected value, defined as π(x_i) = p(x_i = x^D_i). Except for π(x_i), all the terms in the posterior are functions of the marginals p(y_j) and p(x_i, y_j). We define the terms bias β(y_j) = log p(y_j) and weight w(x_i, y_j) = log p(x_i | y_j) in analogy with artificial neural networks. 
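To make Eqs. 1-3 concrete, here is a minimal NumPy sketch of the naive-Bayes posterior for a one-hot input sample, including the softmax normalization that the update equations below make explicit; array layouts and names are illustrative assumptions, not the original model code:

```python
import numpy as np

# Shapes: prior_y is (M,), the marginals over the M MCs of one hidden HC;
# cond_x_given_y[i] is (V_i, M), the conditional distribution over the
# V_i MCs of input HC i for each hidden MC.

def log_posterior(prior_y, cond_x_given_y, data):
    """Eq. 3: log p(y_j | x^D) up to a constant, for a one-hot sample.

    `data` holds, for each input HC i, the index of its active MC x^D_i.
    """
    log_post = np.log(prior_y)
    for i, x_d in enumerate(data):
        # The indicator I(x_i = x^D_i) selects exactly one row of the table.
        log_post += np.log(cond_x_given_y[i][x_d])
    return log_post

def posterior(prior_y, cond_x_given_y, data):
    # A softmax over the MCs of the hidden HC recovers the normalized
    # posterior probabilities (cf. the local softmax competition).
    h = log_posterior(prior_y, cond_x_given_y, data)
    e = np.exp(h - h.max())
    return e / e.sum()
```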
The inference step to calculate the posterior probabilities of the target MCs conditioned on the input sample is given by the activity update equations:

h(y_j) = β(y_j) + Σ_{i=1}^{N} Σ_{x_i} π(x_i) w(x_i, y_j)    (4)

π(y_j) = exp(h(y_j)) / Σ_k exp(h(y_k))    (5)

where h(y_j) is the total input received by each target MC, from which the posterior probability π(y_j) = p(y_j | x^D_1, ..., x^D_N) is recovered by softmax normalization over all MCs within the HC. As each data sample is presented, the learning step updates the marginal probabilities, weights, and biases as follows:

τ_p dp(y_j)/dt = π(y_j) − p(y_j)    (6)

τ_p dp(x_i, y_j)/dt = π(x_i) π(y_j) − p(x_i, y_j)    (7)

β(y_j) = k_β log p(y_j)    (8)

w(x_i, y_j) = log [ p(x_i, y_j) / p(y_j) ]    (9)

Here τ_p is a learning time constant and k_β is the bias gain. Equations 4-9 define the update and learning equations of the BCPNN architecture. In this work, we use the abstract non-spiking model of BCPNN for the purpose of representation learning. The network for unsupervised representation learning requires, in addition to the update and learning equations, the following two mechanisms to enable learning representations (Ravichandran et al., 2020)." }, { "heading": "3.1 BIAS REGULATION", "text": "The BCPNN update rule implements Bayesian inference if the parameters are learnt with the source and target layer probabilities available as observations. When the target layer is hidden, we are learning the representations, and we cannot expect the update rule to follow Bayesian inference. In fact, we can see that performing learning and inference simultaneously is counter-productive in this case. Consider a hidden representation with random initialization that assigns some MCs slightly higher marginal probability p(y_j) than others. Learning would then amplify this difference and find parameters that would associate more input samples with the MCs with high p(y_j), causing their marginals to increase further. One way to circumvent this effect is to promote MCs with low p(y_j) to be more active in the future, like an activity-dependent homeostasis process in biological terms (Turrigiano & Nelson, 2004).

We use a bias regulation mechanism, where the bias gain k_β for each MC (equal to 1 if only Bayesian inference is performed) depends on p(y_j). One motivation for choosing the bias gain is that it influences the marginal p(y_j) alone, without affecting the weight parameters that are responsible for learning the input-to-hidden mapping. The value of p(y_j) is compared with the maximum entropy probability, p_MaxEnt = 1/N_MC, where N_MC is the number of MCs per HC. It is worth noting that maximum entropy is the ideal representation without the input layer, since all the MCs have equal marginal probability, and hence it acts as the homeostatic reference for bias regulation. The dynamic update of k_β with time constant τ_β follows Eq. 10:

τ_β dk_β/dt = 1 + (k_half − 1) [ (p_MaxEnt/4) / (p(y_j) − p_MaxEnt/4) ]^2 − k_β    (10)

The mechanism maintains the value of the gain k_β at around 1 when p(y_j) ≫ p_MaxEnt/4, and drops it sharply to negative values when p(y_j) falls below p_MaxEnt (see Fig. 2A). The rate of this drop is controlled using the hyperparameter k_half, defined as the value of the gain, k_β = k_half, at p(y_j) = p_MaxEnt/2 (a discrete-time code sketch of the learning and bias-regulation updates in Eqs. 6-10 is given just below)." }, { "heading": "3.2 STRUCTURAL PLASTICITY", "text": "Structural plasticity builds a set of receptive fields for the hidden layer from the input. We define a Boolean variable M_ij for the connection from the i-th input HC to the j-th hidden HC as active, M_ij = 1, or silent, M_ij = 0. 
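As referenced above, here is a minimal discrete-time (Euler) sketch of the learning step (Eqs. 6-9) together with the bias-regulation dynamics (Eq. 10) for a single hidden HC, assuming NumPy arrays; variable names are illustrative:

```python
import numpy as np

# pi_x is (V,): input MC activities; pi_y is (M,): hidden MC activities
# from the inference step. p_y (M,) and p_xy (V, M) are the running
# marginal estimates; k_beta (M,) is the per-MC bias gain.

def learning_step(pi_x, pi_y, p_y, p_xy, k_beta, dt, tau_p, tau_beta,
                  k_half, p_maxent):
    # Eqs. 6-7: exponential moving averages of the marginals.
    p_y += dt / tau_p * (pi_y - p_y)
    p_xy += dt / tau_p * (np.outer(pi_x, pi_y) - p_xy)
    # Eq. 10: regulate the bias gain toward its homeostatic target
    # (assumes p_y stays above p_maxent / 4, so the denominator is positive).
    target = 1 + (k_half - 1) * (0.25 * p_maxent
                                 / (p_y - 0.25 * p_maxent)) ** 2
    k_beta += dt / tau_beta * (target - k_beta)
    # Eqs. 8-9: recompute bias and weights from the marginals.
    beta = k_beta * np.log(p_y)
    w = np.log(p_xy / p_y)  # log p(x_i | y_j)
    return p_y, p_xy, k_beta, beta, w
```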
Each M_ij is initialized randomly with probability p_M, where setting p_M to a low value ensures patchy and sparse connectivity (Fig. 2B). Once initialized, the total number of active incoming connections to each hidden HC is fixed, whereas the outgoing connections from a source HC can be changed. The mutual information (MI) between the i-th input HC and the j-th hidden HC is estimated from the BCPNN weights: I_ij = Σ_{x_i} Σ_{y_j} p(x_i, y_j) w(x_i, y_j). Each input HC normalizes the MI by the total number of active outgoing connections:

Î_ij = I_ij / (1 + Σ_k M_ik)    (11)

Since the total number of active incoming connections is fixed, each hidden HC greedily maximizes the Î_ij it receives by removing the active connection with the lowest Î_ij (setting M_ij from 1 to 0) and adding the inactive connection with the highest Î_ij (setting M_ij from 0 to 1). We call this operation a flip and use a parameter N_flips to set the number of flips made per training epoch." }, { "heading": "4 EXPERIMENTS", "text": "Here we describe the experimental setup for the BCPNN and three other related models for unsupervised learning, as discussed in Section 2. Next, we introduce a supervised classification layer trained on the representations learnt by the four models. Finally, we qualitatively study these representations and provide quantitative performance results of the models in supervised classification.

We ran the experiments on the MNIST handwritten digits dataset (LeCun, 1998). MNIST contains N_train = 60000 training and N_test = 10000 test images of 28x28 handwritten digits. The images were flattened to 784 dimensions and the grey-scale intensities were normalized to the range [0,1]. The images act as the input layer for the models." }, { "heading": "4.1 MODELS", "text": "We considered four network architectures: BCPNN (cf. Section 3), AE, RBM, and KH. All the models had one hidden layer and 3000 hidden units.

BCPNN The BCPNN network had a hidden layer with 30 HCs and 100 MCs per HC. Each sample was clamped to the input layer for N_sample iterations of time-step ∆t, and the training was performed for N_epoch epochs of the training set. The time constants τ_k^0 and τ_p^0 were scaled by the total training time per epoch, that is, τ_k = τ_k^0 N_train N_sample ∆t and τ_p = τ_p^0 N_train N_sample ∆t. For tuning the parameters τ_k^0, τ_p^0, k_half, and N_flips, we used a held-out validation set of 10000 samples from the training set, and chose values that maximize the validation accuracy (for details, see Ravichandran et al. (2020)). The full list of parameters and their values is given in Table 1. The simulations were performed on code parallelized using MPI on 2.3 GHz Xeon E5 processors, and the training process took approximately two hours per run.

KH The KH network was reproduced from the original work using the code provided by Krotov & Hopfield (2019). We kept all the parameters as originally described, except for having 3000 hidden units instead of 2000, to be consistent in the comparison with the other models.

RBM For the RBM network, we used sigmoidal units for both the input and hidden layer. The weights were trained using the Contrastive Divergence algorithm with one iteration of Gibbs sampling (CD1) (Hinton, 2012). The learning rate α was set to 0.01 and the training was done in minibatches of 256 samples for 300 epochs.

AE For the AE network, we used sigmoidal units for both the hidden layer and the reconstruction layer, and two sets of weights, one for encoding from input to hidden layer and another for decoding from hidden to reconstruction layer. 
The weights were trained using the Adam optimizer and an L2 reconstruction loss with an additional L1 sparsity loss on the hidden layer. The sparsity loss coefficient was determined as λ = 1e-7 by maximizing the accuracy on a held-out validation set of 10000 samples. The training was done in minibatches of 256 samples for 300 epochs." }, { "heading": "4.2 RECEPTIVE FIELD COMPARISON", "text": "As can be observed in Fig. 3A, the distribution of weight values differs considerably across the networks examined in this work. It appears that the range of values for BCPNN corresponds to that reported for AE, whereas for KH and RBM, weights lie in a far narrower interval centered around 0. Importantly, BCPNN has by far the highest proportion of zero weights (90%), which renders the connectivity truly sparse.

In Fig. 4, we visualize the receptive fields of the four unsupervised learning networks. Firstly, it is straightforward to see that the receptive fields of all the networks differ significantly. The RBM (Fig. 4C) and AE (Fig. 4D) have receptive fields that are highly localized and span the input space, a characteristic of distributed representations. The KH model (Fig. 4B) has receptive fields that resemble the entire image, showing both positive and negative values over the image, as a result of Hebbian and anti-Hebbian learning (Krotov & Hopfield, 2019). Generally, local representations like mixture models and competitive learning, as opposed to distributed representations, tend to have receptive fields that resemble prototypical samples (Rumelhart & Zipser, 1985). With this distinction in mind, the receptive fields in the BCPNN should be closely examined (Fig. 4A). The receptive fields of HCs (first column) are localized and span the input space, much like a distributed representation. Within each HC, however, the MCs have receptive fields (each row) resembling prototypical samples, like diverse sets of lines and strokes. This suggests that the BCPNN representations are “hybrid”, with the higher-level HCs coding a distributed representation, and the lower-level MCs coding a local representation." }, { "heading": "4.3 CLASSIFICATION PERFORMANCE", "text": "For all four models of unsupervised learning, we employed the same linear classifier for predicting the labels (see Fig. 3B). This allowed us to consistently evaluate the representations learned by the different models. The linear classifier takes the hidden layer as its input and the MNIST labels as the output. The output layer consists of softmax units for the 10 labels. The classifier’s weights were trained by stochastic gradient descent with the Adam optimizer (Kingma & Ba, 2014) using a cross-entropy loss function. The training procedure used minibatches of 256 samples and a total of 300 training epochs (a code sketch of this evaluation protocol is given after the Discussion section below).

The results of the classification are shown in Table 2. All the results presented here are the mean and standard deviation of the classification accuracy over ten random runs of the network. We performed three independent comparisons of BCPNN with KH, RBM, and AE using the Kruskal-Wallis H test. BCPNN outperforms KH (p=0.02), while there is no statistically significant difference with RBM (p=0.28) or AE (p=0.30). Note that the KH accuracy here is lower than the test accuracy of 98.54% reported by Krotov & Hopfield (2019); this is due to differences in the classifier used: they used an exponentiated ReLU activation along with an exponentiated loss function, whereas we used a simpler softmax activation and cross-entropy loss." }, { "heading": "5 DISCUSSION", "text": "We have evaluated four different network models that can perform unsupervised representation learning using correlation-based, biologically plausible local learning rules. 
We made our assessment relying on the assumption that the saliency of representations is reflected in their class-dependent separability, which can be quantified by classification performance, similar to Illing et al. (2019) and Krotov & Hopfield (2019). Learning representations without supervised fine-tuning is a harder task compared to similar networks with end-to-end backprop, since the information about the samples’ corresponding labels cannot be utilized. Consequently, representations learnt with unsupervised methods cannot be expected to offer better class separability than the classification performance reported by supervised end-to-end approaches. We show that the investigated unsupervised methods score remarkably similarly, around 97.7%, which is only slightly worse than the 98.5% accuracy of networks with one hidden layer trained with end-to-end backprop (LeCun et al., 1998).

We also showed that the recently proposed BCPNN model performs competitively against other unsupervised learning models. The modular structure of the BCPNN layer led to “hybrid” representations that differ from the well-known distributed and local representations. In contrast to the minibatch method of the other unsupervised learning approaches, learning in BCPNN was chosen to remain incremental, using dynamical equations, since such learning is biologically feasible and useful in many autonomous engineering solutions. Despite the slow convergence properties of an incremental approach, BCPNN required only 5 epochs of unsupervised training, in comparison to 300 epochs for AE and RBM, and 1000 epochs for KH. The incremental learning, along with the modular architecture, sparse connectivity, and scalability of BCPNN, is currently also taken advantage of in dedicated VLSI design (Stathis et al., 2020).

One important difference between current deep learning architectures and the brain concerns the abundance of recurrent connections in the latter. Deep learning architectures rely predominantly on feedforward connectivity. A typical cortical area receives only around 10% of its synapses from lower order structures, e.g. the thalamus, and the rest from other cortical areas (Douglas & Martin, 2007). These feedback and recurrent cortical connections are likely involved in associative memory, constraint satisfaction (e.g. for figure-ground segmentation), top-down modulation, and selective attention (Douglas & Martin, 2007). Incorporating these important aspects of cortical computation can play a key role in improving our machine learning models and approaches.

It is important to note that the unsupervised learning models discussed in this work are proof-of-concept designs and not meant to directly model some specific biological system or structure. Yet, they may shed some light on the hierarchical functional organization of e.g. sensory processing streams in the brain. Further work will focus on extending our study to multi-layer architectures." } ]
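To make the linear evaluation protocol of Section 4.3 concrete, here is a minimal PyTorch sketch; the variable names (z_train, y_train) and shapes are illustrative assumptions rather than the authors' code:

```python
import torch
import torch.nn as nn

# Assumes z_train holds the frozen hidden representations, shape (N, 3000),
# and y_train holds the MNIST labels as a LongTensor of shape (N,).

probe = nn.Linear(3000, 10)          # softmax output over the 10 labels
opt = torch.optim.Adam(probe.parameters())
loss_fn = nn.CrossEntropyLoss()      # softmax + cross-entropy combined

def train_probe(z_train, y_train, epochs=300, batch=256):
    n = z_train.shape[0]
    for _ in range(epochs):
        perm = torch.randperm(n)
        for i in range(0, n, batch):
            idx = perm[i:i + batch]
            opt.zero_grad()
            loss = loss_fn(probe(z_train[idx]), y_train[idx])
            loss.backward()
            opt.step()
```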
2,020
A COMPARATIVE STUDY
SP:66df426d54b2965855f955ec2946f5304b974ef5
[ "This work proposes LeVER, a method that modifies general off-policy RL algorithms with a fixed layer freezing policy for early embedding layers (in this particular case, a few early layers of a CNN). As a direct consequence, the method enables to store embeddings in the experience replay buffer rather than observations, with a potential decrease in memory required, as well as providing a boost in clock time due to fewer gradient computations needed for every update. The method is benchmarked with a couple of off-policy RL algorithms against a few different environments.", "This manuscript proposes to reduce the intensive computation and memory requirement in reinforcement learning trainings by freezing the parameters of lower layers early. Besides, the authors also propose to store the low-dimensional latent vectors rather than the high-dimensional images in the replay buffer for experience replay. The effectiveness of the proposed techniques is evaluated on DeepMind Control environments and Atari. The motivation for this work is strong, and the results are impressive. However, the proposed technique is described in a very general way without clearly defined applicable conditions and specific design methods. Below are detailed comments and questions." ]
Recent advances in off-policy deep reinforcement learning (RL) have led to impressive success in complex tasks from visual observations. Experience replay improves sample-efficiency by reusing experiences from the past, and convolutional neural networks (CNNs) process high-dimensional inputs effectively. However, such techniques demand high memory and computational bandwidth. In this paper, we present Latent Vector Experience Replay (LeVER), a simple modification of existing off-policy RL methods, to address these computational and memory requirements without sacrificing the performance of RL agents. To reduce the computational overhead of gradient updates in CNNs, we freeze the lower layers of CNN encoders early in training due to early convergence of their parameters. Additionally, we reduce memory requirements by storing the low-dimensional latent vectors for experience replay instead of high-dimensional images, enabling an adaptive increase in the replay buffer capacity, a useful technique in constrained-memory settings. In our experiments, we show that LeVER does not degrade the performance of RL agents while significantly saving computation and memory across a diverse set of DeepMind Control environments and Atari games. Finally, we show that LeVER is useful for computation-efficient transfer learning in RL because lower layers of CNNs extract generalizable features, which can be used for different tasks and domains.
[]
[ { "authors": [ "Adrià Puigdomènech Badia", "Bilal Piot", "Steven Kapturowski", "Pablo Sprechmann", "Alex Vitvitskyi", "Daniel Guo", "Charles Blundell" ], "title": "Agent57: Outperforming the atari human benchmark", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Marc G Bellemare", "Yavar Naddaf", "Joel Veness", "Michael Bowling" ], "title": "The arcade learning environment: An evaluation platform for general agents", "venue": "Journal of Artificial Intelligence Research,", "year": 2013 }, { "authors": [ "Marc G Bellemare", "Will Dabney", "Rémi Munos" ], "title": "A distributional perspective on reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Davis Blalock", "Jose Javier Gonzalez Ortiz", "Jonathan Frankle", "John Guttag" ], "title": "What is the state of neural network pruning", "venue": "arXiv preprint arXiv:2003.03033,", "year": 2020 }, { "authors": [ "Lasse Espeholt", "Hubert Soyer", "Remi Munos", "Karen Simonyan", "Volodymir Mnih", "Tom Ward", "Yotam Doron", "Vlad Firoiu", "Tim Harley", "Iain Dunning", "Shane Legg", "Koray Kavukcuoglu" ], "title": "Impala: Scalable distributed deep-rl with importance weighted actor-learner architectures, 2018", "venue": null, "year": 2018 }, { "authors": [ "Kuan Fang", "Alexander Toshev", "Li Fei-Fei", "Silvio Savarese" ], "title": "Scene memory transformer for embodied agents in long-horizon tasks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Jonathan Frankle", "Michael Carbin" ], "title": "The lottery ticket hypothesis: Finding sparse, trainable neural networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Pieter Abbeel", "Sergey Levine" ], "title": "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Danijar Hafner", "Timothy Lillicrap", "Ian Fischer", "Ruben Villegas", "David Ha", "Honglak Lee", "James Davidson" ], "title": "Learning latent dynamics for planning from pixels", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Danijar Hafner", "Timothy Lillicrap", "Jimmy Ba", "Mohammad Norouzi" ], "title": "Dream to control: Learning behaviors by latent imagination", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Song Han", "Huizi Mao", "William J Dally" ], "title": "Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding", "venue": "arXiv preprint arXiv:1510.00149,", "year": 2015 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Matteo Hessel", "Joseph Modayil", "Hado Van Hasselt", "Tom Schaul", "Georg Ostrovski", "Will Dabney", "Dan Horgan", "Bilal Piot", "Mohammad Azar", "David Silver" ], "title": "Rainbow: Combining improvements in deep reinforcement learning", "venue": "In AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Irina Higgins", "Arka Pal", "Andrei A Rusu", "Loic Matthey", "Christopher P Burgess", "Alexander Pritzel", "Matthew 
Botvinick", "Charles Blundell", "Alexander Lerchner. Darla" ], "title": "Improving zero-shot transfer in reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Gao Huang", "Shichen Liu", "Laurens Van der Maaten", "Kilian Q Weinberger" ], "title": "Condensenet: An efficient densenet using learned group convolutions", "venue": "In IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Forrest N Iandola", "Song Han", "Matthew W Moskewicz", "Khalid Ashraf", "William J Dally", "Kurt Keutzer" ], "title": "Squeezenet: Alexnet-level accuracy with 50x fewer parameters and¡ 0.5 mb model size", "venue": "arXiv preprint arXiv:1602.07360,", "year": 2016 }, { "authors": [ "Max Jaderberg", "Volodymyr Mnih", "Wojciech Marian Czarnecki", "Tom Schaul", "Joel Z Leibo", "David Silver", "Koray Kavukcuoglu" ], "title": "Reinforcement learning with unsupervised auxiliary tasks", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Jongheon Jeong", "Jinwoo Shin" ], "title": "Training cnns with selective allocation of channels", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Leslie Pack Kaelbling", "Michael L Littman", "Anthony R Cassandra" ], "title": "Planning and acting in partially observable stochastic domains", "venue": "Artificial intelligence,", "year": 1998 }, { "authors": [ "Lukasz Kaiser", "Mohammad Babaeizadeh", "Piotr Milos", "Blazej Osinski", "Roy H Campbell", "Konrad Czechowski", "Dumitru Erhan", "Chelsea Finn", "Piotr Kozakowski", "Sergey Levine" ], "title": "Model-based reinforcement learning for atari", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Dmitry Kalashnikov", "Alex Irpan", "Peter Pastor", "Julian Ibarz", "Alexander Herzog", "Eric Jang", "Deirdre Quillen", "Ethan Holly", "Mrinal Kalakrishnan", "Vincent Vanhoucke" ], "title": "Qt-opt: Scalable deep reinforcement learning for vision-based robotic manipulation", "venue": "In Conference on Robot Learning,", "year": 2018 }, { "authors": [ "Ilya Kostrikov", "Denis Yarats", "Rob Fergus" ], "title": "Image augmentation is all you need: Regularizing deep reinforcement learning from pixels", "venue": "arXiv preprint arXiv:2004.13649,", "year": 2020 }, { "authors": [ "Brenden M Lake", "Tomer D Ullman", "Joshua B Tenenbaum", "Samuel J Gershman" ], "title": "Building machines that learn and think like people", "venue": "Behavioral and brain sciences,", "year": 2017 }, { "authors": [ "Michael Laskin", "Kimin Lee", "Adam Stooke", "Lerrel Pinto", "Pieter Abbeel", "Aravind Srinivas" ], "title": "Reinforcement learning with augmented data", "venue": "In Advances in neural information processing systems,", "year": 2020 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Kimin Lee", "Michael Laskin", "Aravind Srinivas", "Pieter Abbeel" ], "title": "Sunrise: A simple unified framework for ensemble learning in deep reinforcement learning", "venue": "arXiv preprint arXiv:2007.04938,", "year": 2020 }, { "authors": [ "Timothy P Lillicrap", "Jonathan J Hunt", "Alexander Pritzel", "Nicolas Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "In International 
Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Long-Ji Lin" ], "title": "Self-improving reactive agents based on reinforcement learning, planning and teaching", "venue": "Machine learning,", "year": 1992 }, { "authors": [ "Peter Mattson", "Christine Cheng", "Cody Coleman", "Greg Diamos", "Paulius Micikevicius", "David Patterson", "Hanlin Tang", "Gu-Yeon Wei", "Peter Bailis", "Victor Bittorf" ], "title": "Mlperf training benchmark", "venue": "In Conference on Machine Learning and Systems,", "year": 2020 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "Ari Morcos", "Maithra Raghu", "Samy Bengio" ], "title": "Insights on representational similarity in neural networks with canonical correlation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Aaron van den Oord", "Yazhe Li", "Oriol Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": "arXiv preprint arXiv:1807.03748,", "year": 2018 }, { "authors": [ "Ian Osband", "Charles Blundell", "Alexander Pritzel", "Benjamin Van Roy" ], "title": "Deep exploration via bootstrapped dqn", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Lorenzo Pellegrini", "Gabrile Graffieti", "Vincenzo Lomonaco", "Davide Maltoni" ], "title": "Latent replay for real-time continual learning", "venue": "arXiv preprint arXiv:1912.01100,", "year": 2019 }, { "authors": [ "Maithra Raghu", "Justin Gilmer", "Jason Yosinski", "Jascha Sohl-Dickstein" ], "title": "Svcca: Singular vector canonical correlation analysis for deep understanding and improvement", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Julian Schrittwieser", "Ioannis Antonoglou", "Thomas Hubert", "Karen Simonyan", "Laurent Sifre", "Simon Schmitt", "Arthur Guez", "Edward Lockhart", "Demis Hassabis", "Thore Graepel" ], "title": "Mastering atari, go, chess and shogi by planning with a learned model", "venue": "arXiv preprint arXiv:1911.08265,", "year": 2019 }, { "authors": [ "John Schulman", "Xi Chen", "Pieter Abbeel" ], "title": "Equivalence between policy gradients and soft qlearning", "venue": "arXiv preprint arXiv:1704.06440,", "year": 2017 }, { "authors": [ "Aravind Srinivas", "Michael Laskin", "Pieter Abbeel" ], "title": "Curl: Contrastive unsupervised representations for reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Adam Stooke", "Kimin Lee", "Pieter Abbeel", "Michael Laskin" ], "title": "Decoupling representation learning from reinforcement learning, 2020", "venue": null, "year": 2020 }, { "authors": [ "Zhiqing Sun", "Hongkun Yu", "Xiaodan Song", "Renjie Liu", "Yiming Yang", "Denny Zhou" ], "title": "Mobilebert: a compact task-agnostic bert for resource-limited devices", "venue": "In Annual Meeting of the Association for Computational Linguistics,", "year": 2020 }, { "authors": [ "Richard S Sutton" ], "title": "Dyna, an integrated architecture for learning, planning, and reacting", "venue": "ACM Sigart Bulletin,", "year": 1991 }, { "authors": [ "Richard S Sutton", "Andrew G Barto" ], "title": "Reinforcement learning: An introduction", "venue": null, "year": 2018 
}, { "authors": [ "Yi Tay", "Aston Zhang", "Luu Anh Tuan", "Jinfeng Rao", "Shuai Zhang", "Shuohang Wang", "Jie Fu", "Siu Cheung Hui" ], "title": "Lightweight and efficient neural natural language processing with quaternion networks", "venue": "In Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Hado Van Hasselt", "Arthur Guez", "David Silver" ], "title": "Deep reinforcement learning with double qlearning", "venue": "In AAAI Conference on Artificial Intelligence,", "year": 2016 }, { "authors": [ "Hado P van Hasselt", "Matteo Hessel", "John Aslanides" ], "title": "When to use parametric models in reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Oriol Vinyals", "Igor Babuschkin", "Wojciech M Czarnecki", "Michaël Mathieu", "Andrew Dudzik", "Junyoung Chung", "David H Choi", "Richard Powell", "Timo Ewalds", "Petko Georgiev" ], "title": "Grandmaster level in starcraft ii using multi-agent reinforcement learning", "venue": null, "year": 2019 }, { "authors": [ "Denis Yarats", "Amy Zhang", "Ilya Kostrikov", "Brandon Amos", "Joelle Pineau", "Rob Fergus" ], "title": "Improving sample efficiency in model-free reinforcement learning from images", "venue": null, "year": 1910 }, { "authors": [ "Jason Yosinski", "Jeff Clune", "Yoshua Bengio", "Hod Lipson" ], "title": "How transferable are features in deep neural networks", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Brian D Ziebart" ], "title": "Modeling purposeful adaptive behavior with the principle of maximum causal entropy", "venue": null, "year": 2010 } ]
[ { "heading": "1 INTRODUCTION", "text": "Success stories of deep reinforcement learning (RL) from high dimensional inputs such as pixels or large spatial layouts include achieving superhuman performance on Atari games (Mnih et al., 2015; Schrittwieser et al., 2019; Badia et al., 2020), grandmaster level in Starcraft II (Vinyals et al., 2019) and grasping a diverse set of objects with impressive success rates and generalization with robots in the real world (Kalashnikov et al., 2018). Modern off-policy RL algorithms (Mnih et al., 2015; Hessel et al., 2018; Hafner et al., 2019; 2020; Srinivas et al., 2020; Kostrikov et al., 2020; Laskin et al., 2020) have improved the sample-efficiency of agents that process high-dimensional pixel inputs with convolutional neural networks (CNNs; LeCun et al. 1998) using the past experiential data that is typically stored as raw observations form in a replay buffer (Lin, 1992). However, these methods demand high memory and computational bandwidth, which makes deep RL inaccessible in several scenarios, such as learning with much lighter on-device computation (e.g. mobile phones or other light-weight edge devices).\nFor compute- and memory-efficient deep learning, several strategies, such as network pruning (Han et al., 2015; Frankle & Carbin, 2019), quantization (Han et al., 2015; Iandola et al., 2016) and freezing (Yosinski et al., 2014; Raghu et al., 2017) have been proposed in supervised learning and unsupervised learning for various purposes (see Section 2 for more details). In computer vision, Raghu et al. (2017) showed that the computational cost of updating CNNs can be reduced by freezing lower layers earlier in training, and Han et al. (2015) introduced a deep compression, which reduces the memory requirement of neural networks by producing a sparse network. In natural language processing, several approaches (Tay et al., 2019; Sun et al., 2020) have studied improving the computational efficiency of Transformers (Vaswani et al., 2017). In deep RL, however, developing compute- and memory-efficient techniques has received relatively little attention despite their serious impact on the practicality of RL algorithms.\nIn this paper, we propose Latent Vector Experience Replay (LeVER), a simple technique to reduce computational overhead and memory requirements that is compatible with various off-policy RL algorithms (Haarnoja et al., 2018; Hessel et al., 2018; Srinivas et al., 2020). Our main idea is to freeze the lower layers of CNN encoders of RL agents early in training, which enables two key capabilities: (a) compute-efficiency: reducing the computational overhead of gradient updates in CNNs; (b) memory-efficiency: saving memory by storing the low-dimensional latent vectors to experience replay instead of high-dimensional images. Additionally, we leverage the memory-efficiency of LeVER to adaptively increase the replay capacity, resulting in improved sample-efficiency of offpolicy RL algorithms in constrained-memory settings. 
LeVER achieves these improvements without sacrificing the performance of RL agents, due to the early convergence of CNN encoders.

To summarize, the main contributions of this paper are as follows:

• We present LeVER, a compute- and memory-efficient technique that can be used in conjunction with most modern off-policy RL algorithms (Haarnoja et al., 2018; Hessel et al., 2018).

• We show that LeVER significantly reduces computation while matching the original performance of existing RL algorithms on both continuous control tasks from the DeepMind Control Suite (Tassa et al., 2018) and discrete control tasks from Atari games (Bellemare et al., 2013).

• We show that LeVER improves the sample-efficiency of RL agents in constrained-memory settings by enabling an increased replay buffer capacity.

• Finally, we show that LeVER is useful for computation-efficient transfer learning, highlighting the generality and transferability of encoder features." }, { "heading": "2 RELATED WORK", "text": "Off-policy deep reinforcement learning. The most sample-efficient RL agents often use off-policy RL algorithms, a recipe for improving the agent’s policy from experiences that may have been recorded with a different policy (Sutton & Barto, 2018). Off-policy RL algorithms are typically based on Q-Learning (Watkins & Dayan, 1992), which estimates the optimal value functions for the task at hand, while actor-critic based off-policy methods (Lillicrap et al., 2016; Schulman et al., 2017; Haarnoja et al., 2018) are also commonly used. In this paper we consider Deep Q-Networks (DQN; Mnih et al. 2015), which combine the function approximation capability of deep convolutional neural networks (CNNs; LeCun et al. 1998) with Q-Learning and the use of an experience replay buffer (Lin, 1992), as well as off-policy actor-critic methods (Lillicrap et al., 2016; Haarnoja et al., 2018), which have been proposed for continuous control tasks.

Taking into account the learning ability of humans and the practical limitations of wall-clock time for deploying RL algorithms in the real world, particularly those that learn from raw high-dimensional inputs such as pixels (Kalashnikov et al., 2018), the sample-inefficiency of off-policy RL algorithms has been a research topic of wide interest and importance (Lake et al., 2017; Kaiser et al., 2020). To address this, several improvements in pixel-based off-policy RL have been proposed recently: algorithmic improvements such as Rainbow (Hessel et al., 2018) and its data-efficient versions (van Hasselt et al., 2019); ensemble approaches based on bootstrapping (Osband et al., 2016; Lee et al., 2020); combining RL algorithms with auxiliary predictive, reconstruction, and contrastive losses (Jaderberg et al., 2017; Higgins et al., 2017; Oord et al., 2018; Yarats et al., 2019; Srinivas et al., 2020; Stooke et al., 2020); using world models for auxiliary losses and/or synthetic rollouts (Sutton, 1991; Ha & Schmidhuber, 2018; Kaiser et al., 2020; Hafner et al., 2020); and using data augmentations on images to improve sample-efficiency (Laskin et al., 2020; Kostrikov et al., 2020).

Compute-efficient techniques in machine learning. Most recent progress in deep learning and RL has relied heavily on increased access to more powerful computational resources. To address this, Mattson et al.
(2020) presented MLPerf, a fair and precise ML benchmark to evaluate model training time on standard datasets, reflecting a recent focus on mitigating the computational cost of training ML models. Several techniques, such as pruning and quantization (Han et al., 2015; Frankle & Carbin, 2019; Blalock et al., 2020; Iandola et al., 2016; Tay et al., 2019), have been developed to address compute and memory requirements. Raghu et al. (2017) proposed freezing earlier layers to remove computationally expensive backward passes in supervised learning tasks, motivated by the bottom-up convergence of neural networks. This intuition was further extended to recurrent neural networks (Morcos et al., 2018) and continual learning (Pellegrini et al., 2019), and Yosinski et al. (2014) study the transferability of frozen and fine-tuned CNN parameters. Fang et al. (2019) store low-dimensional embeddings of input observations in a scene memory for long-horizon tasks. We focus on the feasibility of freezing neural network layers in deep RL and show that this idea can improve the compute- and memory-efficiency of many off-policy algorithms using standard RL benchmarks." }, { "heading": "3 BACKGROUND", "text": "We formulate the visual control task as a partially observable Markov decision process (POMDP; Sutton & Barto 2018; Kaelbling et al. 1998). Formally, at each timestep t, the agent receives a high-dimensional observation o_t, which is an indirect representation of the state s_t, and chooses an action a_t based on its policy π. The environment returns a reward r_t and the agent transitions to the next observation o_{t+1}. The return R_t = Σ_{k=0}^{∞} γ^k r_{t+k} is the total accumulated reward from timestep t with a discount factor γ ∈ [0, 1). The goal of RL is to learn a policy π that maximizes the expected return over trajectories. Following the common practice in DQN (Mnih et al., 2015), we handle the partial observability of the environment using stacked input observations, which are processed through the convolutional layers of an encoder f_ψ.

Soft Actor-Critic. SAC (Haarnoja et al., 2018) is an off-policy actor-critic method based on the maximum entropy RL framework (Ziebart, 2010), which encourages robustness to noise and exploration by maximizing a weighted objective of the reward and the policy entropy. To update the parameters, SAC alternates between a soft policy evaluation and a soft policy improvement. At the soft policy evaluation step, a soft Q-function, which is modeled as a neural network with parameters θ, is updated by minimizing the following soft Bellman residual:

L_Q^{SAC}(θ, ψ) = E_{τ_t ∼ B} [ ( Q_θ(f_ψ(o_t), a_t) − r_t − γ E_{a_{t+1} ∼ π_φ} [ Q_θ̄(f_ψ̄(o_{t+1}), a_{t+1}) − α log π_φ(a_{t+1} | f_ψ(o_{t+1})) ] )^2 ],

where τ_t = (o_t, a_t, r_t, o_{t+1}) is a transition, B is a replay buffer, θ̄, ψ̄ are the delayed parameters, and α is a temperature parameter. At the soft policy improvement step, the policy π with its parameter φ is updated by minimizing the following objective:

L_π^{SAC}(φ) = E_{o_t ∼ B, a_t ∼ π_φ} [ α log π_φ(a_t | f_ψ(o_t)) − Q_θ(f_ψ(o_t), a_t) ].    (1)

Here, the policy is modeled as a Gaussian with mean and covariance given by neural networks to handle continuous action spaces.
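To ground the soft policy evaluation step, here is a minimal PyTorch-style sketch of the soft Bellman residual above; the module names (encoder, critic, actor, and their target copies) are illustrative assumptions, not the authors' code:

```python
import torch

# Assumes module-level torch modules: encoder (f_psi), critic (Q_theta),
# frozen target copies encoder_targ (f_psi_bar) and critic_targ
# (Q_theta_bar), and an actor returning (action, log_prob).

def soft_bellman_residual(batch, alpha, gamma):
    o, a, r, o2 = batch  # observations, actions, rewards, next observations
    with torch.no_grad():
        z2 = encoder(o2)                      # f_psi(o_{t+1}) for the policy
        a2, logp2 = actor(z2)                 # a_{t+1} ~ pi_phi
        q_targ = critic_targ(encoder_targ(o2), a2)
        target = r + gamma * (q_targ - alpha * logp2)
    q = critic(encoder(o), a)
    return ((q - target) ** 2).mean()
```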
Deep Q-learning. The DQN algorithm (Mnih et al., 2015) learns a Q-function, which is modeled as a neural network with parameters θ, by minimizing the following Bellman residual:

L^{DQN}(θ, ψ) = E_{τ_t ∼ B} [ ( Q_θ(f_ψ(o_t), a_t) − r_t − γ max_a Q_θ̄(f_ψ̄(o_{t+1}), a) )^2 ],    (2)

where τ_t = (o_t, a_t, r_t, o_{t+1}) is a transition, B is a replay buffer, and θ̄, ψ̄ are the delayed parameters. Rainbow DQN integrates several techniques, such as double Q-learning (Van Hasselt et al., 2016) and distributional DQN (Bellemare et al., 2017). For exposition, we refer the reader to Hessel et al. (2018) for more detailed explanations of Rainbow DQN." }, { "heading": "4 LEVER: LATENT VECTOR EXPERIENCE REPLAY", "text": "In this section, we present LeVER: Latent Vector Experience Replay, which can be used in conjunction with most modern off-policy RL algorithms, such as SAC (Haarnoja et al., 2018) and Rainbow DQN (Hessel et al., 2018). Our main idea is to freeze lower layers during training and only update higher layers, which eliminates the computational overhead of computing gradients and performing updates in the lower layers. We additionally improve the memory-efficiency of off-policy RL algorithms by storing low-dimensional latent vectors in the replay buffer instead of high-dimensional pixel observations. See Figure 1 and Appendix A for more details of our method." }, { "heading": "4.1 FREEZING ENCODER FOR SAVING COMPUTATION AND MEMORY", "text": "We process high-dimensional image input with an encoder f_ψ to obtain z_t = f_ψ(o_t), which is used as input for the policy π_φ and Q-function Q_θ as described in Section 3. In off-policy RL, we store transitions (o_t, a_t, o_{t+1}, r_t) in the replay buffer B to improve sample-efficiency by reusing experience from the past. However, processing high-dimensional image input o_t is computationally expensive. To handle this issue, after T_f updates, we freeze the parameters of the encoder ψ and only update the policy and Q-function. We remark that this simple technique can save computation without performance degradation because the encoder is modeled as a deep convolutional neural network, while a shallow MLP is used for the policy and Q-function. Freezing lower layers of neural networks has also been investigated in supervised learning, based on the observation that neural networks converge to their final representations from the bottom up, i.e., lower layers converge very early in training (Raghu et al., 2017). For the first time, we show the feasibility and effectiveness of this idea in pixel-based reinforcement learning (see Figure 7(a) for supporting experimental results) and present solutions to its RL-specific implementation challenges.

Moreover, in order to save memory, we consider storing (compressed) latent vectors instead of high-dimensional image inputs. Specifically, each experience in B is replaced by the latent transition (z_t, a_t, z_{t+1}, r_t), and the replay capacity is increased to Ĉ (see Section 4.2 for more details). Thereafter, for each subsequent environment interaction, the latent vectors z_t = f_ψ(o_t) and z_{t+1} = f_ψ(o_{t+1}) are computed prior to storing (z_t, a_t, z_{t+1}, r_t) in B. During agent updates, the sampled latent vectors are directly passed into the policy π_φ and Q-function Q_θ, bypassing the encoder convolutional layers. Since the agent samples and trains with latent vectors after freezing, we only store the latent vectors and avoid the need to maintain large image observations in B. (A code sketch of this latent-vector replay scheme follows below.)" },
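A minimal sketch of the freeze-then-store-latents mechanics described above, assuming PyTorch modules and a simple list-based buffer; all names are illustrative:

```python
import torch

def freeze(encoder):
    # Stop gradient computation for the convolutional encoder after T_f updates.
    for p in encoder.parameters():
        p.requires_grad = False
    encoder.eval()

class LatentReplayBuffer:
    """Ring buffer storing latent transitions (z_t, a_t, r_t, z_{t+1})."""
    def __init__(self, capacity):
        self.capacity, self.data, self.idx = capacity, [], 0

    def add(self, z_t, a_t, r_t, z_tp1):
        item = (z_t, a_t, r_t, z_tp1)
        if len(self.data) < self.capacity:
            self.data.append(item)
        else:  # overwrite the oldest transition
            self.data[self.idx] = item
            self.idx = (self.idx + 1) % self.capacity

# After freezing, store latents instead of images, e.g.:
# with torch.no_grad():
#     z_t, z_tp1 = encoder(o_t), encoder(o_tp1)
# buffer.add(z_t, a_t, r_t, z_tp1)
```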
{ "heading": "4.2 ADDITIONAL TECHNIQUES AND DETAILS FOR LEVER", "text": "Data augmentations. Recently, various data augmentations (Srinivas et al., 2020; Laskin et al., 2020; Kostrikov et al., 2020) have provided large gains in the sample-efficiency of RL from pixel observations. However, LeVER precludes data augmentations because we store latent vectors instead of raw pixel observations. We find that the absence of data augmentation can decrease sample-efficiency in some cases, e.g., when the capacity of B is small. To mitigate this issue, we perform K different data augmentations for each input observation o_t and store K distinct latent vectors {z_t^k = f_ψ(AUG_k(o_t)) | k = 1, ..., K}. We find empirically that K = 4 achieves performance competitive with standard RL algorithms in most cases.
Increasing replay capacity. By storing latent vectors in the replay buffer, we can adaptively increase its capacity (i.e., the total number of transitions), which is determined by the size difference between the input pixel observations and the latent vectors output by the encoder, with a few additional considerations. The new capacity of the replay buffer is
$$\hat{C} = \left\lfloor C \cdot \frac{P}{4NKL} \right\rfloor,$$
where C is the capacity of the original replay buffer, P is the size of a raw observation, L is the size of a latent vector, and K is the number of data augmentations. The number of encoders N is algorithm-specific and determines the number of distinct latent vectors encountered for each observation during training. For Q-learning algorithms N = 1, whereas for actor-critic algorithms N = 2 if the actor and critic each compute their own latent vectors. Some algorithms employ a target network for updating the Q-function (Mnih et al., 2015; Haarnoja et al., 2018), but we use the same latent vectors for the online and target networks after freezing to avoid storing target latent vectors separately, and we find that tying their parameters does not degrade performance.1 The factor of 4 arises from the cost of saving floats for latent vectors, while raw pixel observations are saved as integer pixel values. We assume the memory required for actions and rewards is small and consider only the memory used for observations." },
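The capacity formula above is simple integer arithmetic; a small Python sketch is given below. The example observation size (three stacked 84x84 RGB frames stored as bytes) is an illustrative assumption, not a value taken from the paper.

def enlarged_capacity(C, P, L, N, K):
    # C_hat = floor(C * P / (4 * N * K * L)); the factor 4 converts
    # byte-sized pixels into float-sized latent entries.
    return (C * P) // (4 * N * K * L)

P = 84 * 84 * 3 * 3   # bytes per stacked uint8 observation (assumed shape)
print(enlarged_capacity(C=1000, P=P, L=50, N=2, K=4))   # -> 39690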
{ "heading": "5 EXPERIMENTAL RESULTS", "text": "We designed our experiments to answer the following questions:
• Can LeVER reduce the computational overhead of various off-policy RL algorithms for both continuous (see Figure 2) and discrete (see Figure 3) control tasks?
• Can LeVER reduce the memory consumption and improve the sample-efficiency of off-policy RL algorithms by adaptively increasing the buffer size (see Figure 4 and Figure 5)?
• Can LeVER be useful for computation-efficient transfer learning (see Figure 7(a))?
• Do CNN encoders of RL agents converge early in training (see Figure 7(b) and Figure 7(c))?" }, { "heading": "5.1 SETUPS", "text": "Computational efficiency. We first demonstrate the computational efficiency of LeVER on the DeepMind Control Suite (DMControl; Tassa et al. 2018) and Atari games (Bellemare et al., 2013) benchmarks. DMControl is commonly used for benchmarking the sample-efficiency of image-based continuous control methods. For DMControl experiments, we consider a state-of-the-art model-free RL method, which applies contrastive learning (CURL; Srinivas et al. 2020) to SAC (Haarnoja et al., 2018), using the image encoder architecture from SAC-AE (Yarats et al., 2019). For evaluation, we compare the computational efficiency of CURL with and without LeVER by measuring floating point operations (FLOPs).2 For discrete control tasks from Atari games, we perform similar experiments comparing the FLOPs required by Rainbow (Hessel et al., 2018) with and without LeVER. For both our method and the baseline, we use the hyperparameters and encoder architecture of data-efficient Rainbow (van Hasselt et al., 2019). We train our method and the baseline for 500K timesteps, as done in Srinivas et al. (2020) and Laskin et al. (2020).
Memory efficiency. We showcase the memory efficiency of LeVER with a set of constrained-memory experiments in DMControl. For Cartpole and Finger, the memory allocated for storing observations is constrained to 0.03 GB, corresponding to an initial replay buffer capacity C = 1000. For Reacher and Walker, the memory is constrained to 0.06 GB for an initial capacity of C = 2000 due to the difficulty of learning in these environments. In this constrained-memory setting, we compare the sample-efficiency of CURL with and without LeVER. As an upper bound, we also report the performance of CURL without memory constraints, i.e., the replay capacity is set to the number of training steps. For Atari experiments, the baseline agent is data-efficient Rainbow and the memory allocation is 0.07 GB, corresponding to an initial replay capacity C = 10000. The other hyperparameters are the same as those in the computational efficiency experiments.
The encoder architecture used for our experiments with CURL is from Yarats et al. (2019). It consists of four convolutional layers with 3 × 3 kernels and 32 channels, with a ReLU activation applied after each convolutional layer. The architecture used for our Rainbow experiments is from van Hasselt et al. (2019), consisting of a convolutional layer with 32 channels followed by a convolutional layer with 64 channels, both with 5 × 5 kernels and followed by a ReLU activation. For our method, we freeze the first fully-connected layer of the actor and critic in CURL experiments and the last convolutional layer of the encoder in Rainbow experiments (see Appendix E for justification). We present the best results achieved by LeVER across various values of T_f, the number of training steps before freezing the encoder. The full list of hyperparameters is provided in Appendix G (DMControl) and Appendix H (Atari).
1We remark that the higher layers of the target network are not tied to the online network after freezing. 2We explain our procedure for counting the number of FLOPs in Appendix B." }, { "heading": "5.2 IMPROVING COMPUTE- AND MEMORY-EFFICIENCY", "text": "Experimental results in DMControl and Atari showcasing the computational efficiency of LeVER are provided in Figures 2 and 3. CURL and Rainbow both achieve higher performance within significantly fewer FLOPs when combined with LeVER in DMControl and Atari, respectively. Additionally, Table 1 compares the performance of Rainbow with and without LeVER at 45T (4.5e13) FLOPs. In particular, the average return improves from 145.8 to 276.6 over baseline Rainbow in BankHeist and from 2325.5 to 4123.5 in Qbert. We remark that LeVER achieves better computational efficiency while maintaining the agent’s final performance and comparable sample-efficiency (see Appendix E for corresponding figures).
Experimental results in Atari and DMControl showcasing the sample-efficiency of LeVER in the constrained-memory setup are provided in Figure 4 and Figure 5.
CURL and Rainbow achieve higher final performance and better sample-efficiency when combined with LeVER in DMControl and Atari, respectively. Additionally, Table 1 compares the performance of unbounded-memory Rainbow and constrained-memory (0.07 GB) Rainbow with and without LeVER at 500K environment interactions. In particular, the average return improves from 10498.0 to 17620.0 over baseline Rainbow in CrazyClimber and from 2430.5 to 3231.0 in Qbert. Although we disentangle the computational and memory benefits of LeVER in these experiments, we also highlight the computational gain of LeVER in constrained-memory settings (effectively combining the benefits) in Appendix D." }, { "heading": "5.3 FREEZING LARGER CONVOLUTIONAL ENCODERS", "text": "We also verify the benefits of LeVER using deeper convolutional encoders, which are widely used in a range of applications such as visual navigation tasks and favored for their superior generalization ability. Specifically, we follow the setup described in Section 5.1 and replace the SAC-AE architecture (4 convolutional layers) with the IMPALA architecture (Espeholt et al., 2018) (15 convolutional layers containing residual blocks (He et al., 2016)). Figure 6(b) shows the computational efficiency of LeVER in Cartpole-swingup and Walker-walk with the IMPALA architecture. CURL achieves higher performance within significantly fewer FLOPs when combined with LeVER. We remark that the gains due to LeVER are more significant here because computing gradients and updating parameters for large convolutional encoders is very computationally expensive." }, { "heading": "5.4 IMPROVING SAMPLE EFFICIENCY WITH LEVER AND LARGER BATCH SIZES", "text": "In this subsection, we show that LeVER can be combined with larger batch sizes to improve the sample-efficiency of RL agents. Larger batch sizes have been shown to improve agent performance in many settings, but they require more compute since each gradient calculation and update is performed on more observations. We demonstrate that LeVER can mitigate these issues by showing results on Cheetah-run, a task known to achieve better performance with larger batch sizes. Figure 6(a) shows the sample-efficiency of CURL (batch 128) and CURL+LeVER (batch 512), and the corresponding computational efficiency of each agent. CURL achieves better sample-efficiency when combined with LeVER and the larger batch size, but does so within a comparable compute budget. In contrast, CURL (batch 256) requires significantly more compute to achieve performance similar to CURL+LeVER (batch 512)." }, { "heading": "5.5 IMPROVING COMPUTATIONAL EFFICIENCY IN TRANSFER SETTINGS", "text": "We demonstrate, as another application of our method, that LeVER increases computational efficiency in the transfer setting: reusing the parameters learned on a source task A for an unseen task B. Specifically, we train a CURL agent for 60K environment interactions on Walker-stand; then, we fine-tune only the policy and Q-functions on unseen tasks (e.g., Walker-walk and Cheetah-run) using the network parameters from Walker-stand. To save computation, we freeze the encoder parameters during fine-tuning (see the sketch below). Figure 7(a) shows the computational gain of LeVER in task transfer (i.e., Walker-stand to Walker-walk, similar to Yarats et al. (2019)), and domain transfer (i.e., Walker-stand to Cheetah-run and Walker-stand to Hopper-hop) is shown in Appendix C. Due to the generality of CNN features, we can achieve this computational gain using a pretrained encoder. For the task transfer setup, we provide more analysis of the number of frozen layers and the freezing time hyperparameter T_f in Appendix C." },
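A minimal PyTorch sketch of this transfer recipe is given below: load a source-task encoder, freeze it, and give the optimizer only the policy and Q-function parameters. The checkpoint path, state-dict keys, and module names are hypothetical.

import torch

def build_transfer_optimizer(agent, source_ckpt_path):
    state = torch.load(source_ckpt_path)
    agent.encoder.load_state_dict(state["encoder"])   # pretrained on Task A
    for p in agent.encoder.parameters():
        p.requires_grad = False                       # frozen during fine-tuning
    # only the policy and Q-function are fine-tuned on Task B
    params = list(agent.policy.parameters()) + list(agent.q_function.parameters())
    return torch.optim.Adam(params, lr=1e-3)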
{ "heading": "5.6 ENCODER ANALYSIS", "text": "In this subsection, we present visualizations to verify that the neural networks employed in deep reinforcement learning indeed converge from the bottom up, similar to those used in supervised learning (Raghu et al., 2017). Figure 7(b) shows the spatial attention map for two Atari games and one DMControl environment at various points during training. Similar to Laskin et al. (2020) and Zagoruyko & Komodakis (2017), we compute the spatial attention map by mean-pooling the absolute values of the activations along the channel dimension and following with a 2-dimensional spatial softmax. The attention map changes significantly in the first 20% of training and remains relatively unchanged thereafter, suggesting that the encoder converges to its final representations early in training. Figure 7(c) shows the SVCCA (Raghu et al., 2017) score, a measure of neural network layer similarity, between a layer and itself at times t and t+10K. The convolutional layers of the encoder achieve high similarity scores with themselves between times t and t+10K, while the higher layers of the policy and Q-network continue to change throughout training. In our DMControl environments, we freeze the convolutional layers and the first fully-connected layer of the policy and Q-network (denoted fc1). Although the policy fc1 continues to change, the convergence of the Q-network fc1 and the encoder layers allows us to achieve our computational and memory savings with minimal performance degradation." },
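The attention-map computation described above is short enough to state exactly; a PyTorch sketch is given below, assuming a batch of convolutional feature maps as input.

import torch
import torch.nn.functional as F

def spatial_attention(feats):
    # feats: (batch, channels, H, W) convolutional activations
    attn = feats.abs().mean(dim=1)                  # mean-pool |activations| over channels
    b, h, w = attn.shape
    attn = F.softmax(attn.view(b, h * w), dim=-1)   # 2-D spatial softmax
    return attn.view(b, h, w)                       # each map sums to 1 over positions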
{ "heading": "6 CONCLUSION", "text": "In this paper, we presented LeVER, a simple but powerful modification of off-policy RL algorithms that significantly reduces computation and memory requirements while maintaining state-of-the-art performance. We leveraged the intuition that CNN encoders in deep RL converge to their final representations early in training to freeze the encoder and subsequently store latent vectors, saving computation and memory. In our experimental results, we demonstrated the compute- and memory-efficiency of LeVER in various DMControl environments and Atari games, and proposed a technique for computation-efficient transfer learning. With LeVER, we highlight the potential for improvements in compute- and memory-efficiency in deep RL that can be made without sacrificing performance, in hopes of making deep RL more practical and accessible in the real world." }, { "heading": "A ALGORITHM", "text": "We detail the specifics of modifying off-policy RL methods with LeVER below. For concreteness, we describe LeVER combined with deep Q-learning methods.
Algorithm 1 Latent Vector Experience Replay (DQN Base Agent)
1: Initialize replay buffer B with capacity C
2: Initialize action-value network Q with parameters θ and encoder f with parameters ψ
3: for each timestep t do
4:   Select action: a_t ← arg max_a Q_θ(f_ψ(o_t), a)
5:   Collect observation o_{t+1} and reward r_t from the environment by taking action a_t
6:   if t ≤ T_f then
7:     Store transition (o_t, a_t, o_{t+1}, r_t) in replay buffer B
8:   else
9:     Compute latent states z_t, z_{t+1} ← f_ψ(o_t), f_ψ(o_{t+1})
10:    Store transition (z_t, a_t, z_{t+1}, r_t) in replay buffer B
11:  end if
12:  // REPLACE PIXEL-BASED TRANSITIONS WITH LATENT TRANSITIONS
13:  if t = T_f then
14:    Compute latent states {(z_t, z_{t+1})}_{t=1}^{min(T_f, C)} ← {(f_ψ(o_t), f_ψ(o_{t+1}))}_{t=1}^{min(T_f, C)}
15:    Replace {(o_t, a_t, o_{t+1}, r_t)}_{t=1}^{min(T_f, C)} with latent transitions {(z_t, a_t, z_{t+1}, r_t)}_{t=1}^{min(T_f, C)}
16:    Increase the capacity of B to Ĉ
17:  end if
18:  // UPDATE PARAMETERS OF Q-NETWORK WITH SAMPLED IMAGES OR LATENTS
19:  for each gradient step do
20:    if t < T_f then
21:      Sample random minibatch {(o_j, a_j, o_{j+1}, r_j)}_{j=1}^{b} ∼ B
22:      Calculate target y_j = r_j + γ max_{a′} Q_θ̄(f_ψ̄(o_{j+1}), a′)
23:      Perform a gradient step on L^DQN(θ, ψ) (2)
24:    else
25:      Sample random minibatch {(z_j, a_j, z_{j+1}, r_j)}_{j=1}^{b} ∼ B
26:      Calculate target y_j = r_j + γ max_{a′} Q_θ̄(z_{j+1}, a′)
27:      Perform a gradient step on L^DQN(θ) (2)
28:    end if
29:  end for
30: end for" }, { "heading": "B CALCULATION OF FLOATING POINT OPERATIONS", "text": "We consider each backward pass to require twice as many FLOPs as a forward pass.3 Each weight requires one multiply-add operation in the forward pass. In the backward pass, it requires two multiply-add operations: at layer i, the gradients of the loss with respect to the weights at layer i and with respect to the output of layer i − 1 both need to be computed. The latter computation is necessary for subsequent gradient calculations for the weights at layer i − 1. We use functions from Huang et al. (2018) and Jeong & Shin (2019) to obtain the number of operations per forward pass for all layers in the encoder (denoted E) and the number of operations per forward pass for all MLP layers (denoted M). For concreteness, we provide a FLOPs breakdown by layer for the architectures we use in Tables 2 and 3.
3This method for FLOP calculation is used in https://openai.com/blog/ai-and-compute/.
We denote the number of forward passes per iteration F, the number of backward passes per iteration B, and the batch size b. We assume the number of updates per timestep is 1. Then, the number of FLOPs per iteration before freezing at time t = T_f is
$$bF(E + M) + 2bB(E + M).$$
For the baseline, FLOPs are computed using this formula throughout training.
LeVER reduces computational overhead by eliminating most of the encoder forward and backward passes. The number of FLOPs per iteration after freezing is
$$bFM + 2bBM + EKN,$$
where K is the number of data augmentations and N is the number of networks as described in Section 4.2. The forward and backward passes of the encoder are removed, with the exception of the EKN term, which arises from computing latent vectors for the current observation.
At freezing time t = T_f, we need to compute latent vectors for each transition in the replay buffer. This introduces a one-time cost of EKN·min(T_f, C) FLOPs, since the number of transitions in the replay buffer is min(T_f, C), where C is the initial replay capacity."
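These FLOP counts translate directly into code; the Python sketch below implements the three quantities above, with E, M, F, B, b, K, N, T_f, and C as defined in this appendix.

def flops_before_freeze(b, F, B, E, M):
    # b*F*(E+M) forward FLOPs plus 2*b*B*(E+M) backward FLOPs per iteration
    return b * F * (E + M) + 2 * b * B * (E + M)

def flops_after_freeze(b, F, B, E, M, K, N):
    # encoder passes vanish except K*N latent computations per new observation
    return b * F * M + 2 * b * B * M + E * K * N

def one_time_freeze_cost(E, K, N, T_f, C):
    # re-encoding every transition already stored in the buffer
    return E * K * N * min(T_f, C)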
}, { "heading": "C ADDITIONAL TRANSFER EXPERIMENTS", "text": "Domain transfer. In Figure 7(a) we show the computational efficiency of LeVER in a task transfer setting. Here we show in Figure 8 that frozen encoder parameters can also be used in domain transfer tasks (i.e. Walker-stand to Cheetah-run and Walker-stand to Hopper-hop). In this setting, we only transfer the encoder parameters, whereas we transfer the entire network in task transfer.\nTransfer setting analysis. In Figure 7(a) we show the computational efficiency of LeVER on Walker-walk with Walker-stand pretrained for 60K steps, with four convolutional layers frozen. We provide analysis for the number of layers frozen and number of environment interactions before freezing Tf in Figure 9. We find that freezing more layers allows for more computational gain, since we can avoid computing gradients for the frozen layers without sacrificing performance. Longer pretraining in the source task improves compute-efficiency in the target task; however, early convergence of encoder parameters enables the agent to learn a good policy even with only 20K interactions before transfer.\nWe remark that Yosinski et al. (2014) examine the generality of features learned by neural networks and the feasibility of transferring parameters between similar image classification tasks. Yarats et al. (2019) show that transferring encoder parameters pretrained from Walker-walk to Walker-stand and Walker-run can improve the performance and sample-efficiency of a SAC agent. For the first time, we show that encoder parameters trained on simple tasks can be useful for computation-efficient training in complex tasks and new domains." }, { "heading": "D COMPUTATIONAL EFFICIENCY IN CONSTRAINED-MEMORY SETTINGS", "text": "In our main experiments, we isolate the two major contributions of our method, reduced computational overhead and improved sample-efficiency in constrained-memory settings. In Figures 10 and 11 we show that these benefits can also be combined for significant computational gain in constrained-memory settings." }, { "heading": "E SAMPLE-EFFICIENCY PLOTS", "text": "In section 5.2 we show the computational efficiency of our method in DMControl and Atari environments. We show in Figure 12 that our sample-efficiency is very close to that of baseline CURL (Srinivas et al., 2020), with only slight degradation in Cartpole-swingup and Walker-walk. In Atari games (Figure 13), we match the sample-efficiency of baseline Rainbow (Hessel et al., 2018) very closely, with no degradation." }, { "heading": "F GENERAL IMPLEMENTATION DETAILS", "text": "LeVER can be applied to any convolutional encoder which compresses the input observation into a latent vector with smaller dimension than the observation. We generally freeze all the convolutional\nlayers and possibly the first fully-connected layer. In our main experiments, we chose to freeze the first fully-connected layer for DM Control experiments and the last convolutional layer for Atari experiments. We made this choice in order to simultaneously save computation and memory; for those architectures, if we freeze an earlier layer, we save less computation, and the latent vectors (convolutional features) are too large for our method to save memory. In DM Control experiments, the latent dimension of the first fully-connected layer is 50, which allows a roughly 12X memory\ngain. In Atari experiments, the latent dimension of the last convolutional layer is 576, which allows a roughly 3X memory gain." 
}, { "heading": "G DMCONTROL IMPLEMENTATION DETAILS", "text": "We use the network architecture in https://github.com/MishaLaskin/curl for our CURL (Srinivas et al., 2020) implementation. We show a full list of hyperparameters in Table 4." }, { "heading": "H ATARI IMPLEMENTATION DETAILS", "text": "We use the network architecture in https://github.com/Kaixhin/Rainbow for our Rainbow (Hessel et al., 2018) implementation and the data-efficient Rainbow (van Hasselt et al., 2019) encoder architecture and hyperparameters. We show a full list of hyperparameters in Table 5." } ]
2020
null
SP:686d12e3c1b9b03b8a0ad2106de8108b793daab3
[ "The authors consider the usage of autoregressive dynamics models for batch model-based RL, where state-variable/reward predictions are performed sequentially conditioned on previously-predicted variables. Extensive numerical results are provided in several continuous domains for both policy evaluation and optimization problems. The results showcase the effectiveness of autoregressive models and, in particular, their superiority over standard feed-forward models.", "The paper studies offline policy evaluation (OPE) and optimization in the model-based setting. The main methodological contribution of the paper is using autoregressive models for the next state and reward prediction. The authors demonstrate that autoregressive models achieve higher likelihood compared to feedforward models on 9 environments from RL Unplugged [1] offline dataset. Given that model likelihood is only a proxy quality metric in OPE and control, they further demonstrate a positive correlation between likelihood and OPE estimates. The paper shows quantitatively that using autoregressive models results in more accurate OPE estimates than for feedforward models and model-free benchmarks. Finally, the authors apply autoregressive models for offline control and achieve higher returns than for feedforward models." ]
Standard dynamics models for continuous control make use of feedforward computation to predict the conditional distribution of next state and reward given current state and action using a multivariate Gaussian with a diagonal covariance structure. This modeling choice assumes that different dimensions of the next state and reward are conditionally independent given the current state and action and may be driven by the fact that fully observable physics-based simulation environments entail deterministic transition dynamics. In this paper, we challenge this conditional independence assumption and propose a family of expressive autoregressive dynamics models that generate different dimensions of the next state and reward sequentially conditioned on previous dimensions. We demonstrate that autoregressive dynamics models indeed outperform standard feedforward models in log-likelihood on heldout transitions. Furthermore, we compare different model-based and model-free off-policy evaluation (OPE) methods on RL Unplugged, a suite of offline MuJoCo datasets, and find that autoregressive dynamics models consistently outperform all baselines, achieving a new state-of-the-art. Finally, we show that autoregressive dynamics models are useful for offline policy optimization by serving as a way to enrich the replay buffer through data augmentation and improving performance using model-based planning.
[ { "affiliations": [], "name": "Michael R. Zhang" }, { "affiliations": [], "name": "Tom Le Paine" }, { "affiliations": [], "name": "Ofir Nachum" }, { "affiliations": [], "name": "Cosmin Paduraru" }, { "affiliations": [], "name": "George Tucker" }, { "affiliations": [], "name": "Ziyu Wang" }, { "affiliations": [], "name": "Mohammad Norouzi" } ]
[ { "authors": [ "Rishabh Agarwal", "Dale Schuurmans", "Mohammad Norouzi" ], "title": "An optimistic perspective on offline reinforcement learning", "venue": "International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Christopher G Atkeson", "Andrew W Moore", "Stefan Schaal" ], "title": "Locally weighted learning", "venue": "In Lazy learning,", "year": 1997 }, { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": "arXiv preprint arXiv:1409.0473,", "year": 2014 }, { "authors": [ "Michael Bain" ], "title": "A framework for behavioural cloning", "venue": "In Machine Intelligence", "year": 1995 }, { "authors": [ "Gabriel Barth-Maron", "Matthew W Hoffman", "David Budden", "Will Dabney", "Dan Horgan", "Dhruva Tb", "Alistair Muldal", "Nicolas Heess", "Timothy Lillicrap" ], "title": "Distributed distributional deterministic policy gradients", "venue": null, "year": 2018 }, { "authors": [ "Tom B Brown", "Benjamin Mann", "Nick Ryder", "Melanie Subbiah", "Jared Kaplan", "Prafulla Dhariwal", "Arvind Neelakantan", "Pranav Shyam", "Girish Sastry", "Amanda Askell" ], "title": "Language models are few-shot learners", "venue": null, "year": 2005 }, { "authors": [ "Kurtland Chua", "Roberto Calandra", "Rowan McAllister", "Sergey Levine" ], "title": "Deep reinforcement learning in a handful of trials using probabilistic dynamics models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Marc Deisenroth", "Carl E Rasmussen" ], "title": "Pilco: A model-based and data-efficient approach to policy search", "venue": "In Proceedings of the 28th International Conference on machine learning", "year": 2011 }, { "authors": [ "Miroslav Dudík", "John Langford", "Lihong Li" ], "title": "Doubly robust policy evaluation and learning", "venue": null, "year": 2011 }, { "authors": [ "Justin Fu", "Mohammad Norouzi", "Ofir Nachum", "George Tucker", "Ziyu Wang", "Alexander Novikov", "Mengjiao Yang", "Michael R. 
Zhang", "Yutian Chen", "Aviral Kumar", "Cosmin Paduraru", "Sergey Levine", "Thomas Paine" ], "title": "Benchmarks for deep off-policy evaluation", "venue": "In International Conference on Learning Representations,", "year": 2021 }, { "authors": [ "Scott Fujimoto", "David Meger", "Doina Precup" ], "title": "Off-policy deep reinforcement learning without exploration", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Assaf Hallak", "François Schnitzler", "Timothy Mann", "Shie Mannor" ], "title": "Off-policy model-based learning under unknown factored dynamics", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Josiah Hanna", "Scott Niekum", "Peter Stone" ], "title": "Importance sampling policy evaluation with an estimated behavior policy", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Josiah P Hanna", "Peter Stone", "Scott Niekum" ], "title": "Bootstrapping with models: Confidence intervals for off-policy evaluation", "venue": "In Thirty-First AAAI Conference on Artificial Intelligence,", "year": 2017 }, { "authors": [ "Michael Janner", "Justin Fu", "Marvin Zhang", "Sergey Levine" ], "title": "When to trust your model: Model-based policy optimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": null, "year": 2014 }, { "authors": [ "Ilya Kostrikov", "Ofir Nachum" ], "title": "Statistical bootstrapping for uncertainty estimation in off-policy evaluation, 2020", "venue": null, "year": 2020 }, { "authors": [ "Aviral Kumar", "Justin Fu", "Matthew Soh", "George Tucker", "Sergey Levine" ], "title": "Stabilizing off-policy q-learning via bootstrapping error reduction", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Thanard Kurutach", "Ignasi Clavera", "Yan Duan", "Aviv Tamar", "Pieter Abbeel" ], "title": "Model-Ensemble Trust-Region Policy Optimization", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Nathan Lambert", "Brandon Amos", "Omry Yadan", "Roberto Calandra" ], "title": "Objective mismatch in model-based reinforcement learning", "venue": null, "year": 2002 }, { "authors": [ "Sascha Lange", "Thomas Gabel", "Martin Riedmiller" ], "title": "Batch reinforcement learning", "venue": "In Reinforcement learning,", "year": 2012 }, { "authors": [ "Hoang M Le", "Cameron Voloshin", "Yisong Yue" ], "title": "Batch policy learning under constraints", "venue": "arXiv preprint arXiv:1903.08738,", "year": 2019 }, { "authors": [ "Sergey Levine", "Aviral Kumar", "George Tucker", "Justin Fu" ], "title": "Offline reinforcement learning: Tutorial, review, and perspectives on open problems", "venue": null, "year": 2005 }, { "authors": [ "Lihong Li", "Wei Chu", "John Langford", "Xuanhui Wang" ], "title": "Unbiased offline evaluation of contextualbandit-based news article recommendation algorithms", "venue": "In Proceedings of the fourth ACM international conference on Web search and data mining,", "year": 2011 }, { "authors": [ "Lihong Li", "Remi Munos", "Csaba Szepesvári" ], "title": "On minimax optimal offline policy evaluation", "venue": null, "year": 2014 }, { "authors": [ "Qiang Liu", "Lihong Li", "Ziyang Tang", "Dengyong Zhou" ], "title": "Breaking the curse of horizon: Infinitehorizon off-policy estimation", "venue": "In 
Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Travis Mandel", "Yun-En Liu", "Sergey Levine", "Emma Brunskill", "Zoran Popovic" ], "title": "Offline policy evaluation across representations with applications to educational games. In Proceedings of the 2014 international conference on Autonomous agents and multi-agent systems, pages 1077–1084", "venue": "International Foundation for Autonomous Agents and Multiagent Systems,", "year": 2014 }, { "authors": [ "Tatsuya Matsushima", "Hiroki Furuta", "Yutaka Matsuo", "Ofir Nachum", "Shixiang Gu" ], "title": "Deployment-efficient reinforcement learning via model-based offline optimization", "venue": null, "year": 2006 }, { "authors": [ "Andrew W Moore", "Christopher G Atkeson" ], "title": "Memory-based reinforcement learning: Efficient computation with prioritized sweeping", "venue": "In Advances in neural information processing systems,", "year": 1993 }, { "authors": [ "Ofir Nachum", "Yinlam Chow", "Bo Dai", "Lihong Li" ], "title": "DualDICE: Behavior-agnostic estimation of discounted stationary distribution corrections", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Anusha Nagabandi", "Gregory Kahn", "Ronald S Fearing", "Sergey Levine" ], "title": "Neural network dynamics for model-based deep reinforcement learning with model-free fine-tuning", "venue": "IEEE International Conference on Robotics and Automation (ICRA),", "year": 2018 }, { "authors": [ "Aaron van den Oord", "Sander Dieleman", "Heiga Zen", "Karen Simonyan", "Oriol Vinyals", "Alex Graves", "Nal Kalchbrenner", "Andrew Senior", "Koray Kavukcuoglu" ], "title": "WaveNet: A generative model for raw audio", "venue": null, "year": 2016 }, { "authors": [ "Cosmin Paduraru" ], "title": "Planning with approximate and learned models of Markov decision processes", "venue": "MSc thesis, University of Alberta,", "year": 2007 }, { "authors": [ "Tom Le Paine", "Cosmin Paduraru", "Andrea Michi", "Caglar Gulcehre", "Konrad Zolna", "Alexander Novikov", "Ziyu Wang", "Nando de Freitas" ], "title": "Hyperparameter selection for offline reinforcement learning", "venue": null, "year": 2007 }, { "authors": [ "Jing Peng", "Ronald J Williams" ], "title": "Efficient learning and planning within the dyna framework", "venue": "Adaptive behavior,", "year": 1993 }, { "authors": [ "Doina Precup" ], "title": "Eligibility traces for off-policy policy evaluation", "venue": "Computer Science Department Faculty Publication Series,", "year": 2000 }, { "authors": [ "Noah Siegel", "Jost Tobias Springenberg", "Felix Berkenkamp", "Abbas Abdolmaleki", "Michael Neunert", "Thomas Lampe", "Roland Hafner", "Nicolas Heess", "Martin Riedmiller" ], "title": "Keep doing what worked: Behavior modelling priors for offline reinforcement learning", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Richard S Sutton" ], "title": "Integrated architectures for learning, planning, and reacting based on approximating dynamic programming", "venue": "In Machine learning proceedings", "year": 1990 }, { "authors": [ "P. Thomas", "E.
Brunskill" ], "title": "Data-efficient off-policy policy evaluation for reinforcement learning", "venue": "In Proceedings of the 33rd International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Philip Thomas", "Emma Brunskill" ], "title": "Data-efficient off-policy policy evaluation for reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Emanuel Todorov", "Tom Erez", "Yuval Tassa" ], "title": "Mujoco: A physics engine for model-based control", "venue": "In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems,", "year": 2012 }, { "authors": [ "Masatoshi Uehara", "Nan Jiang" ], "title": "Minimax weight and q-function learning for off-policy evaluation", "venue": null, "year": 2019 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Cameron Voloshin", "Hoang M Le", "Nan Jiang", "Yisong Yue" ], "title": "Empirical study of off-policy policy evaluation for reinforcement learning", "venue": null, "year": 1911 }, { "authors": [ "Tingwu Wang", "Xuchan Bao", "Ignasi Clavera", "Jerrick Hoang", "Yeming Wen", "Eric Langlois", "Shunshi Zhang", "Guodong Zhang", "Pieter Abbeel", "Jimmy Ba" ], "title": "Benchmarking model-based reinforcement learning", "venue": null, "year": 1907 }, { "authors": [ "Ziyu Wang", "Alexander Novikov", "Konrad Żołna", "Jost Tobias Springenberg", "Scott Reed", "Bobak Shahriari", "Noah Siegel", "Josh Merel", "Caglar Gulcehre", "Nicolas Heess" ], "title": "Critic regularized regression", "venue": null, "year": 2006 }, { "authors": [ "Junfeng Wen", "Bo Dai", "Lihong Li", "Dale Schuurmans" ], "title": "Batch stationary distribution estimation", "venue": "arXiv preprint arXiv:2003.00722,", "year": 2020 }, { "authors": [ "Grady Williams", "Andrew Aldrich", "Evangelos Theodorou" ], "title": "Model predictive path integral control using covariance variable importance sampling", "venue": null, "year": 2015 }, { "authors": [ "Yifan Wu", "George Tucker", "Ofir Nachum" ], "title": "Behavior regularized offline reinforcement learning", "venue": null, "year": 2019 }, { "authors": [ "Mengjiao Yang", "Ofir Nachum", "Bo Dai", "Lihong Li", "Dale Schuurmans" ], "title": "Off-policy evaluation via the regularized lagrangian", "venue": null, "year": 2020 }, { "authors": [ "Tianhe Yu", "Garrett Thomas", "Lantao Yu", "Stefano Ermon", "James Zou", "Sergey Levine", "Chelsea Finn", "Tengyu Ma" ], "title": "MOPO: Model-based offline policy optimization", "venue": null, "year": 2005 }, { "authors": [ "Paine" ], "title": "OPE BASELINES Fitted Q-Evaluation (FQE) As in Le et al. (2019), we train a neural network to estimate the value of the evaluation policy πe by bootstrapping fromQ(s′, πe(s)). We tried two different implementations, one from Kostrikov and Nachum", "venue": null, "year": 2020 }, { "authors": [ "Hanna" ], "title": "Importance Sampling (IS) We perform importance sampling with a learned behavior policy. We use the implementation from Kostrikov and Nachum (2020), which uses self-normalized (also known as weighted) step-wise importance sampling (Liu et al., 2018", "venue": "Nachum et al.,", "year": 2019 }, { "authors": [ "Yang" ], "title": "BestDICE. 
Variational Power Method (VPM) This method runs a variational power iteration algorithm to estimate the importance weights dπ(s, a)/dπB(s, a) without the knowledge of the behavior policy. It then estimates the target policy value using a weighted average of rewards, similar to the DICE method. Our implementation is based on the same network and hyperparameters for the OPE setting as in Wen", "venue": null, "year": 2020 }, { "authors": [ "Janner" ], "title": "2019), forming an ensemble with 4 models. We see some improvement in policy evaluation results, as shown in Figure A.1. Ensembling could likely be further improved by forcing unique hyperparameter settings and seeds", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Model-based Reinforcement Learning (RL) aims to learn an approximate model of the environment’s dynamics from existing logged interactions to facilitate efficient policy evaluation and optimization. Early work on Model-based RL uses simple tabular (Sutton, 1990; Moore and Atkeson, 1993; Peng and Williams, 1993) and locally linear (Atkeson et al., 1997) dynamics models, which often result in a large degree of model bias (Deisenroth and Rasmussen, 2011). Recent work adopts feedforward neural networks to model complex transition dynamics and improve generalization to unseen states and actions, achieving a high level of performance on standard RL benchmarks (Chua et al., 2018; Wang et al., 2019). However, standard feedforward dynamics models assume that different dimensions of the next state and reward are conditionally independent given the current state and action, which may lead to a poor estimation of uncertainty and unclear effects on RL applications.\nIn this work, we propose a new family of autoregressive dynamics models and study their effectiveness for off-policy evaluation (OPE) and offline policy optimization on continuous control. Autoregressive dynamics models generate each dimension of the next state conditioned on previous dimensions of the next state, in addition to the current state and action (see Figure 1). This means that to sample the next state from an autoregressive dynamics model, one needs n sequential steps, where n is the number of state dimensions, and one more step to generate the reward. By contrast, standard feedforward dynamics models take current state and action as input and predict the distribution of the next state and reward as a multivariate Gaussian with a diagonal covariance structure (e.g., Chua et al. (2018); Janner et al. (2019)). This modeling choice assumes that different state dimensions are conditionally independent.\n∗Work done as an intern at Google Brain.\nAutoregressive generative models have seen success in generating natural images (Parmar et al., 2018), text (Brown et al., 2020), and speech (Oord et al., 2016), but they have not seen use in Model-based RL for continuous control.\nWe find that autoregressive dynamics models achieve higher log-likelihood compared to their feedforward counterparts on heldout validation transitions of all DM continuous control tasks (Tassa et al., 2018) from the RL Unplugged dataset (Gulcehre et al., 2020). To determine the impact of improved transition dynamics models, we primarily focus on OPE because it allows us to isolate contributions of the dynamics model in value estimation vs. the many other factors of variation in policy optimization and data collection. We find that autoregressive dynamics models consistently outperform existing Model-based and Model-free OPE baselines on continuous control in both ranking and value estimation metrics. We expect that our advances in model-based OPE will improve offline policy selection for offline RL (Paine et al., 2020). Finally, we show that our autoregressive dynamics models can help improve offline policy optimization by model predictive control, achieving a new state-of-the-art on cheetah-run and fish-swim from RL Unplugged (Gulcehre et al., 2020).\nKey contributions of this paper include: • We propose autoregressive dynamics models to capture dependencies between state dimensions\nin forward prediction. 
We show that autoregressive models improve log-likelihood over non-autoregressive models for continuous control tasks from the DM Control Suite (Tassa et al., 2018).
• We apply autoregressive dynamics models to Off-Policy Evaluation (OPE), surpassing the performance of state-of-the-art baselines in median absolute error, rank correlation, and normalized top-5 regret across 9 control tasks.
• We show that autoregressive dynamics models are more useful than feedforward models for offline policy optimization, serving as a way to enrich experience replay by data augmentation and improving performance via model-based planning." }, { "heading": "2 PRELIMINARIES", "text": "Here we introduce relevant notation and discuss off-policy (offline) policy evaluation (OPE). We refer the reader to Lange et al. (2012) and Levine et al. (2020) for background on offline RL, which is also known as batch RL in the literature.
A finite-horizon Markov Decision Process (MDP) is defined by a tuple $\mathcal{M} = (\mathcal{S}, \mathcal{A}, \mathcal{T}, d_0, r, \gamma)$, where $\mathcal{S}$ is a set of states $s \in \mathcal{S}$, $\mathcal{A}$ is a set of actions $a \in \mathcal{A}$, $\mathcal{T}$ defines transition probability distributions $p(s_{t+1} \mid s_t, a_t)$, $d_0$ defines the initial state distribution $d_0 \equiv p(s_0)$, $r$ defines a reward function $r : \mathcal{S} \times \mathcal{A} \to \mathbb{R}$, and $\gamma$ is a scalar discount factor. A policy $\pi(a \mid s)$ defines a conditional distribution over actions conditioned on states. A trajectory consists of a sequence of states and actions $\tau = (s_0, a_0, s_1, a_1, \ldots, s_H)$ of horizon length $H$. We use $s_{t,i}$ to denote the $i$-th dimension of the state at time step $t$ (and similarly for actions). In reinforcement learning, the objective is to maximize the expected sum of discounted rewards over the trajectory distribution induced by the policy:
$$V_\gamma(\pi) = \mathbb{E}_{\tau \sim p_\pi(\tau)}\left[\sum_{t=0}^{H} \gamma^t r(s_t, a_t)\right]. \quad (1)$$
The trajectory distribution is characterized by the initial state distribution, policy, and transition probability distribution:
$$p_\pi(\tau) = d_0(s_0) \prod_{t=0}^{H-1} \pi(a_t \mid s_t)\, p(s_{t+1} \mid s_t, a_t). \quad (2)$$
In offline RL, we are given access to a dataset of transitions $\mathcal{D} = \{(s_t^i, a_t^i, r_{t+1}^i, s_{t+1}^i)\}_{i=1}^{N}$ and a set of initial states $S_0$. Offline RL is inherently a data-driven approach since the agent needs to optimize the same objective as in Eq. (1) but is not allowed additional interactions with the environment. Even though offline RL offers the promise of leveraging existing logged datasets, current offline RL algorithms (Fujimoto et al., 2019; Agarwal et al., 2020; Kumar et al., 2019) are typically evaluated using online interaction, which limits their applicability in the real world.
The problem of off-policy (offline) policy evaluation (OPE) entails estimating $V_\gamma(\pi)$, the value of a target policy $\pi$, based on a fixed dataset of transitions denoted $\mathcal{D}$, without access to the environment’s dynamics. Some OPE methods assume that $\mathcal{D}$ is generated from a known behavior (logging) policy $\mu$ and assume access to $\mu$ in addition to $\mathcal{D}$. In practice, the logged dataset $\mathcal{D}$ may be the result of following some existing system that does not have a probabilistic form. Hence, in our work, we assume no access to the original behavior policy $\mu$ for OPE. That said, for methods that require access to $\mu$, we train a behavior cloning policy on $\mathcal{D}$." }, { "heading": "3 PROBABILISTIC DYNAMICS MODELS", "text": "Feedforward dynamics model. In the context of our paper, we use the term “model” to jointly refer to the forward dynamics model $p_s(s_{t+1} \mid s_t, a_t)$ and the reward model $p_r(r_{t+1} \mid s_t, a_t)$.
We use neural nets to parameterize both distributions since they are powerful function approximators that have been effective for model-based RL (Chua et al., 2018; Nagabandi et al., 2018; Janner et al., 2019).
Let $\theta$ denote the parameters of a fully connected network used to model $p_\theta(s_{t+1}, r_{t+1} \mid s_t, a_t)$. We expect joint modeling of the next state and reward to benefit from sharing intermediate network features. Similar to prior work (Janner et al., 2019), our baseline feedforward model outputs the mean and log variance of all state dimensions and the reward simultaneously, as follows:
$$p_\theta(s_{t+1}, r_{t+1} \mid s_t, a_t) = \mathcal{N}\big(\mu(s_t, a_t),\, \mathrm{Diag}(\exp\{l(s_t, a_t)\})\big), \quad (3)$$
where $\mu(s_t, a_t) \in \mathbb{R}^{n+1}$ denotes the mean of the concatenation of the next state and reward, $l(s_t, a_t) \in \mathbb{R}^{n+1}$ denotes the log variance, and $\mathrm{Diag}(v)$ is an operator that creates a diagonal matrix with the main diagonal specified by the vector $v$. During training, we seek to minimize the negative log-likelihood of the parameters given the observed transitions in the dataset $\mathcal{D}$:
$$\ell(\theta \mid \mathcal{D}) = -\sum_{(s,a,r',s') \in \mathcal{D}} \log p_\theta(s', r' \mid s, a). \quad (4)$$
While it is possible to place different weights on the loss for next state and reward prediction, we did not apply any special weighting and treated the reward as an additional state dimension in all of our experiments. This is straightforward to implement and does not require tuning an additional hyperparameter, which is challenging for OPE. Note that the input has $|s| + |a|$ dimensions.
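A minimal PyTorch sketch of this feedforward model is given below: an MLP maps (s, a) to the mean and log variance of a diagonal Gaussian over the concatenated next state and reward, trained by the negative log-likelihood in Eq. (4). Layer sizes are illustrative.

import torch
import torch.nn as nn

class FeedforwardDynamics(nn.Module):
    def __init__(self, state_dim, action_dim, hidden=512):
        super().__init__()
        out = state_dim + 1                        # next state plus reward
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * out))            # mean and log variance

    def nll(self, s, a, target):                   # target = concat(s', r')
        mu, logvar = self.net(torch.cat([s, a], -1)).chunk(2, -1)
        dist = torch.distributions.Normal(mu, torch.exp(0.5 * logvar))
        return -dist.log_prob(target).sum(-1).mean()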
This class of models can therefore capture non-unimodal dependencies, e.g., between different joint angles of a robot. Paduraru (2007) demonstrates this increased expressivity in the tabular setting, constructing an example on which a model assuming conditional independence fails. While the expressive power of autoregressive models have been shown in various generative models (Parmar et al., 2018; Oord et al., 2016), autoregressive dynamics models have not seen much use in Model-based RL for continuous control before this work.\nAlgorithm 1 Model-based OPE Require: Number of rollouts n, discount\nfactor γ, horizon length H , policy π, dynamics model p, set of initial states S0 for i = 1, 2, . . . n do Ri ← 0 sample initial state s0 ∼ S0 for t = 0, 1, 2, . . . ,H − 1 do\nsample from policy: at ∼ π(· | st) sample from the dynamics model: st+1, rt+1 ∼ p(·, · | st, at) Ri ← Ri + γtrt+1\nend for end for return 1n ∑n i=1Ri\nModel-based OPE. Once a dynamics model is trained from offline data, OPE can be performed in a direct and primitive way. We let the policy and model interact—the policy generates the next action, the model plays the role of the environment and generates the next state and reward. Due to the stochasticity in the model and the policy, we estimate the return for a policy with Monte-Carlo sampling and monitor standard error. See Algorithm 1 for pseudocode." }, { "heading": "4 RELATED WORK", "text": "Our work follows a long line of OPE research, which is especially relevant to many practical domains such as medicine (Murphy et al., 2001), recommendation systems (Li et al., 2011), and education (Mandel et al., 2014) in order to avoid the costs and risks associated with online evaluation. There exists a large body of work on OPE, including methods based on importance weighting (Precup, 2000; Li et al., 2014) and Lagrangian duality (Nachum et al., 2019; Yang et al., 2020; Uehara and Jiang, 2019). The model-based approach that we focus on in this paper lies within the class of algorithms referred to as the direct method (Kostrikov and Nachum, 2020; Dudík et al., 2011; Voloshin et al., 2019), which approximate the value of a new policy by either explicitly or implicitly estimating the transition and reward functions of the environment. While model-based policy evaluation has been considered by previous works (Paduraru, 2007; Thomas and Brunskill, 2016a; Hanna et al., 2017), it has largely been confined to simple domains with finite state and action spaces where function approximation is not necessary. By contrast, our work provides an extensive demonstration of model-based OPE in challenging continuous control benchmark domains. Previous instances of the use of function approximation for model-based OPE (Hallak et al., 2015) impose strong assumptions on the probabilistic dynamics models, such as factorability of the MDP. Our results indicate that even seemingly benign assumptions about the independence of different state dimensions can have detrimental consequences for the effectiveness of a model-based OPE estimate.\nWhile the use of model-based principles in OPE has been relatively rare, it has been more commonly used for policy optimization. The field of model-based RL has matured in recent years to yield impressive results for both online (Nagabandi et al., 2018; Chua et al., 2018; Kurutach et al., 2018; Janner et al., 2019) and offline (Matsushima et al., 2020; Kidambi et al., 2020; Yu et al., 2020; Argenson and Dulac-Arnold, 2020) policy optimization. 
Several of the techniques we employ, such\nas the normalization of the observation space, are borrowed from this previous literature (Nagabandi et al., 2018; Chua et al., 2018). Conversely, we present strong empirical evidence that the benefits of our introduced autoregressive generative models of state observations do carry over to model-based policy optimization, at least in the offline setting, and this is an interesting avenue for future work." }, { "heading": "5 RESULTS", "text": "We conduct our experiments on the DeepMind control suite (Tassa et al., 2018), a set of control tasks implemented in MuJoCo (Todorov et al., 2012). We use the offline datasets from RL Unplugged (Gulcehre et al., 2020), the details of which are provided in Table 1. These environments capture a wide range of complexity, from 40K transitions in a 5-dimensional cartpole environment to 1.5 million transitions on complex manipulation tasks. We follow the evaluation protocol in the Deep OPE (Fu et al., 2021) benchmark and use policies generated by four different algorithms: behavioral cloning (Bain, 1995), D4PG (Barth-Maron et al., 2018), Critic Regularized Regression (Wang et al., 2020), and ABM (Siegel et al., 2019). With varied hyperparameters, these form a diverse set of policies of varying quality.\nWe perform a thorough hyperparameter sweep in the experiments and use standard practice from generative modeling to improve the quality of the models. We allocate 80% of the data for training and 20% of the data for model selection. We vary the depth and width of the neural networks (number of layers ∈ {3, 4}, layer size ∈ {512, 1024}), add different amounts of noise to input states and actions, and consider two levels of weight decay for regularization (input noise ∈ {0, 1e−6, 1e−7}, weight decay ∈ {0, 1e−6}). For the choice of optimizer, we consider both Adam (Kingma and Ba, 2014) and SGD with momentum and find Adam to be more effective at maximizing log-likelihood across all tasks in preliminary experiments. We thus use Adam in all of our experiments with two learning rates ∈ {1e−3, 3e−4}. We decay the optimizer’s learning rate linearly to zero throughout training, finding this choice to outperform a constant learning rate. Lastly, we find that longer training often improves log-likelihood results. We use 500 epochs for training final models.\nFor each task we consider in total 48 hyperparameter combinations (listed above) for both models and pick the best model in each model family based on validation log-likelihood. This model is then used for model-based OPE and policy optimization. Note that, in our experiments, 20% of the transitions are used only for validation, but we believe one can re-train the models with the best hyperparameter configuration on the full transition datasets to improve the results even further." }, { "heading": "5.1 AUTOREGRESSIVE DYNAMICS MODELS OUTPERFORM FEEDFORWARD MODELS IN NLL", "text": "To evaluate the effectiveness of autoregressive dynamics models compared to feedforward counterparts, Table 2 reports negative log-likelihood (NLL) on the heldout validation set for the best\nperforming models from our hyperparameter sweep. For each environment, we report the NLL for the best-performing model (Top-1) and the average NLL across the Top-5 models. The autoregressive model has lower NLL on all environments, indicating that it generalizes better to unseen data.\nTo study the impact of model size on NLL, Figure 2 shows validation NLL as a function of parameter count. 
We find that on small datasets large models hurt, but more importantly autoregressive models outperform feedforward models regardless of the parameter count regime, i.e., even small autoregressive models attain a lower validation NLL compared to big feedforward models. This indicates that autoregressive models have a better inductive bias in modeling the transition dynamics than feedforward models that make a conditional independence assumption." }, { "heading": "5.2 ARE DYNAMICS MODELS WITH LOWER NLL BETTER FOR MODEL-BASED OPE?", "text": "We ultimately care not just about the log-likelihood numbers, but also whether or not the dynamics models are useful in policy evaluation and optimization. To study the relationship of NLL and OPE performance for model-based methods, we compute OPE estimates via Algorithm 1 and compute the Pearson correlation between the OPE estimates and the true discounted returns. This serves as a measure of the effectiveness of the model for OPE. We repeat this for all 96 dynamics models we trained on a given environment and plot the correlation coefficients against validation NLL in Figure 3.\nModels with low NLL are generally more accurate in OPE. Lambert et al. (2020) have previously demonstrated that in Model-based RL, “training cost does not hold a strong correlation to maximization of episode reward.\" We use validation NLL instead, and our results on policy evaluation decouple the model from policy optimization, suggesting a more nuanced picture: low validation NLL numbers generally correspond to accurate policy evaluation, while higher NLL numbers are generally less meaningful. In other words, if the dynamics model does not capture the transition dynamics accurately enough, then it is very hard to predict its performance on OPE. However, once the model starts to capture the dynamics faithfully, we conjecture that NLL starts to become a reasonable metric for model selection. For instance, validation NLL does not seem to be a great metric for ranking feedforward models, whereas it is more reasonable for autoregressive models." }, { "heading": "5.3 COMPARISON WITH OTHER OPE METHODS", "text": "We adopt a recently proposed benchmark for OPE (Fu et al., 2021) and compare our model-based approaches with state-of-the-art OPE baselines therein. Figures 4 and B.4 compare OPE estimates from two Fitted-Q Evaluation (FQE) baselines (Le et al., 2019; Kostrikov and Nachum, 2020; Paine et al., 2020), our feedforward models, and the autoregressive approach. Each plot reports the Pearson correlation between the OPE estimates and the true returns. The autoregressive model consistently outperforms the feedforward model and FQE methods on most environments. We report ensembling results in the appendix, but compare single models for fairness in the rest of the paper.\nWe compute summary statistics for OPE methods in Table 3, Table A.1, and Table A.2. These tables report the Spearman’s rank correlation, regret, and absolute error, respectively. These metrics capture different desirable properties of OPE methods (Fu et al., 2021); more details about how they are computed are in the appendix. In all three metrics, the autoregressive model achieves the best median performance across nine environments, whereas the baseline model is not as good as FQE. The only environment in which the autoregressive model has negative rank correlation is manipulator insert ball. 
In addition, a major advantage of our model-based approach over FQE is that the model only needs to be trained once per environment—we do not need to perform additional policy-specific optimization, whereas FQE needs to optimize a separate Q-function approximator per policy." }, { "heading": "5.4 AUTOREGRESSIVE DYNAMICS MODELS FOR OFFLINE POLICY OPTIMIZATION", "text": "Policy evaluation is an integral part of reinforcement learning. Improvement in policy evaluation can therefore be adapted for policy optimization. In this section, we explore two possibilities of using models to improve offline reinforcement learning. In all experiments, we use Critic Regularized Regression (CRR) as a base offline reinforcement learning algorithm (Wang et al., 2020).\nFirst, we utilize the model during test time for planning by using a modified version of Model Predictive Path Integral (MPPI) (Williams et al., 2015). Unlike MPPI, we truncate the planning process after 10 steps of rollout and use the CRR critic to evaluate future discounted returns. We provide additional details in the appendix. Secondly, we use the model to augment the transition dataset to learn a better critic for CRR. More precisely, given s^i_t ∼ D and the current policy π, we can generate additional data using the following process: â^i_t ∼ π(·|s^i_t), (ŝ^i_{t+1}, r̂^i_{t+1}) ∼ p(·, ·|s^i_t, â^i_t). These two options are orthogonal and can be applied jointly. We implemented both techniques on top of the CRR exp variant (Wang et al., 2020) and show their combined effect in Figure 5. The figure shows that autoregressive dynamics models also outperform feedforward ones in the policy optimization context. Notably, in the case of cheetah run and fish swim, using autoregressive models for planning as well as data augmentation enables us to outperform the previous state-of-the-art on these offline datasets. Additionally, when using autoregressive dynamics models, both techniques improve performance. In the appendix, we show this result as well as more ablations." }, { "heading": "6 CONCLUSION", "text": "This paper shows the promise of autoregressive models in learning transition dynamics for continuous control, showing strong results for off-policy policy evaluation and offline policy optimization. Our contributions to offline model-based policy optimization are orthogonal to prior work that uses ensembles to lower the values when ensemble components disagree (Kidambi et al., 2020). Incorporating conservative value estimation into our method is an interesting avenue for future research. We use relatively primitive autoregressive neural architectures in this paper to enable a fair comparison with existing feedforward dynamics models. That said, it will be exciting to apply more sophisticated autoregressive neural network architectures with cross attention (Bahdanau et al., 2014) and self-attention (Vaswani et al., 2017) to Model-based RL for continuous control.\nAcknowledgements We thank Jimmy Ba, William Chan, Rishabh Agarwal, Dale Schuurmans, and Silviu Pitis for fruitful discussions on our work. We are also grateful for the helpful comments from Lihong Li, Jenny Liu, Harris Chan, Keiran Paster, Sheng Jia, and Tingwu Wang on earlier drafts." }, { "heading": "A OFFLINE POLICY EVALUATION", "text": "We use the baseline results in Fu et al. (2021). 
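Returning to Section 5.4 above, a minimal sketch of the data-augmentation step for the CRR critic; the distribution-returning policy/model callables are assumptions, not the paper's code:

import torch

def augment_transitions(states, policy, model):
    # states: a batch of s_t drawn from the offline dataset D.
    # policy(states) is assumed to return an action distribution;
    # model(states, actions) is assumed to sample (s_{t+1}, r_{t+1}).
    with torch.no_grad():
        actions = policy(states).sample()              # a^ ~ pi(.|s)
        next_states, rewards = model(states, actions)  # s^', r^ ~ p(.,.|s, a^)
    return states, actions, rewards, next_states

The generated tuples are then mixed with the original transitions at the 1-to-1 ratio mentioned in Appendix B.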
For convenience, we replicate their description of the OPE baselines and metrics.\nA.1 OPE METRICS\nTo evaluate the OPE algorithms, we compute three different metrics between the estimated returns and the ground truth returns:\n1. Rank correlation: This metric assesses how well estimated values rank policies. It is equal to the correlation between the ranking (sorted order) by the OPE estimates and the ranking by the ground truth values.\n2. Absolute Error: This metric measures the deviations of the estimates from the ground truth and does not directly assess the usefulness for ranking.\n3. Regret@k: This metric measures how much worse the best policies identified by the estimates are than the best policy in the entire set. Regret@k is the difference between the actual expected return of the best policy in the entire set, and the actual value of the best policy in the top-k set.\nA.2 OPE BASELINES\nFitted Q-Evaluation (FQE) As in Le et al. (2019), we train a neural network to estimate the value of the evaluation policy πe by bootstrapping from Q(s′, πe(s′)). We tried two different implementations, one from Kostrikov and Nachum (2020) and another from Paine et al. (2020).\nImportance Sampling (IS) We perform importance sampling with a learned behavior policy. We use the implementation from Kostrikov and Nachum (2020), which uses self-normalized (also known as weighted) step-wise importance sampling (Liu et al., 2018; Nachum et al., 2019). Since the behavior policy is not known explicitly, we learn an estimate of it via a max-likelihood objective over the dataset D, as advocated by Hanna et al. (2019). In order to be able to compute log-probabilities when the target policy is deterministic, we add artificial Gaussian noise with standard deviation 0.01 for all deterministic target policies.\nDoubly-Robust (DR) We perform weighted doubly-robust policy evaluation based on Thomas and Brunskill (2016b) and using the implementation of Kostrikov and Nachum (2020). Specifically, this method combines the IS technique above with a value estimator for variance reduction. The value estimator is learned according to Kostrikov and Nachum (2020), using deep FQE with an L2 loss function.\nDICE This method uses a saddle-point objective to estimate marginalized importance weights d^π(s, a)/d^{πB}(s, a); these weights are then used to compute a weighted average of reward over the offline dataset, and this serves as an estimate of the policy’s value in the MDP. We use the implementation from Yang et al. (2020) corresponding to the algorithm BestDICE.\nVariational Power Method (VPM) This method runs a variational power iteration algorithm to estimate the importance weights d^π(s, a)/d^{πB}(s, a) without the knowledge of the behavior policy. It then estimates the target policy value using a weighted average of rewards similar to the DICE method. Our implementation is based on the same network and hyperparameters for the OPE setting as in Wen et al. (2020). We further tune the hyperparameters, including the regularization parameter λ, the learning rates α_θ and α_v, and the number of iterations, on the Cartpole swingup task using the ground-truth policy value, and then fix them for all other tasks.\nA.3 ENSEMBLING\nAs in Chua et al. (2018); Janner et al. (2019), we can form an ensemble using our best-performing models. We generate rollouts using the procedure detailed in Janner et al. (2019), forming an ensemble with 4 models. We see some improvement in policy evaluation results, as shown in Figure A.1. 
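A sketch of the three metrics defined in A.1 above; per-policy arrays of OPE estimates and ground-truth values are the assumed inputs:

import numpy as np
from scipy.stats import spearmanr

def rank_correlation(estimates, true_values):
    # Spearman correlation between the two rankings.
    return spearmanr(estimates, true_values).correlation

def absolute_error(estimates, true_values):
    return float(np.mean(np.abs(np.asarray(estimates) - np.asarray(true_values))))

def regret_at_k(estimates, true_values, k):
    # Gap between the true value of the best policy overall and the best
    # true value among the top-k policies ranked by the estimates.
    true_values = np.asarray(true_values)
    top_k = np.argsort(estimates)[-k:]
    return float(np.max(true_values) - np.max(true_values[top_k]))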
Ensembling could likely be further improved by forcing unique hyperparameter settings and seeds.\nAlgorithm 2 Model Predictive Path Integral Planning\nRequire: state s, policy π, dynamics model p, critic Q, temperature β, and noise variance σ^2.\nfor m = 1, . . . , M do\n  for n = 1, . . . , N do\n    s^0_n ← s; R_n ← 0\n    for τ = 0, . . . , H − 1 do\n      a^τ_n ∼ π(·|s^τ_n)\n      s^{τ+1}_n, r^{τ+1}_n ∼ p(·, ·|s^τ_n, a^τ_n)\n      R_n ← R_n + γ^τ r^{τ+1}_n\n    end for\n    a^H_n ∼ π(·|s^H_n)\n    R_n ← R_n + γ^H Q(s^H_n, a^H_n)\n  end for\n  Re-define π such that π(·|ŝ^τ) = ∑_n [exp(R_n/β) / ∑_m exp(R_m/β)] N(·|a^τ_n, σ^2 I). (π depends on τ and not ŝ.)\nend for\nsample final action a ∼ ∑_n [exp(R_n/β) / ∑_m exp(R_m/β)] δ(a^0_n)\nreturn a" }, { "heading": "B ADDITIONAL DETAILS REGARDING POLICY OPTIMIZATION", "text": "To test dynamics models for policy optimization, we implement the two methods discussed in Section 5.4 on top of CRR exp, one of the CRR variants (Wang et al., 2020). We use the RL Unplugged datasets (Gulcehre et al., 2020) for all environments studied in this section. When using data augmentation, we adopt a 1-to-1 ratio between the original dataset and the augmented dataset.\nTo take advantage of the dynamics models at test time, we use a variant of Model Predictive Path Integral (MPPI) for planning. To reduce the planning horizon, we truncate the model rollout using CRR critics. The details of the planning procedure are summarized in Algorithm 2. All hyperparameter tuning for the planning process is conducted on the “cartpole swingup” task. The hyperparameters used in the planning process are M = 3, N = 16, H = 10, β = 0.1, and σ^2 = 0.01. To match the temperature used in the planning component, we choose β = 0.1 for the CWP component of CRR. This change, however, does not impact the baseline CRR agent performance much. With the exception of β and the planning component, all hyperparameters are kept the same as CRR exp.\nWe compare the agents’ performance with and without the planning procedure to test its effects. As shown in Figure B.2, planning using an autoregressive model significantly increases performance.\nData augmentation does not change the agents’ performance on cartpole swingup, fish swim, or finger turn hard. It, however, boosts performance considerably on cheetah run. In Figure B.3, we show the effects of data augmentation on cheetah run." } ]
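A hedged Python sketch of one refinement round (M = 1) of Algorithm 2 above; the policy/model/critic interfaces and the discount γ are assumptions rather than the authors' code:

import torch

def mppi_plan(s, policy, model, critic, N=16, H=10, beta=0.1, gamma=0.99):
    s_n = s.unsqueeze(0).repeat(N, 1)       # N rollouts from the current state
    returns = torch.zeros(N)
    first_actions = None
    for tau in range(H):                    # truncated model rollout
        a_n = policy(s_n).sample()          # a ~ pi(.|s)
        if tau == 0:
            first_actions = a_n
        s_n, r_n = model(s_n, a_n)          # s', r ~ p(.,.|s, a)
        returns += (gamma ** tau) * r_n
    a_H = policy(s_n).sample()
    returns += (gamma ** H) * critic(s_n, a_H)   # bootstrap with the CRR critic
    weights = torch.softmax(returns / beta, dim=0)
    idx = torch.multinomial(weights, 1).item()   # a ~ sum_n w_n * delta(a_n^0)
    return first_actions[idx]

The full algorithm repeats this M times, refitting π to a softmax-weighted Gaussian mixture around the sampled actions between rounds.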
2,021
null
SP:64282a23a9df8092c2fc9737045a96d1ac64f4ac
[ "The motivation of this study is to estimate the distribution of desired data from the entire data distribution. And the proposed solution extends existing GAN solutions by introducing an additional pairwise loss on the discriminator, e.g., its scores on the desired instances should be higher than the undesired ones. The idea is natural and neat, and it is also proved to be effective in the reported experiments. ", "The authors introduce DiCGAN, an algorithm to learn a generative model that comes up with samples whose likelihood is based on a real dataset but adjusted given user preferences. They train the critic to assign high values to samples with higher preference values and thus the generator tends to move its samples towards these points. The idea is nice and reasonably novel in my opinion, but the paper has quite a few problems." ]
This paper proposes Differential-Critic Generative Adversarial Network (DiCGAN) to learn the distribution of user-desired data when only partial instead of the entire dataset possesses the desired properties. Existing approaches select the desired samples first and train regular GANs on the selected samples to derive the user-desired data distribution. DiCGAN introduces a differential critic that can learn the preference direction from the pairwise preferences over the entire dataset. The resultant critic guides the generation of the desired data instead of the whole data. Specifically, apart from the Wasserstein GAN loss, a ranking loss of the pairwise preferences is defined over the critic. It endows the difference of critic values between each pair of samples with the pairwise preference relation. The higher critic value indicates that the sample is preferred by the user. Thus training the generative model for higher critic values encourages the generation of user-preferred samples. Extensive experiments show that our DiCGAN can learn the user-desired data distributions.
[ { "affiliations": [], "name": "YOU WANT" } ]
[ { "authors": [ "Martin Arjovsky", "Soumith Chintala", "Léon Bottou" ], "title": "Wasserstein generative adversarial networks", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Jesse Engel", "Matthew Hoffman", "Adam Roberts" ], "title": "Latent constraints: Learning to generate conditionally from unconditional generative models", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "William Fedus", "Ian J Goodfellow", "Andrew M Dai" ], "title": "MaskGAN: Better Text Generation via Filling in the", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Rafael Gómez-Bombarelli", "Jennifer N Wei", "David Duvenaud", "José Miguel Hernández-Lobato", "Benjamı́n Sánchez-Lengeling", "Dennis Sheberla", "Jorge Aguilera-Iparraguirre", "Timothy D Hirzel", "Ryan P Adams", "Alán Aspuru-Guzik" ], "title": "Automatic Chemical Design Using a Data-Driven Continuous Representation of Molecules", "venue": "ACS Central Science,", "year": 1021 }, { "authors": [ "Ian J Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron C Courville", "Yoshua Bengio" ], "title": "Generative Adversarial Nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Ishaan Gulrajani", "Faruk Ahmed", "Martin Arjovsky", "Vincent Dumoulin", "Aaron C Courville" ], "title": "Improved training of wasserstein gans", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Anvita Gupta", "James Zou" ], "title": "Feedback gan for dna optimizes protein functions", "venue": "Nature Machine Intelligence,", "year": 2019 }, { "authors": [ "Alexia Jolicoeur-Martineau" ], "title": "The relativistic discriminator: a key element missing from standard GAN", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Tero Karras", "Timo Aila", "Samuli Laine", "Jaakko Lehtinen" ], "title": "Progressive growing of gans for improved quality, stability, and variation", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Nathan Killoran", "Leo J Lee", "Andrew Delong", "David Duvenaud", "Brendan J Frey" ], "title": "Generating and designing dna with deep generative models", "venue": "arXiv preprint arXiv:1712.06148,", "year": 2017 }, { "authors": [ "Y. Lecun", "L. Bottou", "Y. Bengio", "P. 
Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Tyler Lu", "Craig Boutilier" ], "title": "Learning mallows models with pairwise preferences", "venue": "In International Conference on Machine Learning,", "year": 2011 }, { "authors": [ "Tomas Mikolov", "Ilya Sutskever", "Kai Chen", "Greg S Corrado", "Jeff Dean" ], "title": "Distributed representations of words and phrases and their compositionality", "venue": "In Advances in neural information processing systems,", "year": 2013 }, { "authors": [ "Mehdi Mirza", "Simon Osindero" ], "title": "Conditional generative adversarial nets", "venue": "arXiv preprint arXiv:1411.1784,", "year": 2014 }, { "authors": [ "Augustus Odena", "Christopher Olah", "Jonathon Shlens" ], "title": "Conditional image synthesis with auxiliary classifier gans", "venue": "In International conference on machine learning,", "year": 2017 }, { "authors": [ "Tim Salimans", "Ian Goodfellow", "Wojciech Zaremba", "Vicki Cheung", "Alec Radford", "Xi Chen" ], "title": "Improved techniques for training gans", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Bernhard Schölkopf", "Alexander Smola", "Klaus-Robert Müller" ], "title": "Kernel principal component analysis", "venue": "In International conference on artificial neural networks,", "year": 1997 }, { "authors": [ "Carl Vondrick", "Hamed Pirsiavash", "Antonio Torralba" ], "title": "Generating videos with scene dynamics", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Ke Zhou", "Gui-Rong Xue", "Hongyuan Zha", "Yong Yu" ], "title": "Learning to rank with ties", "venue": "In Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval,", "year": 2008 }, { "authors": [ "Jun-Yan Zhu", "Taesung Park", "Phillip Isola", "Alexei A Efros" ], "title": "Unpaired image-to-image translation using cycle-consistent adversarial networks", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Learning a good generative model for high-dimensional natural signals, such as images (Zhu et al., 2017), video (Vondrick et al., 2016) and audio (Fedus et al., 2018) has long been one of the key milestones of machine learning. Powered by the learning capabilities of deep neural networks, generative adversarial networks (GANs) (Goodfellow et al., 2014) have brought the field closer to attaining this goal. Currently, GANs are applied in a setting where the whole training dataset is of user interest. Therefore, regular GANs no longer meet our requirement when only partial instead of the entire training dataset possesses the desired properties (Killoran et al., 2017). It is more challenging when the given dataset has a small number of desired data.\nAdapting vanilla GAN to this setting, a naive way is to first select the samples possessing the desired properties and then perform regular GAN training only on the selected samples to derive the desired distribution. However, vanilla GAN fails when the desired samples are limited. FBGAN overcomes the limited data problem by iteratively introducing desired samples from the generation into the training data. Specifically, FBGAN is pretrained with all training data using the vanilla GAN. In each training epoch, the generator first generates certain amounts of samples. The generated samples possessing the desired properties are selected by an expert selector and used to replace the old training data. Then, regular WGAN is trained with the updated training data. Since the ratio of the desired samples gradually increases in the training data, all training data will be replaced with the desired samples. Finally, FBGAN would derive the desired distribution when convergence. However, bluntly eliminating undesired samples may lead to a biased representation of the real desired data distribution. Because the undesired samples can also reveal useful clues about what is not desired. Suppose we want to generate old face images, however the training data contains only a few old face images whereas it has many young face images. In this case, the young face images can be used as negative sampling (Mikolov et al., 2013) to learn the subtle aging features (e.g. wrinkles, pigmented skin, etc.), which guides the generation of the desired old face images. The conditional variants of GAN, such as CGAN (Mirza and Osindero, 2014) and ACGAN (Odena et al., 2017) can be also applied in this setting by introducing condition variables to model the conditional desired data distribution. However, the generation performance of condition-based GAN is governed by the respective conditions with sufficient training observations. When the desired data is limited, the conditional modeling is dominated by the major classes, i.e., undesired data, resulting in a failure\nto capture the desired distribution. All the literature methods require user-defined criteria to select the desired data in order to learn the distribution of the desired data, which may not exist in real applications.\nInstead of soliciting a ready-to-use criteria, we consider a more general setting where GAN can be guided towards the distribution of user-desired data by the user preference. In particular, pairwise preferences are the most popular form of user preference due to their simplicity and easy accessibility (Lu and Boutilier, 2011). 
Therefore, our target is to incorporate pairwise preferences into the learning process of GAN, so as to guide the generation of the desired data.\nRelativistic GAN (RGAN) (Jolicoeur-Martineau, 2019) is a variant of regular GAN and is proposed to learn the whole data distribution. It considers the critic value as the indicator of sample quality and defines the discriminator using the difference in the critic values. The critic value in RGAN is similar to the ranking score, but it is used to describe sample quality. Motivated by this, we consider taking the critic value as the ranking score and define the ranking loss for pairwise preferences based on the critic value directly. In particular, the difference in critic values for each pair of samples reflects the user’s preference over the samples. This is why we call our critic the differential critic, and we propose Differential-Critic GAN (DiCGAN) for learning the user-desired data distribution. As shown in Fig. 1, the differential critic incorporates the user preference direction, which pushes the original critic direction towards the real desired data region instead of the entire real data region. The main contributions are summarized as follows:\n• We propose DiCGAN to learn the distribution of the desired data from the entire data using pairwise preferences. To the best of our knowledge, this is the first work to promote the ratio of the desired data by incorporating user preferences directly into the data generation.\n• We introduce the differential critic by defining an additional pairwise ranking loss on the WGAN’s critic. It endows the difference in the critic values between each pair of samples with user preferences.\n• The empirical study shows that DiCGAN learns the distribution of user-desired data and the differential critic can derive the preference direction even from a limited number of preferences." }, { "heading": "2 GENERATIVE ADVERSARIAL NETWORKS", "text": "Generative Adversarial Network (GAN) (Goodfellow et al., 2014) performs generative modeling by learning a map from low-dimensional input space Z to data space X, i.e., Gθ : Z → X, given samples from the training data distribution, namely, x ∼ pr(x). The goal is to find θ which achieves pθ(x) = pr(x), where pθ(x) is the fake data distribution induced by x = Gθ(z). Let p(z) be the input noise distribution and G indicate Gθ. GAN defines a discriminator D that is trained to discriminate real data from fake data to guide the learning of G.\nWasserstein GAN (WGAN) (Arjovsky et al., 2017) proposes to use the Wasserstein metric as a critic, which measures the quality of fake data in terms of the distance between the real data distribution and the fake data distribution. The Wasserstein distance (W-distance) is approximated by the difference in the average critic values between the real data and the fake data. The empirical experiments show that the W-distance between two distributions corresponds well to the quality of the generated data. WGAN’s objective function is defined as follows:\nmin_G max_D E_{pr(x)}[D(x)] − E_{pθ(x)}[D(x)], (1)\nwhere D is the critic and satisfies the 1-Lipschitz constraint." }, { "heading": "3 DICGAN FOR USER-DESIRED DISTRIBUTION", "text": "No longer learning the distribution of the whole dataset, GAN is applied in a new scenario, where the distribution of a subset of the dataset is what we desire. User-desired data may refer to a certain class of data in a multi-class dataset, or observations with/without particular attributes. 
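A minimal PyTorch sketch of the WGAN objective in equation (1) above; enforcing the 1-Lipschitz constraint (e.g. via the gradient penalty of Gulrajani et al., 2017) is assumed to be handled elsewhere:

import torch

def wgan_critic_loss(D, x_real, x_fake):
    # The critic ascends E_{pr}[D(x)] - E_{ptheta}[D(x)]; minimize the negative.
    return -(D(x_real).mean() - D(x_fake).mean())

def wgan_generator_loss(D, x_fake):
    return -D(x_fake).mean()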
Such data can be induced from the user preference, which can be represented as an ordering relation between two or more samples in terms of the desired properties. We propose differential-critic GAN to learn the desired data distribution from the user preferences along with the whole dataset." }, { "heading": "3.1 LEARNING THE DISTRIBUTION OF USER-DESIRED DATA", "text": "Following the score-based ranking literature, we suppose that there exists a numeric score associated with each sample, reflecting the user’s preference for the sample. A higher score indicates that its corresponding sample is preferred by the user. In detail, let f denote a score function that maps sample x to score f(x). Then, if sample x is desired by the user, its score f(x) exceeds a predefined threshold ϵ, namely, I(f(x) > ϵ) = 1. I is an indicator function, which equals 1 if its condition is true and 0 otherwise. For the sake of explanation, we use pr(x), pd(x), pu(x) to denote the distribution of the whole data, the user-desired data and the undesired data, respectively.\nFBGAN (Gupta and Zou, 2019) was proposed to learn the distribution of the desired data pd(x). FBGAN alternates between two steps: (1) construct the desired dataset Xd = {x | I(f(x) > ϵ) = 1, x ∼ pr(x)}; (2) train GAN on Xd to derive pd(x). However, the assumption that the score function f is predefined in FBGAN may be too restrictive for real applications, where no universal and explicit criterion exists. Further, the definitions of the desired/undesired samples are highly dependent on the choice of the threshold ϵ. The removal of the so-called undesired samples may result in a biased representation of the real desired data distribution.\nInstead of relying on a predefined score function, we propose to learn the desired data distribution in a straightforward manner from the user preference. Here, we consider a general form of auxiliary information, i.e., pairwise preferences, to represent the user preference, due to its simplicity and easy accessibility. For any two samples x1, x2 ∼ pr(x), let x1 ≻ x2 denote that x1 is preferred over x2 according to the user-defined criteria. Let X be the training samples, i.e., X = {xi ∼ pr(x)}. A collection of pairwise preferences S is obtained by:\nS = { s = (x1, x2) | x1 ≻ x2, x1, x2 ∈ X }. (2)\nDefinition 1 (Problem Setting). Given the training samples X and the pairwise preferences S, the target is to learn a generative model pθ(x) that is identical to the distribution of the desired data pd(x), i.e., pθ(x) = pd(x)." }, { "heading": "3.2 DIFFERENTIAL CRITIC GAN", "text": "Instead of WGAN’s critic for quality assessment, we present the differential critic for modelling pairwise preferences. The differential critic can guide the generation of the user-desired data." }, { "heading": "3.2.1 PAIRWISE PREFERENCE", "text": "In this section, we consider incorporating the pairwise preference into the training of GAN.\nThe score-based ranking model (Zhou et al., 2008) is used to model the pairwise preferences. It learns the score function f, of which the score value, called the ranking score in the model, is the indicator of the user preference. Further, the difference of ranking scores can indicate the pairwise preference relation. That is, for any pair of samples x1, x2, if x1 ≻ x2 then f(x1) − f(x2) > 0 and vice versa. For any pairwise preference s : x1 ≻ x2, the ranking loss we consider is as follows:\nh(s) = max(0, −(f(x1) − f(x2)) + m), (3)\nwhere m is the ranking margin. 
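A direct PyTorch rendering of the hinge ranking loss in equation (3), with f_x1 and f_x2 denoting score values for the preferred and less-preferred samples of each pair:

import torch

def ranking_loss(f_x1, f_x2, m=1.0):
    # h(s) = max(0, -(f(x1) - f(x2)) + m) for a preference x1 > x2.
    return torch.clamp(m - (f_x1 - f_x2), min=0.0).mean()

The default margin m = 1 matches the setting used in the experiments of Section 5.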
For other forms of ranking losses, the reader can refer to Zhou et al. (2008).\nInstead of learning the score function independently of GAN’s training, we consider incorporating it into GAN’s training, guiding GAN towards the generation of the desired data. The critic in RGAN (Jolicoeur-Martineau, 2019) is similar to the score function, where the critic values are used to describe the sample quality. We are motivated to take the critic value as the ranking score and define the ranking loss on the critic value directly. In particular, the difference in the critic values for each pair of samples reflects the user’s preference over the samples.\nRemark 1 (Pairwise regularization to the generator). It is possible to consider a pairwise regularization to the generator. As the target is to learn the desired distribution, the regularization to the generator can be used to make the critic values of the generated samples larger than those of the undesired samples. We construct the regularization with a principle similar to FBGAN’s. Specifically, a selector is first applied to give a full ranking for the training data and then the bottom K samples are picked as the undesired samples. The pairwise preferences are then defined over the generated samples and the undesired samples." }, { "heading": "3.2.2 LOSS FUNCTION", "text": "We build DiCGAN based on WGAN, and the pairwise ranking loss is defined over the WGAN’s critic. The loss function for DiCGAN is defined as:\nmin_G max_D E_{pr(x)}[D(x)] − E_{pθ(x)}[D(x)] − λ (1/|S|) ∑_{s∈S} h(s), (4)\nwhere h(s) is the pairwise ranking loss (equation 3), and λ is a balance factor, which will be discussed further in Section 3.3. Similar to WGAN, we formulate the objectives for the differential critic LD and the generator LG as:\nLD = (1/b) ∑_{i=1}^{b} [D(xi) − D(G(zi))] − λ (1/ns) ∑_{j=1}^{ns} h(sj), LG = −(1/b) ∑_{i=1}^{b} D(G(zi)), (5)\nwhere b is the batch size of the real samples, which is the same for the fake samples, and ns is the number of pairs sampled from S.\nThe advantages of DiCGAN are twofold: (1) The introduced ranking loss in DiCGAN is defined on the critic value. Apart from WGAN, it can be easily applied to other GAN variants developed based on the critic, e.g., RGAN. (2) DiCGAN leverages the entire dataset. The pairwise preferences are constructed on the whole dataset. Thus the undesired samples are also utilized during the training.\nIn the following, we argue that the differential critic in DiCGAN can guide the generator to learn the user-desired distribution from two aspects. (1) As shown in Fig. 1, the differential critic in DiCGAN provides the direction towards the real desired data. We denote the critic direction as the moving direction of fake data, which is orthogonal to the decision boundary of the critic. Referring to equation 4, DiCGAN’s critic loss consists of two terms: the vanilla WGAN loss and the ranking loss. The vanilla WGAN loss imposes the critic direction from the fake data to the real data. Meanwhile, the ranking loss induces a user preference direction, which points from the undesired data to the desired data. Combining these two effects, the critic direction of DiCGAN targets the region of the real desired data only. (2) DiCGAN assigns high critic values to user-desired data and promotes the generation of samples with high critic values. The vanilla WGAN loss encourages the critic to assign high critic values to real data and low critic values to fake data. 
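A sketch of the equation (5) objectives; x_preferred and x_other are assumed to be aligned batches of paired samples with x_preferred ≻ x_other:

import torch

def dicgan_critic_loss(D, x_real, x_fake, x_preferred, x_other, lam=1.0, m=1.0):
    wgan = D(x_real).mean() - D(x_fake).mean()
    rank = torch.clamp(m - (D(x_preferred) - D(x_other)), min=0.0).mean()
    return -(wgan - lam * rank)   # the critic maximizes; we minimize the negative

def dicgan_generator_loss(D, x_fake):
    return -D(x_fake).mean()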
Meanwhile, the ranking loss encourages high critic values to be assigned to the real desired data while low critic values are assigned to the real undesired data. Therefore, the real desired data achieves high critic values from the critic. Similar to WGAN, the training paradigm of DiCGAN promotes the generation of samples with high critic values, which is equivalent to encouraging the generation of user-desired data in DiCGAN." }, { "heading": "3.3 REFORMULATING DICGAN TO ENSURE DATA QUALITY", "text": "Let us revisit the objective of DiCGAN (equation 4). The first two terms of equation 4 can be considered as the WGAN regularisation, which ensures the generated data distribution is close to the whole real data distribution, i.e., pθ ≈ pr. The third term serves as a correction of WGAN, which makes WGAN slightly biased to our target of learning the desired data distribution, i.e., pθ = pd.\nAlgorithm 1 Training algorithm of DiCGAN\n1: input: training data X, pairwise preferences S\n2: initialization: balance factor λ, #generated samples ng, #pairs ns, batch size b, #iterations per epoch ni, #critic iterations per generator iteration ncritic\n3: Pretrain D and G\n4: repeat\n5: % Shift to the user-preferred distribution\n6: Generate samples Xg using equation 7\n7: Replace old samples in X with Xg using equation 8\n8: Obtain pairwise preferences S using equation 2\n9: % Training of D and G at an epoch\n10: for i = 1, . . . , ni do\n11: for t = 1, . . . , ncritic do\n12: Sample {xi}_{i=1}^{b} from X, {zi ∼ p(z)}_{i=1}^{b}\n13: Sample {sj}_{j=1}^{ns} from S\n14: Train the differential critic D using LD in equation 5\n15: end for\n16: Train the generator G using LG in equation 5\n17: end for\n18: until convergence\nTherefore, the WGAN regularisation serves as the cornerstone of our DiCGAN. Particularly, if the desired data distribution is close to the entire data distribution, the ranking loss easily corrects the WGAN to achieve the desired data distribution. Otherwise, the satisfactory performance of DiCGAN may require online hyperparameter tuning of λ during the training process.\nAccording to the above analysis, we consider reformulating the objective of DiCGAN, i.e., equation 4, into an equivalent objective with a hard WGAN constraint:\nmin_G max_D − ∑_{s∈S} h(s), s.t. d(pr, pθ) = |E_{pr(x)}[D(x)] − E_{pθ(x)}[D(x)]| ≤ ε, (6)\nwhere ε > 0. Note that we impose an explicit non-negative constraint on d(pr, pθ), to highlight that it is a distance metric. It is still equivalent to the WGAN loss by its definition. Therefore, equation 4 is the Lagrangian function. Since equation 6 imposes a hard constraint on the WGAN loss, it is more difficult to optimize compared to equation 4. However, more efficient solutions of DiCGAN can be explored by analyzing equation 6 regarding the hard constraint on d(pr, pθ).\nIn a minor-correction situation, the desired data distribution pd is close to the real data distribution pr. Therefore, the hard constraint dominates the training goal of DiCGAN. By assigning a proper λ to ensure the constraint is satisfied, equation 4 can learn the distribution of the user-desired data while ensuring data quality.\nIn a major-correction situation, the desired data distribution pd is quite different from the real data distribution pr. Therefore, DiCGAN needs to achieve an equilibrium between the correction, imposed by the ranking loss, and the hard constraint, imposed by the WGAN loss. 
However, a large correction may not ensure the quality of the generated data, since the WGAN loss, used to guarantee the image quality, is defined between the generated data and the whole real data. To avoid the major correction, we propose to break the major correction into a sequence of minor corrections to ensure data quality. Namely, at each epoch, we first use the generator G to generate ng samples, denoted as Xg:\nX^e_g ← {G^e(z1), . . . , G^e(z_{ng})}, {zi ∼ p(z)}_{i=1}^{ng}, (7)\nwhere e is the epoch index. Then we replace the old training samples with the generated samples:\nX^{e+1} ← X^e \\ X^e_o ∪ X^e_g, (8)\nwhere X^e_o are the old (least-recently added) ng samples in X^e.\nDue to the ranking loss, the generated data distribution p^e_θ is closer to the desired data distribution pd, compared to the constructed p^e_r at each epoch. Therefore, the iterative replacement (equation 8) can gradually shift the real data distribution pr towards the desired data distribution pd. Namely, d(pr, pd) > · · · > d(p^e_r, pd) > d(p^{e+1}_r, pd) > · · ·, so that only a minor correction needs to be imposed on p^e_θ through optimizing equation 4 at each epoch. Iteratively, the generated distribution pθ shifts towards pd. The training algorithm is summarized in Algorithm 1. For the sake of easy optimization, we pretrain the differential critic D and the generator G using WGAN." }, { "heading": "4 CASE STUDY ON SYNTHETIC DATA", "text": "To gain an intuitive understanding of the difference between our DiCGAN and WGAN regarding the critic and the generator, we conduct a case study on a synthetic dataset.\nThe synthetic dataset consists of two concentric circles, perturbed by Gaussian noise with a standard deviation of 0.05 (see Fig. 2a). The samples located on the inner circle are considered to be the desired data, while the samples on the outer circle are defined as the undesired data. By labeling the desired data as y = 1 and the undesired data as y = 0, we can construct the pairwise preference for two samples x1 and x2 based on their labels, namely x1 ≻ x2 if y1 = 1 ∧ y2 = 0, and vice versa. The pairs are constructed within each mini-batch. Our target is to learn the distribution of the desired data (i.e., samples on the inner circle), using the whole data along with the constructed pairwise preferences." }, { "heading": "4.1 WGAN VS DICGAN ON CRITIC", "text": "Experiment setting: we fix the generator and simulate the fake data as a 2D Gaussian blob with a standard deviation of 0.05 (green pluses). We first train the critic to convergence. Then, we project the output of the second-to-last layer of the critic into 1D space using kernel principal component analysis (Schölkopf et al., 1997), to derive the projected features. To explore the difference between the critics of WGAN and DiCGAN, we draw the curve of the critic values versus the projected features for WGAN and DiCGAN, respectively (Fig. 2b, 2c).\nFrom Fig. 2b, 2c, we can see: (1) in terms of the real data and the fake data, the critic of both WGAN and DiCGAN can achieve perfect discrimination. Meanwhile, the projected features of the real data and those of the fake data are also completely separated; (2) in terms of the real desired data and the real undesired data, the critic of DiCGAN assigns higher values to the desired samples, compared to the undesired samples. This is because our ranking loss expects a higher ranking score (i.e., critic value) for the corresponding desired data. 
(3) In contrast, the critic of WGAN assigns lower values to the desired data, since the desired data is closer to the fake data compared to the undesired data." }, { "heading": "4.2 WGAN VS DICGAN ON GENERATOR", "text": "Experiment setting: we train the critic and the generator following the regular GANs’ training procedure. The generation results of WGAN and DiCGAN are shown in Fig. 2d, 2e.\nDiCGAN (shown in Fig. 2e) only generates the user-desired data, i.e., generated data covering the inner circle, while WGAN (shown in Fig. 2d) generates all data, i.e., generated data covering the inner and outer circles. As the critic in DiCGAN can guide the fake data towards the real data region and away from the undesired data region, the generator produces data similar to the real desired data. Because the critic in WGAN pushes the fake data to the real data region only, the generator finally produces data resembling the whole real data." }, { "heading": "5 EXPERIMENTAL STUDY", "text": "DiCGAN is applied to learn the distribution of the desired data on the MNIST (Lecun et al., 1998) and CelebA-HQ (Karras et al., 2018) datasets. Due to limited space, we present more experiment results in the appendix.\nNetworks & Hyperparameters The balance factor λ and the ranking margin m are both set to 1. We adopt the same network structures as in Gulrajani et al. (2017). See the appendix for other settings.\nEvaluation Metric To evaluate the performance of learning the desired data distribution, we calculate the percentage of user-desired data in GAN’s generation, i.e., the ratio of the desired generated data to the whole generated data (D/W). We use the inception score (IS) (Salimans et al., 2016) and multi-scale structural similarity (MS-SSIM) (Odena et al., 2017) to evaluate the quality and intra-class diversity of GANs’ generation, respectively.\nBaselines We compare DiCGAN with WGAN (Gulrajani et al., 2017), CWGAN (Mirza and Osindero, 2014) and FBGAN (Gupta and Zou, 2019). WGAN is only trained with the desired data to derive the desired distribution. CWGAN is the extension of GAN with a conditional label c. To train CWGAN, we split the training data into the desired class (c = 1) and the undesired class (c = 0) based on a predefined user criterion. Then p(x|c = 1) is the desired data distribution. FBGAN adopts an iterative training paradigm to derive the desired data distribution. At each training epoch, FBGAN resorts to an extra selector to select the desired samples from the generated samples and performs regular GAN training using the selected desired samples. ACGAN (Odena et al., 2017) also shows poor results like CWGAN when the desired data is limited, so we do not report it here." }, { "heading": "5.1 LEARNING THE DISTRIBUTION OF DESIRED DIGITS", "text": "We design the experiment to learn the distribution of small digits in MNIST. We use 50K 28× 28 images as the training data. Zero is the smallest digit in MNIST and is thus taken as the desired data.\nAs for WGAN and CWGAN, zero digits in the training data are regarded as the desired samples (c = 1), whose size is 4,950. The other digits are labeled as the undesired samples, whose size is 45,050 (c = 0). WGAN is only trained with the constructed desired data. CWGAN conditions on c to model a conditional data distribution p(x|c) for the MNIST dataset. FBGAN and our DiCGAN resort to a classifier, pre-trained for digit classification, to obtain the labels for the generated samples. At every training epoch, FBGAN generates 50,000 samples and requests the classifier to label them. 
Then the images are ranked using the predicted label, with the smaller digit ranked higher. The generated images with digits ranked in the top 50%, i.e., small digits, are selected as the desired data to replace old training data. As for DiCGAN, the pairwise comparison can be obtained for two images x1 and x2 according to their predicted labels y1 and y2, namely x1 ≻ x2 if y1 < y2, and vice versa. At each iteration, we construct 32 pairwise preferences for each mini-batch of 64 training samples.\nFig. 3 presents the generated MNIST images randomly sampled from the generator of DiCGAN. It shows that the generated MNIST digits gradually shift to smaller digits during the training, and converge to the digit zero. We sample 50K samples from the generators of various GANs and respectively calculate the percentage of digit zero and digits zero to four among the generated digits for a quantitative evaluation. In Table 1, only small digits are generated by DiCGAN and FBGAN; WGAN and CWGAN can also learn the distribution of the desired digit since the dataset is simple and has relatively sufficient data for the desired digit. However, WGAN and CWGAN do not exhibit a smooth convergence to digit zero like FBGAN and DiCGAN (see Fig. 10 in the appendix). In addition, when the dataset is complex and the desired data is insufficient, WGAN and CWGAN fail, which is described in the next section." }, { "heading": "5.1.1 COMPARISON OF DICGAN AND FBGAN", "text": "Though FBGAN achieves good performance in learning the desired data distribution, it requires a lot of supervision information from the selector. We calculate the number of effective pairs (#EP) used in DiCGAN and FBGAN, respectively. #EP in DiCGAN denotes the total number of explicitly constructed pairs during the training, i.e., #EP = ∑_{i=1}^{ne} ∑_{j=1}^{ni} ns. FBGAN selects the desired samples from the generated samples. #EP can be induced from the implicit pairs implied by the desired samples versus the undesired samples, i.e., #EP = ∑_{i=1}^{ne} ngd × ngu, where ne is the number of training epochs, and ngd and ngu denote the number of desired samples and undesired samples in the generation, respectively.\nFig. 4a shows that (1) the #EP used in DiCGAN is much smaller than that in FBGAN at each training epoch; (2) the total #EP used in DiCGAN is significantly less than that in FBGAN, which is reflected by the shaded area. In total, DiCGAN used 9.53e4 effective pairs while FBGAN used 2.02e8 effective pairs. Our DiCGAN is scalable to large training datasets, e.g., MNIST. #EP in DiCGAN is linearly correlated to the training size. In contrast, #EP in FBGAN is determined by ngd and ngu, which are both linearly correlated to the training size; #EP in FBGAN is thus quadratically correlated to the training size." }, { "heading": "5.1.2 COMPARING DICGAN AND FBGAN GIVEN THE LIMITED SUPERVISION", "text": "We compare DiCGAN and FBGAN on the MNIST dataset given limited supervision. Specifically, the query amount of resorting to the pre-trained classifier to obtain the prediction of the generated samples is restricted to 5,000 for both FBGAN and DiCGAN.\nFig. 4d shows that DiCGAN can learn the desired distribution while FBGAN fails, only generating 10.3% digit zero, which is consistent with the visual results in Fig. 4b and Fig. 4c. This supports the claim that the negative samples are beneficial for learning the user-desired distribution. 
Particularly, the preference direction can be captured by our differential critic even when the supervision is limited, which guides the generation towards the desired data." }, { "heading": "5.2 LEARNING THE DISTRIBUTION OF FACE WITH THE DESIRED ATTRIBUTE", "text": "We consider old face images as the desired data and design the experiment to learn the distribution of old face images on CelebA-HQ. The CelebA-HQ dataset is the high-quality version of a subset of the Celeb Faces Attributes (CelebA) dataset, which consists of 30K face images of celebrities, annotated with 40 binary attributes such as age. We resize the images to 64× 64. The training setting for each GAN is similar to that mentioned above (see the appendix for more details).\nIn Fig. 5, we visualize the generated face images randomly sampled from the generator of each model. (1) WGAN has poor generation since its training data, i.e., the desired subset, is insufficient. (2) CWGAN has good quality of generation but fails to capture the desired distribution: there are only two old face images out of 9 randomly sampled images. (3) All sampled images from FBGAN and our DiCGAN are the desired old faces. We sample 10K samples from the generator and calculate the percentage of old face images among the generated samples for quantitative evaluation. In Table 1, almost all the images generated by DiCGAN and FBGAN are old face images. WGAN and CWGAN both fail to capture the distribution of old face images. As for the quality and the diversity, (1) WGAN shows the lowest IS, indicating poor generation quality. The generations of the other GANs except for WGAN present similar IS and MS-SSIM. (2) We calculate the IS of the old faces in the training data (denoted as “original” in Table 1). The IS and MS-SSIM of the generations do not exhibit a big difference from the “original”, which means that the quality and the diversity of the generations are relatively good." }, { "heading": "6 DISCUSSIONS", "text": "Current GAN-based methods (Goodfellow et al., 2014; Mirza and Osindero, 2014; Odena et al., 2017; Gupta and Zou, 2019) require user-defined criteria to select the desired data in order to derive the distribution of the desired data. There are two limitations of these methods: (1) the criteria are not always accessible in real applications; (2) eliminating the handcrafted undesired samples loses useful information about what is not desired. Other works (Gómez-Bombarelli et al., 2018; Engel et al., 2018) proposed to find the subset of the latent space corresponding to the desired data and generate data from the latent subset. However, they also rely on user-defined criteria for labeling the desired data or a ready-to-use score function to find the subset of the latent space with high scores.\nThis paper proposes DiCGAN to learn the distribution of the user-desired data from the entire data using pairwise preferences. We empirically demonstrate the efficacy of DiCGAN in terms of promoting samples with user-desired properties on the MNIST and CelebA-HQ datasets, respectively. One promising future direction for DiCGAN could be minority population promotion for imbalanced data tasks, such as imbalanced classification problems, few-shot learning, one-shot learning or even open-set problems. Another interesting direction for DiCGAN could be desired policy generation in imitation learning, if the pairwise comparison between policies can be properly designed.
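As a companion to the replacement ablation in Appendix B below, a minimal sketch of the epoch-level update from equations (7)-(8); representing the training set as a deque is an assumption of this sketch:

from collections import deque

def replace_oldest(X, x_generated):
    # X^{e+1} <- X^e \ X^e_o ∪ X^e_g: drop the least-recently added
    # samples and append the freshly generated ones.
    for x in x_generated:
        X.popleft()
        X.append(x)
    return X

# toy usage: the three oldest entries are replaced
X = deque(range(10))
print(list(replace_oldest(X, ["g1", "g2", "g3"])))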
}, { "heading": "A COMPARISON OF DICGAN AND FBGAN", "text": "We plot the ratio of generated zero digit to the whole generated data (D/W) of DiCGAN and FBGAN during the training process in Fig. 6a. It shows that DiCGAN converges faster than FBGAN.\nWe explore the gap in the performance between DiCGAN and FBGAN evolves as the number of supervision increases on MNIST. Specifically,the query amount of resorting to the pre-trained classifier to obtain the prediction of the generated samples is restricted to 5K, 50K, 100K, 150K, 180K, 200K, 500K for both FBGAN and DiCGAN.\nFig. 6b plots D/W versus the number of supervision for FBGAN and DiCGAN, respectively. It shows that (1) DiCGAN always learns the desired data distribution even given the limited supervision; (2) when given the limited supervision, FBGAN fails to learn the desired distribution, i.e., achieving a small D/W; (3) FBGAN performs better and achieves a higher D/W, narrowing the performance gap with DiCGAN as the number of supervision increases." }, { "heading": "B ABLATION STUDY", "text": "The objective in our DiCGAN (equation 4) consists of two components, i.e., the WGAN loss, which serves as the cornerstone of DiCGAN, and the ranking loss, which serves as the correction for WGAN. Meanwhile, we introduce the operation of replacement (equation 8) during the model training.\nTo analyze the effects of the correction for WGAN (the third term in equation 5) and the replacement operation, we plot the percentage of desired samples (D/W) versus the training epoch for DiCGAN (λ = 0), DiCGAN (ng = 0) and DiCGAN in Fig. 7a, 7b. Meanwhile, the the converged percentage of desired samples (D/W) are reported in Fig. 7c. It can be seen that\n1. Without the correction term (λ = 0), DiCGAN cannot learn the desired data distribution. The percentage of desired samples (D/W) from DiCGAN (λ = 0) remains constant during training on MNIST (Fig. 7a, 7b) compared with the original datasets (Fig. 7c). This is because that the remaining WGAN term in DiCGAN(λ = 0) focuses on learning the training data distribution.\n2. Without the replacement (ng = 0), DiCGAN makes a minor correction to the generated distribution. In Fig. 7a, 7b, the D/W of DiCGAN (ng = 0) slightly increases compared with\nthe original datasets. This is consistent with our analysis that the correction term would drive the generation towards the desired distribution.\n3. DiCGAN learns the desired data distribution with a sequential minor correction. The D/W of DiCGAN grows with training and reaches almost 100% when convergence. The correction term drives DiCGAN’s generation towards the desired data slightly at each epoch. With the iterative replacement, the minor correction sequentially accumulates and finally the generated distribution shifts to the desired data distribution." }, { "heading": "C PAIRWISE REGULARIZATION ON THE GENERATOR", "text": "As discussed in Remark 1, the pairwise regularization is possibly added to the generator. We consider two cases of adding the regularization to the generator. First, we only add the pairwise regularization to the generator (PRG-1). Second, we add the regularization to the generator together with the regularization on the critic (PRG-2).\nThe objective for PRG-1 is as follows:\nLD = Epr(x) [D(x)]− Epθ(x) [D (x)] , (9)\nLG = Epθ(x) [D (x)]− λg 1 |S′ | ∑ s∈S′ [h (s)] . (10)\nwhere h(s) is equation 3. S ′\nis the pairwise preferences constructed between the generated data and the undesired data, i.e., S ′ = { s = (x1, x2)|x1 x2, x1 ∼ pθ(x), x2 ∼ pu(x) } . 
Now the generator objective consists of two terms: the original WGAN loss on the generator aims to achieve E_{pθ(x)}[D(x)] > E_{pr(x)}[D(x)], while the regularization aims to achieve E_{pθ(x)}[D(x)] > E_{pu(x)}[D(x)]. Since the undesired data is a subset of the real data, i.e., {x | x ∼ pu(x)} ⊆ {x | x ∼ pr(x)}, the WGAN loss always dominates the training of the generator. Therefore, PRG-1 degenerates to WGAN.\nThe objective for PRG-2 is as follows:\nLD = E_{pr(x)}[D(x)] − E_{pθ(x)}[D(x)] − λ (1/|S|) ∑_{s∈S} h(s), (11)\nLG = E_{pθ(x)}[D(x)] − λg (1/|S′|) ∑_{s∈S′} h(s), (12)\nwhere S is constructed based on equation 2. Although the generator objective consists of two terms, by the same analysis as for PRG-1, the extra pairwise regularization on the generator is ineffective. Meanwhile, the extra pairwise regularization on the critic works like that in DiCGAN. Therefore, the whole framework degenerates to DiCGAN.\nWe conducted the experiments on MNIST to show the effectiveness of these two methods. λ and λg are both set to 1. Fig. 8 shows the generated digits of PRG-1 and PRG-2 during the training process. PRG-1 fails to learn the desired distribution, while PRG-2 can learn the desired distribution. The quantitative results are consistent with the visual results, with 13.9% and 99.4% D/W, respectively." }, { "heading": "D LEARNING THE DISTRIBUTION OF DESIRED OBJECTS", "text": "We consider cars as the desired objects and design the experiment to learn the distribution of cars in CIFAR.\nSample selection in FBGAN: A classifier, pretrained for classifying cars and planes, is adopted for selection. The generated objects classified as cars are selected to replace the old training data.\nPairwise preference construction in DiCGAN: Denoting the label of a CIFAR image as y, the pairwise preference between two images x1 and x2 is x1 ≻ x2 when y1 = “car” and y2 = “plane”, and vice versa. At each iteration, we construct 32 pairs by randomly sampling pairs from the mini-batch of 64 samples.\nIn Fig. 9, we visualize the generated CIFAR images randomly sampled from the generator of DiCGAN. It shows that DiCGAN gradually generates cars, as we desired.\nMeanwhile, we sample 10K samples from the generator and calculate the percentage of car images among the generated samples for quantitative evaluation. In Table 2, (1) almost all images generated by DiCGAN and FBGAN are car images; (2) the percentage of car images generated by WGAN is similar to that of the training dataset." }, { "heading": "E EXPERIMENT SETTINGS", "text": "Hyperparameter The batch size b is set to 50 for MNIST and 64 for the CIFAR and CelebA-HQ datasets. The #generated samples ng is set to 50K for MNIST, 1K for CIFAR and 3,000 for CelebA-HQ, respectively. Other hyperparameters are the same as in Gulrajani et al. (2017).\nWe construct pairwise preferences using the mini-batch samples at each iteration based on the classification labels. We construct the pairs by randomly selecting two samples from the mini-batch. The pairs in which the two samples belong to the same class, i.e., same digits or same objects, are removed.\nCelebA-HQ training setting WGAN is only trained with the constructed desired dataset. CWGAN conditions on c to model a conditional data distribution p(x|c). There are 6,632 samples labeled as desired and 23,368 samples labeled as undesired in the training data. A classifier, pre-trained for classifying young faces and old faces, is adopted for predicting the labels for the generated face images. 
At every training epoch, FBGAN generates 3,000 images and those classified as old are selected to replace the old training data. As for DiCGAN, a generated face image classified with the old attribute is preferred over a face image classified with the young attribute. At each iteration, we construct 32 pairs by randomly sampling pairs from the mini-batch of 64 samples.\nF MORE VISUAL RESULTS" } ]
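A sketch of the label-based pair construction described in Appendix E above (32 pairs per mini-batch of 64, with same-class pairs discarded); the preference test is written for the MNIST case where a smaller predicted digit is preferred:

import random

def make_pairs(samples, labels, n_pairs=32, max_tries=1000):
    pairs, tries = [], 0
    while len(pairs) < n_pairs and tries < max_tries:
        tries += 1
        i, j = random.sample(range(len(samples)), 2)
        if labels[i] < labels[j]:          # x_i preferred over x_j
            pairs.append((samples[i], samples[j]))
        elif labels[j] < labels[i]:
            pairs.append((samples[j], samples[i]))
        # equal labels: pair removed, as in Appendix E
    return pairs

For CelebA-HQ or CIFAR the comparison on labels would be replaced by the old-over-young or car-over-plane rule, respectively.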
2,020
DIFFERENTIAL-CRITIC GAN: GENERATING WHAT YOU WANT
SP:18ce50996a98836e07d8cb448adbff5cb039b285
[ "This paper follows up on the work (Zhou et al.) on establishing the importance of knoweldge distillation (KD) from a pretrained autoregressive translation model (AT) to train effective non-autoregresstive translation (NAT) models. Specifically, KD is helpful because it reduces the data complexity which allows successful training of NAT models. This paper shows that KD has an undesirable effect on training of NAT models in terms of poor performance on translation into infrequent tokens and further suggests a remedy for regularizing the NAT training with an additional lexical translation loss based upon a prior translation table obtained via word alignment.", "This paper analyzes the side effect of knowledge distillation in NAT where the lexical choice errors on low-frequency words are propagated to the student model from the teacher. Tackling on this, the paper then proposes to expose raw data to restore such information. In my view, the submission is well motivated and the designed experiments and results are meaningful and convincing which deserves an accept. However, as the paper focuses on analyzing a specific point (lexical choice) in a very constrained setting (NAT), the overall contribution might be incremental compared to other works in general at such a venue like ICLR." ]
Knowledge distillation (KD) is essential for training non-autoregressive translation (NAT) models by reducing the complexity of the raw data with an autoregressive teacher model. In this study, we empirically show that as a side effect of this training, the lexical choice errors on low-frequency words are propagated to the NAT model from the teacher model. To alleviate this problem, we propose to expose the raw data to NAT models to restore the useful information of low-frequency words, which are missed in the distilled data. To this end, we introduce an extra Kullback-Leibler divergence term derived by comparing the lexical choice of NAT model and that embedded in the raw data. Experimental results across language pairs and model architectures demonstrate the effectiveness and universality of the proposed approach. Extensive analyses confirm our claim that our approach improves performance by reducing the lexical choice errors on low-frequency words. Encouragingly, our approach pushes the SOTA NAT performance on the WMT14 English-German and WMT16 Romanian-English datasets up to 27.8 and 33.8 BLEU points, respectively.
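The abstract describes an extra KL-divergence term comparing the NAT model's lexical choice with a prior embedded in the raw data; a heavily hedged sketch of how such a term could look (the alignment-based prior construction is assumed, not specified here):

import torch.nn.functional as F

def lexical_prior_kl(model_log_probs, lexical_prior):
    # model_log_probs: NAT per-position log-probabilities over the target
    # vocabulary; lexical_prior: per-position distributions derived from a
    # word-alignment translation table over the raw (pre-distillation) data.
    # Computes KL(prior || model), averaged over the batch.
    return F.kl_div(model_log_probs, lexical_prior, reduction="batchmean")

The combined training loss would add this term to the usual NAT objective with some weighting; the exact formulation in the paper may differ.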
[ { "affiliations": [], "name": "Liang Ding" }, { "affiliations": [], "name": "Longyue Wang" }, { "affiliations": [], "name": "Xuebo Liu" }, { "affiliations": [], "name": "Derek F. Wong" }, { "affiliations": [], "name": "Dacheng Tao" }, { "affiliations": [], "name": "Zhaopeng Tu" } ]
[ { "authors": [ "Philip Arthur", "Graham Neubig", "Satoshi Nakamura" ], "title": "Incorporating discrete translation lexicons into neural machine translation", "venue": "In EMNLP,", "year": 2016 }, { "authors": [ "Nitesh V. Chawla", "Nathalie Japkowicz", "Aleksander Kotcz" ], "title": "Editorial: Special issue on learning from imbalanced data sets", "venue": "SIGKDD Explor. Newsl.,", "year": 2004 }, { "authors": [ "Heeyoul Choi", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Context-dependent word representation for neural machine translation", "venue": "Computer Speech & Language,", "year": 2017 }, { "authors": [ "Michael Collins", "Philipp Koehn", "Ivona Kučerová" ], "title": "Clause restructuring for statistical machine translation", "venue": "In ACL,", "year": 2005 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "In NAACL,", "year": 2019 }, { "authors": [ "Liang Ding", "Longyue Wang", "Di Wu", "Dacheng Tao", "Zhaopeng Tu" ], "title": "Context-aware cross-attention for non-autoregressive translation", "venue": "In COLING,", "year": 2020 }, { "authors": [ "Chris Dyer", "Victor Chahuneau", "Noah A Smith" ], "title": "A simple, fast, and effective reparameterization of ibm model 2", "venue": "In NAACL,", "year": 2013 }, { "authors": [ "Tommaso Furlanello", "Zachary Lipton", "Michael Tschannen", "Laurent Itti", "Anima Anandkumar" ], "title": "Born again neural networks", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Marjan Ghazvininejad", "Omer Levy", "Yinhan Liu", "Luke Zettlemoyer" ], "title": "Mask-predict: Parallel decoding of conditional masked language models", "venue": "In EMNLP,", "year": 2019 }, { "authors": [ "Jiatao Gu", "James Bradbury", "Caiming Xiong", "Victor OK Li", "Richard Socher" ], "title": "Non-autoregressive neural machine translation", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Shuhao Gu", "Jinchao Zhang", "Fandong Meng", "Yang Feng", "Wanying Xie", "Jie Zhou", "Dong Yu" ], "title": "Token-level adaptive training for neural machine translation", "venue": "In EMNLP,", "year": 2020 }, { "authors": [ "Junliang Guo", "Xu Tan", "Di He", "Tao Qin", "Linli Xu", "Tie-Yan Liu" ], "title": "Non-autoregressive neural machine translation with enhanced decoder input", "venue": "In AAAI,", "year": 2019 }, { "authors": [ "Yongchang Hao", "Shilin He", "Wenxiang Jiao", "Zhaopeng Tu", "Lyu Michael", "Xing Wang" ], "title": "Multi-task learning with shared encoder for non-autoregressive machine translation", "venue": "In NAACL,", "year": 2021 }, { "authors": [ "Hany Hassan", "Anthony Aue", "Chang Chen", "Vishal Chowdhary", "Jonathan Clark", "Christian Federmann", "Xuedong Huang", "Marcin Junczys-Dowmunt", "William Lewis", "Mu Li" ], "title": "Achieving human parity on automatic chinese to english news translation", "venue": "In arXiv,", "year": 2018 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeffrey Dean" ], "title": "Distilling the knowledge in a neural network", "venue": "In NIPS Deep Learning and Representation Learning Workshop,", "year": 2015 }, { "authors": [ "Jungo Kasai", "James Cross", "Marjan Ghazvininejad", "Jiatao Gu" ], "title": "Parallel machine translation with disentangled context transformer", "venue": "In arXiv,", "year": 2020 }, { "authors": [ "Yoon Kim", "Alexander M Rush" ], "title": "Sequence-level knowledge distillation", "venue": "In EMNLP,", "year": 2016 }, { "authors": [ "Diederik P. 
Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Philipp Koehn", "Rebecca Knowles" ], "title": "Six challenges for neural machine translation", "venue": "In WMT,", "year": 2017 }, { "authors": [ "Jason Lee", "Elman Mansimov", "Kyunghyun Cho" ], "title": "Deterministic non-autoregressive neural sequence modeling by iterative refinement", "venue": "In EMNLP,", "year": 2018 }, { "authors": [ "Jason Lee", "Dustin Tran", "Orhan Firat", "Kyunghyun Cho" ], "title": "On the discrepancy between density estimation and sequence generation", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Yang Liu", "Sheng Shen", "Mirella Lapata" ], "title": "Noisy self-knowledge distillation for text summarization, 2020", "venue": null, "year": 2020 }, { "authors": [ "Dabiao Ma", "Zhiba Su", "Wenxuan Wang", "Yu-Hao Lu" ], "title": "Fpets: Fully parallel end-to-end text-tospeech system", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "Xuezhe Ma", "Chunting Zhou", "Xian Li", "Graham Neubig", "Eduard Hovy" ], "title": "Flowseq: Nonautoregressive conditional sequence generation with generative flow", "venue": null, "year": 2019 }, { "authors": [ "Benjamin Marie", "Raphael Rubino", "Atsushi Fujita" ], "title": "Tagged back-translation revisited: Why does it really work", "venue": "In ACL,", "year": 2020 }, { "authors": [ "Makoto Morishita", "Jun Suzuki", "Masaaki Nagata" ], "title": "Ntt neural machine translation systems at wat 2017", "venue": "In IJCNLP,", "year": 2017 }, { "authors": [ "Toan Nguyen", "David Chiang" ], "title": "Improving lexical choice in neural machine translation", "venue": "In NAACL,", "year": 2018 }, { "authors": [ "Franz Josef Och", "Hermann Ney" ], "title": "A systematic comparison of various statistical alignment models", "venue": "Computational Linguistics,", "year": 2003 }, { "authors": [ "Myle Ott", "Michael Auli", "David Grangier", "Marc‘‘Aurelio Ranzato" ], "title": "Analyzing uncertainty in neural machine translation", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Kishore Papineni", "Salim Roukos", "Todd Ward", "Wei-Jing Zhu" ], "title": "Bleu: a method for automatic evaluation of machine translation", "venue": "In ACL,", "year": 2002 }, { "authors": [ "Qiu Ran", "Yankai Lin", "Peng Li", "Jie Zhou" ], "title": "Guiding non-autoregressive neural machine translation decoding with reordering information", "venue": "In arXiv,", "year": 2019 }, { "authors": [ "Yi Ren", "Jinglin Liu", "Xu Tan", "Zhou Zhao", "Sheng Zhao", "Tie-Yan Liu" ], "title": "A study of non-autoregressive model for sequence generation", "venue": "In ACL,", "year": 2020 }, { "authors": [ "Rico Sennrich", "Barry Haddow", "Alexandra Birch" ], "title": "Neural machine translation of rare words with subword units", "venue": "In ACL,", "year": 2016 }, { "authors": [ "Chenze Shao", "Jinchao Zhang", "Yang Feng", "Fandong Meng", "Jie Zhou" ], "title": "Minimizing the bag-ofngrams difference for non-autoregressive neural machine translation", "venue": "In AAAI,", "year": 2019 }, { "authors": [ "Mitchell Stern", "William Chan", "Jamie Kiros", "Jakob Uszkoreit" ], "title": "Insertion transformer: Flexible sequence generation via insertion", "venue": null, "year": 2019 }, { "authors": [ "Zhiqing Sun", "Zhuohan Li", "Haoqing Wang", "Di He", "Zi Lin", "Zhihong Deng" ], "title": "Fast structured decoding for sequence models", "venue": null, "year": 2019 }, { "authors": [ "Aleš Tamchyna" ], "title": "Lexical and morphological choices in 
machine translation", "venue": null, "year": 2017 }, { "authors": [ "Jiaxi Tang", "Rakesh Shivanna", "Zhe Zhao", "Dong Lin", "Anima Singh", "Ed H. Chi", "Sagar Jain" ], "title": "Understanding and improving knowledge distillation, 2020", "venue": null, "year": 2020 }, { "authors": [ "Zhaopeng Tu", "Zhengdong Lu", "Yang Liu", "Xiaohua Liu", "Hang Li" ], "title": "Modeling coverage for neural machine translation", "venue": "In ACL,", "year": 2016 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Lukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": null, "year": 2017 }, { "authors": [ "Yiren Wang", "Fei Tian", "Di He", "Tao Qin", "ChengXiang Zhai", "Tie-Yan Liu" ], "title": "Non-autoregressive machine translation with auxiliary regularization", "venue": "In AAAI,", "year": 2019 }, { "authors": [ "Bingzhen Wei", "Mingxuan Wang", "Hao Zhou", "Junyang Lin", "Xu Sun" ], "title": "Imitation learning for non-autoregressive neural machine translation", "venue": "In ACL,", "year": 2019 }, { "authors": [ "Chunting Zhou", "Graham Neubig", "Jiatao Gu" ], "title": "Understanding knowledge distillation in nonautoregressive machine translation", "venue": "In ICLR,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "When translating a word, translation models need to spend a substantial amount of its capacity in disambiguating its sense in the source language and choose a lexeme in the target language which adequately express its meaning (Choi et al., 2017; Tamchyna, 2017). However, neural machine translation (NMT) has a severe problem on lexical choice, since it usually has mistranslation errors on low-frequency words (Koehn & Knowles, 2017; Nguyen & Chiang, 2018; Gu et al., 2020).\nIn recent years, there has been a growing interest in non-autoregressive translation (NAT, Gu et al., 2018), which improves decoding efficiency by predicting all tokens independently and simultaneously. Well-performed NAT models are generally trained on synthetic data distilled by autoregressive translation (AT) teachers instead of the raw training data (Figure 1(a)) (Stern et al., 2019; Lee et al., 2018; Ghazvininejad et al., 2019; Gu et al., 2019; Hao et al., 2021). Recent studies have revealed that knowledge distillation (KD) reduces the modes (i.e. multiple lexical choices for a source word) in the raw data by re-weighting the training examples (Furlanello et al., 2018; Tang et al.,\n2020), which lowers the intrinsic uncertainty (Ott et al., 2018) and learning difficulty for NAT (Zhou et al., 2020; Ren et al., 2020). However, the side effect of KD has not been fully studied. In this work,\n∗Work was done when Liang Ding and Xuebo Liu were interning at Tencent AI Lab.\nwe investigate this problem from the perspective of lexical choice, which is at the core of machine translation.\nWe argue that the lexical choice errors of AT teacher can be propagated to the NAT model via the distilled training data. To verify this hypothesis, we qualitatively compare raw and distilled training corpora. Table 1 lists all samples whose source sentences contain the place name “纽马基特”. In the raw corpus (“RAW-TGT”), this low-frequency word totally occurs three times and corresponds to correct translation “Newmarket”. However, in the KD corpus (“KD-TGT”), the word is incorrectly translated into a person name “Newmargot” (Margot Robbie is an Australian actress) or organization name “Newmarquette” (Marquette is an university in Wisconsin) or even invalid one “Newmarquite”.\nMotivated by this finding, we explore NAT from the lexical choice perspective. We first validate our hypothesis by analyzing the lexical choice behaviors of NAT models (§3). Concretely, we propose a new metric AoLC (accuracy of lexical choice) to evaluate the lexical translation accuracy of a given NAT model. Experimental results across different language pairs show that NAT models trained on distilled data have higher accuracy of global lexical translation (AoLC↑), which results in better sequence generation. However, fine-grained analyses revealed that although KD improves the accuracy on high-frequency tokens, it meanwhile harms performance on low-frequency ones (Low freq. AoLC↓). And with the improvement of teacher models, this issue becomes more severe. We conclude that the lexical choice of the low-frequency tokens is a typical kind of lost information when using knowledge distillation from AT model.\nIn order to rejuvenate this lost information in raw data, we propose to expose the raw data to the training of NAT models, which augments NAT models the ability to learn the lost knowledge by themselves. 
Specifically, we propose two bilingual lexical-level data-dependent priors (Word Alignment Distribution and Self-Distilled Distribution) extracted from the raw data, which are integrated into NAT training via a Kullback-Leibler divergence term. Both approaches expose the lexical knowledge in the raw data to NAT, making it learn to restore the useful information of low-frequency words needed to accomplish the translation.
We validated our approach on several datasets widely used in previous studies (i.e. WMT14 En-De, WMT16 Ro-En, WMT17 Zh-En, and WAT17 Ja-En) and on several model architectures (i.e. MaskPredict (Ghazvininejad et al., 2019) and Levenshtein Transformer (Gu et al., 2019)). Experimental results show that the proposed method consistently improves translation performance over standard NAT models across languages and advanced NAT architectures. The improvements come from the better lexical translation accuracy (for low-frequency tokens in particular) of the NAT models (AoLC↑), which leads to fewer mis-translations and low-frequency word prediction errors. The main contributions of this work are:
• Our study reveals the side effect of NAT models' knowledge distillation on low-frequency lexicons, which makes standard NAT training on the distilled data sub-optimal.
• We demonstrate the necessity of letting NAT models learn to distill lexical choices from the raw data by themselves.
• We propose a simple yet effective approach to accomplish this goal1, which is robustly applicable to several model architectures and language pairs." }, { "heading": "2 PRELIMINARIES", "text": "" }, { "heading": "2.1 NON-AUTOREGRESSIVE TRANSLATION", "text": "The idea of NAT was pioneered by Gu et al. (2018) and enables the inference process to proceed in parallel. Different from AT models, which generate each target word conditioned on previously generated ones, NAT models break the autoregressive factorization and produce target words in parallel. Given a source sentence x, the probability of generating its target sentence y with length T is calculated as:

p(y|x) = p_L(T|x;\theta) \prod_{t=1}^{T} p(y_t|x;\theta),  (1)

where p_L(\cdot) is a separate conditional distribution that predicts the length of the target sequence. During training, the negative log-likelihood loss function of NAT is accordingly \mathcal{L}_{NAT}(\theta) = -\log p(y|x).
1Code is available at: https://github.com/alphadl/LCNAT
To bridge the performance gap between NAT and AT models, a variety of approaches have been proposed, such as multi-turn refinement mechanisms (Lee et al., 2018; Ghazvininejad et al., 2019; Gu et al., 2019; Kasai et al., 2020), rescoring with AT models (Wei et al., 2019; Ma et al., 2019; Sun et al., 2019), adding auxiliary signals to improve model capacity (Wang et al., 2019; Ran et al., 2019; Guo et al., 2019; Ding et al., 2020), and advanced training objectives (Wei et al., 2019; Shao et al., 2019; Ma et al., 2020). Our work is complementary to theirs: while they focus on improving NAT models trained on the distilled data, we refine NAT models by exploiting the knowledge in the raw data.
Sentence-Level Knowledge Distillation NAT models suffer from the multimodality problem, in which the conditional independence assumption prevents a model from properly capturing the highly multimodal distribution of target translations. For example, the English source sentence “Thank you.” can be accurately translated into German as any one of “Danke.”, “Danke schön.” or “Vielen Dank.”, all of which occur in the training data.
To alleviate this problem, Gu et al. 
(2018) applied sequence-level KD (Kim & Rush, 2016) to construct a synthetic corpus whose target sentences are generated by an AT model trained on the raw data, as shown in Figure 1(a). The NAT model is then trained only on the distilled data, which has fewer modes and therefore makes it easier to acquire more deterministic knowledge (e.g. one lexical choice for each source word). While separating KD from model training makes the pipeline simple and efficient, it carries one potential threat: the re-weighted samples distilled with the AT model may have lost some important information. Lee et al. (2020) show that distillation benefits sequence generation but harms density estimation. In this study, we aim to bridge this gap by exposing the raw data to the training of NAT models, as shown in Figure 1(b)." }, { "heading": "2.2 EXPERIMENTAL SETUP", "text": "Datasets Experiments were conducted on four widely-used translation datasets: WMT14 English-German (En-De, Vaswani et al. 2017), WMT16 Romanian-English (Ro-En, Gu et al. 2018), WMT17 Chinese-English (Zh-En, Hassan et al. 2018), and WAT17 Japanese-English (Ja-En, Morishita et al. 2017), which consist of 4.5M, 0.6M, 20M, and 2M sentence pairs, respectively. We use the same validation and test sets as previous works for fair comparison. To avoid unknown words, we preprocessed the data via BPE (Sennrich et al., 2016) with 32K merge operations. GIZA++ (Och & Ney, 2003) was employed to build word alignments for the training datasets. We evaluated translation quality with BLEU (Papineni et al., 2002).
NAT Models We validated our research hypotheses on two SOTA NAT models:
• MaskPredict (MaskT, Ghazvininejad et al. 2019), which uses a conditional masked LM (Devlin et al., 2019) to iteratively generate the target sequence from the masked input. We followed its optimal settings, keeping the number of iterations at 10 and the length beam at 5.
• Levenshtein Transformer (LevT, Gu et al. 2019), which introduces three steps: deletion, placeholder prediction and token prediction. The number of decoding iterations in LevT adapts to certain conditions.
For regularization, we tune the dropout rate over [0.1, 0.2, 0.3] based on validation performance in each direction, and apply weight decay of 0.01 and label smoothing with ε = 0.1. We train batches of approximately 128K tokens using Adam (Kingma & Ba, 2015). The learning rate warms up to 5 × 10^{-4} in the first 10K steps, and then decays with the inverse square-root schedule. We followed common practice (Ghazvininejad et al., 2019; Kasai et al., 2020) and evaluated translation performance with an ensemble of the top 5 checkpoints to avoid stochasticity.
AT Teachers We closely followed previous works on NAT and applied sequence-level knowledge distillation (Kim & Rush, 2016) to reduce the modes of the training data. More precisely, to assess the effectiveness of our method under different AT teachers, we trained three kinds of Transformer (Vaswani et al., 2017) models: Transformer-BASE, Transformer-BIG and Transformer-STRONG. The main results employ BIG for all directions except Ro-En, which is distilled by BASE. The architectures of Transformer-BIG and Transformer-STRONG are identical, but STRONG utilizes a large-batch (458K tokens) training strategy.
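As a side note, the warm-up-plus-inverse-square-root learning-rate schedule described above is easy to state precisely. The following is a generic sketch using the constants given in the text (5e-4 peak, 10K warm-up steps); the function itself is our illustration, not code from the paper.

    def inverse_sqrt_lr(step, peak_lr=5e-4, warmup_steps=10_000):
        """Linear warm-up to peak_lr, then inverse square-root decay."""
        if step < warmup_steps:
            return peak_lr * step / warmup_steps
        return peak_lr * (warmup_steps / step) ** 0.5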
" }, { "heading": "3 UNDERSTANDING LEXICAL CHOICE IN NAT MODELS", "text": "" }, { "heading": "3.1 EVALUATING LEXICAL CHOICE OF NAT MODELS", "text": "Recently, Zhou et al. (2020) argue that knowledge distillation is necessary to cope with the uncertain nature of the machine translation task. Accordingly, they propose a metric to estimate the complexity of the data (CoD), which is derived from an external word alignment model. They reveal that the distilled data is indeed less complex, which facilitates easier training for the NAT model. Inspired by this, we propose a metric to measure the lexical-level accuracy of model predictions.
Accuracy of Lexical Choice (AoLC) evaluates the accuracy of the target lexicons chosen by a trained NAT model M for each source word. Specifically, the model M takes a source word f as input and produces a list of hypothesis candidates with their corresponding word confidences:

P^M_f = \{P_M(e_1|f), \ldots, P_M(e_{|V_{trg}|}|f)\},  (2)

where V_{trg} is the target-side vocabulary over the whole corpus. The AoLC score is calculated by averaging the probability of the gold target word e_f of each source word f:

AoLC = \frac{\sum_{f \in V^{test}_{src}} P_M(e_f|f)}{|V^{test}_{src}|},  (3)

where V^{test}_{src} is the set of source-side tokens in the test set. Each gold word e_f is chosen with the help of the word alignment model P^A_f. The selection procedure is as follows: Step 1) collect the references of the source sentences that contain the source word f, and build the target-side word bag B_f from these references. Step 2) sort P^A_f in descending order of alignment probability and take the first candidate that appears in B_f as the gold word. Step 3) if no such word is found, take the word with the highest alignment probability in P^A_f as the gold word. Generally, higher lexical translation accuracy indicates more confident predictions. We discuss the reliability of word alignment-based AoLC in Appendix A.1." }, { "heading": "3.2 GLOBAL EFFECT OF KNOWLEDGE DISTILLATION ON LEXICAL CHOICE", "text": "In this section, we analyze the lexical choice behaviors of NAT models with our proposed AoLC. In particular, we evaluated three MaskT models, trained on the raw data, on the AT-BASE distilled data, and on the AT-BIG distilled data, respectively. We compared AoLC with two other metrics (i.e. BLEU and CoD) on three different datasets (i.e. En-De, Zh-En and Ja-En). As shown in Table 2, KD is able to improve the translation quality of NAT models (BLEU: KD(BIG) > KD(BASE) > Raw) by increasing the lexical choice accuracy of the data (AoLC: KD(BIG) > KD(BASE) > Raw). As expected, NAT models trained on more deterministic data (CoD↓) make fewer lexical choice errors (AoLC↑) globally, resulting in better generation performance (BLEU↑)." }, { "heading": "3.3 DISCREPANCY BETWEEN HIGH- AND LOW-FREQUENCY WORDS ON LEXICAL CHOICE", "text": "To better understand the more detailed lexical changes within the data caused by distillation, we break the lexicons down into three frequency categories and revisit the effect from two angles: the training data and the translated data.
We first visualize how the training data changes when adopting KD in terms of word-frequency density. As shown in Figure 2, we find that the kurtosis of the KD data distribution is higher than that of the raw data, and this becomes more pronounced with a stronger teacher. The side effect is obvious: originally high-frequency words become even more frequent and low-frequency words become rarer, making the distribution of the training data more imbalanced and skewed, which is known to be problematic in the data mining field (Chawla et al., 2004). This discrepancy may erode the translation performance on low-frequency words as well as the generalization performance on other domains. Here we focus on low-frequency words and leave the degradation of generalization performance to future work.
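The word-frequency-density comparison behind Figure 2 amounts to simple corpus statistics. Below is a sketch under the assumption that each corpus is a list of whitespace-tokenized target sentences; the toy corpora are placeholders, not the actual WMT data.

    from collections import Counter

    def frequency_density(corpus):
        """Relative frequency of each token in a tokenized corpus."""
        counts = Counter(tok for sent in corpus for tok in sent.split())
        total = sum(counts.values())
        return {tok: c / total for tok, c in counts.items()}

    raw_targets = ["ein kleines Beispiel", "noch ein Beispiel"]  # toy stand-ins
    kd_targets = ["ein Beispiel", "ein Beispiel"]
    raw_density = frequency_density(raw_targets)
    kd_density = frequency_density(kd_targets)
    # Plotting histograms of these densities shows how KD sharpens the
    # distribution: high-frequency tokens gain mass, low-frequency tokens lose it.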
Here we focus on lowfrequency words, and generalization performance degradation will be exploited in future work.\nIn order to understand the detailed change during inference, we then analyze the lexical accuracy with different frequencies in the test set. We make the comprehensive comparison cross languages based on our proposed AoLC. As shown in Figure 3, as the teacher model becomes better, i.e. KD(base)→KD(big), the lexical choice of high-\nfrequency words becomes significantly more accurate (AoLC ↑) while that of low-frequency words becomes worse (AoLC ↓). Through fine-grained analysis, we uncover this interesting discrepancy between high- and low- frequency words. The same phenomena (lexical choice errors on lowfrequency words propagated from teacher model) also can be found in general cases, e.g. distillation when training smaller AT models. Details can be found in Appendix A.2. To keep the accuracy of high-frequency words and compensate for the imbalanced low-frequency words caused by KD, we present a simple yet effective approach below.\nFrequency\n(% ) o f V oc ab ul ar y 0 20 40 60 80 100\nEn-De Zh-En Ja-En\nHigh Med. Low\nEn-De\nA cc\nur ac\ny (%\n)\n60\n65\n70\n75\n80\n85\nRaw KD (Base) KD (Big)\nZh-En\nA cc\nur ac\ny (%\n)\n60\n65\n70\n75\n80\n85\nRaw KD (Base) KD (Big)\nHigh Medium Low\nJa-En\nA cc\nur ac\ny (%\n)\n60\n65\n70\n75\n80\n85\nRaw KD (Base) KD (Big)\n(a) En-De\nFrequency\n(% ) o\nf V oc\nab ul\nar y\n0\n20\n40\n60\n80\n100\nEn-De Zh-En Ja-En\nHigh Med. Low\nEn-De\nA cc\nur ac\ny (%\n)\n60\n65\n70\n75\n80\n85\nRaw KD (Base) KD (Big)\nZh-En\nA cc\nur ac\ny (%\n)\n60\n65\n70\n75\n80\n85\nRaw KD (Base) KD (Big)\nHigh Medium Low\nJa-En\nA cc\nur ac\ny (%\n)\n60\n65\n70\n75\n80\n85\nRaw KD (Base) KD (Big)\n(b) Zh-En\nFrequency\n(% ) o\nf V oc\nab ul\nar y\n0\n20\n40\n60\n80\n100\nEn-De Zh-En Ja-En\nHigh Med. Low\nEn-De\nA cc\nur ac\ny (%\n)\n60\n65\n70\n75\n80\n85\nRaw KD (Base) KD (Big\nZh-En\nA cc\nur ac\ny (%\n)\n60\n65\n70\n75\n80\n85\nRaw KD (Base) KD (Big\nHigh Medium Low\nJa-En\nA cc\nur ac\ny (%\n)\n60\n65\n70\n75\n80\n85\nRaw KD (Base) KD (Big\n(c) Ja-En\nFigure 3: Accuracy of lexical choice (AoLC) for source words of different frequency." }, { "heading": "4 IMPROVING LEXICAL CHOICE IN NAT MODELS", "text": "" }, { "heading": "4.1 METHODOLOGY", "text": "Our goal is to augment NAT models to learn needed lexical choices from the raw data to achieve better performance. To this end, we introduce an extra bilingual data-dependent prior objective to augment the current NAT models to distill the required lexical choices from the raw data. Specifically, we use Kullback-Leibler divergence to guide the probability distribution of model predictions PM (e|f) to match the prior probability distributions Q(·):\nLprior = − ∑ e∈e KL ( Q(e|f) || PM (e|f) ) (4)\nwhere f is the source sentence, and e is the target sentence. The bilingual prior distribution Q(·) is derived from the raw data, which is independent of the model M and will be described later. The final objective for training the NAT model becomes:\nL = (1− λ)LNAT + λLprior (5)\nin which the imitation rate λ follows the logarithmic decay function:\nλ(i) =\n{ log(I/(2(i+1)))\nlog(I/2) i ≤ I/2 0 others\n(6)\nwhere i is the current step, I is the total training step for distilled data. Accordingly, the NAT model is merely fed with the priori knowledge derived from the raw data at beginning. 
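The decay schedule of Eq. (6) and the mixed objective of Eq. (5) translate directly into code. A minimal sketch, where step and total_steps play the roles of i and I; the per-batch losses are assumed to be computed elsewhere.

    import math

    def imitation_rate(step, total_steps):
        """Logarithmic decay of Eq. (6): 1 at step 0, reaching 0 near total_steps/2."""
        if step <= total_steps // 2:
            return math.log(total_steps / (2 * (step + 1))) / math.log(total_steps / 2)
        return 0.0

    # Eq. (5): total loss, given nat_loss (standard NAT NLL) and prior_loss (Eq. 4)
    # lam = imitation_rate(step, total_steps)
    # loss = (1 - lam) * nat_loss + lam * prior_loss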
Choices of Prior Distribution Q(\cdot) The goal of the prior objective is to guide NAT models to learn to distill the lexical choices themselves from the raw data. For each target word e, we use the external word alignment to select the source word f with the maximum alignment probability, and Q(\cdot) is rewritten as:

Q(\mathbf{e}|\mathbf{f}) = Q(e|f).  (7)

Specifically, we use two types of bilingual prior distributions (a minimal sketch of both follows at the end of this subsection):
• Word Alignment Distribution (WAD) is the distribution derived from the external word alignment P^D_f = \{P_D(e_1|f), \ldots, P_D(e_N|f)\}, where \{e_1, \ldots, e_N\} is the set of target words aligned to the source word in the training data. We follow Hinton et al. (2015) and use the softmax temperature mechanism to map P^D_f over the whole target vocabulary:

Q(e|f) = \hat{P}^D_f = \frac{\exp(P^D_f/\tau)}{\sum_{V_{tgt}} \exp(P^D_f/\tau)}.  (8)

We tuned the temperature over [0.5, 1, 2, 5] on the WMT14 En-De dataset and use \tau = 2 as the default setting for incorporating the word alignment distribution on all datasets.
• Self-Distilled Distribution (SDD) is the probability distribution for the source word f produced by the same NAT model pre-trained on the raw data. Specifically, the model M takes a source word f as input and produces a probability distribution over all words in the target vocabulary:

P^M_f = \{P_M(e_1|f), \ldots, P_M(e_{|V_{trg}|}|f)\}.  (9)

This prior signal can be characterized as lexicon-level self-distillation, in the spirit of “born-again networks” (Furlanello et al., 2018) or self-knowledge distillation (Liu et al., 2020), where the teacher and the student share the same neural architecture and model size, and yet, surprisingly, the student is able to surpass the teacher's accuracy.
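Both priors are distributions over the target vocabulary for a given source word. The sketch below assumes that alignment probabilities (for WAD) and pre-trained model probabilities (for SDD) are already available as vectors over the target vocabulary; the variable names are ours, not from the released code.

    import numpy as np

    def word_alignment_prior(align_probs, tau=2.0):
        """Eq. (8): temperature-sharpened softmax over the target vocabulary."""
        logits = np.asarray(align_probs, dtype=np.float64) / tau
        exp = np.exp(logits - logits.max())  # max-shift: stable, same distribution
        return exp / exp.sum()

    def combined_prior(wad, sdd):
        """Simple average of the two priors, as in the '+Both' variant."""
        return 0.5 * (np.asarray(wad) + np.asarray(sdd))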
" }, { "heading": "Table 4 (results grouped into AT Models, Existing NAT Models and Our NAT Models)", "text": "" }, { "heading": "4.2 EXPERIMENTAL RESULTS", "text": "Ablation Study on Raw Data Prior Table 3 shows the results of our two proposed bilingual data-dependent prior distributions across language pairs. The word alignment distribution (WAD) and self-distilled distribution (SDD) variants consistently improve performance over the vanilla two-step training scheme of the NAT model (“NAT+KD”) when used individually (+0.5 BLEU points on average), and combining them (“+Both”) by simply averaging the two distributions achieves a further improvement (+0.9 BLEU points on average). The improvements in translation performance are due to an increase in AoLC, especially for low-frequency tokens (+3.2 on average), which reconfirms our claim. Notably, averaging the two prior distributions allows them to rectify each other, leading to a further increase; we explore the complementarity of the two prior schemes in Section 4.3. In the following experiments, we use the combination of WAD and SDD as the default bilingual data-dependent prior.
Comparison with Previous Work Table 4 lists the results of previous competitive studies (Gu et al., 2018; Lee et al., 2018; Kasai et al., 2020; Ghazvininejad et al., 2019; Gu et al., 2019) on the widely-used WMT14 En-De and WMT16 Ro-En datasets. Clearly, our bilingual data-dependent prior significantly improves translation (BLEU↑) by substantially increasing the lexical choice accuracy (AoLC↑). It is worth noting that our approach merely modifies the training process and thus does not increase any latency (“Speed”), maintaining the intrinsic advantages of non-autoregressive generation.

Frequency   En-De    Zh-En    Ja-En
High        +1.3%    +0.3%    +1.3%
Medium      +0.2%    +0.1%    +0.9%
Low         +5.9%    +5.8%    +3.3%
All         +2.4%    +1.8%    +1.7%
Table 7: Improvement of our approach over the MaskT+KD model on AoLC.

Model    En-De    Zh-En    Ja-En
NAT      10.3%     6.7%     9.4%
+KD       7.6%     4.2%     6.9%
+Ours     9.8%     6.1%     8.5%
Table 8: Ratio of low-frequency target words in the translations generated by the MaskT model." }, { "heading": "Comparison with Data Manipulation Strategies", "text": "Instead of using the proposed priors, we also investigate two effective data manipulation strategies, i.e. data mixing and curriculum learning, to force the NAT model to learn from both the raw and the distilled data. For data mixing, we design two settings: a) Mix: simply combine the raw and distilled data, and then shuffle the mixed dataset; b) Tagged Mix: inspired by the success of tagged back-translation (Caswell et al., 2019; Marie et al., 2020), we add tags to distinguish between KD and raw sentences in the mixed dataset. For the decay curriculum schedule, the NAT models learn more from the raw data at the beginning and then learn more from the KD data as training goes on; the details of the curriculum can be found in Appendix A.3. As seen in Table 5, data mixing and the decay curriculum schedule improve performance on both AoLC and BLEU, which confirms the necessity of exposing the raw data to NAT models during training. Moreover, our approach still outperforms these effective strategies, demonstrating the superiority of our learning scheme.
4.3 EXPERIMENTAL ANALYSIS
In this section, we conduct extensive analyses of lexical choice to better understand our approach. Unless otherwise stated, results are reported for the MaskPredict models in Table 3.
Our approach improves translation performance by reducing mis-translation errors. The lexical choice ability of NAT models correlates with mis-translation errors, in which wrong lexicons are chosen to translate source words. To better understand whether our method alleviates the mis-translation problem, we assessed system output by human judgment. In particular, we randomly selected 50 sentences from the Zh-En test set and manually labelled the words with lexical choice errors. We define the lexical choice error rate as E/N, where E is the number of lexical choice errors and N is the number of content words in the source sentences, since such errors mainly occur in translating content words. As seen in Table 6, our approach consistently improves BLEU scores by reducing the lexical choice errors, which confirms our claim. Additionally, the AoLC metric correlates well with both the automatic BLEU score and the subjective evaluation, demonstrating its reasonableness.
Our approach significantly improves the accuracy of lexical choice for low-frequency source words. Following the discrepancy between high- and low-frequency words discussed in Section 3.3, we examine the fine-grained lexical choice accuracy w.r.t. our proposed AoLC. In Table 7, the majority of the improvement comes from the low-frequency words, confirming our hypothesis.
Our approach generates translations that contain more low-frequency words. Besides improving the lexical choice of low-frequency words, our method results in more low-frequency words being recalled in the translation. In Table 8, although KD improves the translation, it biases the NAT model towards generating high-frequency tokens (low freq.↓), while our method not only corrects this bias (+32% relative change on average) but also enhances translation (BLEU↑ in Table 3).
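The statistic reported in Table 8 is straightforward to compute, given a predefined set of low-frequency vocabulary items; a generic sketch (the set itself would come from training-corpus counts):

    def low_freq_ratio(translations, low_freq_vocab):
        """Fraction of generated target tokens that fall in the low-frequency set."""
        tokens = [tok for sent in translations for tok in sent.split()]
        return sum(tok in low_freq_vocab for tok in tokens) / max(len(tokens), 1)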
Our proposed two priors complement each other by facilitating different tokens. As shown in Table 3, combining the two individual schemes further increases NAT performance. To explain how they complement each other, especially for low-frequency tokens, we classify low-frequency tokens into two categories according to their linguistic roles: content words (e.g. nouns, verbs, and adjectives) and function words (e.g. prepositions, determiners, and punctuation). The results are listed in Table 9. We show that WAD contributes more to the understanding and generation of content tokens, while SDD brings more gains for function (i.e. content-free) tokens. We leave a more thorough exploration of this aspect for future work.
Effect of Word Alignment Quality on Model Performance. Both the proposed AoLC and the priors depend heavily on the quality of word alignment; we therefore design two weaker alignment scenarios to verify the robustness of our method.
First, we adopt fast-align (Dyer et al., 2013), which is slightly weaker than GIZA++. Using fast-align, our method can still achieve +0.6 and +0.7 BLEU improvements on the En-De and Zh-En datasets, which is only marginally lower than with GIZA++ (i.e. +0.8 and +1.0 BLEU). Encouragingly, we find that the improvements in translation accuracy on low-frequency words still hold (+5.5% and +5.3% vs. +5.9% and +5.8%), which demonstrates the robustness of our approach.
In addition, we insert noise into the alignment distributions to deliberately reduce the alignment quality (noise injection details can be found in Appendix A.4). The performance still significantly outperforms the baseline, indicating that our method can tolerate alignment errors and maintain model performance to some extent.
Effect of AT Teacher To further dissect the effects of different AT teachers, we employ three teachers. Table 10 shows that our method can enhance NAT models under a variety of teacher-student scenarios, including base, big and strong teacher-guided models. Our approach obtains +0.7 BLEU points on average and is potentially complementary to the majority of existing work on improving knowledge distillation for NAT models." }, { "heading": "5 RELATED WORK", "text": "Understanding Knowledge Distillation for NAT Knowledge distillation is a crucial early step in the training of most NAT models. Ren et al. (2020) reveal that the difficulty of NAT heavily depends on the strength of the dependencies among target tokens, and that knowledge distillation reduces the token dependency in the target sequence and thus improves the accuracy of NAT models. In the pioneering work on NAT, Gu et al. (2018) claim that NAT suffers from the multi-modality problem (i.e. multiple lexical translations for a source word) and that knowledge distillation can simplify the dataset, which is empirically validated by Zhou et al. (2020). We confirm and extend these results, showing that the AT-distilled dataset indeed leads to more deterministic predictions but propagates the lexical choice errors on low-frequency words. 
To this end, we enhance the NAT lexical predictions by making the models learn to distill lexical knowledge from the raw data by themselves.
Lexical Choice Problem in NMT Models Benefiting from continuous representations abstracted from the training data, NMT models have advanced the state of the art in the machine translation community. However, recent studies have revealed that NMT models suffer from inadequate translation (Tu et al., 2016), and mis-translation errors caused by the lexical choice problem are one main reason. For AT models, Arthur et al. (2016) alleviate this issue by integrating a count-based lexicon, and Nguyen & Chiang (2018) propose an additional lexical model which is jointly trained with the AT model. The lexical choice problem is more serious for NAT models, since 1) the lexical choice errors (on low-frequency words in particular) of AT distillation propagate to NAT models; and 2) NAT lacks target-side dependencies and thus misses necessary target-side context. In this work, we alleviate this problem by addressing the first challenge." }, { "heading": "6 CONCLUSION", "text": "In this study, we investigated the effects of KD on lexical choice in NAT. We proposed a new metric to evaluate the lexical translation accuracy of NAT models, and found that 1) KD improves global lexical predictions; and 2) KD benefits the accuracy on high-frequency words but harms that on low-frequency ones. A discrepancy between high- and low-frequency words therefore arises after adopting KD. To bridge this discrepancy, we exposed the useful information in the raw data to the training of NAT models. Experiments show that our approach consistently and significantly improves translation performance across language pairs and model architectures. Extensive analyses reveal that our method reduces mis-translation errors, improves the accuracy of lexical choice for low-frequency source words, and recalls more low-frequency words in the translations, which confirms our claim." }, { "heading": "7 ACKNOWLEDGMENTS", "text": "This work was supported by Australian Research Council Projects under grants FL-170100117, DP-180103424, and IC-190100031. Xuebo and Derek were supported in part by the Science and Technology Development Fund, Macau SAR (Grant No. 0101/2019/A2), and the Multi-year Research Grant from the University of Macau (Grant No. MYRG2020-00054-FST). We also thank the anonymous reviewers for their insightful comments." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 DISCUSSION ON THE RELIABILITY OF WORD ALIGNMENT-BASED AOLC", "text": "We randomly select 20 sentence pairs from the Zh-En test set, which contain 576 source tokens. We use the trained word alignment model to produce alignments for the 20 sentence pairs, and then perform the gold-word selection procedure described in Section 3.1. We manually evaluate these bilingual lexicons, and find that 551 out of the 576 source words are aligned to reasonable equivalents (i.e. 96% accuracy). This demonstrates that it is reliable to calculate AoLC based on automatic word alignments." }, { "heading": "A.2 GENERAL CASES OF THE SIDE-EFFECT OF KNOWLEDGE DISTILLATION", "text": "To verify the universality of our finding that lexical choice errors propagate from the teacher model, we conduct the following experiments.
In particular, we experiment with AT-Base and AT-Small models on the En-De data, which are distilled by the AT-Strong model. Note that the AT-Small model consists of 256 model dimensions, 4 heads, 3 encoder layers and 3 decoder layers. 
As shown in Table 11, the same phenomena can be found in AT models when distillation is used. We leave a thorough exploration of this aspect for future work." }, { "heading": "Table 11 (columns: Model, BLEU, AoLC on LFT, Ratio of LFT)", "text": "" }, { "heading": "A.3 DECAY CURRICULUM SETUP", "text": "Specifically, the training process is divided into 5 phases, which differ in the constitution of the training data. At Phase 1, all training examples are from the raw data; at Phase 2, 75% of the training examples are from the raw data and the other 25% are from the distilled data (note that the two kinds of training examples should cover all source sentences). Similarly, the constituent ratios at the later phases are (50%, 50%), (25%, 75%), and (0%, 100%)." }, { "heading": "A.4 NOISE INJECTION SETUP", "text": "We swap the maximal-probability tokens with other random tokens at a change ratio of N%. With 2% and 5% noise, our method decreases by only -0.1 and -0.2 BLEU points on En-De, respectively. The improvements in translation accuracy on low-frequency words are +5.7% and +5.3%, which is comparable to the non-noisy setting (i.e. +5.9%)." } ]
2021
null
SP:21d29b68bb3e7cf18e699a98f7be35f9e12bdaaf
[ "This paper proposed a new regularization method via patch level interpolation. During the training, images within a batch will be used to construct an image graph. For example, for a certain image, its nearest neighbors in the feature spaces will be used. Then patches from its neighbors will be used to interpolate to each patch in that given image. Thus a straightforward application for such regularization is semi-supervised training. Moreover, in this paper it has demonstrated such regularization can be extended with virtual adversarial training and mixup training. ", "The paper proposes a general regularizer called the Patch-level Neighborhood Interpolation (Pani) that constructs patch-level graphs at different levels of neural networks. Specifically, it is based on the k-nearest patch neighbors at each layer and linear interpolation for each patch. By applying this proposed regularizer framework into two special cases and get Pani VAT and Pani MixUp. Numerical experiments are comprehensive and convincing. " ]
Regularization plays a crucial role in machine learning models, especially for deep neural networks. The existing regularization techniques mainly rely on the i.i.d. assumption and only consider knowledge from the current sample, without leveraging the neighboring relationships between samples. In this work, we propose a general regularizer called Patch-level Neighborhood Interpolation (Pani) that constructs a non-local representation in the computation of the network. Our proposal explicitly constructs patch-level graphs in different network layers and then linearly interpolates neighborhood patch features, serving as a general and effective regularization strategy. Further, we customize our approach into two kinds of popular regularization methods, namely Virtual Adversarial Training (VAT) and MixUp as well as its variants. The first derived method, Pani VAT, presents a novel way to construct non-local adversarial smoothness by employing patch-level interpolated perturbations. The second derived method, Pani MixUp, extends the original MixUp regularization and its variant to the Pani version, achieving a significant improvement in performance. Finally, extensive experiments are conducted to verify the effectiveness of our Patch-level Neighborhood Interpolation approach in both supervised and semi-supervised settings.
[]
[ { "authors": [ "David Berthelot", "Nicholas Carlini", "Ekin D Cubuk", "Alex Kurakin", "Kihyuk Sohn", "Han Zhang", "Colin Raffel" ], "title": "Remixmatch: Semi-supervised learning with distribution alignment and augmentation anchoring", "venue": "arXiv preprint arXiv:1911.09785,", "year": 2019 }, { "authors": [ "David Berthelot", "Nicholas Carlini", "Ian Goodfellow", "Nicolas Papernot", "Avital Oliver", "Colin Raffel" ], "title": "Mixmatch: A holistic approach to semi-supervised learning", "venue": "Conference on Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Antoni Buades", "Bartomeu Coll", "J-M Morel" ], "title": "A non-local algorithm for image denoising", "venue": "IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05),", "year": 2005 }, { "authors": [ "Zihang Dai", "Zhilin Yang", "Fan Yang", "William W Cohen", "Ruslan R Salakhutdinov" ], "title": "Good semisupervised learning that requires a bad gan", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Vincent Dumoulin", "Ishmael Belghazi", "Ben Poole", "Olivier Mastropietro", "Alex Lamb", "Martin Arjovsky", "Aaron Courville" ], "title": "Adversarially learned inference", "venue": "arXiv preprint arXiv:1606.00704,", "year": 2016 }, { "authors": [ "Murat Dundar", "Balaji Krishnapuram", "Jinbo Bi", "R Bharat Rao" ], "title": "Learning classifiers when the training data is not iid", "venue": "In IJCAI,", "year": 2007 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Geoffrey E Hinton", "Nitish Srivastava", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan R Salakhutdinov" ], "title": "Improving neural networks by preventing co-adaptation of feature detectors", "venue": "arXiv preprint arXiv:1207.0580,", "year": 2012 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Konstantinos Kamnitsas", "Daniel C Castro", "Loic Le Folgoc", "Ian Walker", "Ryutaro Tanno", "Daniel Rueckert", "Ben Glocker", "Antonio Criminisi", "Aditya Nori" ], "title": "Semi-supervised learning via compact latent space clustering", "venue": "arXiv preprint arXiv:1806.02679,", "year": 2018 }, { "authors": [ "Thomas N Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Abhishek Kumar", "Prasanna Sattigeri", "Tom Fletcher" ], "title": "Semi-supervised learning with gans: Manifold invariance with improved inference", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Samuli Laine", "Timo Aila" ], "title": "Temporal ensembling for semi-supervised learning", "venue": "arXiv preprint arXiv:1610.02242,", "year": 2016 }, { "authors": [ "Bruno Lecouat", "Chuan-Sheng Foo", "Houssam Zenati", "Vijay R Chandrasekhar" ], "title": "Semi-supervised learning with gans: Revisiting manifold regularization", "venue": "arXiv preprint arXiv:1805.08957,", "year": 2018 }, { "authors": [ "Dong-Hyun Lee" ], "title": "Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks", "venue": "In 
Workshop on Challenges in Representation Learning, ICML,", "year": 2013 }, { "authors": [ "Chongxuan Li", "Kun Xu", "Jun Zhu", "Bo Zhang" ], "title": "Triple generative adversarial nets", "venue": "arXiv preprint arXiv:1703.02291,", "year": 2017 }, { "authors": [ "Yucen Luo", "Jun Zhu", "Mengxi Li", "Yong Ren", "Bo Zhang" ], "title": "Smooth neighbors on teacher graphs for semi-supervised learning", "venue": "arXiv preprint arXiv:1711.00258,", "year": 2017 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Takeru Miyato", "Andrew M Dai", "Ian Goodfellow" ], "title": "Adversarial training methods for semisupervised text classification", "venue": "International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Takeru Miyato", "Shin-ichi Maeda", "Masanori Koyama", "Shin Ishii" ], "title": "Virtual adversarial training: a regularization method for supervised and semi-supervised learning", "venue": "arXiv preprint arXiv:1704.03976,", "year": 2017 }, { "authors": [ "Takeru Miyato", "Shin-ichi Maeda", "Masanori Koyama", "Shin Ishii" ], "title": "Virtual adversarial training: a regularization method for supervised and semi-supervised learning", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2018 }, { "authors": [ "Guo-Jun Qi", "Liheng Zhang", "Hao Hu", "Marzieh Edraki", "Jingdong Wang", "Xian-Sheng Hua" ], "title": "Global versus localized generative adversarial nets", "venue": "In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition", "year": 2018 }, { "authors": [ "Tim Salimans", "Ian Goodfellow", "Wojciech Zaremba", "Vicki Cheung", "Alec Radford", "Xi Chen" ], "title": "Improved techniques for training gans", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Nir Sochen", "Ron Kimmel", "Ravi Malladi" ], "title": "A general framework for low level vision", "venue": "IEEE transactions on image processing,", "year": 1998 }, { "authors": [ "Ke Sun", "Zhouchen Lin", "Hantao Guo", "Zhanxing Zhu" ], "title": "Virtual adversarial training on graph convolutional networks in node classification", "venue": "In Chinese Conference on Pattern Recognition and Computer Vision (PRCV),", "year": 2019 }, { "authors": [ "Jan Svoboda", "Jonathan Masci", "Federico Monti", "Michael M Bronstein", "Leonidas Guibas" ], "title": "Peernets: Exploiting peer wisdom against adversarial attacks", "venue": "International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Antti Tarvainen", "Harri Valpola" ], "title": "Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Carlo Tomasi", "Roberto Manduchi" ], "title": "Bilateral filtering for gray and color images", "venue": "In Sixth international conference on computer vision (IEEE Cat. No. 
98CH36271),", "year": 1998 }, { "authors": [ "Dimitris Tsipras", "Shibani Santurkar", "Logan Engstrom", "Alexander Turner", "Aleksander Madry" ], "title": "Robustness may be at odds with accuracy", "venue": "International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Vladimir N Vapnik", "A Ya Chervonenkis" ], "title": "On the uniform convergence of relative frequencies of events to their probabilities", "venue": "In Measures of complexity,", "year": 2015 }, { "authors": [ "Xiaolong Wang", "Ross Girshick", "Abhinav Gupta", "Kaiming He" ], "title": "Non-local neural networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Bing Yu", "Jingfeng Wu", "Jinwen Ma", "Zhanxing Zhu" ], "title": "Tangent-normal adversarial regularization for semi-supervised learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Understanding deep learning requires rethinking generalization", "venue": "International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Hongyang Zhang", "Yaodong Yu", "Jiantao Jiao", "Eric P Xing", "Laurent El Ghaoui", "Michael I Jordan" ], "title": "Theoretically principled trade-off between robustness and accuracy", "venue": "International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Hongyi Zhang", "Moustapha Cisse", "Yann N Dauphin", "David Lopez-Paz" ], "title": "mixup: Beyond empirical risk minimization", "venue": "Conference on Neural Information Processing Systems,", "year": 2017 } ]
[ { "heading": null, "text": "Regularization plays a crucial role in machine learning models, especially for deep neural networks. The existing regularization techniques mainly rely on the i.i.d. assumption and only consider the knowledge from the current sample, without the leverage of neighboring relationship between samples. In this work, we propose a general regularizer called Patch-level Neighborhood Interpolation (Pani) that conducts a non-local representation in the computation of network. Our proposal explicitly constructs patch-level graphs in different network layers and then linearly interpolates neighborhood patch features, serving as a general and effective regularization strategy. Further, we customize our approach into two kinds of popular regularization methods, namely Virtual Adversarial Training (VAT) and MixUp as well as its variants. The first derived Pani VAT presents a novel way to construct non-local adversarial smoothness by employing patch-level interpolated perturbations. In addition, the second derived Pani MixUp method extends the original MixUp regularization and its variant to the Pani version, achieving a significant improvement in the performance. Finally, extensive experiments are conducted to verify the effectiveness of our Patch-level Neighborhood Interpolation approach in both supervised and semi-supervised settings." }, { "heading": "1 INTRODUCTION", "text": "In the statistical learning theory, regularization techniques are typically leveraged to achieve the trade-off between empirical error minimization and the control of model complexity (Vapnik & Chervonenkis, 2015). In contrast to the classical convex empirical risk minimization where regularization can rule out trivial solutions, regularization plays a rather different role in deep learning due to its highly non-convex optimization nature (Zhang et al., 2016). Among all the explicit and implicit regularization, regularization with stochastic transformation, perturbations and randomness, such as adversarial training (Goodfellow et al., 2014), dropout and MixUp (Zhang et al., 2017), play a key role in the deep learning models due to their superiority in the performance (Berthelot et al., 2019b; Zhang et al., 2017; Miyato et al., 2018; Berthelot et al., 2019a). In this section, we firstly review two kinds of effective and prestigious regularization branches for deep neural networks, which can elegantly generalize from supervised learning to semi-supervised setting.\nAdversarial Training (Goodfellow et al., 2014; Madry et al., 2017) can provide an additional regularization beyond that provided by other generic regularization strategies, such as dropout, pretraining and model averaging. However, recent works (Zhang et al., 2019; Tsipras et al., 2018) demonstrated that this kind of training method holds a trade-off between the robustness and accuracy, limiting the efficacy of the adversarial regularization. Besides, Virtual Adversarial Training (VAT) (Miyato et al., 2018) can be regarded as a natural extension of adversarial training to semi-supervised setting through adversarially smoothing the posterior output distribution with the leverage of unlabeled data. This strategy has achieved great success in image classification (Miyato et al., 2018), text classification (Miyato et al., 2016) and node classification (Sun et al., 2019). 
Tangent-Normal Adversarial Regularization (TNAR) (Yu et al., 2019) extended VAT by taking the data manifold into consideration and applying VAT along the tangent space and the orthogonal normal space of the data manifold, outperforming previous semi-supervised approaches.
MixUp (Zhang et al., 2017) augments the training data by incorporating the prior knowledge that linear interpolation of input vectors should lead to linear interpolation of the associated targets, accomplishing consistent improvements of generalization on image, speech and tabular data. MixMatch (Berthelot et al., 2019b) extended MixUp to semi-supervised tasks by guessing low-entropy labels for data-augmented unlabeled examples and mixing labeled and unlabeled data using MixUp. In contrast with VAT, MixMatch (Berthelot et al., 2019b) utilizes one specific form of consistency regularization, i.e., using standard data augmentation for images, such as random horizontal flips, rather than computing adversarial perturbations to smooth the posterior distribution of the classifier.
Nevertheless, the vast majority of regularization methods, including the aforementioned approaches, assume that the training samples are drawn independently and identically from an unknown data-generating distribution. For instance, Support Vector Machines (SVM), Back-Propagation (BP) for neural networks, and many other common algorithms implicitly make this assumption as part of their derivation. However, this i.i.d. assumption is commonly violated in realistic scenarios where batches or sub-groups of training samples are likely to have internal correlations. In particular, Dundar et al. (2007) demonstrated that accounting for the correlations in real-world training data leads to statistically significant improvements in accuracy. Similarly, Peer-Regularized Networks (PeerNet) (Svoboda et al., 2018) applied graph convolutions (Velickovic et al., 2017; Kipf & Welling, 2016) to harness information from peer samples, and verified their effectiveness in defending against adversarial attacks. Motivated by these facts, we aim to design a general regularization strategy that fully utilizes the internal relationships between samples by explicitly constructing a graph within a mini-batch, in order to consistently improve the generalization of deep neural networks in both semi-supervised and supervised settings.
In this paper, we propose Patch-level Neighborhood Interpolation (Pani) for deep neural networks, serving as a simple yet effective non-local regularization. We first construct a patch-level graph in each mini-batch during the stochastic gradient descent training process. Then we apply linear interpolation on the neighboring patch features, and the resulting non-local representation additionally captures the relationships between neighboring patch features in different layers, serving as a general and effective regularization. Furthermore, to prove the generality and superiority of our Pani method, we explicitly customize our approach into two kinds of popular and general regularization strategies, i.e., Virtual Adversarial Regularization and MixUp, resulting in Pani VAT and Pani MixUp. For Pani VAT, we reformulate the construction of adversarial perturbations so that, instead of depending solely on the current sample, they are formed by linear interpolation of neighboring patch features. 
These non-local adversarial perturbations can leverage the information of neighboring correlations from all samples within a batch, providing more informative adversarial smoothness in the semi-supervised setting. Besides, in Pani MixUp, we extend MixUp and its variant MixMatch from the image level to the patch level by mixing fine-grained patch features and the corresponding supervised signals. Finally, we conduct extensive experiments to demonstrate that both derived regularization strategies can outperform other state-of-the-art approaches in both supervised and semi-supervised tasks. More importantly, these successful cases verify the generality and superiority of our Patch-level Neighborhood Interpolation method. Our contributions can be summarized as follows:
• We propose a general interpolation strategy in either input or feature space, i.e., Patch-level Neighborhood Interpolation, helping to improve the generalization of deep neural networks in both semi-supervised and supervised scenarios. This strategy can serve as an effective graph-based representation method and has much potential to be leveraged in a wider range of tasks.
• Based on our method, the customized approaches Pani VAT and Pani MixUp as well as Pani MixMatch boost the generalization performance significantly and thus provide guidance for the deployment of our Pani strategy into more regularization methods." }, { "heading": "2 OUR METHOD: PATCH-LEVEL NEIGHBORHOOD INTERPOLATION", "text": "Before introducing our approach, we recommend that readers review the preliminaries on VAT (Miyato et al., 2017), MixUp (Zhang et al., 2017), and PeerNet (Svoboda et al., 2018) in Appendix A. One related work is PeerNet (Svoboda et al., 2018), which designed graph-based layers to defend against adversarial attacks; unfortunately, the construction of pixel-level K-NN graphs in PeerNet (Svoboda et al., 2018) is computationally costly. By contrast, our motivation is to develop a general regularization that can consistently boost the performance of
deep neural networks in both semi-supervised and supervised settings rather than the adversarial scenario. Besides, the construction of the non-local layer in our method is more flexible and can be determined by the specific objective function, as elaborated in Sections 2.1 and 2.2. Moreover, our patch-level method achieves a computational advantage over pixel-level regularization and incorporates more meaningful semantic correlations in different layers. In particular, a flexible patch size can be chosen according to the size of the receptive field in different layers, yielding a more informative graph-based representation and better regularization performance.
Concretely, as illustrated for our Patch-level Neighborhood Interpolation (Pani) in Figure 1, in the first step we determine the candidate peer image set Si for each image i. This can be achieved by random matching or by computing the semantically nearest image neighbors using, e.g., the cosine distance. Next, we construct the whole patch set Pi from the candidate peer image set Si for each image i by clipping the corresponding patches at different locations of an input or feature map. Following the establishment of the patch set Pi, we construct K-nearest-neighbor patch graphs based on the distance of patch features in order to find the neighbors of each patch in the patch set Pi for all i = 1, ..., N; a code sketch of these construction steps is given below.
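To make the construction above concrete, the following is a minimal PyTorch-style sketch of the patch-set extraction and the K-nearest-neighbor patch graph. The function name, the batch-wide neighbor search (instead of restricting candidates to the peer set Si), and all tensor shapes are our own illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn.functional as F

def build_patch_knn(feature_maps, patch_size=4, k=8):
    """Extract non-overlapping patches from a batch of feature maps and,
    for every patch, find the indices of its k nearest patches (L2 distance).

    feature_maps: (B, C, H, W) input images or intermediate features.
    Returns flattened patches (B*P, C*patch_size**2) and neighbor indices (B*P, k).
    """
    # Sliding-window patch extraction: (B, C*ps*ps, P).
    unfolded = F.unfold(feature_maps, kernel_size=patch_size, stride=patch_size)
    dim = unfolded.size(1)                               # C * patch_size**2
    patches = unfolded.transpose(1, 2).reshape(-1, dim)  # (B*P, dim)
    # Pairwise distances between all patches within the batch.
    dists = torch.cdist(patches, patches)
    dists.fill_diagonal_(float("inf"))                   # exclude the patch itself
    _, nn_idx = dists.topk(k, dim=1, largest=False)
    return patches, nn_idx
```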
Mathematically, following the definition in PeerNet, let $z_p^i$ be the $p$-th patch of the input or feature map $Z^i$ for the $i$-th image within one batch. Then denote the $k$-th nearest patch neighbor of $z_p^i$ as $z_{q_k}^{j_k}$, taken from patch $q_k$ of the peer image $j_k$ in the candidate set $S_i$. Next, in order to leverage the knowledge from neighbors, and in contrast to the graph attention mechanism in PeerNet, we apply a more straightforward linear interpolation on the neighboring patches for the current patch $z_p^i$. The general formulation of our Patch-level Neighborhood Interpolation can then be presented as follows:
$$\tilde{z}_p^i = z_p^i + \sum_{k=1}^{K} \eta^i_{pk} \left(z_{q_k}^{j_k} - z_p^i\right), \qquad (1)$$
where $\eta^i_{pk}$ is the combination coefficient for the $p$-th patch of the $i$-th image w.r.t. its $k$-th patch neighbor, which can be computed through power iteration in the manner of VAT, or through random sampling from a specific distribution in randomness-based regularization, e.g., MixUp and its variants. Moreover, the choice of linear interpolation in Eq. 1 enjoys a substantial computational advantage over the nonlinear GAT form in PeerNet during the computation of the network. Finally, after the patch-level linear interpolation on patch features, we obtain the refined graph-based representation $\tilde{Z}^i$ for the $i$-th image, for all $i = 1, ..., N$. Note that our proposed method can explicitly combine the advantages of manifold regularization and non-local filtering in a flexible way, which we discuss in more detail in Appendix B. Besides, to further demonstrate the generality and effectiveness of our Pani method, we provide Pani versions of two typical regularization strategies, i.e., Virtual Adversarial Training and MixUp as
well as its variant MixMatch, and verify the superiority of our Pani strategy through significant accuracy gains." }, { "heading": "2.1 PANI VAT", "text": "Based on our Patch-level Neighborhood Interpolation framework, we can construct a novel Pani VAT that utilizes the combination or interpolation of patch neighbors for each sample to form non-local perturbations, thus providing more informative adversarial smoothness in the semi-supervised setting. Consider a more general composite form of the classifier $f$, i.e., $f(x) = g(z)$ with $z = h(x)$, where $z$ denotes the hidden feature of the input $x$, or the input itself in the reduced case. Combining the VAT formulation, i.e., Eq. 7 in Appendix A, and the Pani formulation, i.e., Eq. 1, we reformulate our Pani VAT with perturbations on $L$ layers of a deep neural network as follows:
$$\max_{\eta}\ D[g(z), g(\tilde{z}(\eta))] \quad \text{s.t.} \quad \sum_{l=1}^{L} w_l^2 \|\eta^{(l)}\|^2 \le \epsilon^2, \qquad (2)$$
where $D$ measures the divergence between two distributions, $\eta = \{\eta^i_{pk}\}$ denotes the generic perturbations from our Pani method, and $\eta^{(l)}$ indicates the perturbations in the $l$-th layer of the network. $\tilde{z}(\eta) = \{\tilde{z}_p^i\}$ represents the smoothed feature map obtained by applying the perturbation $\eta$ to all patches in the way shown in Eq. 1. In particular, when $L = 1$, adversarial perturbations are only imposed on the input feature, which is similar to traditional virtual adversarial perturbations. Additionally, $w_l$ is a hyper-parameter adjusting the weight of the perturbation $\eta^{(l)}$ in different layers, with the overall perturbations restrained to an $\epsilon$-ball.
Next, we utilize the same power iteration and finite difference technique proposed in VAT (Miyato et al., 2017) to compute the desired perturbation $\eta^*$; a sketch of this computation is given below.
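The following is a minimal sketch of the patch-level interpolation of Eq. 1 together with a single VAT-style power-iteration step for Eq. 2, restricted to one layer. `model_tail` (mapping patch features to logits), the reuse of `build_patch_knn` from the earlier sketch, and the normalization details are illustrative assumptions rather than the authors' reference code.

```python
import torch
import torch.nn.functional as F

def pani_interpolate(patches, nn_idx, eta):
    """Eq. 1: z~_p = z_p + sum_k eta_pk * (z_neighbor_k - z_p).

    patches: (N, D) flattened patch features, nn_idx: (N, K), eta: (N, K).
    """
    neighbors = patches[nn_idx]                   # (N, K, D)
    diffs = neighbors - patches.unsqueeze(1)      # (N, K, D)
    return patches + (eta.unsqueeze(-1) * diffs).sum(dim=1)

def pani_vat_perturbation(model_tail, patches, nn_idx, eps=1.0, xi=1e-6):
    """One power iteration + finite difference to approximate eta* of Eq. 2."""
    eta = torch.randn(patches.size(0), nn_idx.size(1), requires_grad=True)
    with torch.no_grad():
        p = torch.softmax(model_tail(patches), dim=1)  # g(z), held fixed
    logits_hat = model_tail(pani_interpolate(patches, nn_idx, xi * eta))
    kl = F.kl_div(torch.log_softmax(logits_hat, dim=1), p, reduction="batchmean")
    grad = torch.autograd.grad(kl, eta)[0]
    # Rescale the dominant direction to the boundary of the epsilon-ball.
    return eps * grad / (grad.norm(dim=1, keepdim=True) + 1e-12)
```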
The resulting full loss function is then defined as:
$$\min_{\theta}\ \mathcal{L}_0 + \beta\, \mathbb{E}_{x \sim \mathcal{D}}\, \mathcal{R}_{\mathrm{vadv}}(x, \eta^*; \theta), \qquad (3)$$
where $\mathcal{L}_0$ is the original supervised loss and $\beta$ controls the degree of adversarial smoothness. $\mathcal{R}_{\mathrm{vadv}}(x, \eta^*; \theta) = D[g(z), g(\tilde{z}(\eta^*))]$ is obtained after solving the optimization problem in Eq. 2. We describe the implementation details in Algorithm 1.

Algorithm 1: Pani VAT within a Batch
1: Input: neighbors K1 and K2, classifier f, batch size B, perturbed layers L
2: Initialization: combination coefficient η
3: Compute the K1 nearest image neighbors based on the distance of the second-to-last layer output of f and obtain the peer image set Si of K1 (K1 ≤ B) images for each image i.
4: for l = 1 to L do:
5: Compute the patch set Pi over all K1 peer images on layer l for each image i.
6: Construct a K2-nearest patch neighbor graph for each patch in each image i.
7: Conduct Patch-level Neighborhood Interpolation via Eq. 1 for each patch.
8: end for
9: Conduct power iteration and finite difference as in VAT to compute η∗ constrained by Eq. 2.
10: Return Rvadv(x, η∗; θ)

Remark. As shown in the adversarial part of Figure 1, the rationale of our Pani VAT method lies in the fact that the constructed perturbations entail more non-local information coming from the neighbors of the current sample. Through the delicate patch-level interpolation among the neighbors of each patch, the resulting non-local virtual adversarial perturbations are expected to provide more informative smoothness, thus enhancing the performance of the classifier in the semi-supervised setting." }, { "heading": "2.2 PANI MIXUP", "text": "Next, we leverage Patch-level Neighborhood Interpolation to derive Pani MixUp. The core formulation of Pani MixUp can be written as:
$$\tilde{z}_p^i = \Big(1 - \sum_{k=1}^{K}\eta^i_{pk}\Big) z_p^i + \sum_{k=1}^{K}\eta^i_{pk}\, z_{q_k}^{j_k}, \qquad \tilde{y}^i = \Big(1 - \sum_{k=1}^{K}\sum_{p=1}^{P}\frac{\eta^i_{pk}}{P}\Big) y^i + \sum_{k=1}^{K}\sum_{p=1}^{P}\frac{\eta^i_{pk}}{P}\, y^{j_k}, \quad \text{s.t.} \quad \lambda = 1 - \sum_{k=1}^{K}\sum_{p=1}^{P}\frac{\eta^i_{pk}}{P}, \qquad (4)$$
where $(z^i, y^i)$ are feature-target pairs randomly drawn from the training data, $P$ is the number of patches in each image, and $\lambda \sim \mathrm{Beta}(a, b)$ represents the importance of the current input or target while conducting MixUp. To compute $\eta^i_{pk}$, we first sample $\lambda$ from $\mathrm{Beta}(a, b)$ and $\eta^{i,0}_{pk}$ from a uniform distribution, respectively; we then normalize $\eta^{i,0}_{pk}$ according to $\lambda$ to satisfy the constraint in Eq. 4 and thus obtain $\eta^i_{pk}$. It should be noted that due to the asymmetric role of $\lambda$ in our framework, we would in principle need to tune both $a$ and $b$ in our experiments. For simplicity, we fix $b = 1$ and only consider $a$ as the hyper-parameter, paying more attention to the importance of the current patch, which is inspired by a similar choice in MixMatch (Berthelot et al., 2019b). Here we reformulate Eq. 4 to illustrate that Pani MixUp is naturally derived from our Pani framework by additionally considering several constraints:
$$\tilde{z}_p^i = z_p^i + \sum_{k=1}^{K}\eta^i_{pk}\left(z_{q_k}^{j_k} - z_p^i\right) \quad \text{s.t.} \quad \lambda = 1 - \sum_{k=1}^{K}\sum_{p=1}^{P}\frac{\eta^i_{pk}}{P}\ \ \forall i = 1, ..., N, \qquad \lambda \sim \mathrm{Beta}(a, b),\ \ \eta^i_{pk} \in [0, 1]\ \ \forall i, p, k, \qquad (5)$$
where the first constraint in Eq. 5 can be achieved through normalization via $\lambda$. Meanwhile, we impose $\eta^i_{pk} \in [0, 1]$ since $\eta^i_{pk}$ represents an interpolation coefficient. Further, we elaborate the procedure of Pani MixUp in Algorithm 2.

Algorithm 2: Pani MixUp within a Batch
1: Input: neighbors K, classifier f, batch size B, perturbed layers L, parameter a
2: Compute peer images by random matching and obtain the peer image set Si for each image i.
3: for l = 1 to L do:
4: Compute the patch set Pi on layer l for each image i.
5: Construct a K-nearest patch neighbor graph for each patch in each image i.
6: Sample the initial coefficients η(l),0 = {η0ipk} from U(0, 1) and λ from Beta(a, 1).
7: Normalize η(l),0 according to the ratio λ via Eq. 5 to compute η(l).
8: Conduct Pani MixUp over patch features and labels via Eq. 5 for each patch.
9: end for
10: Return supervised loss based on mixed features and labels.

Remark. Different from the role of η in the aforementioned Pani VAT, where η serves as the interpolated perturbation, the physical meaning of η in our Pani MixUp approach is that of the linear interpolation coefficient used to conduct MixUp. Despite this distinction, both extended regularization methods are naturally derived from our Patch-level Neighborhood Interpolation framework, further demonstrating the generality and superiority of our Pani strategy." }, 
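As a concrete illustration of Algorithm 2 for a single layer, the following sketch samples the raw coefficients, normalizes them against λ as required by the constraint in Eq. 5, and mixes patch features and labels. It reuses `pani_interpolate` from the earlier sketch; the shapes, helper names, and one-hot label handling are our own illustrative assumptions.

```python
import torch
from torch.distributions import Beta

def pani_mixup(patches, labels_onehot, nn_idx, peer_img_idx, num_patches, a=1.0):
    """Patch-level MixUp (Eqs. 4/5) for one layer.

    patches: (N*P, D) flattened patch features, labels_onehot: (N, C),
    nn_idx: (N*P, K) patch neighbor indices,
    peer_img_idx: (N, K) index of the peer image each neighbor k comes from.
    """
    N, K = peer_img_idx.shape
    lam = Beta(a, 1.0).sample((N,))              # importance of the current image
    eta0 = torch.rand(N, num_patches, K)         # raw coefficients in [0, 1]
    # Normalize so that 1 - sum_{k,p} eta / P == lambda (constraint in Eq. 5);
    # the clamp keeps coefficients valid, the constraint is exact without clipping.
    scale = (1.0 - lam)[:, None, None] * num_patches / eta0.sum(dim=(1, 2), keepdim=True)
    eta = (eta0 * scale).clamp(0.0, 1.0)
    mixed = pani_interpolate(patches, nn_idx, eta.reshape(N * num_patches, K))
    # Label mixing: the weight of peer image j_k is the average eta over patches.
    w_peer = eta.mean(dim=1)                     # (N, K)
    mixed_labels = lam[:, None] * labels_onehot \
        + (w_peer.unsqueeze(-1) * labels_onehot[peer_img_idx]).sum(dim=1)
    return mixed, mixed_labels
```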
{ "heading": "3 EXPERIMENTS", "text": "In this section, we conduct extensive experiments for Pani VAT and Pani MixUp and its variant Pani MixMatch in both semi-supervised and supervised settings." }, { "heading": "3.1 PANI VAT", "text": "Implementation Details. For a fair comparison with VAT and its variants, e.g., VAT + SNTG (Luo et al., 2017) and TNAR (Yu et al., 2019), we choose the standard large convolutional network as the classifier, as in (Miyato et al., 2018). For the dataset, we focus on the standard semi-supervised setting on CIFAR-10 with 4,000 labeled data points. Unless otherwise noted, all experimental settings of our method are identical to those of Vanilla VAT (Miyato et al., 2018). In particular, we conduct our Pani VAT on the input layer and one additional hidden layer, yielding the two variants Pani VAT (input) and Pani VAT (+hidden). More details are provided in Appendix C.
Our Results. Table 1 presents the state-of-the-art performance achieved by Pani VAT (+hidden) compared with other baselines on CIFAR-10. We focus on baseline methods along the direction of VAT variants and refer to the results of the TNAR (with generative models) method (Yu et al., 2019), the previous state-of-the-art variant of VAT that additionally leverages the data manifold, established by generative models, to decompose the directions of virtual adversarial smoothness. It is worth remarking that the performance of related GAN-based approaches, such as Localized GAN (LGAN) (Qi et al., 2018) as well as TNAR (with generative models) in Table 1, relies heavily on the data manifold established by the generative models. It is well known that one might encounter practical difficulties while implementing and deploying these generative models. By contrast, without requiring generative models, our approach avoids this issue and still outperforms these baselines. In addition, our Pani VAT (+hidden) achieves a slight improvement over Pani VAT (input), which serves as an ablation study and thus verifies the benefit of the manifold regularization mentioned in the discussion of our Pani framework. Overall, the flexibility along with the stability (lower standard deviation shown in Table 1) of our Pani VAT further demonstrates the effectiveness of our Pani strategy.
Analysis of Computational Cost. Another noticeable advantage of our approach is the negligible increase in computational cost compared with Vanilla VAT. In particular, one crucial operation in our approach is the construction of the patch set P, which can be accomplished efficiently by the as_strided function in NumPy or through a suitable convolution operation in PyTorch or TensorFlow. Additionally, the indices of the K-nearest-neighbor graph can be efficiently obtained through a topk operation. We conduct a further sensitivity analysis of the computational cost of our method with respect to the other parameters, i.e., K1 (number of peer images), K2 (number of patch neighbors), L (number of perturbed layers), and patch size s.
As shown in Figure 2, the variation of all parameters has a negligible impact on the training time per epoch compared with Vanilla VAT, except for the number of perturbed layers. The computational cost grows almost linearly with the number of perturbed layers, as the amount of floating-point computation is proportional to the number of perturbation elements, i.e., η, if we temporarily neglect the difference in back-propagation time for different layers. Combining the results from Table 1 and Figure 2, we argue that better performance can be expected if we construct perturbations on more hidden layers, despite the increase in computation." }, { "heading": "3.2 PANI MIXUP", "text": "Implementation Details. The experimental settings in this section strictly follow those of Vanilla MixUp (Zhang et al., 2017) and Vanilla MixMatch (Berthelot et al., 2019b) to ensure a fair comparison on the CIFAR-10, CIFAR-100, and TinyImageNet datasets. In particular, we compare ERM (Empirical Risk Minimization), MixUp training, and our approach for different neural architectures. For a fair comparison with input MixUp, we conduct our approach only on the input layer; better performance can naturally be expected if we consider more layers. Besides, we introduce a mask mechanism on η to avoid overfitting. More details are provided in Appendix C.
Our Results. Table 2 presents the consistent superiority of Pani MixUp over ERM (normal training) as well as Vanilla MixUp across different deep neural network architectures. It is worth noting that the superiority of our approach is more easily observed in the setting without data augmentation than in the setting with data augmentation. Another interesting phenomenon is that MixUp suffers from a kind of collapse in performance, as the accuracy of MixUp is even inferior to that of ERM on CIFAR-100 and TinyImageNet in the setting without data augmentation. By contrast, our approach exhibits a consistent advantage over ERM and MixUp across various settings and network architectures.
Analysis of Computational Cost. To provide a comprehensive understanding of the computational cost of our method, we plot test accuracy against training time over 200 epochs, as shown in Figure 3, from which we can observe the computational efficiency as well as the better performance of our approach. To be more specific, we choose ResNet-18 as the basic test model and track the test accuracy during training to compare the efficiency of the different approaches. From Figure 3, we can easily observe the consistent performance advantage of our approach and the comparable training time under the same number of epochs. The fourth subplot of Figure 3 reveals how the “collapse” issue unfolds.
After the learning rate decay around the 50th epoch, the performance of MixUp surprisingly drops steadily to a final result that is even inferior to that of ERM. By contrast, our Pani MixUp method achieves a consistent improvement in generalization without being affected by any “collapse” issue.
Further Extension to MixMatch. To further demonstrate the superiority of our Pani MixUp, we embed our approach into MixMatch (Berthelot et al., 2019b), the state-of-the-art approach that naturally extends MixUp to the semi-supervised setting. The resulting approach, Pani MixMatch, replaces the MixUp component of MixMatch with our Pani MixUp, thereby additionally incorporating patch-level neighborhood correlations. The results shown in Table 3 demonstrate that Pani MixMatch can
further improve the performance of MixMatch in the standard semi-supervised setting, thus verifying the effectiveness and flexibility of our Patch-level Neighborhood Interpolation." }, { "heading": "4 DISCUSSION AND CONCLUSION", "text": "Recent trends in the design of regularization attach increasing importance to consistency and flexibility across various kinds of settings. Along this line, we focus on proposing a general regularization motivated by additionally leveraging the neighboring information that exists within sub-groups of samples, e.g., within one batch, which elegantly extends previous well-known regularization approaches and generalizes well to a wider range of scenarios.
In this paper, we first analyze the benefit of leveraging knowledge of the non-i.i.d. relationships between samples when developing more effective regularization for deep neural networks, and we propose a general and flexible non-local regularizer called Patch-level Neighborhood Interpolation that interpolates neighboring patch features in the computation process of the network. Furthermore, we customize our Patch-level Neighborhood Interpolation into VAT and MixUp as well as its variant, respectively. Extensive experiments have verified the effectiveness of the two derived approaches, demonstrating the benefit of our Patch-level Neighborhood Interpolation. Our work paves the way toward better understanding and leveraging the relationships between samples to design better regularization and improve generalization over a wide range of settings. Since the proposed Pani framework is general and flexible, more regularizations and applications could be considered in the future, such as more traditional regularization methods and applications in natural language processing tasks. The theoretical properties of Pani should also be analyzed." } ]
null
PATCH-LEVEL NEIGHBORHOOD INTERPOLATION: A GENERAL AND EFFECTIVE GRAPH-BASED REGULARIZATION STRATEGY
SP:7a6904083c223c746197e75e6b24d84107b50ab3
[ "In trust-region-based policy optimization methods such as TRPO and PPO, it is difficult to tune and lots of approximations are required. The authors try to solve this issue by introducing the closed-form derivation of trust regions for Gaussian policies with three different types of divergence (or distance). Based on the theoretical derivation, the differentiable layer is proposed, where the layer is built upon “old” policy during the trust-region-based policy updates. The difference comes from the use of various divergences (or distances) are given in theoretical and empirical ways. ", "The paper proposes a way to impose trust region restrictions via projections when doing policy optimisation in Reinforcement Learning. The projections have a closed form and enforce a trust region for each state individually. The authors propose three types of projections based on Frobenius, Wasserstein distances and KL divergence. They compare them to the existing methods (PPO, PAPI) and provide some insights about their behaviour." ]
Trust region methods are a popular tool in reinforcement learning as they yield robust policy updates in continuous and discrete action spaces. However, enforcing such trust regions in deep reinforcement learning is difficult. Hence, many approaches, such as Trust Region Policy Optimization (TRPO) and Proximal Policy Optimization (PPO), are based on approximations. Due to those approximations, they violate the constraints or fail to find the optimal solution within the trust region. Moreover, they are difficult to implement, often lack sufficient exploration, and have been shown to depend on seemingly unrelated implementation choices. In this work, we propose differentiable neural network layers to enforce trust regions for deep Gaussian policies via closed-form projections. Unlike existing methods, those layers formalize trust regions for each state individually and can complement existing reinforcement learning algorithms. We derive trust region projections based on the Kullback-Leibler divergence, the Wasserstein L2 distance, and the Frobenius norm for Gaussian distributions. We empirically demonstrate that those projection layers achieve similar or better results than existing methods while being almost agnostic to specific implementation choices. The code is available at https://git.io/Jthb0.
[ { "affiliations": [], "name": "Fabian Otto" } ]
[ { "authors": [ "A. Abdolmaleki", "R. Lioutikov", "J Peters", "N. Lau", "L. Reis", "G. Neumann" ], "title": "Model-based relative entropy stochastic search", "venue": "In Advances in Neural Information Processing Systems", "year": 2015 }, { "authors": [ "Abbas Abdolmaleki", "Jost Tobias Springenberg", "Yuval Tassa", "Remi Munos", "Nicolas Heess", "Martin Riedmiller" ], "title": "Maximum a posteriori policy optimisation", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Akshay Agrawal", "Brandon Amos", "Shane Barratt", "Stephen Boyd", "Steven Diamond", "Zico Kolter" ], "title": "Differentiable Convex Optimization Layers", "venue": "Advances in Neural Information Processing Systems,", "year": 1910 }, { "authors": [ "Takuya Akiba", "Shotaro Sano", "Toshihiko Yanase", "Takeru Ohta", "Masanori Koyama" ], "title": "Optuna: A Next-generation Hyperparameter Optimization Framework", "venue": "In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 1907 }, { "authors": [ "Riad Akrour", "Joni Pajarinen", "Jan Peters", "Gerhard Neumann" ], "title": "Projections for approximate policy iteration algorithms", "venue": "In Proceedings of Machine Learning Research,", "year": 2019 }, { "authors": [ "Brandon Amos", "J. Zico Kolter" ], "title": "OptNet: Differentiable Optimization as a Layer in Neural Networks", "venue": "In 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Marcin Andrychowicz", "Anton Raichuk", "Piotr Stańczyk", "Manu Orsini", "Sertan Girgin", "Raphael Marinier", "Léonard Hussenot", "Matthieu Geist", "Olivier Pietquin", "Marcin Michalski", "Sylvain Gelly", "Olivier Bachem" ], "title": "What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study", "venue": "In arXiv preprint,", "year": 2020 }, { "authors": [ "Oleg Arenz", "Mingjun Zhong", "Gerhard Neumann" ], "title": "Efficient Gradient-Free Variational Inference using Policy Search", "venue": "In Proceedings of Machine Learning Research,", "year": 2018 }, { "authors": [ "Yinlam Chow", "Ofir Nachum", "Aleksandra Faust", "Mohammad Ghavamzadeh", "Edgar DuenezGuzman" ], "title": "Lyapunov-based Safe Policy Optimization for Continuous Control", "venue": "In ICML Workshop RL4RealLife Submission,", "year": 2019 }, { "authors": [ "Gal Dalal", "Krishnamurthy Dvijotham", "Matej Vecerik", "Todd Hester", "Cosmin Paduraru", "Yuval Tassa" ], "title": "Safe Exploration in Continuous Action Spaces", "venue": "In arXiv preprint,", "year": 2018 }, { "authors": [ "Yan Duan", "Xi Chen", "Rein Houthooft", "John Schulman", "Pieter Abbeel" ], "title": "Benchmarking Deep Reinforcement Learning for Continuous Control", "venue": "33rd International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Logan Engstrom", "Andrew Ilyas", "Shibani Santurkar", "Dimitris Tsipras", "Firdaus Janoos", "Larry Rudolph", "Aleksander Madry" ], "title": "Implementation Matters in Deep Policy Gradients: A Case Study on PPO and TRPO", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Sham Kakade" ], "title": "A natural policy gradient", "venue": "Proceedings of the 14th International Conference on Neural Information Processing Systems: Natural and Synthetic,", "year": 2001 }, { "authors": [ "Sham M. Kakade", "John C. 
Langford" ], "title": "Approximately Optimal Approximate Reinforcement Learning", "venue": "In Proceedings of the Nineteenth International Conference on Machine Learning, pp", "year": 2002 }, { "authors": [ "Sergey Levine" ], "title": "Reinforcement learning and control as probabilistic inference: Tutorial and review", "venue": "CoRR, abs/1805.00909,", "year": 2018 }, { "authors": [ "Sergey Levine", "Chelsea Finn", "Trevor Darrell", "Pieter Abbeel" ], "title": "End-to-End Training of Deep Visuomotor Policies", "venue": "In The Journal of Machine Learning Research,", "year": 2015 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski", "Stig Petersen", "Charles Beattie", "Amir Sadik", "Ioannis Antonoglou", "Helen King", "Dharshan Kumaran", "Daan Wierstra", "Shane Legg", "Demis Hassabis" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "Aldo Pacchiano", "Jack Parker-Holder", "Yunhao Tang", "Anna Choromanska", "Krzysztof Choromanski", "Michael I Jordan" ], "title": "Learning to Score Behaviors for Guided Policy Optimization", "venue": "In Proceedings of the International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Joni Pajarinen", "Hong Linh Thai", "Riad Akrour", "Jan Peters", "Gerhard Neumann" ], "title": "Compatible Natural Gradient Policy Search", "venue": "Machine Learning,", "year": 2019 }, { "authors": [ "J. Peters", "K. Muelling", "Y. Altun" ], "title": "Relative entropy policy search", "venue": "In Proceedings of the TwentyFourth National Conference on Artificial Intelligence (AAAI), Physically Grounded AI Track,", "year": 2010 }, { "authors": [ "Jan Peters", "Stefan Schaal" ], "title": "Reinforcement learning of motor skills with policy gradients", "venue": "Neural Networks,", "year": 2008 }, { "authors": [ "K.B. Petersen", "M.S. Pedersen" ], "title": "The matrix cookbook", "venue": "URL http://www2.compute", "year": 2012 }, { "authors": [ "Pierre H. Richemond", "Brendan Maginnis" ], "title": "On Wasserstein Reinforcement Learning and the Fokker-Planck equation", "venue": "In arXiv preprint,", "year": 2017 }, { "authors": [ "John Schulman", "Sergey Levine", "Pieter Abbeel", "Michael Jordan", "Philipp Moritz" ], "title": "Trust Region Policy Optimization", "venue": "In Proceedings of Machine Learning Research, pp. 1889–1897,", "year": 2015 }, { "authors": [ "John Schulman", "Philipp Moritz", "Sergey Levine", "Michael Jordan", "Pieter Abbeel" ], "title": "HighDimensional Continuous Control Using Generalized Advantage Estimation", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal Policy Optimization Algorithms", "venue": "In arXiv preprint,", "year": 2017 }, { "authors": [ "David Silver", "Thomas Hubert", "Julian Schrittwieser", "Ioannis Antonoglou", "Matthew Lai", "Arthur Guez", "Marc Lanctot", "Laurent Sifre", "Dharshan Kumaran", "Thore Graepel", "Timothy Lillicrap", "Karen Simonyan", "Demis Hassabis" ], "title": "Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm", "venue": "In arXiv preprint,", "year": 2017 }, { "authors": [ "H. Francis Song", "Abbas Abdolmaleki", "Jost Tobias Springenberg", "Aidan Clark", "Hubert Soyer", "Jack W. 
Rae", "Seb Noury", "Arun Ahuja", "Siqi Liu", "Dhruva Tirumala", "Nicolas Heess", "Dan Belov", "Martin Riedmiller", "Matthew M. Botvinick" ], "title": "V-mpo: On-policy maximum a posteriori policy optimization for discrete and continuous control", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Jun Song", "Chaoyue Zhao" ], "title": "Optimistic Distributionally Robust Policy Optimization", "venue": "In arXiv preprint,", "year": 2006 }, { "authors": [ "Asuka Takatsu" ], "title": "Wasserstein geometry of Gaussian measures", "venue": "Osaka Journal of Mathematics,", "year": 2011 }, { "authors": [ "Voot Tangkaratt", "Abbas Abdolmaleki", "Masashi Sugiyama" ], "title": "Guide actor-critic for continuous control", "venue": "In Proceedings of the International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Cédric Villani" ], "title": "Optimal transport: old and new, volume 338", "venue": "Springer Science & Business Media,", "year": 2008 }, { "authors": [ "Yuhui Wang", "Hao He", "Xiaoyang Tan", "Yaozhong Gan" ], "title": "Trust region-guided proximal policy optimization", "venue": "In Advances in Neural Information Processing Systems", "year": 1901 }, { "authors": [ "Tsung-Yen Yang", "Justinian Rosca", "Karthik Narasimhan", "Peter J Ramadge" ], "title": "Projection-Based Constrained Policy Optimization", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Kolter" ], "title": "2017) and differentiate the KKT conditions around the optimal Lagrangian multipliers computed during the forward pass. The KKT Conditions of the dual are given by", "venue": null, "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep reinforcement learning has shown considerable advances in recent years with prominent application areas such as games (Mnih et al., 2015; Silver et al., 2017), robotics (Levine et al., 2015), and control (Duan et al., 2016). In policy search, policy gradient (PG) methods have been highly successful and have gained, among others, great popularity (Peters & Schaal, 2008). However, often it is difficult to tune learning rates for vanilla PG methods, because they tend to reduce the entropy of the policy too quickly. This results in a lack of exploration and, as a consequence, in premature or slow convergence. A common practice to mitigate these limitations is to impose a constraint on the allowed change between two successive policies. Kakade & Langford (2002) provided a theoretical justification for this in the approximate policy iteration setting. Two of the arguably most favored policy search algorithms, Trust Region Policy Optimization (TRPO) (Schulman et al., 2015a) and Proximal Policy Optimization (PPO) (Schulman et al., 2017), follow this idea using the KullbackLeibler divergence (KL) between successive policies as a constraint.\nWe propose closed-form projections for Gaussian policies, realized as differentiable neural network layers. These layers constrain the change in successive policies by projecting the updated policy onto trust regions. First, this approach is more stable with respect to what Engstrom et al. (2020) refer to as code-level optimizations than other approaches. Second, it comes with the benefit of imposing constraints for individual states, allowing for the possibility of state-dependent trust regions. This allows us to constrain the state-wise maximum change of successive policies. In this we differ from previous works, that constrain only the expected change and thus cannot rely on exact guarantees of monotonic improvement. Furthermore, we propose three different similarity measures, the KL\n∗Correspondence to fabian.otto@bosch.com\ndivergence, the Wasserstein L2 distance, and the Frobenius norm, to base our trust region approach on. The last layer of the projected policy is now the the trust region layer which relies on the old policy as input. This would result in a ever-growing stack of policies, rendering this approach clearly infeasible. To circumvent this issue we introduce a penalty term into the reinforcement learning objective to ensure the input and output of the projection stay close together. While this still results in an approximation of the trust region update, we show that the trust regions are properly enforced. We also extend our approach to allow for a controlled evolution of the entropy of the policy, which has been shown to increase the performance in difficult exploration problems (Pajarinen et al., 2019; Akrour et al., 2019).\nWe compare and discuss the effect of the different similarity measures as well as the entropy control on the optimization process. Additionally, we benchmark our algorithm against existing methods and demonstrate that we achieve similar or better performance." }, { "heading": "2 RELATED WORK", "text": "Approximate Trust Regions. Bounding the size of the policy update in policy search is a common approach. While Kakade & Langford (2002) originally focused on a method based on mixing policies, nowadays most approaches use KL trust regions to bound the updates. Peters et al. 
(2010) proposed a first approach to such trust regions by formulating the problem as a constrained optimization and provided a solution based on the dual of that optimization problem. Still, this approach is not straightforwardly extendable to highly non-linear policies, such as neural networks. In an attempt to transfer those ideas to deep learning, TRPO (Schulman et al., 2015a) approximates the KL constraint using the Fisher information matrix and natural policy gradient updates (Peters & Schaal, 2008; Kakade, 2001), along with a backtracking line search to enforce a hard KL constraint. Yet, the resulting algorithm scales poorly. Thus, Schulman et al. (2017) introduced PPO, which does not directly enforce the KL trust region but clips the probability ratio in the importance sampling objective. This allows using efficient first-order optimization methods while maintaining robust training. However, Engstrom et al. (2020) and Andrychowicz et al. (2020) recently showed that implementation choices are essential for achieving state-of-the-art results with PPO. Code-level optimizations, such as reward scaling as well as value function, observation, reward, and gradient clipping, can even compensate for removing core parts of the algorithm, e.g., the clipping of the probability ratio. Additionally, PPO heavily relies on its exploration behavior and might get stuck in local optima (Wang et al., 2019). Tangkaratt et al. (2018) use a closed-form solution for the constrained optimization based on the method of Lagrangian multipliers. They, however, require a quadratic parametrization of the Q-function, which can limit the performance. Pajarinen et al. (2019) introduced an approach based on compatible value function approximations to realize KL trust regions. Based on the reinforcement learning as inference paradigm (Levine, 2018), Abdolmaleki et al. (2018) introduced an actor-critic approach using an Expectation-Maximization-based optimization with KL trust regions in both the E-step and M-step. Song et al. (2020) proposed an on-policy version of this approach using a similar optimization scheme and constraints.
Projections for Trust Regions. Akrour et al. (2019) proposed Projected Approximate Policy Iteration (PAPI), a projection-based solution for implementing KL trust regions. Their method projects an intermediate policy that already satisfies the trust region constraint onto the constraint bounds. This maximizes the size of the update step. However, PAPI relies on other trust region methods to generate this intermediate policy and cannot operate in a stand-alone setting. Additionally, the projection is not directly part of the policy optimization but applied afterwards, which can result in sub-optimal policies. In the context of computational complexity, both TRPO and PAPI simplify the constraint by leveraging the expected KL divergence. In contrast, we implement the projections as fully differentiable network layers and directly include them in the optimization process. Additionally, our projections enforce the constraints per state. This allows for better control of the change between subsequent policies and for state-dependent trust regions.
For the KL-based projection layer we need to resort to numerical optimization and implicit gradients for convex optimizations (Amos & Kolter, 2017; Agrawal et al., 2019). Thus, we investigate two alternative projections based on the Wasserstein L2 distance and the Frobenius norm, which allow for closed-form solutions.
Both the Wasserstein distance and the Frobenius norm have found only limited applications in reinforcement learning. Pacchiano et al. (2020) use the Wasserstein distance to score behaviors of agents. Richemond & Maginnis (2017) proposed an alternative algorithm for bandits with Wasserstein-based trust regions. Song & Zhao (2020) focus on solving the trust region problem for distributional policies using both KL- and Wasserstein-based trust regions for discrete action spaces. Our projections are applicable independently of the underlying algorithm and only assume a Gaussian policy, a common assumption for continuous action spaces.
Several authors (Dalal et al., 2018; Chow et al., 2019; Yang et al., 2020) used projections as network layers to enforce limitations in the action or state space given environmental restrictions, such as robotic joint limits.
Entropy Control. Abdolmaleki et al. (2015) introduced the idea of explicitly controlling the decrease in entropy during the optimization process, which was later extended to deep reinforcement learning by Pajarinen et al. (2019) and Akrour et al. (2019). They use either an exponential or linear decay of the entropy during policy optimization to control the exploration process and escape local optima. To leverage those benefits, we embed this entropy control mechanism in our differentiable trust region layers." }, { "heading": "3 PRELIMINARIES AND PROBLEM STATEMENT", "text": "We consider the general problem of policy search in a Markov Decision Process (MDP) defined by the tuple $(\mathcal{S}, \mathcal{A}, \mathcal{T}, \mathcal{R}, \mathcal{P}_0, \gamma)$. We assume the state space $\mathcal{S}$ and action space $\mathcal{A}$ are continuous, and the transition probabilities $\mathcal{T}: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to [0, 1]$ describe the probability of transitioning to state $s_{t+1} \in \mathcal{S}$ given the current state $s_t \in \mathcal{S}$ and action $a_t \in \mathcal{A}$. We denote the initial state distribution as $\mathcal{P}_0: \mathcal{S} \to [0, 1]$. The reward returned by the environment is given by a function $\mathcal{R}: \mathcal{S} \times \mathcal{A} \to \mathbb{R}$, and $\gamma \in [0, 1)$ denotes the discount factor. Our goal is to maximize the expected accumulated discounted reward $R_\gamma = \mathbb{E}_{\mathcal{T}, \mathcal{P}_0, \pi}\left[\sum_{t=0}^{\infty} \gamma^t \mathcal{R}(s_t, a_t)\right]$. To find the optimal policy, traditional PG methods often make use of the likelihood ratio gradient and an importance sampling estimator. Moreover, instead of directly optimizing the returns, it has been shown to be more effective to optimize the advantage function, as this results in an unbiased estimator of the gradient with less variance
$$\max_\theta \hat{J}(\pi_\theta, \pi_{\theta_\text{old}}) = \max_\theta \mathbb{E}_{(s,a)\sim\pi_{\theta_\text{old}}}\left[\frac{\pi_\theta(a|s)}{\pi_{\theta_\text{old}}(a|s)}\, A^{\pi_{\theta_\text{old}}}(s, a)\right], \qquad (1)$$
where $A^\pi(s, a) = \mathbb{E}\left[R_\gamma \mid s_0 = s, a_0 = a; \pi\right] - \mathbb{E}\left[R_\gamma \mid s_0 = s; \pi\right]$ denotes the advantage function, and the expectation is w.r.t. $\pi_{\theta_\text{old}}$, i.e., $s' \sim \mathcal{T}(\cdot|s, a)$, $a \sim \pi_{\theta_\text{old}}(\cdot|s)$, $s_0 \sim \mathcal{P}_0(s_0)$, $s \sim \rho_{\pi_{\theta_\text{old}}}$, where $\rho_{\pi_{\theta_\text{old}}}$ is a stationary distribution of the policy $\pi_{\theta_\text{old}}$. The advantage function is commonly estimated by generalized advantage estimation (GAE) (Schulman et al., 2015b). Trust region methods use additional constraints for the given objective. Using a constraint on the maximum KL over the states has been shown to guarantee monotonic improvement of the policy (Schulman et al., 2015a). However, since all current approaches use an expected KL constraint rather than a maximum KL constraint, the guarantee of monotonic improvement does not hold exactly either. We are not aware of such results for the W2 distance or the Frobenius norm.
For our projections we assume Gaussian policies $\pi_{\theta_\text{old}}(a_t|s_t) = \mathcal{N}(a_t \mid \mu_\text{old}(s_t), \Sigma_\text{old}(s_t))$ and $\pi_\theta(a_t|s_t) = \mathcal{N}(a_t \mid \mu(s_t), \Sigma(s_t))$, representing the old and the current policy, respectively.
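As a point of reference for the objective in Equation 1, the following is a minimal sketch of the importance-sampling surrogate for diagonal Gaussian policies; the function name, tensor shapes, and the use of precomputed GAE advantages are illustrative assumptions.

```python
import torch
from torch.distributions import Normal

def surrogate_loss(mean_new, std_new, mean_old, std_old, actions, advantages):
    """Negated surrogate of Eq. 1 for diagonal Gaussian policies.

    mean_*/std_*/actions: (batch, action_dim); advantages: (batch,).
    Old-policy quantities are detached, matching on-policy sampling.
    """
    logp_new = Normal(mean_new, std_new).log_prob(actions).sum(-1)
    logp_old = Normal(mean_old, std_old).log_prob(actions).sum(-1).detach()
    ratio = torch.exp(logp_new - logp_old)
    return -(ratio * advantages).mean()  # minimize the negative objective
```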
We explore three trust regions on top of Equation 1 that employ different similarity measures between the old and new distributions, more specifically the frequently used reverse KL divergence, the Wasserstein L2 distance, and the Frobenius norm.
Reverse KL Divergence. The KL divergence between two Gaussian distributions with means $\mu_1$, $\mu_2$ and covariances $\Sigma_1$, $\Sigma_2$ can generally be written as
$$\mathrm{KL}(\{\mu_1, \Sigma_1\} \,\|\, \{\mu_2, \Sigma_2\}) = \frac{1}{2}\left[(\mu_2 - \mu_1)^T \Sigma_2^{-1} (\mu_2 - \mu_1) + \log\frac{|\Sigma_2|}{|\Sigma_1|} + \mathrm{tr}\{\Sigma_2^{-1}\Sigma_1\} - d\right],$$
where $d$ is the dimensionality of $\mu_1$, $\mu_2$. The KL uses the Mahalanobis distance to measure the similarity between the two mean vectors. The difference of the covariances is measured by the difference in shape, i.e., the difference in scale, given by the log ratio of the determinants, plus the difference in rotation, given by the trace term. Given that the KL is non-symmetric, it is clearly not a distance, yet it is still a frequently used divergence between distributions. We will use the more common reverse KL for our trust region, where the first argument is the new policy and the second is the old policy.
Wasserstein Distance. The Wasserstein distance is a distance measure based on an optimal transport formulation; for more details see Villani (2008). The Wasserstein-2 distance for two Gaussian distributions can generally be written as
$$W_2(\{\mu_1, \Sigma_1\}, \{\mu_2, \Sigma_2\}) = \|\mu_1 - \mu_2\|_2^2 + \mathrm{tr}\left(\Sigma_1 + \Sigma_2 - 2\left(\Sigma_2^{1/2}\Sigma_1\Sigma_2^{1/2}\right)^{1/2}\right).$$
A key difference to the KL divergence is that the Wasserstein distance is a symmetric distance measure, i.e., $W_2(q, p) = W_2(p, q)$. Our experiments also revealed that it is beneficial to measure the W2 distance in a metric space defined by the covariance of the old policy distribution, denoted here as $\Sigma_2$, as the distance measure is then more sensitive to the data-generating distribution. The W2 distance in this metric space reads
$$W_{2, \Sigma_2}(\{\mu_1, \Sigma_1\}, \{\mu_2, \Sigma_2\}) = (\mu_2 - \mu_1)^T \Sigma_2^{-1} (\mu_2 - \mu_1) + \mathrm{tr}\left(\Sigma_2^{-1}\Sigma_1 + I - 2\Sigma_2^{-1}\left(\Sigma_2^{1/2}\Sigma_1\Sigma_2^{1/2}\right)^{1/2}\right).$$
Frobenius Norm. The Frobenius norm is a matrix norm and can directly be applied to the difference of the covariance matrices of the Gaussian distributions. To measure the distance of the mean vectors we will, similar to the KL divergence, employ the Mahalanobis distance, as this empirically leads to improved performance in comparison to just taking the squared distance. Hence, we will denote the following metric as the Frobenius norm between two Gaussian distributions
$$F(\{\mu_1, \Sigma_1\}, \{\mu_2, \Sigma_2\}) = (\mu_2 - \mu_1)^T \Sigma_2^{-1} (\mu_2 - \mu_1) + \mathrm{tr}\left((\Sigma_2 - \Sigma_1)^T (\Sigma_2 - \Sigma_1)\right).$$
The Frobenius norm also constitutes a symmetric distance measure." }, 
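For the common diagonal-covariance case, all three measures reduce to simple elementwise expressions; the following sketch (our own illustrative code, with variances `var1`, `var2` as the diagonals of Σ1, Σ2) shows them side by side.

```python
import torch

def gauss_kl(mu1, var1, mu2, var2):
    """Reverse KL({mu1, S1} || {mu2, S2}) for diagonal Gaussians."""
    maha = ((mu2 - mu1) ** 2 / var2).sum(-1)
    log_ratio = (var2.log() - var1.log()).sum(-1)
    trace = (var1 / var2).sum(-1)
    return 0.5 * (maha + log_ratio + trace - mu1.size(-1))

def gauss_w2_metric(mu1, var1, mu2, var2):
    """W2 distance in the metric space of Sigma_2 (diagonal case)."""
    maha = ((mu2 - mu1) ** 2 / var2).sum(-1)
    cov_part = (var1 / var2 + 1.0 - 2.0 * (var1 / var2).sqrt()).sum(-1)
    return maha + cov_part

def gauss_frobenius(mu1, var1, mu2, var2):
    """Frobenius-norm measure between diagonal Gaussians."""
    maha = ((mu2 - mu1) ** 2 / var2).sum(-1)
    return maha + ((var2 - var1) ** 2).sum(-1)
```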
{ "heading": "4 DIFFERENTIABLE TRUST-REGION LAYERS FOR GAUSSIAN POLICIES", "text": "We present projections based on the three similarity measures, i.e., the Frobenius norm, the Wasserstein L2 distance, and the KL divergence. These projections realize state-wise trust regions and can directly be integrated into the optimization process as differentiable neural network layers. Additionally, we extend the trust region layers to include an entropy constraint to gain control over the evolution of the policy entropy during optimization. The trust regions are defined by a distance or divergence $d(\pi(\cdot|s), \pi_\text{old}(\cdot|s))$ between probability distributions. Complementing Equation 1 with the trust region constraint leads to
$$\max_\theta \hat{J}(\pi_\theta, \pi_{\theta_\text{old}}) \quad \text{s.t.} \quad d(\pi_{\theta_\text{old}}(\cdot|s), \pi_\theta(\cdot|s)) \le \epsilon \quad \forall s \in \mathcal{S}. \qquad (2)$$
While, in principle, we want to enforce the constraint for every possible state, in practice we can only enforce it for states sampled from rollouts of the current policy.
To solve the problem in Equation 2, a standard neural network outputs the parameters $\mu, \Sigma$ of a Gaussian distribution $\pi_\theta$, ignoring the trust region bounds. These parameters are provided to the trust region layers, together with the mean and covariance of the old policy and a parameter $\epsilon$ specifying the size of the trust region. The new policy is then given by the output of the trust region layer. Since the old policy distribution is fixed, all distances or divergences used in this paper can be decomposed into a mean-dependent and a covariance-dependent part. This enables us to use separate trust regions as well as bounds for the mean and the covariance, allowing for more flexibility in the algorithm. The trust region layers aim to project $\pi_\theta$ into the trust region by finding parameters $\tilde{\mu}$ and $\tilde{\Sigma}$ that are closest to the original parameters $\mu$ and $\Sigma$ while satisfying the trust region constraints. The projection is based on the same distance or divergence that was used to define the respective trust region. Formally, this corresponds to the following optimization problems for each $s$
$$\arg\min_{\tilde{\mu}_s} d_\text{mean}(\tilde{\mu}_s, \mu(s)), \quad \text{s.t.}\ \ d_\text{mean}(\tilde{\mu}_s, \mu_\text{old}(s)) \le \epsilon_\mu, \quad \text{and} \qquad (3)$$
$$\arg\min_{\tilde{\Sigma}_s} d_\text{cov}(\tilde{\Sigma}_s, \Sigma(s)), \quad \text{s.t.}\ \ d_\text{cov}(\tilde{\Sigma}_s, \Sigma_\text{old}(s)) \le \epsilon_\Sigma, \qquad (4)$$
where $\tilde{\mu}_s$ and $\tilde{\Sigma}_s$ are the optimization variables for state $s$. Here, $d_\text{mean}$ is the mean-dependent part and $d_\text{cov}$ the covariance-dependent part of the employed distance or divergence. For brevity of notation we will neglect all dependencies on the state in the following. We denote the projected policy as $\tilde{\pi}(a|s) = \mathcal{N}(a \mid \tilde{\mu}, \tilde{\Sigma})$." }, { "heading": "4.1 PROJECTION OF THE MEAN", "text": "For all three trust region objectives we make use of the same distance measure for the mean, the Mahalanobis distance. Hence, the optimization problem for the mean is given by
$$\arg\min_{\tilde{\mu}}\ (\mu - \tilde{\mu})^T \Sigma_\text{old}^{-1} (\mu - \tilde{\mu}) \quad \text{s.t.} \quad (\mu_\text{old} - \tilde{\mu})^T \Sigma_\text{old}^{-1} (\mu_\text{old} - \tilde{\mu}) \le \epsilon_\mu. \qquad (5)$$
By making use of the method of Lagrangian multipliers (see Appendix B.2), we can formulate the dual and solve it for the projected mean $\tilde{\mu}$ as
$$\tilde{\mu} = \frac{\mu + \omega \mu_\text{old}}{1 + \omega} \quad \text{with} \quad \omega = \sqrt{\frac{(\mu_\text{old} - \mu)^T \Sigma_\text{old}^{-1} (\mu_\text{old} - \mu)}{\epsilon_\mu}} - 1. \qquad (6)$$
This equation can directly be used as the mean of the Gaussian policy, while gradients can easily be computed. Note that for the mean part of the KL we would need to use $\Sigma^{-1}$ instead of $\Sigma_\text{old}^{-1}$ in the objective of Equation 5. Yet, this objective still results in a valid trust region problem that is much easier to optimize." }, { "heading": "4.2 PROJECTION OF THE COVARIANCE", "text": "Frobenius Projection. The Frobenius projection formalizes the trust region for the covariance with the squared Frobenius norm of the matrix difference, which yields
$$\arg\min_{\tilde{\Sigma}}\ \mathrm{tr}\left((\Sigma - \tilde{\Sigma})^T (\Sigma - \tilde{\Sigma})\right), \quad \text{s.t.}\ \ \mathrm{tr}\left((\Sigma_\text{old} - \tilde{\Sigma})^T (\Sigma_\text{old} - \tilde{\Sigma})\right) \le \epsilon_\Sigma.$$
We again use the method of Lagrangian multipliers (see Appendix B.3) and obtain the covariance $\tilde{\Sigma}$ as
$$\tilde{\Sigma} = \frac{\Sigma + \eta \Sigma_\text{old}}{1 + \eta} \quad \text{with} \quad \eta = \sqrt{\frac{\mathrm{tr}\left((\Sigma_\text{old} - \Sigma)^T (\Sigma_\text{old} - \Sigma)\right)}{\epsilon_\Sigma}} - 1, \qquad (7)$$
where $\eta$ is the corresponding Lagrangian multiplier.
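The mean projection (Eq. 6) and the Frobenius covariance projection (Eq. 7) can be written as a single differentiable layer; the following is a minimal sketch in which the batching, the clamping trick used to deactivate the projection inside the trust region, and all names are our own illustrative choices.

```python
import torch

def mahalanobis_sq(delta, old_cov):
    """Batched (mu_old - mu)^T Sigma_old^{-1} (mu_old - mu)."""
    solved = torch.linalg.solve(old_cov, delta.unsqueeze(-1)).squeeze(-1)
    return (delta * solved).sum(-1)

def frobenius_trust_region_layer(mean, cov, old_mean, old_cov, eps_mu, eps_sigma):
    """Project (mean, cov) onto the trust region around (old_mean, old_cov)."""
    # Mean projection (Eq. 6); omega is zero when the constraint is inactive.
    diff = old_mean - mean
    omega = (mahalanobis_sq(diff, old_cov) / eps_mu).sqrt().clamp(min=1.0) - 1.0
    omega = omega.unsqueeze(-1)
    proj_mean = (mean + omega * old_mean) / (1.0 + omega)
    # Covariance projection (Eq. 7), Frobenius case.
    frob = ((old_cov - cov) ** 2).sum(dim=(-2, -1))
    eta = (frob / eps_sigma).sqrt().clamp(min=1.0) - 1.0
    eta = eta.unsqueeze(-1).unsqueeze(-1)
    proj_cov = (cov + eta * old_cov) / (1.0 + eta)
    return proj_mean, proj_cov
```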
Wasserstein Projection. Deriving the Wasserstein projection follows the same procedure. We obtain the following optimization problem
$$\arg\min_{\tilde{\Sigma}}\ \mathrm{tr}\left(\Sigma_\text{old}^{-1}\Sigma + \Sigma_\text{old}^{-1}\tilde{\Sigma} - 2\Sigma_\text{old}^{-1}\left(\Sigma^{1/2}\tilde{\Sigma}\Sigma^{1/2}\right)^{1/2}\right), \quad \text{s.t.}\ \ \mathrm{tr}\left(I + \Sigma_\text{old}^{-1}\tilde{\Sigma} - 2\Sigma_\text{old}^{-1}\left(\Sigma_\text{old}^{1/2}\tilde{\Sigma}\Sigma_\text{old}^{1/2}\right)^{1/2}\right) \le \epsilon_\Sigma, \qquad (8)$$
where $I$ is the identity matrix. A closed-form solution to this optimization problem can be found using the methods outlined in Takatsu (2011). However, we found the resulting solution for the projected covariance matrices to be numerically unstable. Therefore, we made the simplifying assumption that both the current covariance $\Sigma$ and the old covariance $\Sigma_\text{old}$ commute with $\tilde{\Sigma}$. Under the common premise of diagonal covariances, this commutativity assumption always holds. For the more general case of arbitrary covariance matrices, we would need to ensure the matrices are sufficiently close together, which is effectively ensured by Equation 8. Again, we introduce Lagrange multipliers and solve the dual problem to obtain the optimal primal and dual variables (see Appendix B.4). Note, however, that here we chose the square root of the covariance matrix¹ as the primal variable. The corresponding projection for the square root covariance $\tilde{\Sigma}^{1/2}$ is then
$$\tilde{\Sigma}^{1/2} = \frac{\Sigma^{1/2} + \eta \Sigma_\text{old}^{1/2}}{1 + \eta} \quad \text{with} \quad \eta = \sqrt{\frac{\mathrm{tr}\left(I + \Sigma_\text{old}^{-1}\Sigma - 2\Sigma_\text{old}^{-1/2}\Sigma^{1/2}\right)}{\epsilon_\Sigma}} - 1, \qquad (9)$$
where $\eta$ is the corresponding Lagrangian multiplier. We see the same pattern emerging as for the Frobenius projection: the chosen similarity measure reappears in the expression for the Lagrangian multiplier, and the primal variables are weighted averages of the corresponding parameters of the old and the predicted Gaussian.
¹We assume the true matrix square root $\Sigma = \Sigma^{1/2}\Sigma^{1/2}$ and not a Cholesky factor $\Sigma = LL^T$, since it naturally appears in the expressions for the projected covariance from the original Wasserstein formulation.
KL Projection. Identically to the previous two projections, we reformulate Equation 4 as
$$\arg\min_{\tilde{\Sigma}}\ \mathrm{tr}\left(\Sigma^{-1}\tilde{\Sigma}\right) + \log\frac{|\Sigma|}{|\tilde{\Sigma}|}, \quad \text{s.t.}\ \ \mathrm{tr}\left(\Sigma_\text{old}^{-1}\tilde{\Sigma}\right) - d + \log\frac{|\Sigma_\text{old}|}{|\tilde{\Sigma}|} \le \epsilon_\Sigma, \qquad (10)$$
where $d$ is the dimensionality of the action space. It is impossible to acquire a fully closed-form solution for this problem. However, following Abdolmaleki et al. (2015), we can obtain the projected precision $\tilde{\Lambda} = \tilde{\Sigma}^{-1}$ by interpolating between the precision matrices of the old policy $\pi_\text{old}$ and the current policy $\pi$
$$\tilde{\Lambda} = \frac{\eta^* \Lambda_\text{old} + \Lambda}{\eta^* + 1}, \qquad \eta^* = \arg\min_{\eta} g(\eta), \quad \text{s.t.}\ \eta \ge 0, \qquad (11)$$
where $\eta$ is the corresponding Lagrangian multiplier and $g(\eta)$ the dual function. While this dual cannot be solved in closed form, an efficient solution exists using a standard numerical optimizer, such as BFGS, since it is a 1-D convex optimization. Regardless, we want a differentiable projection and thus also need to backpropagate the gradients through the numerical optimization. To this end, we follow Amos & Kolter (2017) and compute those gradients by taking the differentials of the KKT conditions of the dual. We refer to Appendix B.5 for more details and derivations.
Entropy Control. Previous works (Akrour et al., 2019; Abdolmaleki et al., 2015) have shown the benefits of introducing an entropy constraint $\mathcal{H}(\pi_\theta) \ge \beta$ in addition to the trust region constraints. Such a constraint allows for more control over the exploration behavior of the policy. In order to endow our algorithm with this improved exploration behavior, we make use of the results from Akrour et al. (2019) and scale the standard deviation of the Gaussian distribution with a scalar factor $\exp\{(\beta - \mathcal{H}(\pi_\theta)) / d\}$, which can also be computed individually per state." }, 
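In the diagonal case, the Wasserstein projection of Eq. 9 reduces to a few lines; the sketch below (our own illustrative code) operates directly on the standard deviations, i.e., the matrix square roots.

```python
import torch

def w2_projection_diag(std, old_std, eps_sigma):
    """Diagonal-case W2 covariance projection (Eq. 9) on the std vectors."""
    ratio = std / old_std
    # tr(I + Sigma_old^{-1} Sigma - 2 Sigma_old^{-1/2} Sigma^{1/2}), diagonal.
    w2_cov = (1.0 + ratio ** 2 - 2.0 * ratio).sum(-1)
    eta = (w2_cov / eps_sigma).sqrt().clamp(min=1.0) - 1.0  # 0 inside the region
    eta = eta.unsqueeze(-1)
    return (std + eta * old_std) / (1.0 + eta)
```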
{ "heading": "4.3 ANALYSIS OF THE PROJECTIONS", "text": "It is instructive to compare the three projections. The covariance update is an interpolation for all three projections, but the quantities that are interpolated differ. For the Frobenius projection we directly interpolate between the old and current covariances (Equation 7), for the W2 projection between their respective matrix square roots (Equation 9), and for the KL projection between their inverses (Equation 11). In other words, each projection suggests which parametrization to use for the covariance matrix. The different interpolations also have an interesting effect on the entropy of the resulting covariances, which can be observed in Figure 1. Further, we can prove the following theorem about the entropy of the projected distributions
Theorem 1 Let $\pi_\theta$ and $\pi_{\theta_\text{old}}$ be Gaussian and $\eta \ge 0$. Then for the entropy of the projected distribution $\mathcal{H}(\tilde{\pi})$ it holds that $\mathcal{H}(\tilde{\pi}) \ge \min(\mathcal{H}(\pi_\theta), \mathcal{H}(\pi_{\theta_\text{old}}))$ for the Frobenius (Equation 7) and the Wasserstein projection (Equation 9), as well as $\mathcal{H}(\tilde{\pi}) \le \max(\mathcal{H}(\pi_\theta), \mathcal{H}(\pi_{\theta_\text{old}}))$ for the KL projection (Equation 11).
The proof is based on the multiplicative version of the Brunn-Minkowski inequality and can be found in Appendix B.1. Intuitively, this implies that the Frobenius and Wasserstein projections act more aggressively, i.e., they tend to yield a higher entropy, while the KL projection acts more conservatively, i.e., it tends to yield a smaller entropy. This could also explain why many KL-based trust region methods lose entropy too quickly and converge prematurely. By introducing an explicit entropy control, those effects can be mitigated." }, { "heading": "4.4 SUCCESSIVE POLICY UPDATES", "text": "The above projections can directly be implemented for training the current policy. Note, however, that at each epoch $i$ the policy $\pi_i$ predicted by the network before the projection layer does not respect the constraints and thus relies on calling this layer. The policy of the projection layer $\tilde{\pi}_i$ not only depends on the parameters of $\pi_i$ but also on the old policy network $\pi_{i,\text{old}} = \tilde{\pi}_{i-1}$. This would result in an ever-growing stack of policy networks becoming increasingly costly to evaluate. In other words, $\tilde{\pi}_i$ is computed using all stored networks $\pi_i, \pi_{i-1}, \dots, \pi_0$. We now discuss the parametrization of $\tilde{\pi}$ via amortized optimization.
We need to encode the information of the projection layer into the parameters $\theta$ of the next policy, i.e., $\tilde{\pi}(a|s; \theta) = p \circ \pi_\theta(a|s)$ is a composite function in which $p$ denotes the projection layer. The output of $\pi_\theta$ is $(\mu, \Sigma)$, and $p$ computes $(\tilde{\mu}, \tilde{\Sigma})$ according to Equations 6, 7, 9, or 11. Formally, we aim to find a set of parameters $\theta^* = \arg\min_\theta \mathbb{E}_{s \sim \rho_{\pi_\text{old}}}[d(\tilde{\pi}(\cdot|s), \pi_\theta(\cdot|s))]$, where $\rho_{\pi_\text{old}}$ is the state distribution of the old policy and $d$ is the similarity measure used for the projection, such that we minimize the expected distance or divergence between the projection and the current policy prediction.
The most intuitive way to solve this problem is to use the existing samples for additional regression steps after the policy optimization. Still, this adds a computational overhead. Therefore, we propose to concurrently optimize both objectives during training by penalizing the main objective, i.e.,
$$\max_\theta\ \mathbb{E}_{(s,a)\sim\pi_{\theta_\text{old}}}\left[\frac{\tilde{\pi}(a|s; \theta)}{\pi_{\theta_\text{old}}(a|s)}\, A^{\pi_\text{old}}(s, a)\right] - \alpha\, \mathbb{E}_{s \sim \rho_{\pi_\text{old}}}\left[d\left(\tilde{\pi}(\cdot|s; \theta), \pi_\theta(\cdot|s)\right)\right]. \qquad (12)$$
Note that the importance sampling ratio is computed based on the Gaussian distribution generated by the trust region layer and not directly from the network output. Furthermore, the gradient of the regression penalty does not flow through the projection; it solely acts as a supervised learning signal. As appropriate similarity measures $d$ for the penalty, we resort to the measures used in each projection. For a detailed algorithmic view see Appendix A.
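A minimal sketch of the penalized objective in Eq. 12 is shown below, here with the Frobenius measure as the penalty and full covariances; the detaching of the projected parameters (so the penalty only acts as a supervised signal), the shapes, and the function name are illustrative assumptions.

```python
import torch

def trust_region_loss(ratio, advantages, proj_mean, proj_cov, mean, cov, alpha):
    """Negated penalized objective of Eq. 12 for gradient descent.

    ratio: importance ratios based on the *projected* policy, shape (B,);
    proj_mean/proj_cov: trust region layer output; mean/cov: network output.
    """
    surrogate = (ratio * advantages).mean()
    pm, pc = proj_mean.detach(), proj_cov.detach()  # penalty as supervised target
    delta = pm - mean
    maha = (delta * torch.linalg.solve(pc, delta.unsqueeze(-1)).squeeze(-1)).sum(-1)
    frob = ((pc - cov) ** 2).sum(dim=(-2, -1))
    penalty = (maha + frob).mean()
    return -(surrogate - alpha * penalty)
```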
" }, { "heading": "5 EXPERIMENTS", "text": "Mujoco Benchmarks. We evaluate the performance of our trust region layers regarding sample complexity and final reward in comparison to PAPI and PPO on the OpenAI gym benchmark suite (Brockman et al., 2016). We explicitly did not include TRPO in the evaluation, as Engstrom et al. (2020) showed that it can achieve similar performance to PPO. For our experiments, the PAPI projection and its conservative PPO version are executed in the setting sent to us by the author. The hyperparameters for all three projections and PPO have been selected with Optuna (Akiba et al., 2019). See Appendix D for a full listing of all hyperparameters. We use a shared set of hyperparameters for all environments except for the Humanoid, which we optimized separately. Besides the standard PPO implementation with all code-level optimizations, we further evaluate PPO-M, which only leverages the core PPO algorithm. Our projections and PPO-M solely use the observation normalization, network architecture, and initialization from the original PPO implementation. All algorithms parametrize the covariance as a non-contextual diagonal matrix. We refer to the Frobenius projection as FROB, the Wasserstein projection as W2, and the KL projection as KL.
Table 1 gives an overview of the final performance and convergence speed on the Mujoco benchmarks, while Figure 4 in the appendix displays the full learning curves. After each epoch, we evaluate five episodes without applying exploration noise to obtain the return values. Note that we initially do not include the entropy projection to provide a fair comparison to PPO. The results show that our trust region layers perform similarly to or better than PPO and PAPI across all tasks. While the performance on Hopper-v2 is comparable, the projections significantly outperform all baselines on HalfCheetah-v2. The KL projection even demonstrates the best performance on the remaining three environments. Besides that, the experiments present a relatively balanced performance between the projections, PPO, and PAPI. The differences are more apparent when comparing the projections to PPO-M, which uses the same implementation details as our projections. The asymptotic performance of PPO-M is on par for Humanoid-v2, but it converges much more slowly and is noticeably weaker on the remaining tasks. Consequently, the approximate trust region of PPO alone is not sufficient for good performance; it is only sufficient when paired with certain implementation choices. Still, the original PPO cannot fully replace a mathematically sound trust region such as ours, although it does not exhibit a strong performance difference. To illustrate this, Figure 2 visualizes the mean KL divergence at the end of each epoch for all methods. Despite the fact that neither the W2 nor the Frobenius projection uses the KL, we leverage it here as a standardizing measure to compare the change in the policy distributions. All projections are characterized by an almost constant change, whereas for PPO-M the changes are highly inconsistent. The code-level optimizations of PPO can mitigate this to some extent but cannot properly enforce the desired constant change in the policy distribution. In particular, we have found that primarily the learning rate decay contributes to the relatively good behavior of PPO. 
Although PAPI provides a similarly principled trust region projection, it still exhibits some inconsistency because it approaches the bound iteratively.
Entropy Control. To demonstrate the effect of combining our projections with entropy control, as described in Section 4.2, we evaluate all Mujoco tasks again in this extended setting. The target entropy in each iteration i is computed by exponentially decaying the initial entropy H0 to κ with temperature τ as κ + (H0 − κ)τ^{10i/N}, where N is the total number of training steps. The bottom of Table 1 shows the results for our projections with entropy control. Especially on the more complex tasks, which require more exploration, all three projections significantly benefit from the entropy control. Their asymptotic performance increases on HalfCheetah-v2, Ant-v2, and Humanoid-v2, with much faster convergence on the latter. For the other Mujoco tasks the performance remains largely constant, since the complexity of these tasks is insufficient to benefit from an explicit entropy control, as also noted by Pajarinen et al. (2019) and Abdolmaleki et al. (2015).
Contextual Covariances. To emphasize the advantage of state-wise trust regions we consider the case of policies with state-dependent covariances. Existing methods, such as PPO and TRPO, are rarely used in this setting. In addition, PAPI cannot project the covariance in the contextual case. Further, Andrychowicz et al. (2020) demonstrated that for the standard Mujoco benchmarks, contextual covariances are not beneficial in an on-policy setting. Therefore, we choose to evaluate on a task motivated by optimal control which benefits from a contextual covariance. We extend the Mujoco Reacher-v2 to a 5-link planar robot, the distance penalty to the target is only provided in the last time step, t = 200, and the observation space also contains the current time step t. This semi-sparse reward specification imposes a significantly harder exploration problem, as the agent is only provided with feedback at the last time step. We again tuned all hyperparameters using Optuna (Akiba et al., 2019) and did not include the entropy projection. All feasible approaches are compared with and without contextual covariances; the results are presented in Figure 2 (right). All three projections significantly outperform the baseline methods with the non-contextual covariance. Additionally, both the W2 and KL projection improve their results in the contextual case. In contrast, all baselines decrease in performance and are not able to leverage the advantage of contextual information. This poor performance mainly originates from incorrect exploitation: PPO reduces the covariance too quickly, whereas PAPI reduces it too slowly, leading to suboptimal performance for both. The Frobenius projection, however, does not benefit from contextual covariances either, since numerical instabilities arise from too small covariance values close to convergence. Those issues can be mitigated using a smaller covariance bound, but they cannot be entirely avoided. The KL projection, while yielding the best results throughout all experiments, relies on a numerical optimization. Generally, this is computationally expensive; however, by leveraging an efficient C++ implementation this overhead can largely be eliminated (see Appendix B.5). As a bonus, the KL projection retains all properties of existing KL-based trust region methods with monotonic improvement guarantees.
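Returning briefly to the entropy control described above, the schedule and the multiplicative scaling of Algorithm 1 admit a small worked sketch (our own naming; dim_a denotes the action dimensionality):

import math

def entropy_target(h0, kappa, tau, step, total_steps):
    # Exponentially decay the initial entropy H0 towards the target kappa.
    return kappa + (h0 - kappa) * tau ** (10 * step / total_steps)

def entropy_scale(entropy, beta, dim_a):
    # Scale factor c = exp((beta - H) / dim(a)) from Algorithm 1, applied
    # multiplicatively to the projected covariance whenever its entropy
    # falls below the target beta.
    return math.exp((beta - entropy) / dim_a) if entropy < beta else 1.0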
For quick benchmarks, the W2 projection is nevertheless preferable to the KL projection, given that it is slightly less sensitive to hyperparameter choices and does not require a dedicated custom implementation.
Trust Region Regression Loss. Lastly, we investigate the main approximation of our approach, the trust region regression loss (Equation 12). In the following ablation, we evaluate how different choices of the regression weight α affect constraint satisfaction. Figure 2 (center) shows the Mahalanobis distance between the unprojected and the old policy means for different α values. In addition, for one run we choose α = 0 and execute the trust region regression separately after each epoch for several iterations. One key observation is that decreasing the penalty up to a certain threshold leads to larger changes in the policy and pushes the mean closer to its maximum bound. Intuitively, this can be explained by the construction of the bound: as the penalty is added to the loss only when the bound is violated, larger changes in the policy are punished while smaller steps do not directly affect the loss negatively. By selecting a larger α, this behavior is reinforced. Furthermore, we can see that some smaller values of α yield behavior similar to the full regression setting. Consequently, it is justified to use the computationally simpler penalty instead of performing a full regression after each epoch." }, { "heading": "6 DISCUSSION AND FUTURE WORK", "text": "In this work we proposed differentiable projection layers to enforce trust region constraints for Gaussian policies in deep reinforcement learning. While being more stable than existing methods, they also offer the benefit of imposing the constraints on a state level. Unlike previous approaches, which only constrain the expected change between successive policies and for which monotonic improvement guarantees thus hold only approximately, we can constrain the maximum change. Our results illustrate that trust regions are an effective tool in policy search for a wide range of different similarity measures. Apart from the commonly used reverse KL, we also leverage the Wasserstein distance and the Frobenius norm. We demonstrated the subtle but important differences between these three types of trust regions and showed that our benchmark performance is on par with or better than that of existing methods that use more code-level optimizations. For future work, we plan to continue our research with more exploration-heavy environments, in particular with contextual covariances. Additionally, more sophisticated heuristics or learning methods could be used to adapt the trust region bounds for better performance. Lastly, we are interested in using our trust region layers in other deep reinforcement learning approaches, such as actor-critic methods." }, { "heading": "A ALGORITHM", "text": "Algorithm 1 Differentiable Trust Region Layer. The trust region layer acts as the final layer after predicting a Gaussian distribution. It projects this predicted Gaussian onto the trust region whenever it violates the specified bounds. As output it generates a projected mean and covariance that satisfy the respective trust region bound. The entropy control in the last step can be disabled.
Initialize bounds εµ and εΣ, temperature τ, as well as entropy target κ and initial entropy H0.
1: procedure TRUSTREGIONLAYER(µ, Σ, µold, Σold)
2:   if dmean(µ, µold) > εµ then
3:     Compute µ̃ with Equation 6
4:   else
5:     µ̃ = µ
6:   if dcov(Σ, Σold) > εΣ then
7:     Compute Σ̃ with Equations 7, 9, or 11
8:   else
9:     Σ̃ = Σ
10:  β = κ + (H0 − κ)τ^{10i/N}   ▷ (Optional) entropy control as described in Section 4.2
11:  if H(Σ̃) < β then
12:    c = exp{(β − H(Σ̃))/dim(a)}
13:    Σ̃ = cΣ̃
14:  return µ̃, Σ̃

Algorithm 2 Algorithmic view of the proposed trust region projections. The trust region projections themselves do not require approximations; the old policy update in the last step is the only point where we introduce an approximation. This update would normally require additional supervised regression steps that minimize the distance between the network output and the projection. However, by leveraging the regression penalty during policy optimization this optimization step can be omitted. Both approaches yield a policy which is independent of the old policy distribution, i.e., it can act without the projection while maintaining the trust region. However, the penalty does not require additional computation, and the policy can directly generate new trajectories, equivalently to other trust region methods such as PPO.

1: Initialize policy θ0,0
2: for i = 0, 1, ..., N do   ▷ epoch
3:   Collect set of trajectories Di = {τk} with π(θi,0)
4:   Compute advantage estimates Ât with GAE
5:   for j = 0, 1, ..., M do
6:     Use π(θi,j) to predict Gaussian action distributions N(µi,j, Σi,j) for Di
7:     π̃ = TRUSTREGIONLAYER(µi,j, Σi,j, µi,0, Σi,0)
8:     Update policy with Adam using the following policy gradient:
       θi,j+1 ← Adam(∇θ [ E_{π(θi,0)}[ (π̃(a|s; θ)/π(a|s; θi,0)) Ât ] − α E_{s∼ρ_{π(θi,0)}}[ d(π̃(·|s; θ), π(·|s; θ)) ] ] |_{θ=θi,j})
9:   Successive policy update: θi+1,0 ← θi,M
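A compact Python sketch of the full layer (Algorithm 1) may also be helpful; this is our own illustration for diagonal covariances, where d_mean/d_cov denote the chosen distances and project_mean/project_cov stand for Equation 6 and Equations 7, 9, or 11, respectively:

import numpy as np

def trust_region_layer(mean, cov, old_mean, old_cov, eps_mean, eps_cov, beta,
                       d_mean, d_cov, project_mean, project_cov):
    # Project mean and covariance only if they violate their respective bounds.
    proj_mean = project_mean(mean, old_mean) if d_mean(mean, old_mean) > eps_mean else mean
    proj_cov = project_cov(cov, old_cov) if d_cov(cov, old_cov) > eps_cov else cov
    # Optional entropy control: inflate the (diagonal) covariance when the
    # Gaussian entropy H = 0.5 * log|2*pi*e*Cov| drops below the target beta.
    entropy = 0.5 * np.sum(np.log(2.0 * np.pi * np.e * proj_cov))
    if entropy < beta:
        proj_cov = np.exp((beta - entropy) / mean.shape[-1]) * proj_cov
    return proj_mean, proj_cov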
" }, { "heading": "B DERIVATIONS", "text": "B.1 PROOF OF THEOREM 1
This section provides a proof of Theorem 1. We mainly use the multiplicative version of the Brunn-Minkowski inequality,
log |αΣ1 + βΣ2| ≥ α log |Σ1| + β log |Σ2|,
where Σ1, Σ2 are p.s.d., α, β are positive, and α + β = 1.
Frobenius Projection.
H(π̃) = 0.5 log |2πe Σ̃|
 = 0.5 log |2πe ( (1/(η+1)) Σ + (η/(η+1)) Σold )|
 ≥ 0.5 log ( |2πeΣ|^{1/(η+1)} |2πeΣold|^{η/(η+1)} )
 = (1/(η+1)) 0.5 log |2πeΣ| + (η/(η+1)) 0.5 log |2πeΣold|
 = (1/(η+1)) H(πθ) + (η/(η+1)) H(πθold) ≥ min(H(πθ), H(πθold)).
Wasserstein Projection. Let k denote the dimensionality of the distributions under consideration, and let S = (1/(η+1)) Σ^{1/2} + (η/(η+1)) Σold^{1/2}, so that Σ̃ = S². Then
H(π̃) = 0.5 log (2πe)^k |Σ̃| = 0.5 k log(2πe) + log |S|
 ≥ 0.5 k log(2πe) + (1/(η+1)) log |Σ^{1/2}| + (η/(η+1)) log |Σold^{1/2}|
 = 0.5 log ( |2πeΣ|^{1/(η+1)} |2πeΣold|^{η/(η+1)} )
 = (1/(η+1)) H(πθ) + (η/(η+1)) H(πθold) ≥ min(H(πθ), H(πθold)).
KL Projection. By Equation 11, Σ̃^{-1} = (1/(η+1)) Σ^{-1} + (η/(η+1)) Σold^{-1}, hence
H(π̃) = 0.5 log |2πe Σ̃| = −0.5 log | (1/(η+1)) (2πeΣ)^{-1} + (η/(η+1)) (2πeΣold)^{-1} |
 ≤ −0.5 log ( |(2πeΣ)^{-1}|^{1/(η+1)} |(2πeΣold)^{-1}|^{η/(η+1)} )
 = 0.5 log ( |2πeΣ|^{1/(η+1)} |2πeΣold|^{η/(η+1)} )   (using det(A^{-1}) = 1/det(A))
 = (1/(η+1)) H(πθ) + (η/(η+1)) H(πθold) ≤ max(H(πθ), H(πθold)).

B.2 MEAN PROJECTION
First, we consider only the mean objective
min_µ̃ (µ − µ̃)ᵀ Σold^{-1} (µ − µ̃)   s.t.   (µold − µ̃)ᵀ Σold^{-1} (µold − µ̃) ≤ εµ,
which gives us the following Lagrangian
L(µ̃, ω) = (µ − µ̃)ᵀ Σold^{-1} (µ − µ̃) + ω ( (µold − µ̃)ᵀ Σold^{-1} (µold − µ̃) − εµ ).   (13)
Differentiating w.r.t. µ̃ yields
∂L(µ̃, ω)/∂µ̃ = 2 Σold^{-1} (µ̃ − µ) + 2ω Σold^{-1} (µ̃ − µold).
Setting the derivative to 0 and solving for µ̃ gives
µ̃∗ = (µ + ω µold)/(1 + ω).
Inserting the optimal mean µ̃∗ into Equation 13 results in
L(ω) = ω² (µ − µold)ᵀ Σold^{-1} (µ − µold)/(1 + ω)² + ω ( (µ − µold)ᵀ Σold^{-1} (µ − µold)/(1 + ω)² − εµ ).
Thus, differentiating w.r.t. ω yields
∂L(ω)/∂ω = (µ − µold)ᵀ Σold^{-1} (µ − µold)/(1 + ω)² − εµ.
Now solving ∂L(ω)/∂ω = 0 for ω, we arrive at
ω∗ = sqrt( (µ − µold)ᵀ Σold^{-1} (µ − µold) / εµ ) − 1.

B.3 FROBENIUS COVARIANCE PROJECTION
We consider the following objective for the covariance part
min_Σ̃ ‖Σ̃ − Σ‖²_F   s.t.   ‖Σ̃ − Σold‖²_F ≤ εΣ,
with the corresponding Lagrangian
L(Σ̃, η) = ‖Σ̃ − Σ‖²_F + η ( ‖Σ̃ − Σold‖²_F − εΣ ).   (14)
Differentiating w.r.t. Σ̃ yields
∂L(Σ̃, η)/∂Σ̃ = 2 ( (Σ̃ − Σ) + η (Σ̃ − Σold) ).
We can again solve for Σ̃ by setting the derivative to 0, i.e.,
Σ̃∗ = (Σ + η Σold)/(1 + η).
Inserting Σ̃∗ into Equation 14 yields the dual function
g(η) = ‖(Σ + ηΣold)/(1 + η) − Σ‖²_F + η ( ‖(Σ + ηΣold)/(1 + η) − Σold‖²_F − εΣ ).
Differentiating w.r.t. η results in
∂g(η)/∂η = ‖Σ − Σold‖²_F/(1 + η)² − εΣ.
Hence, ∂g(η)/∂η = 0 yields
η∗ = ‖Σ − Σold‖_F/√εΣ − 1.

B.4 WASSERSTEIN COVARIANCE PROJECTION
As described in the main text, the Gaussian distributions have been rescaled by Σold^{-1} to measure the distance in the metric space defined by the variance of the data. For notational simplicity, we show the derivation of the covariance projection only for the unscaled scenario; the scaled version can be obtained by a simple redefinition of the covariance matrices. For our covariance projection we are interested in solving the following optimization problem
min_Σ̃ tr( Σ̃ + Σ − 2 (Σ^{1/2} Σ̃ Σ^{1/2})^{1/2} )   s.t.   tr( Σ̃ + Σold − 2 (Σold^{1/2} Σ̃ Σold^{1/2})^{1/2} ) ≤ εΣ,
which leads to the following Lagrangian function
L(Σ̃, η) = tr( Σ̃ + Σ − 2 (Σ^{1/2} Σ̃ Σ^{1/2})^{1/2} ) + η ( tr( Σ̃ + Σold − 2 (Σold^{1/2} Σ̃ Σold^{1/2})^{1/2} ) − εΣ ).   (15)
Assuming that Σ̃ commutes with Σ as well as Σold, Equation 15 simplifies to
L(Σ̃, η) = tr( Σ̃ + Σ − 2 Σ̃^{1/2} Σ^{1/2} ) + η ( tr( Σ̃ + Σold − 2 Σ̃^{1/2} Σold^{1/2} ) − εΣ )
 = tr( S² + Σ − 2 S Σ^{1/2} ) + η ( tr( S² + Σold − 2 S Σold^{1/2} ) − εΣ ),   (16)
where S is the unique positive semi-definite root of the positive semi-definite matrix Σ̃, i.e., S = Σ̃^{1/2}. Instead of optimizing the objective w.r.t. Σ̃, we optimize w.r.t. S, which greatly simplifies the calculation. That is, we solve
∂L(S, η)/∂S = (1 + η) 2S − 2 (Σ^{1/2} + η Σold^{1/2}) = 0
for S, which leads us to
S∗ = (Σ^{1/2} + η Σold^{1/2})/(1 + η),   Σ̃∗ = (Σ + η² Σold + 2η Σ^{1/2} Σold^{1/2})/(1 + η)².
Inserting this into Equation 16 yields the dual function
g(η) = η tr( Σ + Σold − 2 Σ^{1/2} Σold^{1/2} )/(1 + η) − η εΣ.
The derivative of the dual w.r.t. η is given by
∂g(η)/∂η = tr( Σ + Σold − 2 Σ^{1/2} Σold^{1/2} )/(1 + η)² − εΣ.
Now solving ∂g(η)/∂η = 0 for η, we arrive at
η∗ = sqrt( tr( Σ + Σold − 2 Σ^{1/2} Σold^{1/2} ) / εΣ ) − 1.

B.5 KL-DIVERGENCE PROJECTION
We derive the KL-divergence projection in its general form, i.e., simultaneous projection of mean and covariance under an additional entropy constraint
π̃∗ = argmin_π̃ KL(π̃ ‖ πθ)   s.t.   KL(π̃ ‖ πθold) ≤ ε,   H(π̃) ≥ β.
Instead of working with this minimization problem we consider the equivalent maximization problem
π̃∗ = argmax_π̃ −KL(π̃ ‖ πθ)   s.t.   KL(π̃ ‖ πθold) ≤ ε,   H(π̃) ≥ β,   (17)
which is similar to the one considered in Model Based Relative Entropy Stochastic Search (MORE) (Abdolmaleki et al., 2015), with a few distinctions. To see those distinctions, let η and ω denote the Lagrangian multipliers corresponding to the KL and entropy constraint, respectively, and consider the Lagrangian corresponding to the optimization problem in Equation 17,
L = −KL(π̃ ‖ πθ) + η (ε − KL(π̃ ‖ πθold)) + ω (H(π̃) − β)
 = E_π̃[log πθ] + η (ε − KL(π̃ ‖ πθold)) + (ω + 1) H(π̃) − ωβ.
As opposed to Abdolmaleki et al. (2015), we are not working with an unknown reward but use the log density of the target distribution πθ instead. Thus we do not need to fit a surrogate and can directly read off the parameters of the squared reward; they are given by the natural parameters of πθ, i.e., Λ = Σ^{-1} and q = Σ^{-1}µ. Additionally, we need to add a constant 1 to ω to account for the additional entropy term in the original objective, similar to (Arenz et al., 2018).
Following the derivations from Abdolmaleki et al. (2015) and Arenz et al. (2018), we can obtain a closed-form solution for the natural parameters of π̃, given the Lagrangian multipliers η and ω:
Λ̃ = (η Λold + Λ)/(η + 1 + ω)   and   q̃ = (η qold + q)/(η + 1 + ω).   (18)
To obtain the optimal Lagrangian multipliers we can solve the following convex dual function using gradient descent:
g(η, ω) = ηε − ωβ + η ( −½ qoldᵀ Λold^{-1} qold + ½ log det(Λold) − (k/2) log(2π) ) + (η + 1 + ω) ( ½ q̃ᵀ Λ̃^{-1} q̃ − ½ log det(Λ̃) + (k/2) log(2π) ) + const,
with
∂g(η, ω)/∂η = ε − KL(π̃ ‖ πθold)   and   ∂g(η, ω)/∂ω = H(π̃) − β.
Given the optimal Lagrangian multipliers η∗ and ω∗, we obtain the parameters of the optimal distribution π̃∗ using Equation 18.
Forward Pass. For the forward pass we compute the natural parameters of πθ, solve the optimization problem, and compute mean and covariance of π̃∗ from the optimal natural parameters. The corresponding compute graph is given in Figure 3.
Backward Pass. Given the computational graph in Figure 3, gradients can be propagated back through the layer using standard back-propagation. All gradients for the analytical computations (black arrows in Figure 3) are straightforward and can be found in (Petersen & Pedersen, 2012). For the gradients of the numerical optimization of the dual (red arrows in Figure 3) we follow Amos & Kolter (2017) and differentiate the KKT conditions around the optimal Lagrangian multipliers computed during the forward pass. The KKT conditions of the dual are given by
ε − KL(π̃∗ ‖ πθold) − m1 = 0 and H(π̃∗) − β − m2 = 0 (stationarity), and m1 η∗ = 0 and m2 ω∗ = 0 (complementary slackness),
where m = (m1, m2)ᵀ denotes the Lagrangian multipliers for the box constraints of the dual (η and ω need to be non-negative). Taking the differentials of those conditions yields the linear system
[ −∂KL/∂η  −∂KL/∂ω  −1  0 ;  ∂H/∂η  ∂H/∂ω  0  −1 ;  −m1  0  −η∗  0 ;  0  −m2  0  −ω∗ ] (dη, dω, dm1, dm2)ᵀ = ( (∂KL/∂q) dq + (∂KL/∂Λ) dΛ,  −(∂H/∂q) dq − (∂H/∂Λ) dΛ,  0,  0 )ᵀ,
where all derivatives of KL(π̃∗ ‖ πθold) and H(π̃∗) are evaluated at (η∗, ω∗). This system is solved analytically to obtain the desired partial derivatives ∂η/∂q, ∂η/∂Λ, ∂ω/∂q, and ∂ω/∂Λ.
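Before turning to the implementation, note that the closed-form solutions of Appendices B.2 and B.3 translate directly into code; the following NumPy sketch (our own illustration, for full covariance matrices) projects back onto the bound whenever it is violated:

import numpy as np

def project_mean(mean, old_mean, old_cov, eps_mean):
    # Appendix B.2: Mahalanobis distance of the new mean under the old covariance.
    diff = mean - old_mean
    maha = diff @ np.linalg.solve(old_cov, diff)
    if maha <= eps_mean:
        return mean
    omega = np.sqrt(maha / eps_mean) - 1.0
    # The interpolation (mean + omega * old_mean) / (1 + omega) lies exactly on the bound.
    return (mean + omega * old_mean) / (1.0 + omega)

def project_cov_frobenius(cov, old_cov, eps_cov):
    # Appendix B.3: squared Frobenius distance between new and old covariance.
    dist_sq = np.sum((cov - old_cov) ** 2)
    if dist_sq <= eps_cov:
        return cov
    eta = np.sqrt(dist_sq / eps_cov) - 1.0
    return (cov + eta * old_cov) / (1.0 + eta)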
Implementation. We implemented the whole layer using C++, Armadillo, and OpenMP for parallelization. The implementation saves all necessary quantities for the backward pass, and thus a numerical optimization is only necessary during the forward pass. Before we perform a numerical optimization we check whether it is actually necessary: if the target distribution πθ is already within the trust region, we can immediately set π̃∗ = πθ, i.e., the forward and backward pass become the identity mapping. This check yields significant speed-ups, especially in early iterations, when the target is still close to the old distribution. If the projection is necessary, we use L-BFGS to optimize the 2D convex dual, which is still fast. For example, for a 17-dimensional action space and a batch size of 512, such as in the Humanoid-v2 experiments, the layer takes roughly 170 ms for the forward pass and 3.5 ms for the backward pass if all 512 Gaussians are actually projected². If none of the Gaussians needs to be projected, forward and backward pass together take less than 1 ms.
Simplifications. If only diagonal covariances are considered, the implementation simplifies significantly, as the computationally heavy operations (matrix inversions and Cholesky decompositions) reduce to pointwise operations (divisions and square roots). If only the covariance part of the KL is projected, we set µold = µ = µ̃∗ and dµ = 0, which is again a simplification for both the derivations and the implementation. If an entropy equality constraint is used instead of an inequality constraint, it is sufficient to remove the ω > 0 constraint in the dual optimization.
² On an 8-core Intel Core i7-9700K CPU @ 3.60GHz" }, { "heading": "C ADDITIONAL RESULTS", "text": "Figure 4 shows the training curves for all Mujoco environments with a 95% confidence interval. Besides the projections, we also show the performance of PAPI and PPO. In Figure 5 the projections additionally leverage the entropy control based on the results from Akrour et al. (2019)." }, { "heading": "D HYPERPARAMETERS", "text": "Tables 2 and 3 show the hyperparameters used for the experiments in Table 1. Target entropy, temperature, and entropy equality are only required when the entropy projection is included in the layer; otherwise those values are ignored." } ]
2021
DIFFERENTIABLE TRUST REGION LAYERS FOR DEEP REINFORCEMENT LEARNING
SP:e4eac7e23932f7b1c1ac0c281cbeb076a4525a86
[ "This paper proposes to combine model-based and multi-agent reinforcement learning. The authors follow the typical recurrent neural world models setting to generate imagined rollouts for decision-time planning. To tackle the non-stationarity of a multi-agent environment, they build end-to-end differentiable communication channels between agents within a pre-defined neighborhood. The communication message is defined as abstract information encoded from the imagined rollout. Agents then make decisions based on the message they received and the output of recurrent neural world models. Empirical studies are performed to show the superiority of proposed methods over SOTA model-free MARL approaches. Results are shown in two simple environments, which are designed to require communication between agents to solve the task.", "The paper talks about developing a model-based method for cooperative multi-agent reinforcement learning. The proposed approach utilizes communication as a tool for mitigating the partial observability induced by the non-stationary task while also helping agents reason about other agents' behaviors. The authors present their motivation for using language as a medium in model-based RL stemming from early literature in psychology and linguistics." ]
The human imagination is an integral component of our intelligence. Furthermore, the core utility of our imagination is deeply coupled with communication. Language, argued to have been developed through complex interaction within growing collective societies, serves as an instruction to the imagination, giving us the ability to share abstract mental representations and perform joint spatiotemporal planning. In this paper, we explore communication through imagination with multi-agent reinforcement learning. Specifically, we develop a model-based approach where agents jointly plan through recurrent communication of their respective predictions of the future. Each agent has access to a learned world model capable of producing model rollouts of future states and predicted rewards, conditioned on the actions sampled from the agent’s policy. These rollouts are then encoded into messages and used to learn a communication protocol during training via differentiable message passing. We highlight the benefits of our model-based approach, compared to a set of strong baselines, by developing a set of specialised experiments using both novel and well-known multi-agent environments.
[]
[ { "authors": [ "A. Abraham" ], "title": "The Cambridge Handbook of the Imagination", "venue": null, "year": 2020 }, { "authors": [ "D. Shulman" ], "title": "More than Real", "venue": null, "year": 2012 }, { "authors": [ "D. Dor" ], "title": "The instruction of imagination: Language as a social communication technology", "venue": "Foundations of Human Interacti,", "year": 2015 }, { "authors": [ "W. Von" ], "title": "Humboldt, On Language: On the Diversity of Human Language Construction and Its Influence on the Mental Development of the Human Species", "venue": null, "year": 1999 }, { "authors": [ "J.W. Forrester" ], "title": "Counterintuitive behavior of social systems,", "venue": "Theory and decision,", "year": 1971 }, { "authors": [ "L. Chang", "D.Y. Tsao" ], "title": "The code for facial identity in the primate brain,", "venue": "Cell, vol. 169,", "year": 2017 }, { "authors": [ "D.N. Perkins" ], "title": "Reasoning as imagination,", "venue": "Interchange, vol. 16,", "year": 1985 }, { "authors": [ "A.G. Barto" ], "title": "Adaptive critics and the basal ganglia,” Models of information processing in the basal ganglia", "venue": null, "year": 1995 }, { "authors": [ "W. Schultz", "P. Dayan", "P.R. Montague" ], "title": "A neural substrate of prediction and reward,", "venue": "Science, vol. 275,", "year": 1997 }, { "authors": [ "R.S. Sutton", "A.G. Barto" ], "title": "Introduction to reinforcement learning", "venue": "Second edition. MIT press Cambridge,", "year": 2018 }, { "authors": [ "V. Mnih", "K. Kavukcuoglu", "D. Silver", "A.A. Rusu", "J. Veness", "M.G. Bellemare", "A. Graves", "M. Riedmiller", "A.K. Fidjeland", "G. Ostrovski" ], "title": "Human-level control through deep reinforcement learning,", "venue": "nature, vol. 518,", "year": 2015 }, { "authors": [ "D. Silver", "J. Schrittwieser", "K. Simonyan", "I. Antonoglou", "A. Huang", "A. Guez", "T. Hubert", "L. Baker", "M. Lai", "A. Bolton" ], "title": "Mastering the game of go without human knowledge,", "venue": "nature, vol. 550,", "year": 2017 }, { "authors": [ "J. Binas", "L. Luginbuehl", "Y. Bengio" ], "title": "Reinforcement learning for sustainable agriculture,", "venue": "ICML 2019 Workshop Climate Change: How Can AI Help,", "year": 2019 }, { "authors": [ "J. Jeong", "H. Kim" ], "title": "Deep reinforcement learning based renew-able energy error compensable forecasting,", "venue": null, "year": 2020 }, { "authors": [ "R.T.S. Manabe" ], "title": "Wetherald, “Thermal equilibrium of the atmosphere with a given distribution of relative humidity,", "venue": "Journal of the Atmospheric Sciences,", "year": 1967 }, { "authors": [ "J.D. Hays", "J. Imbrie", "N.J. Shackleton" ], "title": "Variations in the earth’s orbit: pacemaker of the ice", "venue": "ages,” science,", "year": 1976 }, { "authors": [ "J. Hansen", "M. Sato", "R. Ruedy" ], "title": "Perception of climate change,", "venue": "Proceedings of the National Academy of Sciences,", "year": 2012 }, { "authors": [ "D. Rolnick", "P.L. Donti", "L.H. Kaack", "K. Kochanski", "A. Lacoste", "K. Sankaran", "A.S. Ross", "N. MilojevicDupont", "A.N. Jaques" ], "title": "Waldman-Brown et al., “Tackling climate change with machine learning,", "venue": "arXiv preprint arXiv:1906.05433,", "year": 2019 }, { "authors": [ "D.A. OroojlooyJadid" ], "title": "Hajinezhad, “A review of cooperative multi-agent deep reinforcement learning,", "venue": "arXiv preprint arXiv:1908.03963,", "year": 2019 }, { "authors": [ "T.M. Moerland", "J. Broekens", "C.M. 
Jonker" ], "title": "Model-based reinforcement learning: A survey,", "venue": "arXiv preprint arXiv:2006.16712,", "year": 2020 }, { "authors": [ "J. Andreae" ], "title": "Learning machines: A unified view,", "venue": "Encyclopaedia of Linguistics, Information and Control, Pergamon Press, pp. 261–270,", "year": 1969 }, { "authors": [ "R. Williams" ], "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning.", "venue": "Machine Learning, pp", "year": 1992 }, { "authors": [ "P. Auer", "N. Cesa-Bianchi", "P. Fischer" ], "title": "Finite-time analysis of the multiarmed bandit problem,", "venue": "Machine learning,", "year": 2002 }, { "authors": [ "R.S. Sutton", "D.A. McAllester", "S.P. Singh", "Y. Mansour" ], "title": "Policy gradient methods for reinforcement learning with function approximation,", "venue": "Advances in neural information processing systems,", "year": 2000 }, { "authors": [ "M. Hessel", "J. Modayil", "H. Van Hasselt", "T. Schaul", "G. Ostrovski", "W. Dabney", "D. Horgan", "B. Piot", "M. Azar", "D. Silver" ], "title": "Rainbow: Combining improvements in deep reinforcement learning,", "venue": "arXiv preprint arXiv:1710.02298,", "year": 2017 }, { "authors": [ "T. Haarnoja", "A. Zhou", "P. Abbeel", "S. Levine" ], "title": "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor,", "venue": "arXiv preprint arXiv:1801.01290,", "year": 2018 }, { "authors": [ "K. Arulkumaran", "M.P. Deisenroth", "M. Brundage", "A.A. Bharath" ], "title": "A brief survey of deep reinforcement learning,", "venue": "arXiv preprint arXiv:1708.05866,", "year": 2017 }, { "authors": [ "Y. Li" ], "title": "Deep reinforcement learning: An overview,", "venue": "arXiv preprint arXiv:1701.07274,", "year": 2017 }, { "authors": [ "R.S. Sutton" ], "title": "Integrated architectures for learning, planning, and reacting based on approximating dynamic programming,", "venue": "Machine learning proceedings 1990. Elsevier,", "year": 1990 }, { "authors": [ "S. Gu", "T. Lillicrap", "I. Sutskever", "S. Levine" ], "title": "Continuous deep q-learning with model-based acceleration,", "venue": "in International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "V. Feinberg", "A. Wan", "I. Stoica", "M.I. Jordan", "J.E. Gonzalez", "S. Levine" ], "title": "Model-based value estimation for efficient model-free reinforcement learning,", "venue": "arXiv preprint arXiv:1803.00101,", "year": 2018 }, { "authors": [ "J. Buckman", "D. Hafner", "G. Tucker", "E. Brevdo", "H. Lee" ], "title": "Sample-efficient reinforcement learning with stochastic ensemble value expansion,", "venue": "Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "M. Janner", "J. Fu", "M. Zhang", "S. Levine" ], "title": "When to trust your model: Model-based policy optimization,", "venue": "Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "D. Hafner", "T. Lillicrap", "I. Fischer", "R. Villegas", "D. Ha", "H. Lee", "J. Davidson" ], "title": "Learning latent dynamics for planning from pixels,", "venue": "in International Conference on Machine Learning. PMLR,", "year": 2019 }, { "authors": [ "D. Hafner", "T. Lillicrap", "J. Ba", "M. Norouzi" ], "title": "Dream to control: Learning behaviors by latent imagination,", "venue": "in International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "A. Byravan", "J.T. Springenberg", "A. Abdolmaleki", "R. Hafner", "M. Neunert", "T. Lampe", "N. 
Siegel", "N. Heess", "M. Riedmiller" ], "title": "Imagined value gradients: Model-based policy optimization with tranferable latent dynamics models,", "venue": "in Conference on Robot Learning,", "year": 2020 }, { "authors": [ "R. Coulom" ], "title": "Efficient selectivity and backup operators in monte-carlo tree search,", "venue": "in International conference on computers and games. Springer,", "year": 2006 }, { "authors": [ "T. Anthony", "Z. Tian", "D. Barber" ], "title": "Thinking fast and slow with deep learning and tree search,", "venue": "Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "J. Schrittwieser", "I. Antonoglou", "T. Hubert", "K. Simonyan", "L. Sifre", "S. Schmitt", "A. Guez", "E. Lockhart", "D. Hassabis", "T. Graepel" ], "title": "Mastering atari, go, chess and shogi by planning with a learned model,", "venue": "arXiv preprint arXiv:1911.08265,", "year": 2019 }, { "authors": [ "E. Todorov", "W. Li" ], "title": "A generalized iterative lqg method for locally-optimal feedback control of constrained nonlinear stochastic systems,", "venue": "Proceedings of the 2005, American Control Conference,", "year": 2005 }, { "authors": [ "E. Theodorou", "J. Buchli", "S. Schaal" ], "title": "Learning policy improvements with path integrals,", "venue": "Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics,", "year": 2010 }, { "authors": [ "A. Nagabandi", "G. Kahn", "R.S. Fearing", "S. Levine" ], "title": "Neural network dynamics for model-based deep reinforcement learning with model-free fine-tuning,", "venue": "IEEE International Conference on Robotics and Automation (ICRA). IEEE,", "year": 2018 }, { "authors": [ "K. Chua", "R. Calandra", "R. McAllister", "S. Levine" ], "title": "Deep reinforcement learning in a handful of trials using probabilistic dynamics models,", "venue": "Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "M. Posa", "C. Cantu", "R. Tedrake" ], "title": "A direct method for trajectory optimization of rigid bodies through contact,", "venue": "The International Journal of Robotics Research,", "year": 2014 }, { "authors": [ "D. Ha", "J. Schmidhuber" ], "title": "Recurrent world models facilitate policy evolution,", "venue": "Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "J. Schmidhuber" ], "title": "On learning to think: Algorithmic information theory for novel combinations of reinforcement learning controllers and recurrent neural world models,", "venue": "arXiv preprint arXiv:1511.09249,", "year": 2015 }, { "authors": [ "D. Ha", "D. Eck" ], "title": "A neural representation of sketch drawings,", "venue": "arXiv preprint arXiv:1704.03477,", "year": 2017 }, { "authors": [ "M. Janner" ], "title": "Model-based reinforcement learning: Theory and practice,", "venue": "Berkeley Artificial Intelligence Research (BAIR) blog.,", "year": 2019 }, { "authors": [ "I. Mordatch", "J. Hamrick" ], "title": "Tutorial on model-based methods in reinforcement learning,", "venue": "International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "M.L. Littman" ], "title": "Markov games as a framework for multi-agent reinforcement learning,", "venue": "Machine learning proceedings 1994. Elsevier,", "year": 1994 }, { "authors": [ "M. Tan" ], "title": "Multi-agent reinforcement learning: Independent vs. 
cooperative agents,", "venue": "Proceedings of the tenth international conference on machine learning,", "year": 1993 }, { "authors": [ "A. Tampuu", "T. Matiisen", "D. Kodelja", "I. Kuzovkin", "K. Korjus", "J. Aru", "R. Vicente" ], "title": "Multiagent cooperation and competition with deep reinforcement learning,", "venue": "PloS one,", "year": 2017 }, { "authors": [ "C. Claus", "C. Boutilier" ], "title": "The dynamics of reinforcement learning in cooperative multiagent systems,", "venue": "AAAI/IAAI, vol", "year": 1998 }, { "authors": [ "F.A. Oliehoek", "M.T. Spaan", "N. Vlassis" ], "title": "Optimal and approximate q-value functions for decentralized pomdps,", "venue": "Journal of Artificial Intelligence Research,", "year": 2008 }, { "authors": [ "R. Lowe", "Y.I. Wu", "A. Tamar", "J. Harb", "O.P. Abbeel", "I. Mordatch" ], "title": "Multi-agent actor-critic for mixed cooperative-competitive environments,", "venue": "Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "T. Rashid", "M. Samvelyan", "C.S. De Witt", "G. Farquhar", "J. Foerster", "S. Whiteson" ], "title": "Qmix: Monotonic value function factorisation for deep multi-agent reinforcement learning,", "venue": "arXiv preprint arXiv:1803.11485,", "year": 2018 }, { "authors": [ "K. Son", "D. Kim", "W.J. Kang", "D.E. Hostallero", "Y. Yi" ], "title": "Qtran: Learning to factorize with transformation for cooperative multi-agent reinforcement learning,", "venue": null, "year": 1905 }, { "authors": [ "P. Sunehag", "G. Lever", "A. Gruslys", "W.M. Czarnecki", "V. Zambaldi", "M. Jaderberg", "M. Lanctot", "N. Sonnerat", "J.Z. Leibo", "K. Tuyls" ], "title": "Value-decomposition networks for cooperative multi-agent learning,", "venue": "arXiv preprint arXiv:1706.05296,", "year": 2017 }, { "authors": [ "M. Hausknecht", "P. Stone" ], "title": "Deep recurrent q-learning for partially observable mdps,", "venue": "arXiv preprint arXiv:1507.06527,", "year": 2015 }, { "authors": [ "J. Foerster", "I.A. Assael", "N. De Freitas", "S. Whiteson" ], "title": "Learning to communicate with deep multi-agent reinforcement learning,", "venue": "Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "S. Sukhbaatar", "R. Fergus" ], "title": "Learning multiagent communication with backpropagation,", "venue": "Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "A. Singh", "T. Jain" ], "title": "Sukhbaatar, “Learning when to communicate at scale in multiagent cooperative and competitive tasks,", "venue": "arXiv preprint arXiv:1812.09755,", "year": 2018 }, { "authors": [ "T. Chu", "S. Chinchali", "S. Katti" ], "title": "Multi-agent reinforcement learning for networked system control,", "venue": "arXiv preprint arXiv:2004.01339,", "year": 2020 }, { "authors": [ "A. Lazaridou", "A. Peysakhovich", "M. Baroni" ], "title": "Multi-agent cooperation and the emergence of (natural) language,", "venue": "arXiv preprint arXiv:1612.07182,", "year": 2016 }, { "authors": [ "I. Mordatch", "P. Abbeel" ], "title": "Emergence of grounded compositional language in multi-agent populations,", "venue": "arXiv preprint arXiv:1703.04908,", "year": 2017 }, { "authors": [ "I. Kajić", "E. Aygün", "D. Precup" ], "title": "Learning to cooperate: Emergent communication in multi-agent navigation,", "venue": "arXiv preprint arXiv:2004.01097,", "year": 2020 }, { "authors": [ "O. Krupnik", "I. Mordatch", "A. 
Tamar" ], "title": "Multi-agent reinforcement learning with multi-step generative models,", "venue": "in Conference on Robot Learning,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "“We use imagination in our ordinary perception of the world. This perception cannot be separated from interpretation.” (Warnock, 1976). The human brain, and the mind that emerges from its working, is currently our best example of a general purpose intelligent learning system. And our ability to imagine, is an integral part of it (Abraham, 2020). The imagination is furthermore intimately connected to other parts of our cognition such as our use of language (Shulman, 2012). In fact, Dor (2015) argues that:\n“The functional specificity of language lies in the very particular functional strategy it employs. It is dedicated to the systematic instruction of imagination: we use it to communicate directly with our interlocutors’ imaginations.”\nHowever, the origin of language resides not only in individual cognition, but in society (Von Humboldt, 1999), grounded in part through interpersonal experience (Bisk et al., 2020). The complexity of the world necessitates our use of individual mental models (Forrester, 1971), to store abstract representations of the information we perceive through the direct experiences of our senses (Chang and Tsao, 2017). As society expanded, the sharing of direct experiences within groups reached its limit. Growing societies could only continue to function through the invention of language, a unique and effective communication protocol where a sender’s coded message of abstract mental representations delivered through speech, could serve as a direct instruction to the receiver’s imagination (Dor, 2015). Therefore, the combination of language and imagination gave us the ability to solve complex tasks by performing abstract reasoning (Perkins, 1985) and joint spatiotemporal planning (Reuland, 2010).\nIn this work, we explore a plausible learning system architecture for the development of an artificial multi-agent communication protocol of the imagination. Based on the above discussion, the minimum set of required features of such a system include: (1) that it be constructed from multiple individual agents where, (2) each agent possesses an abstract model of the world that can serve as an imagination, (3) has access to a communication medium, or channel, and (4) jointly learns and interacts in a\ncollective society. Consequently, these features map most directly onto the learning framework of model-based deep multi-agent reinforcement learning.\nReinforcement learning (RL) has demonstrated close connections with neuroscientific models of learning (Barto, 1995; Schultz et al., 1997). However, beside this connection, RL has proven to be an extremely useful computational framework for building effective artificial learning systems (Sutton and Barto, 2018). This is true, not only in simulated environments and games (Mnih et al., 2015; Silver et al., 2017), but also in real-world applications (Gregurić et al., 2020). Futhermore, RL approaches are being considered for some of humanities most pressing problems, such as the need to build sustainable food supply (Binas et al., 2019) and energy forecasting systems (Jeong and Kim, 2020), brought about through global climate change (Manabe and Wetherald, 1967; Hays et al., 1976; Hansen et al., 2012; Rolnick et al., 2019).\nOur system. We develop our system specifically in the context of cooperative mutli-agent RL (OroojlooyJadid and Hajinezhad, 2019), where multiple agents jointly attempt to learn how to act in a partially observable environment by maximising a shared global reward. 
Our agents make use of model-based reinforcement learning (Langlois et al., 2019; Moerland et al., 2020). To learn an artificial language of the imagination, each individual agent in our system is given access to a recurrent world model capable of learning rich abstract representations of real and imagined future states. We combine this world model with an encoder function to encode world model rollouts as messages and use a recurrent differentiable message passing channel for communication. To show the benefits of our system, we develop a set of ablation tests and specialised experiments using both novel and well-known multi-agent environments, and compare the performance of our system to a set of strong model-free deep MARL baselines.
Our findings and contributions. We find that joint planning using learned communication through imagination can significantly improve MARL system performance when compared to a set of state-of-the-art baselines. We demonstrate this advantage of planning in a set of specialised environments specifically designed to test for the use of communication combined with imagined future prediction.
Our present work is not at scale and we only consider situations containing two agents. However, to the best of our knowledge, this is the first demonstration of a model-based deep MARL system that combines world models with differentiable communication for joint planning, able to solve tasks successfully where state-of-the-art model-free deep MARL methods fail. We see this work as a preliminary step towards building larger-scale joint planning systems using model-based deep multi-agent RL." }, { "heading": "2 BACKGROUND AND RELATED WORK", "text": "Reinforcement learning is concerned with optimal sequential decision making within a particular environment. In single agent RL, the problem is modeled as a Markov decision process (MDP) defined by the following tuple (S, A, r, p, ρ0, γ) (Andreae, 1969; Watkins, 1989). At time step t, in a state st, which is a member of the state space S, the agent can select an action at from a set of actions A. The environment state transition function p(s_{t+1} | s_t, a_t) provides a distribution over next states s_{t+1} and a reward function r(s_t, a_t, s_{t+1}) returns a scalar reward, given the current state, action and next state. The initial state distribution is given by ρ0, with s_0 ∼ ρ0, and γ ∈ (0, 1] is a discount factor controlling the influence of future reward. The goal of RL is to find an optimal policy π∗, where the policy is a mapping from states to a distribution over actions, that maximises long-term discounted future reward such that π∗ = argmax_π E[ Σ_{t=0}^{∞} γ^t r(s_t, a_t, s_{t+1}) ]. If the environment state is partially observed by the agent, an observation function o(s_t) is assumed and the agent has access only to the observation o_t = o(s_t) at each time step, with the full observation space defined as O = {o(s) | s ∈ S}. In this work, we focus only on the case of partial observability.
Deep RL. Popular algorithms for solving the RL problem include value-based methods such as Q-learning (Watkins and Dayan, 1992) and policy gradient methods such as the REINFORCE algorithm (Williams, 1992). Q-learning learns a value function Q(s, a) for state-action pairs and obtains a policy by selecting actions according to these learned values using a specific action selector, e.g. ε-greedy (Watkins, 1989) or UCB (Auer et al., 2002).
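As a small illustration of such an action selector, here is a minimal ε-greedy sketch (our own, purely illustrative):

import random

def epsilon_greedy(q_values, epsilon):
    # With probability epsilon explore a uniformly random action,
    # otherwise exploit the action with the highest estimated value Q(s, a).
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])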
In contrast, policy gradient methods learn a parameterised policy πθ, with parameters θ, directly by following a performance gradient signal with respect to θ. The above approaches are combined in actor-critic methods (Sutton et al., 2000), where the actor refers to the policy being learned and the critic to the value function. In deep RL, the policy and value functions use deep neural networks as high-capacity function approximators capable of learning distributed abstract representations from raw input signals that are useful for downstream decision making. Recent state-of-the-art deep RL methods include Deep Q-Networks (DQN) (Mnih et al., 2013) and related variants (Hessel et al., 2017), as well as advanced actor-critic methods such as PPO (Schulman et al., 2017) and SAC (Haarnoja et al., 2018). See (Arulkumaran et al., 2017; Li, 2017) for an in-depth review of deep RL.
Model-based RL. In RL, the environment transition function p is typically unknown. As a result, so-called model-free RL methods, such as DQN and PPO, rely solely on data gathered from the environment, i.e. real experience, to learn an optimal policy. However, if given access to a transition function, an agent can generate useful simulated, or imagined, experience and use it to plan. Therefore, in model-based RL, a model p̂_φ(o_{t+1} | o_t, a_t) with parameters φ is learned using stored transitions gathered from either a random, heuristic or learned policy, to simulate transitions from the true (unknown) transition function p. The model can then be used for model-based planning, which can happen either in the background or at decision time. We briefly highlight the differences between these two types of planning and discuss work related to each and how this relates to our own work.
– Background planning. In background planning, the model is primarily used to generate additional experience and assist learning, i.e. for updating the parameters of the policy and/or value functions. An early version of this approach is DYNA-Q (Sutton, 1990), which uses the additional experience to help learn a value function. However, the usefulness of a model degrades over long time horizons as model rollout error starts to compound (Gu et al., 2016). This has led to different approaches that either use fixed-depth rollouts based on model uncertainty (Feinberg et al., 2018), dynamic rollout schedules (Buckman et al., 2018), or short rollouts starting from intermediate states sampled from a buffer (Janner et al., 2019). A promising alternative approach is to update gradients directly via imagined rollouts in a lower-dimensional latent space (Hafner et al., 2019; 2020; Byravan et al., 2020).
– Decision-time planning. In decision-time planning, the model is used to generate imagined rollouts from a given state for the purpose of selecting the optimal action or sequence of actions. Decision-time planning methods for discrete action spaces often rely on search methods such as Monte Carlo tree search (MCTS) (Coulom, 2006) and have been used successfully in several works (Silver et al., 2017; Anthony et al., 2017; Schrittwieser et al., 2019).
In continuous action spaces, methods include trajectory optimisation approaches using trajectory sampling (Todorov and Li, 2005; Theodorou et al., 2010; Nagabandi et al., 2018; Chua et al., 2018) or collocation (Posa et al., 2014) (optimising reward while forcing the model’s predictions to be close to already visited states).
The model in our system is utilised for decision-time planning and follows the approach of Ha and Schmidhuber (2018), who used recurrent neural world models as a way to give agents the ability to learn how to think (Schmidhuber, 2015). Specifically, we make use of a recurrent world model that takes the form of a mixture density network LSTM (MDN-LSTM), as used in (Ha and Eck, 2017). The model is therefore a form of recurrent Gaussian mixture model and allows us to sample probabilistic predictions of imagined next states.
An illustration of the core features of model-based RL and the different types of planning is given in Figure 1. Also see (Janner, 2019) and (Mordatch and Hamrick, 2020) for useful overviews.
Multi-agent RL (MARL). In the multi-agent case with N agents, we use the formalism of partially observable Markov games (Littman, 1994), defined as the tuple given above for the single-agent case, but with observation and action spaces given by the following Cartesian products: O = ∏_{i=1}^{N} O_i ⊆ S and A = ∏_{i=1}^{N} A_i, for agents i = 1, ..., N. The goal in this setting is to find an optimal joint policy π∗(a_1, ..., a_N | o_1, ..., o_N) that maximises a shared long-term discounted future reward for all agents: π∗ = argmax_π E[ Σ_{i=1}^{N} Σ_{t=0}^{∞} γ^t r(o_i^t, a_i^t, o_i^{t+1}) ].
Early work in MARL simply trained multiple independent Q-learning algorithms (Tan, 1993), which has since been extended to include deep neural networks, or more specifically, independent DQNs (Tampuu et al., 2017). However, from the perspective of an individual agent, these approaches treat all other learning agents as part of the environment, causing the optimal policy distribution to become non-stationary. Furthermore, if the environment is only partially observable, the learning task can become even more difficult, where agents may struggle with credit assignment due to spurious rewards received from unobserved actions of other agents (Claus and Boutilier, 1998).
To mitigate the issue of non-stationarity, MARL systems are often designed within the paradigm of centralised training with decentralised execution (CTDE) (Oliehoek et al., 2008; Lowe et al., 2017; Foerster et al., 2017). In CTDE, a centralised value function, or critic, is used during training, which conditions on the global state and joint actions from all the agents to make the learning problem stationary, but is later removed once the individual agents’ policies have been learned, making it possible to use each policy independently during system execution. However, individual agent policies extracted in this way may still perform poorly, because training is not specifically aligned with the goal of performing well under decentralised execution. Therefore, state-of-the-art value-based MARL approaches such as Q-mix (Rashid et al., 2018) and QTRAN (Son et al., 2019) make use of value function decomposition strategies (Sunehag et al., 2017) to more closely resemble decentralised training, where each agent is a recurrent DQN (Hausknecht and Stone, 2015) that has memory to also deal with partial observability.
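To give a flavour of such value function decomposition, the simplest variant, additive VDN-style mixing (Sunehag et al., 2017), can be sketched as follows in PyTorch (our own illustration; Q-mix and QTRAN learn richer, state-dependent mixings):

import torch

def vdn_q_tot(per_agent_q, actions):
    # per_agent_q: list of N tensors of shape [batch, n_actions], one per agent.
    # actions: long tensor of shape [batch, N] holding each agent's chosen action.
    chosen = [q.gather(1, actions[:, i:i + 1]) for i, q in enumerate(per_agent_q)]
    # Additive decomposition: Q_tot(s, a_1, ..., a_N) = sum_i Q_i(o_i, a_i).
    return torch.stack(chosen, dim=0).sum(dim=0)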
Another clear way to help with the issue of partial observability is for agents to be able to communicate.
Learned multi-agent communication has been a key innovation in helping MARL systems to scale to more complex environments and solve more challenging tasks (Foerster et al., 2016; Sukhbaatar et al., 2016; Singh et al., 2018; Chu et al., 2020). To facilitate communication in our work, we formally extend the Markov game M by having agents connected to each other via communication channels according to a pre-defined neighbourhood graph G(V, E). The graph G is defined by a set of nodes (vertices) V along with a set of edge connections E = {(i, j) | i, j ∈ V, i ≠ j}, where each agent is a node in the graph, locally connected to other agent nodes. We define the connected neighbourhood surrounding agent i as N_i = {j ∈ V | (i, j) ∈ E}. This networked Markov game M_G is then defined by the tuple (G, S, A, r, p, ρ0, γ). Our communication channels are recurrent and end-to-end differentiable, allowing agent-to-agent communication protocols to be learned during training. Unlike work studying the emergence of language through communication in MARL, e.g. (Lazaridou et al., 2016; Mordatch and Abbeel, 2017; Kajić et al., 2020), our work is focused more on communication through imagination as a useful system design for task solving, as opposed to uncovering new insights into emergent phenomena related to the human imagination.
Model-based MARL. To the best of our knowledge, the literature on model-based deep MARL is quite sparse and very little work has been done in this area. A notable exception is the recent work by Krupnik et al. (2020) on multi-agent model-based latent space trajectory optimisation. Here a multi-step generative model, specifically a temporal segment model (Mishra et al., 2017), is used to generate rollouts in a disentangled latent space, and optimisation is performed directly over agent latent variables. Our work is the first we are aware of in the area of model-based deep MARL that combines communication with decision-time planning using recurrent neural world models." }, { "heading": "3 METHOD", "text": "In this section, we provide the full details of our approach to model-based deep MARL and outline our system architecture, which we refer to as MACI: Multi-Agent Communication through Imagination. We explain the details of the system by way of a walk-through from the perspective of a single agent i, from time step t to t + 1. At time step t, the agent receives the observation o_i^t and initiates an imagined rollout of possible future observations and predicted rewards.
Rollout. For k = 1, ..., K rollout steps, the agent produces an action:
a_i^k = AgentController_MLP(o_i^k, h_i^{k−1}, m_i^{I,c−1}),   (1)
using a neural network (MLP) controller (we provide specifics, e.g. layer sizes and number of layers, in the appendix), where o_i^k is an imagined observation, h_i^{k−1} is the world model hidden state, and m_i^{I,c−1} is an aggregated incoming message to agent i from connected agents in agent i’s neighbourhood, i.e. agents j ∈ N_i. In turn, an imagined rollout is produced by the world model given an observation o_i^k and an action a_i^k:
o_i^{k+1}, r_i^{k+1}, h_i^k = WorldModel_MDN-LSTM(o_i^k, h_i^{k−1}, a_i^k),   (2)
where the world model output includes the imagined next observation o_i^{k+1}, reward r_i^{k+1} and an updated hidden state h_i^k. To initialise the rollout procedure, we set o^{k=1} = o^t, h^{(k−1)=0} = h^{t−1} and m_i^{I,(c−1)=0} = m_i^{I,t−1}. A minimal sketch of a single rollout step is given below.
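The following sketch of one imagined rollout step (Equations 1 and 2) is our own illustration; controller and world_model are hypothetical callables, with the world model assumed to expose a Gaussian-mixture head over next observations:

import torch
from torch.distributions import Categorical, Normal

def rollout_step(controller, world_model, obs, hidden, message):
    # Equation 1: the controller conditions on the (imagined) observation,
    # the world-model hidden state, and the last incoming message.
    logits = controller(torch.cat([obs, hidden, message], dim=-1))
    action = Categorical(logits=logits).sample()
    # Equation 2: the MDN-LSTM predicts mixture parameters for the next
    # observation, a reward estimate, and an updated hidden state.
    (mix_logits, means, log_stds), reward, hidden = world_model(obs, action, hidden)
    # Sample the imagined next observation from the Gaussian mixture.
    comp = Categorical(logits=mix_logits).sample()
    next_obs = Normal(means[comp], log_stds[comp].exp()).sample()
    return next_obs, reward, hidden, logits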
The final output of the rollout after K steps is the agent’s imagination of the future, summarised as follows: I_i^c = CONCAT([o_i^1, ..., o_i^K], [r_i^1, ..., r_i^K], [l_i^1, ..., l_i^K]), where l_i^k are the logits associated with a_i^k (kept to maintain differentiability) and we concatenate along a depth dimension to maintain a sequence of length K. Once the agent’s rollout is complete, the agent starts to communicate its imagination to its neighbours.
Communication. For c = 1, ..., C communication rounds the agent encodes its imagination into a summarised abstract representation to serve as an outgoing message:
m_i^{O,c} = ENCODER_1D-CNN(I_i^c),   (3)
and sends this message to its connected neighbours. To encode the imagined sequence, we use a 1D convolutional neural network (CNN). In return, the agent receives all outgoing messages m_j^{O,c} from its neighbours j ∈ N_i, which are then turned into an incoming message using a simple aggregation function (e.g. concatenation, average or summation):
m_i^{I,c} = AGGREGATOR({m_j^{O,c} | j ∈ N_i}).   (4)
Note that for the next round of communication to begin, another rollout inner loop of K steps must first take place to produce an updated imagination I_i^{c+1}. After C communication rounds the agent takes a real action a_i^t = AgentController_MLP(o_i^t, h_i^t, m_i^{I,C}), conditioned on the final message m_i^{I,C}, and receives from the environment the next real observation o^{t+1} and reward r^{t+1}. Finally, for our agents, we employ weight sharing and make use of a single world model shared between all agents. An illustration of the MACI system with two agents is provided in Figure 2.
Training. The full procedure is summarised in Blocks 1-3 below.

Block 1: MACI – Methods
AgentController: f; WorldModel: g; Encoder: z; Aggregator: h
Function playEpisode(f, g, z, e=None):
  o_1^0, ..., o_|V|^0 ∼ p_env(o_i, e | i ∈ V)
  for t = 1, ..., T environment steps do
    # Plan
    for c = 1, ..., C communication steps do
      # Imagine
      for agent i ∈ V do
        for k = 1, ..., K rollout steps do
          act: a_i^k = f(o_i^k, h_i^{k−1}, m_i^{I,c−1})
          imagine: o_i^{k+1}, r_i^{k+1}, h_i^k = g(o_i^k, h_i^{k−1}, a_i^k)
        end
        consolidate: I_i^c = CONCAT([o_i^1, ..., o_i^K], [r_i^1, ..., r_i^K], [l_i^1, ..., l_i^K])
        encode outgoing message: m_i^{O,c} = z(I_i^c)
      end
      # Communicate
      for agent i ∈ V do
        aggregate incoming messages: m_i^{I,c} = h({m_j^{O,c} | j ∈ N_i})
      end
    end
    # Step
    act: a_i^t = f(o_i^t, h_i^t, m_i^{I,C})
    observe: o_i^{t+1} ∼ p_env(o_i, e | a_i^t)
  end
  return o^1 ... o^T, a^1 ... a^T, r^1 ... r^T
End Function
# Loss functions
L_k^g(θ) = (o^t − g_{o,θ}(h^{t−1}))² + c (r^t − g_{r,θ}(h^{t−1}))²
L_k^{f,z}(θ) = PPO_loss(θ, A, …)

Block 2: MACI-Training
Initialize f, g and z
for e = 1, ..., E training steps do
  Initialize experience buffer ex
  for n = 1, ..., N episode steps do
    ex += playEpisode(f, g, z)
  end
  # Update world model parameters
  for e in ex do
    playEpisode(f, g, z, e)
    θ_{k+1}^g = argmin_θ (1/(BT)) Σ_{τ∈B} Σ_{t=0}^{T} L_k^g(θ^g)
  end
  # Update policy and encoder parameters
  for e in ex do
    playEpisode(f, g, z, e)
    θ_{k+1}^{f,z} = argmax_θ (1/(BT)) Σ_{τ∈B} Σ_{t=0}^{T} L_k^{f,z}(θ^{f,z})
  end
end

Block 3: Python code outline

def play_episode():
    obs = env.reset()
    # {id: (world_state, comm_state, message), ...}
    states = zeros()
    done = False
    for time_step in range(1, T + 1):
        if not done:
            actions, states = next_actions(obs, states)
            obs, reward, done = env.step(actions)

def next_actions(obs, states):
    q_values, states = action_values(obs, states)
    actions = {id: argmax(q) for id, q in q_values.items()}
    # keep world model in sync with env
    states = update_world_models(obs, actions, states)
    return actions, states

def action_values(obs, states):
    for step in range(1, comm_rounds + 1):
        states = update_comm_nets(obs, aggregate_messages(obs, states))
    return {
        id: agents[id].controller_net(obs[id], states[id])
        for id in agents
    }, states

def aggregate_messages(obs, states):
    message_from = {
        id: agents[id].encode_plan(obs[id], states[id])
        for id in agents
    }
    message_to = {
        id: mean(message_from[other] for other in adj)
        for id, adj in comm_graph.items()
    }
    return {
        id: states[id].update_message(message_to[id])
        for id in states
    }

# method of an Agent
def encode_plan(self, current_ob, state):
    obs, action_values, rewards = [], [], []
    for step in range(1, rollout_steps + 1):
        obs.append(current_ob)
        action_values.append(self.controller_net(obs[-1], state))
        current_ob, reward, hidden = self.world_model(
            obs[-1], argmax(action_values[-1]), state.world_state,
        )
        state = state.update_world(hidden)
        rewards.append(reward)
    return self.encoder(obs, action_values, rewards)" }, { "heading": "4 EXPERIMENTS", "text": "We test the feasibility of our system on a set of specialised environments. Each environment is conceptually simple; however, both still prove to be a formidable challenge, even for state-of-the-art MARL approaches, due to their extreme partial observability and the need for agents to be able to think ahead and communicate to solve the task. Our first environment, Digit game, is inspired by a similar challenge in (Foerster et al., 2016), and our second environment, invisible spread, is based on the popular multi-agent particle environment (Mordatch and Abbeel, 2017; Lowe et al., 2017). Similar to previous work on model-based MARL (Krupnik et al., 2020), we only consider the case of two agents. In all our experiments, we compare our system against basic baselines, namely independent PPO learners (IPPO) (a surprisingly strong baseline) and independent Q-learners (IQL), as well as the state-of-the-art systems Q-mix and QTRAN. Furthermore, in each experiment we use short two-step rollouts to guard against compounding model error.
Digit game. At each time step, each agent receives a random one-hot encoded digit (0-9), where the environment transition dynamics obeys the following rule: o^{t+1} = (o^t + o^{t−1}) mod 10, with o^0 = 0. The challenge is for each agent to produce as action the digit the other agent is likely to receive at the next time step. If the agent action is correct, the agent gets a reward of 1; if the action is incorrect, it gets 0. The digit game therefore specifically tests whether agents can predict future observations based on present and past experience (i.e. use their imagination), as well as learn an effective communication protocol (as the other agent’s observation can only be determined through communication). Figure 3 (A) shows an example of the digit game. In this example, if agent 1 selects as action the digit 8 at time step t = 2, i.e. a_1^2 = 8, it will receive a reward of 1. A minimal sketch of these environment dynamics is given below.
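For concreteness, the transition dynamics and reward of the digit game can be sketched as follows (our own illustration; digits are kept as integers with one-hot encoding omitted for brevity, and the initial digits are assumed random with the deterministic recurrence applying thereafter):

import random

class DigitGame:
    """Two agents; each is rewarded for predicting the digit the *other*
    agent will observe next, where o_{t+1} = (o_t + o_{t-1}) mod 10."""
    def __init__(self):
        self.prev = [0, 0]  # o_0 = 0 for both agents
        self.obs = [random.randrange(10) for _ in range(2)]

    def step(self, actions):
        next_obs = [(o + p) % 10 for o, p in zip(self.obs, self.prev)]
        # Agent i gets reward 1 if it predicted agent (1 - i)'s next digit.
        rewards = [1 if actions[i] == next_obs[1 - i] else 0 for i in (0, 1)]
        self.prev, self.obs = self.obs, next_obs
        return next_obs, rewards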
– Results. We provide our experimental results on the digit game environment in Figure 4 (A). Learning curves are mean rewards averaged over 5 runs and include shaded standard deviation bounds. Our system, MACI, is shown to have a clear advantage, significantly outperforming all the other baselines. Because we make use of a world model to perform rollouts that cost compute time, we also show (in the inset plot) the wall-clock time for each system over real environment steps. Although MACI scales less well than the baselines in this specific instance, we note that in more complex scenarios, real environment steps may prove more expensive than imagined rollouts, and the sample efficiency of decision-time planning could outweigh the extra model compute time.
Invisible spread. The goal in this environment is for each agent to occupy a separate landmark starting from a random location. Agents receive a shared reward based on how close both agents are to their respective landmarks. The observation space consists of values for the agent's position and velocity as well as the positions of the landmarks. To make the task more difficult, we make each agent invisible to the other. Therefore, agents must coordinate through communication and use their world models to show their intent (in terms of the landmark they are currently planning to occupy). Figure 3 (B) shows an example of the invisible spread environment, where the agents are represented as large purple circles and the landmarks are shown as small black circles.
– Results. We provide our results on the invisible spread environment in Figure 4 (B). Learning curves are again mean rewards averaged over 5 runs and include shaded standard deviation bounds. MACI is shown to again have a clear advantage, outperforming all the other baselines. Interestingly, in this environment, MACI scales well in terms of compute time and is shown to perform close to the most efficient baseline, IPPO.
[Figure 4: Experimental results. (A) Digit game. (B) Invisible spread. Learning curves (mean reward vs. environment steps) for IPPO, IQL, QMIX, QTRAN and MACI (ours), with inset plots of wall-clock time.]
[Figure 5: Ablation study. (A) Digit game (full comms). (B) Invisible spread (full comms). Mean reward vs. environment steps.]
Ablation study. To disentangle the roles of communication and imagination, we perform an ablation study on both environments, digit game and invisible spread. In each case, we train the MACI system with and without using a world model. We perform 5 independent runs for each case, showing the average curves with standard deviation bounds. The results of this study are shown in Figure 5. For the digit game, shown in panel (A), the world model is a crucial component determining final performance. This is to be expected given the strong requirement for future prediction. In the invisible spread environment, the benefit is less significant and the system seems to rely more heavily on communication of past and present information." }, { "heading": "5 CONCLUSION", "text": "Our ability to imagine plays an integral role in our intelligence. Inspired by the idea of language as the systematic instruction of imagination, we developed a system for multi-agent communication through imagination (MACI) in the context of model-based deep multi-agent RL. 
In our system, each agent has access to a recurrent world model for decision-time planning and uses a differentiable message passing channel for learned communication.
In our experiments on two specialised environments, digit game and invisible spread, we showed that using learned communication through imagination can significantly improve MARL system performance when compared to state-of-the-art baselines. Although our environments are conceptually simple, both environments still proved to be a formidable challenge, even for state-of-the-art methods. Furthermore, the sample efficiency of decision-time planning was shown to outweigh the extra model compute time in the invisible spread environment.
Our work demonstrates the feasibility of a model-based deep MARL system that combines world models with differentiable communication for joint planning. Specifically, it highlights the potential benefits of decision-time planning in a multi-agent setting as a means to improve agent cooperation.
An interesting future line of work could explore the combination of background and decision-time planning in the multi-agent setting. In addition, many interesting innovations from the single-agent model-based RL literature could potentially find fruitful application in MARL. However, scaling MARL to larger numbers of agents remains a difficult task, and even more so for model-based methods. We see this work as a first step towards building larger-scale joint planning systems, which we plan to undertake in future work." }, { "heading": "6 APPENDIX", "text": "Block 4: MACI – Settings
Environment:
- obs_dim, D_obs
- action_dim, D_act
Hyperparameters:
- message_dim, D_m = 16
- agent_controller_hidden_dim, D_a = 16
- agent_controller_hidden_layers, H_a = 1
- world_model_hidden_dim, D_wm = 16
- world_model_hidden_layers, H_wm = 1
- rollout steps, K
- communication steps, C
Architectures:
- Agent controller, f(·): Type: feedforward MLP; Input dim: D_m + D_obs + D_wm; Hidden dim: D_a; Hidden layers: H_a; Output dim: D_act
- World model, g(·): Type: LSTM; Input dim: D_obs + D_act; Hidden dim: D_wm; Hidden layers: H_wm; Output dim: D_obs + 1 (reward)
- Encoder, z(·): Type: 1D convnet; Input width: D_obs + D_act + 1 (reward); Input length: K; Output dim: D_m
- Aggregator, h(·): Type: concatenation
- Neighbour graph: Type: fully connected" } ]
2020
null
SP:78f30ff42b38782a096376e39364151da28d1812
[ "This work presents FOSAE++, an end-to-end system capable of producing \"lifted\" action models provided only bounding box annotations of image pairs before and after an unknown action is executed. Building on recent work in the space, the primary contribution of this work is to generate PDDL action rules. To accomplish this, the authors introduce novel 'params' function that use the Gumbel-Softmax function to implement a differentiable mechanism for selecting which entities are relevant to the current action and feeds them into the new 'bind' and 'unbind' functions that select those elements in the tensor predicting their relevance. Overall, this work is a meaningful contribution in the direction of generated lifted action models without direct labeled data.", "This paper addresses the problem of learning dynamics model directly from raw sensory inputs. The authors propose an unsupervised end-to-end model that can perform high-level tasks planning on raw observations. This work extends Asai et al. 2020, 2019 etc, and with improved symbol generation and lifted PDDL. The authors follow the experimental setup as seen in prior work, where three artificial environments (blocksworld, MNIST 8-puzzle, and sokoban) are used for planning. " ]
We propose FOSAE++, an unsupervised end-to-end neural system that generates a compact discrete state transition model (dynamics / action model) from raw visual observations. Our representation can be exported to Planning Domain Description Language (PDDL), allowing symbolic state-of-the-art classical planners to perform high-level task planning on raw observations. FOSAE++ expresses states and actions in First-Order Logic (FOL), a superset of so-called object-centric representation. It is the first unsupervised neural system that fully supports FOL in PDDL action modeling, while existing systems are limited to continuous, propositional, or property-based representations, and/or require manually labeled input.
[]
[ { "authors": [ "Diego Aineto", "Sergio Jiménez", "Eva Onaindia" ], "title": "Learning STRIPS Action Models with Classical Planning", "venue": "In Proc. of the International Conference on Automated Planning and Scheduling(ICAPS),", "year": 2018 }, { "authors": [ "Vidal Alcázar", "Daniel Borrajo", "Susana Fernández", "Raquel Fuentetaja" ], "title": "Revisiting Regression in Planning", "venue": "In Proc. of International Joint Conference on Artificial Intelligence (IJCAI),", "year": 2013 }, { "authors": [ "Garrett Andersen", "George Konidaris" ], "title": "Active Exploration for Learning Symbolic Representations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Masataro Asai" ], "title": "Unsupervised Grounding of Plannable First-Order Logic Representation from Images", "venue": "In Proc. of the International Conference on Automated Planning and Scheduling(ICAPS),", "year": 2019 }, { "authors": [ "Masataro Asai", "Alex Fukunaga" ], "title": "Classical Planning in Deep Latent Space: Bridging the Subsymbolic-Symbolic Boundary", "venue": "In Proc. of AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Masataro Asai", "Hiroshi Kajino" ], "title": "Towards Stable Symbol Grounding with Zero-Suppressed State AutoEncoder", "venue": "In Proc. of the International Conference on Automated Planning and Scheduling(ICAPS),", "year": 2019 }, { "authors": [ "Masataro Asai", "Christian Muise" ], "title": "Learning Neural-Symbolic Descriptive Planning Models via Cube-Space Priors: The Voyage Home (to STRIPS)", "venue": "In Proc. of International Joint Conference on Artificial Intelligence (IJCAI),", "year": 2020 }, { "authors": [ "Masataro Asai", "Zilu Tang" ], "title": "Discrete Word Embedding for Logical Natural Language Understanding", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2020 }, { "authors": [ "Peter W Battaglia", "Jessica B Hamrick", "Victor Bapst", "Alvaro Sanchez-Gonzalez", "Vinicius Zambaldi", "Mateusz Malinowski", "Andrea Tacchetti", "David Raposo", "Adam Santoro", "Ryan Faulkner" ], "title": "Relational inductive biases, deep learning, and graph networks", "venue": "arXiv preprint arXiv:1806.01261,", "year": 2018 }, { "authors": [ "Stephen Cresswell", "Peter Gregory" ], "title": "Generalised Domain Model Acquisition from Action Traces", "venue": "In Proc. of the International Conference on Automated Planning and Scheduling(ICAPS),", "year": 2011 }, { "authors": [ "Stephen Cresswell", "Thomas Leo McCluskey", "Margaret Mary West" ], "title": "Acquiring planning domain models using LOCM", "venue": "Knowledge Eng. Review,", "year": 2013 }, { "authors": [ "Joseph Culberson" ], "title": "Sokoban is PSPACE-complete", "venue": "In Proceedings in Informatics 4, International Conference on Fun with Algorithms,", "year": 1998 }, { "authors": [ "J Cullen", "A Bryman" ], "title": "The Knowledge Acquisition Bottleneck: Time for Reassessment", "venue": "Expert Systems,", "year": 1988 }, { "authors": [ "Honghua Dong", "Jiayuan Mao", "Tian Lin", "Chong Wang", "Lihong Li", "Denny Zhou" ], "title": "Neural Logic Machines", "venue": "In Proc. of the International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Martin Engelcke", "Adam R Kosiorek", "Oiwi Parker Jones", "Ingmar Posner" ], "title": "GENESIS: Generative Scene Inference and Sampling with Object-Centric Latent Representations", "venue": "In Proc. 
of the International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Kutluhan Erol", "James Hendler", "Dana S Nau" ], "title": "HTN Planning: Complexity and Expressivity", "venue": "In Proc. of AAAI Conference on Artificial Intelligence,", "year": 1994 }, { "authors": [ "SM Ali Eslami", "Nicolas Heess", "Theophane Weber", "Yuval Tassa", "David Szepesvari", "Geoffrey E Hinton" ], "title": "Attend, Infer, Repeat: Fast Scene Understanding with Generative Models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Richard E Fikes", "Peter E. Hart", "Nils J. Nilsson" ], "title": "Learning and Executing Generalized Robot Plans", "venue": "Artificial Intelligence,", "year": 1972 }, { "authors": [ "Maria Fox", "Derek Long" ], "title": "Modelling mixed discrete-continuous domains for planning", "venue": "Journal of Artificial Intelligence Research,", "year": 2006 }, { "authors": [ "Kunihiko Fukushima" ], "title": "Neocognitron: A Self-Organizing Neural Network Model for a Mechanism of Pattern Recognition Unaffected by Shift in Position", "venue": "Biological Cybernetics,", "year": 1980 }, { "authors": [ "Martin Gebser", "Benjamin Kaufmann", "Roland Kaminski", "Max Ostrowski", "Torsten Schaub", "Marius Schneider" ], "title": "Potassco: The Potsdam answer set solving collection", "venue": "AI Communications,", "year": 2011 }, { "authors": [ "Klaus Greff", "Raphaël Lopez Kaufman", "Rishabh Kabra", "Nick Watters", "Christopher Burgess", "Daniel Zoran", "Loic Matthey", "Matthew Botvinick", "Alexander Lerchner" ], "title": "Multi-Object Representation Learning with Iterative Variational Inference", "venue": "In Proc. of the International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Stevan Harnad" ], "title": "The symbol grounding problem", "venue": "Physica D: Nonlinear Phenomena,", "year": 1990 }, { "authors": [ "Patrik Haslum", "Nir Lipovetzky", "Daniele Magazzeni", "Christian Muise" ], "title": "An Introduction to the Planning Domain Definition Language, volume 13", "venue": null, "year": 2019 }, { "authors": [ "Malte Helmert" ], "title": "The Fast Downward Planning System", "venue": "J. Artif. Intell. Res.(JAIR),", "year": 2006 }, { "authors": [ "Malte Helmert", "Carmel Domshlak" ], "title": "Landmarks, Critical Paths and Abstractions: What’s the Difference Anyway", "venue": "In Proc. of the International Conference on Automated Planning and Scheduling(ICAPS),", "year": 2009 }, { "authors": [ "De-An Huang", "Danfei Xu", "Yuke Zhu", "Animesh Garg", "Silvio Savarese", "Li Fei-Fei", "Juan Carlos Niebles" ], "title": "Continuous Relaxation of Symbolic Planner for One-Shot Imitation Learning", "venue": "In Proc. of IEEE International Workshop on Intelligent Robots and Systems (IROS),", "year": 2019 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift", "venue": "In Proc. of the International Conference on Machine Learning, pp", "year": 2015 }, { "authors": [ "Steven James", "Benjamin Rosman", "George Konidaris" ], "title": "Learning Portable Representations for High-Level Planning", "venue": "In Proc. 
of the International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Steven James", "Benjamin Rosman", "George Konidaris" ], "title": "Learning Object-Centric Representations for High-Level Planning in Minecraft", "venue": "In Proceedings of Workshop on Object-Oriented Learning at ICML.,", "year": 2020 }, { "authors": [ "Eric Jang", "Shixiang Gu", "Ben Poole" ], "title": "Categorical Reparameterization with Gumbel-Softmax", "venue": "In Proc. of the International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Justin Johnson", "Bharath Hariharan", "Laurens van der Maaten", "Li Fei-Fei", "C Lawrence Zitnick", "Ross Girshick" ], "title": "CLEVR: A Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning", "venue": "In Proc. of IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-Encoding Variational Bayes", "venue": "In Proc. of the International Conference on Learning Representations,", "year": 2013 }, { "authors": [ "George Konidaris", "Leslie Pack Kaelbling", "Tomás Lozano-Pérez" ], "title": "Constructing Symbolic Representations for High-Level Planning", "venue": "In Proc. of AAAI Conference on Artificial Intelligence, pp. 1932–1938,", "year": 2014 }, { "authors": [ "George Konidaris", "Leslie Pack Kaelbling", "Tomás Lozano-Pérez" ], "title": "Symbol Acquisition for Probabilistic High-Level Planning", "venue": "In Proc. of International Joint Conference on Artificial Intelligence (IJCAI),", "year": 2015 }, { "authors": [ "George Konidaris", "Leslie Pack Kaelbling", "Tomás Lozano-Pérez" ], "title": "From Skills to Symbols: Learning Symbolic Representations for Abstract High-Level Planning", "venue": "J. Artif. Intell. Res.(JAIR),", "year": 2018 }, { "authors": [ "Thanard Kurutach", "Aviv Tamar", "Ge Yang", "Stuart Russell", "Pieter Abbeel" ], "title": "Learning Plannable Representations with Causal InfoGAN", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-Based Learning Applied to Document Recognition", "venue": "Proc. of the IEEE,", "year": 1998 }, { "authors": [ "Chris J. Maddison", "Andriy Mnih", "Yee Whye Teh" ], "title": "The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables", "venue": "In Proc. of the International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Kira Mourão", "Luke S. Zettlemoyer", "Ronald P.A. Petrick", "Mark Steedman" ], "title": "Learning STRIPS Operators from Noisy and Incomplete Observations", "venue": "In Proc. of the International Conference on Uncertainty in Artificial Intelligence,", "year": 2012 }, { "authors": [ "Stephen Muggleton" ], "title": "Inductive Logic Programming", "venue": "New generation computing,", "year": 1991 }, { "authors": [ "Vinod Nair", "Geoffrey E Hinton" ], "title": "Rectified Linear Units Improve Restricted Boltzmann Machines", "venue": "In Proc. of the International Conference on Machine Learning,", "year": 2010 }, { "authors": [ "Dana S Nau", "Tsz Chiu Au", "Okhtay Ilghami", "Ugur Kuter", "J William Murdock", "Dan Wu", "Fusun Yaman" ], "title": "SHOP2: An HTN Planning System", "venue": "J. Artif. Intell. Res.(JAIR),", "year": 2003 }, { "authors": [ "Negin Nejati", "Pat Langley", "Tolga Konik" ], "title": "Learning Hierarchical Task Networks by Observation", "venue": "In Proc. 
of the International Conference on Machine Learning,", "year": 2006 }, { "authors": [ "Allen Newell", "Herbert A. Simon" ], "title": "Computer Science as Empirical Inquiry: Symbols and Search", "venue": "Commun. ACM,", "year": 1976 }, { "authors": [ "Charles Payan" ], "title": "On the Chromatic Number of Cube-Like Graphs", "venue": "Discrete mathematics,", "year": 1992 }, { "authors": [ "Edwin PD Pednault" ], "title": "Formulating Multiagent, Dynamic-World Problems in the Classical Planning Framework", "venue": "In Reasoning about actions & plans,", "year": 1987 }, { "authors": [ "Joseph Redmon", "Ali Farhadi" ], "title": "Yolov3: An incremental improvement", "venue": "arXiv preprint arXiv:1804.02767,", "year": 2018 }, { "authors": [ "Raymond Reiter" ], "title": "On Closed World Data Bases", "venue": "In Readings in Artificial Intelligence,", "year": 1981 }, { "authors": [ "Adam Santoro", "David Raposo", "David G Barrett", "Mateusz Malinowski", "Razvan Pascanu", "Peter Battaglia", "Tim Lillicrap" ], "title": "A Simple Neural Network Module for Relational Reasoning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Tom Silver", "Rohan Chitnis" ], "title": "PDDLGym: Gym Environments from PDDL Problems, 2020", "venue": null, "year": 2020 }, { "authors": [ "Luc Steels" ], "title": "The Symbol Grounding Problem has been Solved. So What’s Next", "venue": null, "year": 2008 }, { "authors": [ "Mariarosaria Taddeo", "Luciano Floridi" ], "title": "Solving the Symbol Grounding Problem: A Critical Review of Fifteen Years of Research", "venue": "Journal of Experimental & Theoretical Artificial Intelligence,", "year": 2005 }, { "authors": [ "Emre Ugur", "Justus Piater" ], "title": "Bottom-up Learning of Object Categories, Action Effects and Logical Rules: From Continuous Manipulative Exploration to Symbolic Planning", "venue": "In Proc. of IEEE International Conference on Robotics and Automaton (ICRA),", "year": 2015 }, { "authors": [ "Arash Vahdat", "Evgeny Andriyash", "William Macready" ], "title": "DVAE#: Discrete variational autoencoders with relaxed Boltzmann priors", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Arash Vahdat", "William G Macready", "Zhengbing Bian", "Amir Khoshaman", "Evgeny" ], "title": "Andriyash. DVAE++: Discrete variational autoencoders with overlapping transformations", "venue": null, "year": 2018 }, { "authors": [ "Aaron van den Oord", "Oriol Vinyals" ], "title": "Neural Discrete Representation Learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Vadim G Vizing" ], "title": "The Chromatic Class of a Multigraph", "venue": "Cybernetics and Systems Analysis,", "year": 1965 }, { "authors": [ "Yisen Wang", "Xingjun Ma", "Zaiyi Chen", "Yuan Luo", "Jinfeng Yi", "James Bailey" ], "title": "Symmetric Cross Entropy for Robust Learning with Noisy Labels", "venue": "In Proc. of IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Kai Xu", "Akash Srivastava", "Charles Sutton" ], "title": "Variational Russian Roulette for Deep Bayesian Nonparametrics", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Qiang Yang", "Kangheng Wu", "Yunfei Jiang" ], "title": "Learning Action Models from Plan Examples using Weighted MAX-SAT", "venue": "Artificial Intelligence,", "year": 2007 }, { "authors": [ "Håkan LS Younes", "Michael L Littman" ], "title": "PPDDL1. 
0: An Extension to PDDL for Expressing Planning Domains with Probabilistic Effects", "venue": "Techn. Rep. CMU-CS-04-162,", "year": 2004 }, { "authors": [ "Vinicius Zambaldi", "David Raposo", "Adam Santoro", "Victor Bapst", "Yujia Li", "Igor Babuschkin", "Karl Tuyls", "David Reichert", "Timothy Lillicrap", "Edward Lockhart" ], "title": "Deep Reinforcement Learning with Relational Inductive Biases", "venue": "In Proc. of the International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Hankz Hankui Zhuo", "Subbarao Kambhampati" ], "title": "Action-Model Acquisition from Noisy Plan Traces", "venue": "In Proc. of International Joint Conference on Artificial Intelligence (IJCAI),", "year": 2013 }, { "authors": [ "Hankz Hankui Zhuo", "Jing Peng", "Subbarao Kambhampati" ], "title": "Learning Action Models from Disordered and Noisy Plan Traces", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Learning a high-level symbolic transition model of an environment from raw input (e.g., images) is a major challenge in the integration of connectionism and symbolism. Doing so without manually defined symbols is particularly difficult as it requires solving both the Symbol Grounding (Harnad, 1990; Taddeo & Floridi, 2005; Steels, 2008) and the Action Model Learning/Acquisition problem.\nRecently, seminal work by Asai & Fukunaga (2018, Latplan) that learns discrete planning models from images has opened the door to applying symbolic Classical Planning systems to a wide variety of raw, noisy data. Latplan uses discrete variational autoencoders to generate propositional latent states and its dynamics (action model) directly from images. Unlike existing work, which requires several machine learning pipelines (SVM/decision trees) and labeled inputs (e.g., a sequence of high-level options) (Konidaris et al., 2014), Latplan is an end-to-end unsupervised neural network that requires no manually labeled inputs. Numerous extensions and enhancements have been proposed: Causal InfoGAN (Kurutach et al., 2018) instead uses GAN framework to obtain propositional representations. Latplan’s representation was shown to be compatible with symbolic Goal Recognition (Amado et al., 2018). First-Order State AutoEncoder (Asai, 2019, FOSAE) extends Latplan to generate predicate symbols. Cube-Space AutoEncoder (Asai & Muise, 2020, CSAE) regularized the latent space to a particular form which directly exports to a learned propositional PDDL model (Fikes et al., 1972). Discrete Sequential Application of Words (DSAW) learns a plannable propositional word embedding from a natural language corpus (Asai & Tang, 2020).\nIn this paper, we obtain a lifted action model expressed in First-Order Logic (FOL), which is a superset of object-centric (property-based) representation that Machine Learning community recently began to pay attention to1, but has long been the central focus of the broader AI community. In propositional action models, the environment representation is a fixed-sized binary array and does not transfer to a different or a dynamically changing environment with a varying number of objects. In contrast, lifted FOL representations are generalized over objects and environments, as we demonstrate in Blocksworld with different number of blocks, or Sokoban with different map sizes. We propose Lifted First-Order Space AutoEncoder (FOSAE++) neuro-symbolic architecture, which learns a lifted PDDL action model by integrating and extending the FOSAE, the CSAE and the Neural Logic Machine (Dong et al., 2019, NLM) architectures.\nThe overall task of our system is illustrated in Fig. 1. The system takes a transition dataset containing a set of pairs of raw observations which are single time step away. Each observation consists of multiple visual segmentations of the objects. It learns a lifted action model of the environment\n1e.g., ICML workshop on Object-Oriented Learning https://oolworkshop.github.io/\nby generating the symbols and emits a PDDL (Haslum et al., 2019) encoding for state-of-the-art planning systems.\nContribution Table 1 contains a taxonomy of existing model acquisition systems in chronological order. FOSAE++ is the first system that satisfies all features readily available in symbolic action model acquisition systems, while not relying on human-derived symbols. 
FOSAE++ generates unnamed symbols by itself, effectively addressing the long-standing Knowledge Acquisition bottleneck (Cullen & Bryman, 1988) and the Symbol Grounding problem, and showing a future direction for high-level symbolic autonomy.
1 (Yang et al., 2007; Cresswell et al., 2013; Aineto et al., 2018; Zhuo et al., 2019; Cresswell & Gregory, 2011; Mourão et al., 2012; Zhuo & Kambhampati, 2013)
2 Konidaris et al. (2014) requires sequences of high-level options to learn from, such as [move, move, interact, ...] in the Playroom domain. Causal InfoGAN cannot deterministically enumerate all successors (a requirement for search completeness) due to the lack of action symbols and has to sample the successors. James et al. (2020b)'s PDDL output is limited to unary predicates / properties of objects, and thus cannot model the interactions between objects. Also, it requires sequences of high-level options such as [WalkToItem, AttachBlock, WalkToItem, ...] in the Minecraft domain." }, { "heading": "2 PRELIMINARIES AND BACKGROUND", "text": "We denote a multi-dimensional array (tensor) in bold and its elements with a subscript (e.g., x ∈ R^{N×M}, x_2 ∈ R^M), an integer range n ≤ i ≤ m by n..m, a concatenation of tensors a and b in the last axis by a; b, and the i-th data point of a dataset by a superscript i, which we may omit for clarity. We use the same symbol for a set and its size (e.g., S, and not |S|) to avoid clutter. Finally, B = [0, 1] ⊂ R. We assume background knowledge of discrete VAEs with continuous relaxations (included in the appendix Sec. A.1), such as Gumbel-Softmax (GS) and Binary-Concrete (BC) (Jang et al., 2017; Maddison et al., 2017). Their activations are denoted as GS and BC, respectively." }, { "heading": "2.1 LIFTED STRIPS/PDDL PLANNING", "text": "Planning Domain Description Language (PDDL) is a modeling language for the Lifted STRIPS planning formalism (Fikes et al., 1972) and its extensions (Haslum et al., 2019). Let F(T) be a formula consisting of logical operations {∧, ¬} and a set of terms T. For example, when T = {have(I, food), full(I)}, then have(I, food) ∧ ¬full(I) ∈ F(T). We denote a lifted STRIPS planning problem as a 5-tuple 〈O, P, A, I, G〉. O is a set of objects (∋ food), P is a set of predicates (∋ full(x)), and A is a set of lifted actions (∋ eat). Each predicate p ∈ P has an arity #p ≥ 0. Predicates are instantiated/grounded into propositions P(O) = ∪_{p∈P} ({p} × O × ... × O), with #p copies of O, such as have(I, food). A state s ⊆ P(O) represents truth assignments to the propositions, e.g., s = {have(I, food)} represents have(I, food) = ⊤. We can also represent it as a bitvector of size Σ_p O^{#p}.
Each lifted action a(X) ∈ A has an arity #a and parameters X = (x_1, ..., x_{#a}), such as eat(x_1, x_2). Lifted actions are instantiated into ground actions A(O) = ∪_{a∈A} ({a} × O × ... × O), with #a copies of O, such as eat(I, food). a(X) is a 3-tuple 〈PRE(a), ADD(a), DEL(a)〉, where PRE(a), ADD(a), DEL(a) ∈ F(P(X)) are preconditions, add-effects, and delete-effects: e.g., eat(x_1, x_2) = 〈{have(x_1, x_2)}, {full(x_1)}, {have(x_1, x_2)}〉. The semantics of these three elements are as follows: a ground action a† ∈ A(O) is applicable when a state s satisfies PRE(a†), i.e., PRE(a†) ⊆ s, and applying an action a† to s yields a new successor state a†(s) = (s \ DEL(a†)) ∪ ADD(a†); e.g., eat(I, food) = "I can eat a food when I have one, and if I eat one I am full but the food is gone." Finally, I, G ⊆ P(O) are the initial state and a goal condition, respectively. 
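To make these semantics concrete, here is a minimal Python sketch of ground STRIPS action application (our illustration; the tuple-based encoding and all names are assumptions, not from the paper):

# A state is a set of ground propositions; a ground action is (PRE, ADD, DEL).
eat = ({("have", "I", "food")},      # PRE(a)
       {("full", "I")},              # ADD(a)
       {("have", "I", "food")})      # DEL(a)

def applicable(state, action):
    pre, _, _ = action
    return pre <= state              # PRE(a) ⊆ s

def apply_action(state, action):
    pre, add, delete = action
    assert applicable(state, action)
    return (state - delete) | add    # (s \ DEL(a)) ∪ ADD(a)

print(apply_action({("have", "I", "food")}, eat))   # {('full', 'I')}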
The task of classical planning is to find a plan (a†_1, ..., a†_n) which satisfies a†_n ∘ ... ∘ a†_1(I) ⊆ G." }, { "heading": "2.2 NEURAL PROPOSITIONAL/ACTION SYMBOL GENERATION WITH LATPLAN", "text": "Latplan is a framework for domain-independent image-based classical planning (Asai & Fukunaga, 2018). It learns a propositional state representation and transition rules entirely from image-based observations of the environment with discrete VAEs and solves the problem using a classical planner. Latplan is trained on a transition input Tr: a set of pairs of raw data randomly sampled from the environment. The i-th transition (o^{i,0}, o^{i,1}) ∈ Tr is a pair of observations made before and after an unknown high-level action is performed. Once trained, Latplan can process a planning input (o^I, o^G), a pair of raw images corresponding to an initial and goal state of the environment. The output of Latplan is a data sequence representing the plan execution (o^I, ..., o^G) that reaches o^G from o^I. While the original paper used an image-based implementation, conceptually any form of temporal data is viable for this methodology, e.g., an NLP corpus (Asai & Tang, 2020).
The latest Latplan (Asai & Muise, 2020) has a training phase and a planning phase. In the training phase, it trains an end-to-end neural network called Cube-Space AutoEncoder (CSAE) on Tr (Fig. 2, top left). CSAE is a variational autoencoder modeled by binary and categorical random variables, each representing the propositional states and the actions in classical planning. The dynamics modeled by these actions directly compile into a PDDL model. The network combines a Binary-Concrete VAE to produce a binary state representation, and a Gumbel-Softmax VAE to produce a categorical bottleneck layer which assigns a categorical label to each input. Let o^0 and o^1 be a pair of observed states in a transition, z^0 and z^1 be the corresponding binary latent states, and a be the one-hot vector that represents a discrete action label assigned to the transition. CSAE is a variational autoencoder network that can be formalized as follows:
(encoder) z^0, z^1 = ENCODE(o^0), ENCODE(o^1) ∈ B^F
(action assignment/clustering) a = ACTION(z^0, z^1) ∈ B^A
(learning the dynamics) z̃^1 = APPLY(z^0, a) ∈ B^F
(decoder) õ^0, õ^1, ô^1 = DECODE(z^0), DECODE(z^1), DECODE(z̃^1)
where ENCODE, DECODE, ACTION, APPLY are arbitrary multilayer perceptrons. The outputs of ENCODE and APPLY are activated by Binary-Concrete, and the output of ACTION is activated by Gumbel-Softmax of A categories (a hyperparameter). Assuming a certain set of prior distributions, a lower bound (ELBO) of the log likelihood of observing a pair of states (o^0, o^1) can be derived as follows (Appendix Sec. A.3.3):
log p(o^0, o^1) ≥ − D_KL(q(a | o^1, z^0, z^1) || p(a | z^0, z^1)) − D_KL(q(z̃^1 | o^1, a, z^0, z^1) || q(z^1 | o^1)) − D_KL(q(z^0 | o^0) || p(z^0)) − D_KL(q(z^1 | o^1) || p(z^1)) + log p(o^0 | z^0) + log p(o^1 | z̃^1, a, z^0, z^1)
After the training, it generates a propositional classical planning problem (z^I, z^G) from a planning input (o^I, o^G) and exports it into PDDL files together with the learned action model, which are then solved by Fast Downward (Helmert, 2006), an optimized C++-based solver independent from the neural network. 
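As a shape-level illustration of the CSAE formalization above, the following toy PyTorch sketch wires up the four networks (our sketch under assumed toy dimensions; single linear layers stand in for the MLPs, and this is not the authors' implementation):

import torch
import torch.nn as nn
import torch.nn.functional as F

D, F_, A, tau = 8, 6, 4, 1.0          # assumed: observation size, propositions, actions
ENCODE, DECODE = nn.Linear(D, F_), nn.Linear(F_, D)
ACTION, APPLY = nn.Linear(2 * F_, A), nn.Linear(F_ + A, F_)

def binary_concrete(logits):          # relaxed Bernoulli sampling
    u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
    return torch.sigmoid((logits + torch.log(u) - torch.log(1 - u)) / tau)

o0, o1 = torch.randn(1, D), torch.randn(1, D)
z0, z1 = binary_concrete(ENCODE(o0)), binary_concrete(ENCODE(o1))
a = F.gumbel_softmax(ACTION(torch.cat([z0, z1], -1)), tau=tau)   # action one-hot
z1_pred = binary_concrete(APPLY(torch.cat([z0, a], -1)))
recons = DECODE(z0), DECODE(z1), DECODE(z1_pred)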
Finally, it obtains a step-by-step, human-comprehensible visualization of the plan execution by decoding the intermediate states of the plan into images.
The A-dimensional one-hot categorical variable a^i in the network performs a clustering on the state transitions, with the maximum number of clusters A specified as a hyperparameter. The cluster ID is used as its action symbol. The clustering is performed by its encoder, ACTION(z^{i,0}, z^{i,1}) = a^i, which takes a propositional state pair (z^{i,0}, z^{i,1}) and returns a one-hot vector a^i of A categories using Gumbel-Softmax. The AAE's decoder APPLY takes the current state z^{i,0} and the action a^i and reconstructs a successor state z̃^{i,1} ≈ z^{i,1}, acting as a progression function APPLY(a^i, z^{i,0}) = z̃^{i,1}. APPLY is typically just called a "model" in the model-based RL literature.
While APPLY can be any network from the training standpoint, such a neural black-box function does not directly translate to a STRIPS action model, preventing efficient search with a state-of-the-art classical planner. Cube-Space AE (Asai & Muise, 2020) addresses this issue by the Back-To-Logit technique (BTL, Fig. 2, bottom-left), which modifies APPLY. Latent state transitions learned by BTL guarantee that the actions and the transitions satisfy the STRIPS state transition rule s′ = (s \ DEL(a)) ∪ ADD(a), thus enabling a direct translation from neural network weights to the PDDL modeling language. Details of the network, the translation method and the proof can be found in the appendix Sec. A.3." }, { "heading": "2.3 PREDICATE SYMBOL GENERATION WITH SIMPLE FIRST-ORDER STATE AUTOENCODER", "text": "First-Order State AutoEncoder (Asai, 2019, FOSAE, Fig. 2, bottom) is an autoencoder that takes a set of object feature vectors, identifies its latent predicate representation, then outputs a reconstruction of the input. Unlike prior work on relational modeling (Santoro et al., 2017; Zambaldi et al., 2019; Battaglia et al., 2018), this system obtains discrete logical predicates compatible with symbolic systems. The input is similar to the setting of Ugur & Piater (2015) and James et al. (2020b): each feature vector can represent each object in the environment in an arbitrarily complex manner. In this paper, we use segmented pixels combined with bounding box information (x, y and width, height). Let o_n ∈ R^F be an F-dimensional feature vector representing each object, and o = (o_1, ..., o_O) ∈ R^{O×F} be the input matrix representing the set of O objects. FOSAE generates a multi-arity latent representation. Assume we represent the input with a combination of predicates of different arities. We denote a set of predicates of arity n as P/n (Prolog notation) and its propositions as P/n(O). We denote the binary tensor representation of P/n(O) as z/n ∈ B^{O×...×O×P/n} (with n copies of O), and the latent space is a tuple z = (z/1, ..., z/N), where N is the largest arity. The total size of its latent space is Σ_{n=1}^{N} O^n P/n. To convert the input o into unary predicates P/1, we apply a 1D pointwise convolutional filter f_1 with P/1 output features over the objects and activate it by Binary-Concrete for discretization, i.e., z/1_i = BC(f_1(o_i)) ∈ B^{P/1}. Similarly, for binary relationships P/2, we apply a filter f_2 over concatenated pairs of objects, i.e., z/2_{ij} = BC(f_2(o_i; o_j)) ∈ B^{P/2}. We can extend this framework to an arbitrary arity n.
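The extraction step fits in a few lines of PyTorch (our illustration with assumed toy sizes; a pointwise linear layer over the last axis plays the role of the 1D pointwise convolutional filter):

import torch
import torch.nn as nn

O, F_, P1, P2, tau = 5, 7, 4, 3, 1.0
f1, f2 = nn.Linear(F_, P1), nn.Linear(2 * F_, P2)

def binary_concrete(logits):
    u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
    return torch.sigmoid((logits + torch.log(u) - torch.log(1 - u)) / tau)

o = torch.randn(O, F_)                                    # object feature matrix
z1 = binary_concrete(f1(o))                               # z/1: O x P/1
pairs = torch.cat([o.unsqueeze(1).expand(O, O, F_),       # o_i broadcast over j
                   o.unsqueeze(0).expand(O, O, F_)], -1)  # o_j broadcast over i
z2 = binary_concrete(f2(pairs))                           # z/2: O x O x P/2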
There is a potential exponential blowup due to ON parameter combinations. This problem is addressed by attentions in the original work (Asai, 2019), but we omit this feature in this paper for simplicity and due to relatively small N (< 4). We call our variant as SimpleFOSAE." }, { "heading": "2.4 NEURAL LOGIC MACHINE (NLM)", "text": "The NLM (Dong et al., 2019) is a neural Inductive Logic Programming (ILP) (Muggleton, 1991) system whose inputs and outputs are hand-crafted binary tensors representing propositional groundings of FOL statements. These binary tensors are in the multi-arity representation (z/1, . . . ,z/N) identical to the latent space of SimpleFOSAE. NLM provides a key feature that is necessary for learning a lifted First-Order Logic action model: Invariance to the number and the order of objects in the multi-arity representation. The latter implies permutation equivariance, i.e., when the first n axes of each input z/n ∈ BO× n...×O×P/n are reordered by a permutation π, the output axes are also reordered by π. A more detailed explanation of NLM can be found in the appendix Sec. A.2.\nAn NLM contains N COMPOSE operations which are applied to the corresponding elements of (z/1, · · · , z/N). Each COMPOSE is denoted as COMPOSEQ,σ(z, n) ∈ BO\nn...O×Q, where Q is a constant that specifies the number of output features, and σ is a nonlinearlity. An NLM output is denoted as NLMQ,σ(z) = (COMPOSEQ,σ(z, 1), · · · , COMPOSEQ,σ(z, N))." }, { "heading": "3 LIFTING THE ACTION MODEL", "text": "In order to obtain a lifted action model in an unsupervised setting, there are three requirements: (1) white-box action model which is trivially convertible to STRIPS formalism, (2) invariance to the number/order of objects, (3) unsupervised generation of multi-arity predicate symbols. To our knowledge, no existing systems fulfill all requirements: (Simple)FOSAE lacks 1 and 2, CSAE lacks 2 and 3, and NLM lacks 1 and 3 (designed for hand-crafted symbolic data).\n(FOSAE encoder) zi,0, zi,1 = ENCODE(oi,0), ENCODE(oi,1)\n(action parameters selector in NLM) xi = PARAMS(zi,0, zi,1)\n(extract the parameter-bound subspace) zi,0† , z i,1 † = BIND(z i,0,xi), BIND(zi,1,xi)\n(unsupervised action assignment) ai = ACTION(zi,0† , z i,1 † )\n(learning the bounded dynamics) z∼i,1† = APPLY(z i,0 † ,a i)\n(reflection to the global dynamics) z∼i,1 = zi,0 − UNBIND(zi,0† ,x i) + UNBIND(z∼i,1† ,x i)\n(decoder in NLM) o∼i,0,o∼i,1,o∼ i,1 = DECODE(zi,0), DECODE(zi,1), DECODE(z∼i,1)\nLoss: `(oi,0,o∼i,0) + `(oi,1,o∼i,1) +`(oi,1,o∼i,1)+`(zi,1, z∼i,1) + `(zi,1† , z ∼i,1 † ) + regularization.\nWe now propose FOSAE++ whose overview is shown above, which addresses all issues at once. Its overall design follows the CSAE, which ENCODE s the input, identifies the action ai, APPLY-es the action, then DECODE s the results. It uses FOSAE’s encoder to generate multi-arity latent representation, which is then consumed by NLMs used as a basic building block of other components. This architecture’s key element is a new component PARAMS and a unique pair of operations called BIND and UNBIND, which intuitively reflects the structure of lifted action models. Suppose we model a lifted action (move ?block ?from ?to) in Blocksworld (Fig. 1), with effects such as (on ?block ?to). Since a lifted model always refers to the objects through its parameters such as ?to, it cannot affect\nthe objects that does not appear in the parameter list, e.g., action (move a b c) cannot affect (on d c) because d 6∈ {a, b, c}. 
We represent this restriction as differentiable matrix operations. To implement this idea, we first add a new attention network x = PARAMS(z^{i,0}; z^{i,1}) = (x_1, ..., x_{#a}) which attends to the objects, essentially learning the parameters of the action. We assume all actions have the same number of parameters, and thus #a is a hyperparameter. Each parameter x_i is a one-hot hard attention vector in B^O (Σ_{j=1}^{O} x_{ij} = 1) activated by Gumbel-Softmax. This can later extract a specific object from the object feature matrix o by an inner product x_i · o (B^O × R^{O×F} → R^F). PARAMS has several NLM layers, ending with NLM_{#a,IDENTITY}. We then extract the unary part of the results (→ R^{O×#a}), transpose it (→ R^{#a×O}), then apply a #a-way Gumbel-Softmax of O categories (→ B^{#a×O}). As a result, the output attends to #a objects in total. We utilize the attentions in unique operations named BIND and UNBIND. Recall that the predicates in the effects can refer only to the specified action parameters. Therefore, we limit the dynamics to the objects attended by x (Fig. 3). We iteratively extract the sub-axes of z/n attended by x using matrix operations (with an abuse of notation) z_†/n = (x)^n z/n ((B^{#a×O})^n × B^{O×...×O×P/n} → B^{#a×...×#a×P/n}). We call it BIND(z/n, x), as it binds the parameters X to the values in objects O, similar to a function call in a typical programming language. It is also similar to applying numpy.take in all dimensions. Binding z/n for all arities results in BIND(z, x) = (BIND(z/1, x), ..., BIND(z/N, x)). We also define UNBIND(z_†/n, x) = (x^⊤)^n z_†/n ∈ B^{O×...×O×P/n}, which restores the original shape, but fills the cells with zeros for the objects not attended by x.
Example 1. For a moment, let's ignore that we use a boolean tensor, and let z/2 = [[1, 2, 3], [4, 5, 6], [7, 8, 9]] with O = {1, 2, 3}, which represents a two-arg function f(2, 1) = z/2_{2,1} = 4, etc. When we bind z/2 to (2, 1) by x = [[.01, .98, .01], [.98, .01, .01]], we obtain z_†/2 = x (z/2) x^⊤ ≈ [[5., 4.], [2.1, 1.1]] ≈ [[5, 4], [2, 1]], approximating the subspace extraction. To unbind it, x^⊤ (z_†/2) x ≈ [[1.1, 2.1, .03], [3.9, 4.9, .09], [.05, .07, .01]] ≈ [[1, 2, 0], [4, 5, 0], [0, 0, 0]].
After the subspace extraction with BIND, we use APPLY and ACTION to find the dynamics / action model inside the subspace. Notice that ACTION and APPLY no longer directly refer to each object o_i by the index i, because BIND already resolves the index reference. They take flattened binary vectors of size Σ_n (#a)^n P/n as the input / output.
In order to reflect the changes applied to the bound representation back to the whole representation, we use UNBIND: z̃^{i,1} = z^{i,0} − UNBIND(z^{i,0}_†, x^i) + UNBIND(z̃^{i,1}_†, x^i). UNBIND is unique in that the unattended axes have near-0 values (0 at the limit of Gumbel-Softmax annealing in PARAMS). Therefore, the propositions not bound by x retain their values from z^{i,0}.
Finally, we replace SimpleFOSAE's MLP decoder with NLM layers. To match the output shape with the input o ∈ R^{O×F}, the final layer has F features and we use only the unary part of the result tuple, NLM_{F,σ}(·)/1 = COMPOSE(·, 1, F, σ). The total loss is the sum of ℓ(o^{i,0}, õ^{i,0}), ℓ(o^{i,1}, õ^{i,1}), ℓ(o^{i,1}, ô^{i,1}), ℓ(z^{i,1}, z̃^{i,1}), and ℓ(z^{i,1}_†, z̃^{i,1}_†), plus the variational losses for the discrete VAEs. Hyperparameters, the choice of loss functions and the tuning process are detailed in the appendix Sec. C.
The number of parameters in FOSAE++ is not affected by the number of objects, since essentially each NLM layer performs a 1D pointwise convolution over tuples of objects.
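Example 1 can be reproduced directly in NumPy (our illustration; the values follow the example, with the two rows of x softly attending to objects 2 and 1):

import numpy as np

z2 = np.array([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]])   # z/2 over O = 3 objects
x = np.array([[.01, .98, .01],                              # parameter 1 -> object 2
              [.98, .01, .01]])                             # parameter 2 -> object 1

bound = x @ z2 @ x.T        # BIND(z/2, x)  ~ [[5.0, 4.0], [2.1, 1.1]]
unbound = x.T @ bound @ x   # UNBIND(., x): near-zero outside the attended objects
print(bound.round(1))
print(unbound.round(1))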
}, { "heading": "4 EXPERIMENTS", "text": "We prepared 3 artificial environments: Photo-realistic Blocksworld (Asai, 2018) renders an RGB Blocksworld image with Blender 3D engine and extracts the bounding boxes and the image patches of each object. MNIST 8-Puzzle (Asai & Fukunaga, 2018) is a 42x42 pixel, monochrome imagebased version of the 8-Puzzle. Tiles contain hand-written digits (0-9) from the MNIST database (LeCun et al., 1998), and valid moves swap the “0” tile with a neighboring tile, i.e., the “0” serves as the “blank” tile in the classic 8-Puzzle. Sokoban (Culberson, 1998) is a PSPACE-hard puzzle domain whose visualization was obtained from the PDDLGym library (Silver & Chitnis, 2020). In all datasets, segmentation is already provided by a domain-specific code, but in principle it can be replaced by the output of object-detectors such as YOLO (Redmon & Farhadi, 2018). The detail of the data preparation is available in the appendix Sec. D. Fig. 4 shows some comparisons between the input oi,0 and the reconstruction o∼i,0. In Sokoban, FOSAE++ generates up to 63 objects in a single scene." }, { "heading": "4.1 INTERPOLATION/EXTRAPOLATION TO VARYING, UNSEEN NUMBER OF OBJECTS", "text": "The key characteristics of the FOSAE++ is that once trained, its network structurally generalizes to the different number of objects. We demonstrate this performance in interpolation/extrapolation tasks, where in the former we randomly drop a certain number of objects from the input, and in the latter, we use an environment with a different distribution or more objects. In both tasks, FOSAE++ shows an excellent reconstruction for a varying, untrained number of objects (Fig. 5)." }, { "heading": "4.2 PLANNING EXPERIMENTS", "text": "Finally, we ran Fast Downward Helmert (2006) planning system with LMcut Helmert & Domshlak (2009) heuristics and A∗ search configuration on the PDDL domains generated by FOSAE++. For the planning to succeed, FOSAE++ must be accurate not only about the direct reconstruction\no∼i,0,o∼i,1 but also about the dynamics that predicts the successor state z∼i,1 and its reconstruction o∼ i,1.\nDue to the time constraint, we only performed the experiments for 8-Puzzle as of now. From the fixed goal state (the solved state of the puzzle), we performed a domain-specific Dijkstra search and sampled the initial states optimally L-steps away. Each initial/goal states are rendered, cropped into object-based input, then encoded by FOSAE++ to produce the symbolic initial/goal states. The planning results are visualized and manually inspected/validated by us. The symbolic classical planner correctly produced a solution which can be visualized into correct state transitions. See Sec. E.1 for the visualization of all 20 instances." }, { "heading": "5 RELATED WORK", "text": "James et al. (2020a;b) build on existing work (Konidaris et al., 2014; 2015; 2018) to find an egocentric logical representation that is invariant to the state of the observer in a complex environment, or an object-centric, property-based logical representation. Both representations are special cases of First-Order Logic representation where all predicates are unary. These approaches also require a training input that already contains action symbols (e.g., toggle-door). Andersen & Konidaris (2017) obtains an MDP-based PPDDL (Probabilistic PDDL) model using Active Learning. Incorporating Active Learning in FOSAE++ for self data collection is future work.\nHuang et al. 
Huang et al. (2019) reported that a direct discretization approach performed worse than an approach that plans in the probabilistic representation obtained by neural networks. However, the question of continuous versus discrete is not conclusive: they used a naive rounding-based discretization, which may cause a significant amount of information loss compared to state-of-the-art discrete variational autoencoders. Furthermore, they rely on mappings from raw inputs to hand-coded symbolic representations that require supervised training.
The object-based input we used can be obtained from state-of-the-art object-recognition methods such as YOLO (Redmon & Farhadi, 2018). More recent systems (Greff et al., 2019; Engelcke et al., 2020) can handle shapes not limited to rectangles, and can be trained unsupervised. Connecting our network with these systems is an exciting avenue of future work.
Eslami et al. (2016) and Xu et al. (2019) proposed an attention-based and a Bayesian approach to the problem of variable-sized object reconstruction. Their work does not address the state dynamics and is orthogonal to ours. The primary difference from our approach is that they studied sequentially storing and retrieving a variable amount of information into/from a fixed-sized array, while we store it in a latent representation of the corresponding size." }, { "heading": "6 CONCLUSION", "text": "We proposed the first fully neural First-Order Logic Action Model Acquisition system that successfully produces a lifted PDDL planning model from noisy, segmented images alone. The system generates three types of symbols (action, predicate, and propositional) without manual labeling, and provides STRIPS-compatible explanations of actions. The learned results are generalized over the objects, and thus can extrapolate to different and dynamically changing environments. We demonstrated that a state-of-the-art classical planner can produce correct solutions (as theoretically guaranteed) only with interactions with the learned model. Note that the classical planner we used has no learning components. Our results partially support the Physical Symbol Systems Hypothesis by Newell & Simon (1976): in our work, a formal high-level representation learned from the raw inputs is sufficient for intelligent actions. Future work includes extensions to formalisms with higher expressivity, including Probabilistic PDDL (Younes & Littman, 2004), Axioms (Thiébaux et al., 2005), ADL (Pednault, 1987), numeric planning (Fox & Long, 2006), or Hierarchical Task Networks (Erol et al., 1994; Nau et al., 2003; Nejati et al., 2006)." }, { "heading": "A EXTENDED BACKGROUNDS", "text": "" }, { "heading": "A.1 DISCRETE VARIATIONAL AUTOENCODERS", "text": "A Variational AutoEncoder (VAE) is a framework for reconstructing an observation x from a compact latent representation z that follows a certain prior distribution. Training is performed by minimizing the sum of the reconstruction loss and the KL divergence between the latent random distribution q(z|x) and the target distribution p(z), which gives a lower bound for the likelihood p(x) (Kingma & Welling, 2013). 
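In standard notation (our addition, for reference), this bound is the usual evidence lower bound:

log p(x) ≥ E_{q(z|x)}[log p(x|z)] − D_KL(q(z|x) || p(z)).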
While typically p(z) is a Normal distribution N(0, 1), Gumbel-Softmax (GS) VAE (Jang et al., 2017) and Binary-Concrete (BC) VAE (Maddison et al., 2017) use a discrete, uniform categorical / Bernoulli(0.5) distribution, and further approximate it with a continuous relaxation by introducing a temperature parameter τ that is annealed down to 0.
We denote the corresponding activation functions in the latent space as GS(l) and BC(l), where bold l and plain l denote a logit vector and a logit scalar, respectively. A latent vector z ∈ [0, 1]^C of a GS-VAE is computed from a logit vector l ∈ R^C by z = GS(l) = SOFTMAX((l + GUMBEL(0, 1)) / τ), where C is the number of categories, GUMBEL(0, 1) = −log(−log u) and u ∼ UNIFORM(0, 1) ∈ [0, 1]^C. A latent scalar z of a BC-VAE is computed from a logit scalar l ∈ R by z = BC(l) = SIGMOID((l + LOGISTIC(0, 1)) / τ), where LOGISTIC(0, 1) = log u − log(1 − u) and u ∼ UNIFORM(0, 1) ∈ R. Both functions converge to discrete functions at the limit τ → 0: GS(l) → argmax(l) (in one-hot representation) and BC(l) → STEP(l) = (l < 0) ? 0 : 1. Typically, a GS-VAE and a BC-VAE contain multiple latent vectors / latent scalars to model a complex input.
There are more complex variations such as VQVAE (van den Oord et al., 2017), DVAE++ (Vahdat et al., 2018b), and DVAE# (Vahdat et al., 2018a). They may contribute to stable performance, but we leave the task of faster / easier training to future work." }, { "heading": "A.2 NEURAL LOGIC MACHINE", "text": "The NLM (Dong et al., 2019) is a neural Inductive Logic Programming (ILP) (Muggleton, 1991) system based on First-Order Logic and the Closed-World Assumption (Reiter, 1981). Given a set of base predicates grounded on a set of objects, NLMs sequentially apply horn rules to draw further conclusions such as a property of, or a relationship between, objects. For example, in Blocksworld, based on a premise such as (on a b) for blocks a, b, NLMs can infer (not (clear b)). NLM has two unique features: (1) the ability to combine the predicates of different arities, and (2) size invariance & permutation equivariance on input objects, which is achieved by enumerating the permutations of input arguments.
NLM is designed to work on hand-crafted binary tensors representing propositional groundings of FOL statements. The format of these binary tensors is the multi-arity representation (z/1, ..., z/N), identical to the latent space of SimpleFOSAE.
NLM is designed for a subset of First-Order Logic where every statement is a horn rule, contains no function terms (such as a function that returns an object), and all rules are applied between neighboring arities. With these assumptions, the statements fall into one of the three types below (writing p̄ for the expanded and p̂ for the reduced counterpart of a predicate p):
(expand) ∀x_{#p̄}: p̄(X; x_{#p̄}) ← p(X),
(reduce) p̂(X) ← ∃x_{#p̂+1}: p(X; x_{#p̂+1}),
(compose) q(X) ← F(∪_π (P ∪ P̄ ∪ P̂)/#q (π(X))).
Here, p, p̄, p̂, q ∈ P, P̄, P̂, Q (respectively) are predicates, X = (x_1, ...) is a sequence of parameters, F(T) is a formula consisting of {∧, ∨, ¬, ⊤, ⊥} and T, (P ∪ P̄ ∪ P̂)/#q is the set of those predicates whose arity is the same as q's, and π(X) is a permutation of X, which is used for composing the predicates with different argument orders.
All three rules can be implemented as tensor operations (Fig. 6). Given a binary tensor z/n of shape O×...×O×P/n (with n copies of O), expand copies the n-th axis to the (n+1)-th axis, resulting in a shape with n+1 copies of O, and reduce takes the max over the n-th axis, resulting in a shape with n−1 copies of O. 
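The expand/reduce rules take only a few lines of NumPy (our illustration with O = 3, n = 2, P/n = 2):

import numpy as np

z_n = np.random.rand(3, 3, 2) > 0.5          # z/n, shape O x O x P/n

# expand: duplicate along a new object axis -> O x O x O x P/n
expanded = np.broadcast_to(z_n[:, :, None, :], (3, 3, 3, 2))
# reduce: existential quantification as max over an object axis -> O x P/n
reduced = z_n.max(axis=1)
print(expanded.shape, reduced.shape)         # (3, 3, 3, 2) (3, 2)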
While the original paper proposed both min and max versions to represent ∀ and ∃, with a sufficient number of layers only one of them is necessary, because ∀x: p(·, x) = ¬∃x: ¬p(·, x). Finally, COMPOSE combines the information between the neighboring tensors z/n, z/n−1, z/n+1. In order to use the information in the neighboring arities (P̄, P and P̂), the input concatenates z/n with EXPAND(z/n−1) and REDUCE(z/n+1) (→ shape O×...×O×C with n copies of O, where C = P/n + P/n−1 + P/n+1). Next, a PERMUTE function enumerates and concatenates the results of swapping the first n axes in the tensor (→ O×...×O×(!n·C)). It then applies an n-D pointwise convolutional filter f_n with Q output features (→ O×...×O×Q). In the actual implementation, this n-D pointwise filter is merely a 1D convolution performed on a matrix reshaped into O^n × (!n·C). It is activated by any nonlinearity σ to obtain the final result, which we denote as COMPOSE(z, n, Q, σ). Formally, ∀j ∈ 1..n, ∀o_j ∈ 1..O,
COMPOSE(z, n, Q, σ)_{o_1···o_n} = σ(f_n(Π_{o_1···o_n})) ∈ R^Q, where Π = PERMUTE(EXPAND(z/n−1); z/n; REDUCE(z/n+1)) ∈ B^{O×...×O×(!n·C)}.
An NLM contains N (the maximum arity) COMPOSE operations for the neighboring arities, omitting both ends (0 and N+1) from the concatenation as appropriate. We denote the result as NLM_{Q,σ}(z) = (COMPOSE(z, 1, Q, σ), ..., COMPOSE(z, N, Q, σ)). These horizontal arity-wise compositions can be layered vertically, allowing the composition of predicates whose arities differ by more than 1 (e.g., 2 layers of NLM can combine unary and quaternary predicates).
Two minor modifications were made from the original paper. First, we use slightly different tensor shapes: for notational convenience, we use hypercube-dimensional tensors of shape B^{O×...×O×P/n}, instead of the original formulation B^{O×(O−1)×...×(O−n+1)×P/n}, which tries to reduce the size by disallowing duplicates in the parameters. Our modification does not significantly affect the complexity of the representation because the original representation also has O(O^n) complexity.
Second, we do not use the nullary predicates z/0, in order to disallow VAEs from encoding environment-specific information in them." }, { "heading": "A.3 LEARNING DISCRETE LATENT DYNAMICS USING BACK-TO-LOGIT", "text": "APPLY(a^i, z^{i,0}) is an arbitrary MLP, i.e., a neural black-box function that does not directly translate to a STRIPS action model, preventing efficient search with a state-of-the-art classical planner. Cube-Space AE (Asai & Muise, 2020) addresses this issue by the Back-To-Logit technique, which replaces the MLP. Back-To-Logit places a so-called cube-like graph prior on the binary latent space / transitions. To understand the prior, the background of cube-like graphs is necessary." }, { "heading": "A.3.1 CUBE-LIKE GRAPHS AND ITS EQUIVALENCE TO STRIPS", "text": "Asai & Muise (2020) identified that state transition graphs of STRIPS planning problems form a graph class called directed cube-like graphs (Payan, 1992). A cube-like graph is a simple (i.e., no duplicated edges between the same pair of nodes) undirected graph G(S, D) = (V, E) defined by sets S and D. Each node v ∈ V is a finite subset of S, i.e., v ⊆ S. The set D is a family of subsets of S, and for every edge e = (v, w) ∈ E, the symmetric difference d = v ⊕ w = (v \ w) ∪ (w \ v) must belong to D. For example, a unit cube is a cube-like graph because S = {x, y, z}, V = {∅, {x}, ..., {x, y, z}}, E = {(∅, {x}), ..., ({y, z}, {x, y, z})}, D = {{x}, {y}, {z}}. The set-based representation can alternatively be represented as a bit-vector, e.g., V′ = {(0, 0, 0), (0, 0, 1), ..., (1, 1, 1)}. 
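The unit-cube example can be checked mechanically (our illustration; frozensets' symmetric difference plays the role of ⊕):

from itertools import combinations

S = {"x", "y", "z"}
V = [frozenset(c) for r in range(4) for c in combinations(sorted(S), r)]
D = [frozenset({p}) for p in S]
E = [(v, w) for v in V for w in V if v ^ w in D]   # edges with difference in D
print(len(V), len(E))                              # 8 nodes, 24 ordered pairs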
" }, { "heading": "A.3.2 EDGE CHROMATIC NUMBER OF CUBE-LIKE GRAPHS AND THE NUMBER OF ACTION SCHEMA IN STRIPS", "text": "Edge coloring of cube-like graphs provides an intuitive understanding of the action schema expressible in the STRIPS formalism. Consider coloring a graph which forms a unit cube (Fig. 8) and whose node embeddings correspond to the latent space of some images. The cube-like graph on the left can be efficiently colored (i.e., with few colors) by the differences between neighboring embeddings. Edges can be categorized into 3 labels (0, 0, ⊕1), (0, ⊕1, 0), and (⊕1, 0, 0) (6 labels if directed), where each label is assigned to 4 edges which share the same node embedding difference, as depicted by the upward arrows with the common node difference (0, 0, ⊕1) in the figure. The set of node embedding differences corresponds to the set D, and each element of D represents an action; e.g., when the node embeddings are decoded into images, moving in the positive direction of the x-axis may result in the “0” tile in the image moving to the right, and moving in the positive direction of the z-axis may result in the “2” tile moving to the right — effectively making actions compositional. In contrast, the graph on the right has node embeddings that are randomly shuffled. Despite having the same topology and the same embedding size, this graph lacks the common patterns in the embedding differences that we saw on the left, and thus cannot be efficiently colored by the node differences.

As such, cube-like graphs can be characterized by the edge chromatic number (minimum edge coloring) according to the node differences. We now discuss some theoretical properties of graph coloring with and without assumptions on the colors and the node differences.

Theorem 1 (Edge chromatic number, Vizing (1965)). Let the edge chromatic number c(G) of an undirected graph G be the number of colors in a minimum edge coloring. Then c(G) is either ∆ or ∆ + 1, where ∆ is the maximum degree of the nodes.

Theorem 2. The mapping E → D provides an edge coloring, and thus c(G) ≤ min_f |D|.

Proof. f is one-to-one: w ≠ w′ ⟺ f(w) ≠ f(w′). For any set X, f(w) ≠ f(w′) ⟺ X ⊕ f(w) ≠ X ⊕ f(w′). For any adjacent edges (v, w) and (v, w′), w ≠ w′ because G is simple (at most one edge between any pair of nodes), thus f(v) ⊕ f(w) ≠ f(v) ⊕ f(w′). The remainder follows from the definition.

For Thm. 2, equality holds for hypercubes. For a given embedding dimension |S|, there are graph instances where c(G) < min_f |D| (Fig. 9, left). We “proved” this with the Answer Set Programming solver Potassco (Gebser et al., 2011).
Based on these results, we next consider the minimum number of actions required to model an undirected cube-like graph G(S, D) as a planning model. We restrict ourselves to precondition-free planning domains in order to focus on the action effects. We first need a lemma in order to focus on undirected graphs.

Lemma 1. Let P be a precondition-free grounded STRIPS planning problem which contains irreversible actions. There is a corresponding planning problem P′ whose state transition graph is identical to that of P and whose precondition relaxation P′′ is reversible.

Proof. For any irreversible action a in P, add a set of actions A⁻¹(a) whose size is |A⁻¹(a)| = 2^{|ADD(a) ∪ DEL(a)|}. Each action a′ ∈ A⁻¹(a) contains the effects which encode one of the possible non-deterministic outcomes of reversing a, and contains an unsatisfiable precondition (e.g., adding a new proposition whose value is constantly false). Then the state transition graphs of P and P′ are identical and P′′ is reversible.

Since we assume precondition-free domains, for any P we can instead consider P′′ to discuss the effects of reversible planning domains. Let G be an undirected cube-like graph which is isomorphic to the state transition graph of a precondition-free reversible planning model P.

Theorem 3. Let Pc be another planning problem definition which models G, where the action effects in Pc are allowed to use conditional effects. Then the minimum number of actions in Pc required to model G is c(G).

Proof. For each color c ∈ 1..c(G), for each edge (v, w) colored as c, and for each propositional value f(w)i ∈ {0, 1}, we add a conditional effect to the c-th action. If f(w)i = 0, we add a delete-effect, and if f(w)i = 1, an add-effect. The effect is conditioned on the full conjunction of f(v) using negative preconditions. See Fig. 9 (right), where all effects are put in one conditional effect.

Theorem 4. The minimum number of actions required to model G without conditional effects (i.e., with STRIPS effects) is min_f 2|D|.

Proof. Each d ∈ D needs 2 actions, one for each of the forward/backward directions.

Thm. 3 indicates that conditional effects can compact as many edges as possible into just A = ∆ or ∆ + 1 actions regardless of the nature of the transitions, while Thm. 2 and Thm. 4 indicate that STRIPS effects require a larger number of actions. Therefore, merely assigning action symbols to state transitions (which is equivalent to edge coloring) does not result in a compact STRIPS model. Notice that the vanilla MLP AAE bounds the unrestricted edge chromatic number c(G) = ∆ or ∆ + 1 by the maximum number of action labels A (a hyperparameter of the network), but does not bound 2|D|, the edge chromatic number in terms of neighboring node embedding differences. In order to find a compact STRIPS action model, we should instead bound 2|D| by A and restrict latent state transitions to follow a STRIPS transition rule.
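As a sanity check of these counts on the unit cube (a toy computation of ours, matching the D = {{x}, {y}, {z}} example above):

from itertools import product

V = list(product([0, 1], repeat=3))                      # unit-cube nodes as bit vectors
E = [(v, w) for v in V for w in V
     if sum(a != b for a, b in zip(v, w)) == 1 and v < w]
D = {tuple(a ^ b for a, b in zip(v, w)) for v, w in E}   # node-difference classes
delta = max(sum(1 for e in E if n in e) for n in V)      # maximum degree

print(len(D), delta)   # 3 3: c(G) <= |D| = Delta here (Thm. 2; equality on hypercubes)
print(2 * len(D))      # 6: STRIPS actions needed without conditional effects (Thm. 4)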
In order to constrain the learned latent state space of the environment to a cube-like graph, we propose the Cube-Space AutoEncoder. We first explain a vanilla Space AutoEncoder, an architecture that jointly learns the state and the action model by combining the SAE and the AAE into a single network. We then modify the APPLY progression to restrict state transitions. Due to the flexibility of neural networks, the loss enhanced by this restriction automatically propagates to the state representation, i.e., it modifies the state representation in order to reduce the loss produced by the restricted action model.

A.3.3 VANILLA SPACE AUTOENCODER

The vanilla Space AutoEncoder (Fig. 10, right) connects the SAE and AAE subnetworks. The necessary modification, apart from connecting them, is a change in the loss function. The original AAE was trained to optimize the distance between binary vectors using binary cross entropy (BCE), which is asymmetric in its definition: BCE(x, y) = −(x log y + (1 − x) log(1 − y)). While this was not problematic in the AAE, which uses the fixed state representation x and the successor prediction y, it is more natural for the vanilla Space AE to use a symmetric loss that affects x and y equally.

In addition to the loss for the successor prediction in the latent space, we also ensure that the predicted successor state can be decoded back to the correct image oi,1. Thus, the total loss is a sum of the losses for: (1) the main reconstructions ℓ(oi,0, o∼i,0) and ℓ(oi,1, o∼i,1), (2) the successor latent state reconstruction ℓ(zi,1, z∼i,1), (3) the image reconstruction ℓ(oi,1, o∼i,1) decoded from the predicted successor z∼i,1, and (4) the KL regularization. We call the second term the direct loss.

We next formally analyze this training objective. Given an observed transition (o0, o1), we assume that o0 follows N(o∼0, σ0) and o1 follows N(o∼1, σ1) (see Sec. 2.2 for an explanation). The maximization objective is the log-likelihood of observing the pair of states o0, o1.

We iteratively derive the lower bounds (ELBO) by inserting several latent variables. We first introduce z0, z1:

log p(o0, o1) ≥ −DKL(q(z0, z1 | o0, o1) || p(z0, z1)) + E_{q(z0,z1|o0,o1)}[log p(o0, o1 | z0, z1)]. (1)

The first term of Eq. 1 is the KL divergence for z0, z1. Since all latent variables are assumed to be independent (mean-field assumption), p(z0, z1) = p(z0)p(z1), where p(z0), p(z1) are respectively the prior distributions of the binary latent variables z0 and z1 that we discussed in Sec. ??. q(z0, z1 | o0, o1) can also be decomposed because, in the encoder modeled by q, z0 depends only on o0 and z1 depends only on o1. Therefore, the entire KL divergence is divided into individual KL divergences:

DKL(q(z0, z1 | o0, o1) || p(z0, z1)) = DKL(q(z0 | o0, o1) || p(z0)) + DKL(q(z1 | o0, o1) || p(z1)) = DKL(q(z0 | o0) || p(z0)) + DKL(q(z1 | o1) || p(z1)). (2)

We next decompose the second term of Eq. 1 as shown in Eq. 3. This derivation is possible because o∼0 and o∼1 are generated independently in the decoder (the network that produces o∼0, o∼1 from z0, z1) modeled by p. The first term in Eq. 3 corresponds to a reconstruction loss for o∼0 (MSE, due to p(o0 | z0) = N(o∼0, σ0)).

log p(o0, o1 | z0, z1) = log p(o0 | z0, z1) + log p(o1 | z0, z1) = log p(o0 | z0) + log p(o1 | z0, z1). (3)

Next, we derive the lower bound of the second term in Eq. 3 by introducing a one-hot categorical latent variable a for action labels, and its prior distribution p(a | z0, z1) = Cat(1/A) (a uniform categorical distribution over A categories).

log p(o1 | z0, z1) ≥ −DKL(q(a | o1, z0, z1) || p(a | z0, z1)) + E_{q(a|o1,z0,z1)}[log p(o1 | a, z0, z1)]. (4)

Finally, to complete the vanilla Space AE network, we model the second term in Eq. 4 using another latent variable z∼1.
We further derive the lower bound using the same reformulation:

log p(o1 | a, z0, z1) ≥ −DKL(q(z∼1 | o1, a, z0, z1) || p(z∼1 | a, z0, z1)) + E_{q(z∼1|o1,a,z0,z1)}[log p(o1 | z∼1, a, z0, z1)]. (5)

The second term of Eq. 5 is a reconstruction loss for o∼1, which can be computed by assuming p(o1 | z∼1, a, z0, z1) = N(o∼1, σ1). However, what should we use as the prior distribution p(z∼1 | a, z0, z1) in the first term (KL divergence)? We set it to be q(z1 | o1), because the choice of a prior is arbitrary and we want the distribution of the predicted successor state z∼1 to be identical to the distribution of the successor state z1 directly encoded from the input. As a result, the total maximization objective is derived as follows:

log p(o0, o1)
≥ −DKL(q(z0 | o0) || p(z0))                      [KL divergence for z0 in Eq. 2]
  −DKL(q(z1 | o1) || p(z1))                      [KL divergence for z1 in Eq. 2]
  + log p(o0 | z0)                               [reconstruction loss ℓ(o0, o∼0) in Eq. 3]
  −DKL(q(a | o1, z0, z1) || p(a | z0, z1))       [KL divergence for a in Eq. 4]
  −DKL(q(z∼1 | o1, a, z0, z1) || q(z1 | o1))     [KL divergence between z1 and z∼1 in Eq. 5 = direct loss ℓ(z1, z∼1)]
  + log p(o1 | z∼1, a, z0, z1).                  [reconstruction loss ℓ(o1, o∼1) in Eq. 5] (6)

The direct loss ℓ(z1, z∼1) can be computed by the same method introduced in (Sec. ??). We assume q(z∼1 | o1, a, z0, z1) = Bernoulli(q∼1) and q(z1 | o1) = Bernoulli(q1), where q∼1 = σ(l∼1), q1 = σ(l1), and l∼1, l1 are the inputs to the corresponding Binary Concrete activations. Then the KL divergence is computed as

DKL(q(z∼1 | o1, a, z0, z1) || q(z1 | o1)) = DKL(Bernoulli(q∼1) || Bernoulli(q1)) = q∼1 log(q∼1/q1) + (1 − q∼1) log((1 − q∼1)/(1 − q1)). (7)

Coincidentally, this equals DKL(q(z∼1 | o1, z0, a) || Bernoulli(0.5)) − log 2 + BCE(q∼1, q1), whose last binary cross entropy term is similar to the loss function of the AAE.

Implementation note: If we instead optimize DKL(q(z∼1 | o1, z0, a) || Bernoulli(0.5)) − log 2 + BCE(z∼1, z1), the training becomes slower due to the unnecessary logistic noise inside the BC of z∼1 = BC(l∼1) and z1 = BC(l1).

Implementation note: It is important to bootstrap the various terms in the optimization objective. In our experience, delaying the application of the direct loss until a certain epoch seems crucial, and we suspect the reason is the relatively short network distance between the two latent spaces zi,1 / z∼i,1 compared to oi,1 / o∼i,1. If we enable the direct loss from the beginning, the total loss does not converge because z∼i,1 prematurely converges to zi,1, causing a mode collapse (e.g., all 0) before the image en/decoder learns a meaningful latent representation." }, { "heading": "A.3.4 CUBE-SPACE AE", "text": "Cube-Space AE modifies the APPLY network so that it directly predicts the effects without taking the current state as input, and logically computes the successor state based on the predicted effect and the current state.

A naive implementation of such a network is shown in Fig. 11 (left). The EFFECT network predicts a binary tensor of shape F × 3 using an F-way Gumbel-Softmax over 3 categories. Each Gumbel-Softmax corresponds to one bit in the F-bit latent space, and the 3 classes correspond to the add effect, the delete effect, and NOP, only one of which is selected by the one-hot vector. The effects are applied to the current state either by a max/min operation or its smooth variants (smooth min/max). Formally, the naive Cube-Space AE is formulated as follows:

z∼i,1 = APPLY(zi,0, ai) = min(max(zi,0, ADD(ai)), 1 − DEL(ai)) ∈ B^F, (8)
or smin(smax(zi,0, ADD(ai)), 1 − DEL(ai)), where (9)
EFFECT(ai) = GS(MLP(ai)) ∈ B^{F×3}, ADD(ai) = EFFECT(ai)0, DEL(ai) = EFFECT(ai)1, NOP(ai) = EFFECT(ai)2,
smax(x, y) = log(e^x + e^y), smin(x, y) = −log(e^{−x} + e^{−y}). (10)
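A minimal NumPy sketch of this naive effect application (Eqs. 8–10; the latent size and the one-hot effects are illustrative):

import numpy as np

z0 = np.array([0., 1., 0., 1.])            # current binary latent state, F = 4
eff = np.array([[0, 0, 1],                 # per-bit one-hot over (add, del, nop)
                [0, 1, 0],
                [1, 0, 0],
                [0, 0, 1]], dtype=float)
add, dele = eff[:, 0], eff[:, 1]

smax = lambda x, y: np.log(np.exp(x) + np.exp(y))    # Eq. 10
smin = lambda x, y: -np.log(np.exp(-x) + np.exp(-y))

hard = np.minimum(np.maximum(z0, add), 1 - dele)     # Eq. 8
smooth = smin(smax(z0, add), 1 - dele)               # Eq. 9
print(hard)                # [0. 0. 1. 1.]: bit 1 deleted, bit 2 added, bits 0/3 kept
print(np.round(smooth, 2))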
While intuitive, we found these naive implementations extremely difficult to train. Our contribution to the architecture is Back-to-Logit (BtL, Fig. 11, right), a generic approach that computes a logical operation in the continuous logit space. We re-encode a logical, binary vector back into a continuous, logit representation by a monotonic function m. This monotonicity preserves the order between true (1) and false (0) even after the transformation into real numbers. We then apply effects by adding a continuous effect vector to the continuous state vector. The effect vector is produced by applying an MLP named EFFECT to the action vector ai. After adding the continuous vectors, we re-activate the result with a discrete activation (Binary Concrete). Formally,

z∼i,1 = APPLY(zi,0, ai) = BC(m(zi,0) + EFFECT(ai)). (11)

We found that an easy and successful way to implement m is Batch Normalization (Ioffe & Szegedy, 2015), a method that was originally developed for addressing covariate shift in deep neural networks.

Additional background: For simplicity, we consider a scalar operation, which can be applied to vectors element-wise. During batch training of a neural network, a Batch Normalization layer BN(x) takes a minibatch input B = {x1, . . . , x|B|}, computes the mean µ and variance σ², then shifts and scales each xi so that the result has a mean of 0 and a variance of 1. It then shifts and scales the results by two trainable coefficients γ and β. Formally,

∀xi ∈ B; BN(xi) = ((xi − µ)/√σ²)·γ + β. (12)

While rescaling the normalized result by γ and β may seem to negate the original purpose of normalization, the presence of the normalization to mean 0 / variance 1 is crucial. Notice that the first scaling depends on the other training examples in the same minibatch, while γ and β are not dynamically adjusted for the minibatch. For example, imagine two minibatches B1 and B2, where B1 accidentally tends to contain larger values than B2 but the variances within B1 and B2 are the same. The bias is canceled by the normalization, and thus the outputs rescaled by γ and β are computed on a relative scale inside the corresponding minibatch.

Implementation note: Since ai is a probability vector over A action ids and ai eventually converges to a one-hot vector due to Gumbel-Softmax annealing, the additional MLP can be merely a linear embedding, i.e., EFFECT(ai) = E ai, where E ∈ R^{F×A} and z ∈ B^F. It also helps the training if we apply BatchNorm to the effect vector. Therefore, a recommended implementation is

APPLY(ai, zi,0) = BC(BN(zi,0) + BN(E ai)), (13)

where EFFECT(ai) = BN(E ai).
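A NumPy sketch of the Back-to-Logit update (Eq. 13) at test time, where BatchNorm reduces to a fixed, monotonic affine map; the sizes and weights are illustrative:

import numpy as np

F, A = 4, 3
E = np.random.default_rng(0).normal(size=(F, A))   # linear embedding, EFFECT(a) = E a

def bn(x, gamma=1.0, beta=0.0):
    # Test-time BatchNorm folded into an affine map; gamma > 0 keeps it monotonic.
    return gamma * x + beta

def apply_btl(z0, a):
    logits = bn(z0) + bn(E @ a)
    return (logits > 0).astype(float)    # Binary Concrete degenerates to STEP as tau -> 0

z0 = np.array([0., 1., 0., 1.])
a = np.eye(A)[1]                         # one-hot action id 1
print(apply_btl(z0, a))

Because the map is monotonic and the effect term depends only on a, every bit is either forced on, forced off, or copied; this trichotomy is formalized as Theorem 5 below.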
" }, { "heading": "A.3.5 BACK-TO-LOGIT AND ITS EQUIVALENCE TO STRIPS", "text": "States learned by BtL have the following property:

Theorem 5. (Asai & Muise, 2020) Under the same action a, state transitions are bitwise monotonic, deterministic, and restricted to three mutually exclusive modes, i.e., for each bit j:

(add:) ∀i; (z^{i,0}_j, z^{i,1}_j) ∈ {(0, 1), (1, 1)}, i.e., z^{i,0}_j ≤ z^{i,1}_j, (14)
(del:) ∀i; (z^{i,0}_j, z^{i,1}_j) ∈ {(1, 0), (0, 0)}, i.e., z^{i,0}_j ≥ z^{i,1}_j, (15)
(nop:) ∀i; (z^{i,0}_j, z^{i,1}_j) ∈ {(0, 0), (1, 1)}, i.e., z^{i,0}_j = z^{i,1}_j. (16)

This theorem guarantees that each action deterministically sets certain bits on and off in the binary latent space. Therefore, the actions and the transitions satisfy the STRIPS state transition rule s′ = (s \ DEL(a)) ∪ ADD(a), thus enabling a direct translation from neural network weights to the PDDL modeling language.

The proof is straightforward from the monotonicity of BatchNorm and Binary Concrete. Note that we assume BatchNorm’s additional scale parameter γ is kept positive or disabled.

Proof. For readability, we omit j and assume the 1-dimensional case. Let e = EFFECT(a) ∈ R, which is a constant for the fixed input a. At the limit of annealing, Binary Concrete BC becomes a STEP function, which is also monotonic. BN is monotonic because we assume the scale parameter γ of BN is positive, and the main feature of BN only scales by the variance of the batch, which is always positive. Then we have

z^{i,1} = STEP(BN(z^{i,0}) + e). (17)

The possible values a pair (z^{i,0}, z^{i,1}) can take are (0, 0), (0, 1), (1, 0), (1, 1). Since both STEP and BN are deterministic at testing time (see Ioffe & Szegedy (2015)), we consider the deterministic mapping from z^{i,0} to z^{i,1}. There are only 4 deterministic mappings from {0, 1} to {0, 1}: {(0, 1), (1, 1)}, {(1, 0), (0, 0)}, {(0, 0), (1, 1)}, {(0, 1), (1, 0)}. Thus our goal is to show that the last mapping {(0, 1), (1, 0)} is impossible in the latent space {. . . (z^{i,0}, z^{i,1}) . . .} produced by BtL.

To prove this, first assume (z^{i,0}, z^{i,1}) = (0, 1) for some index i. Then

1 = STEP(BN(0) + e) ⇒ BN(0) + e > 0 ⇒ BN(1) + e > 0 ⇒ ∀i; BN(z^{i,0}) + e > 0. (18)

The second step is due to the monotonicity BN(0) < BN(1). This shows that z^{i,1} is constantly 1 regardless of z^{i,0}, and therefore (z^{i,0}, z^{i,1}) = (1, 0) cannot happen for any i.

Likewise, if (z^{i,0}, z^{i,1}) = (1, 0) for some index i,

0 = STEP(BN(1) + e) ⇒ BN(1) + e < 0 ⇒ BN(0) + e < 0 ⇒ ∀i; BN(z^{i,0}) + e < 0. (19)

Therefore z^{i,1} = 0 regardless of z^{i,0}, and thus (z^{i,0}, z^{i,1}) = (0, 1) cannot happen for any i.

Finally, if the data points contain neither (0, 1) nor (1, 0), then by assumption they do not coexist. Therefore, the embedding learned by BtL cannot contain (0, 1) and (1, 0) at the same time." }, { "heading": "A.4 EFFECT RULE EXTRACTION", "text": "To extract the effects of an action a from a Cube-Space AE, we compute ADD(a) = APPLY(a, 0) and DEL(a) = 1 − APPLY(a, 1) for each action a, where 0, 1 are vectors filled with zeros/ones of the same size as the binary embedding. Since APPLY deterministically sets values to 0 or 1, feeding these vectors is sufficient to see which bits it turns on and off. For each j-th bit that is 1 in each result, a corresponding proposition is added to the add/delete effect, respectively.

In FOSAE++, we extract the effects from the parameter-bound subspaces z^{i,0}_†, z^{i,1}_†. The representation is a tuple z† = (z†/1, · · · , z†/N), where z†/n ∈ B^{#a × ⋯ × #a × P/n} (n copies of #a). BtL then operates on the flattened and concatenated binary vectors of size Σn #a^n·P/n: the input, the output, and the effect share this shape. We extract the effects from this BtL vector in the same manner as noted above. After the extraction, however, each bit is converted to a lifted predicate according to its position. For example, when the bit corresponding to z†/2[1, 2, 5] turns from 0 to 1, the add-effect contains p5(?arg1, ?arg2), where ?arg1 is a parameter used in the lifted PDDL encoding.
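A toy sketch of this probing procedure (ours; the BN surrogate is a simple centering, and the weights are random):

import numpy as np

F, A = 4, 3
E = np.random.default_rng(1).normal(size=(F, A))

def apply_btl(z0, a):
    # Test-time BtL with a centered stand-in for BN and Binary Concrete -> STEP.
    return ((z0 - 0.5) + E @ a > 0).astype(int)

for i in range(A):
    a = np.eye(A)[i]
    add = apply_btl(np.zeros(F), a)          # bits forced on  -> add effects
    dele = 1 - apply_btl(np.ones(F), a)      # bits forced off -> delete effects
    print(f"a{i}: add={np.flatnonzero(add)}, del={np.flatnonzero(dele)}")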
We show an example of such a learned PDDL model below, obtained from 8-Puzzle with P/1 = P/2 = 333 (reformatted for readability). Note that we disabled the nullary predicates P/0 and z/0, which would otherwise consume the first 333 dimensions of the flattened vector (hence the predicate indices start from p333). Another note: we also count the number of appearances of each action in the training dataset; if an action label is never used in the dataset, it is not exported in the resulting PDDL output. This is why the action indices start from 7 in the example.

(define (domain latent) (:requirements :strips :negative-preconditions)
 (:predicates (p333 ?x0) ... (p665 ?x0) (p666 ?x0 ?x1) ... (p998 ?x0 ?x1))

(:action a7 :parameters (?x0)
 :precondition (and (p339 ?x0) (p388 ?x0) (p391 ?x0) (p398 ?x0) (p402 ?x0)
   (p420 ?x0) (p421 ?x0) (p446 ?x0) (p447 ?x0) (p473 ?x0) (p475 ?x0)
   (p489 ?x0) (p491 ?x0) (p502 ?x0) (p516 ?x0) (p559 ?x0) (p588 ?x0)
   (p615 ?x0) (p641 ?x0) (p648 ?x0) (p831 ?x0 ?x0) (p950 ?x0 ?x0)
   (not (p333 ?x0)) (not (p371 ?x0)) (not (p375 ?x0)) (not (p388 ?x0))
   (not (p402 ?x0)) (not (p406 ?x0)) (not (p421 ?x0)) (not (p447 ?x0))
   (not (p454 ?x0)) (not (p504 ?x0)) (not (p508 ?x0)) (not (p519 ?x0))
   (not (p524 ?x0)) (not (p526 ?x0)) (not (p562 ?x0)) (not (p584 ?x0))
   (not (p593 ?x0)) (not (p617 ?x0)) (not (p640 ?x0)) (not (p652 ?x0))
   (not (p824 ?x0 ?x0)) (not (p892 ?x0 ?x0)) (not (p926 ?x0 ?x0))
   (not (p975 ?x0 ?x0)) (not (p994 ?x0 ?x0)))
 :effect (and (p349 ?x0) (p361 ?x0) (p366 ?x0) (p370 ?x0) (p371 ?x0)
   (p378 ?x0) (p381 ?x0) (p385 ?x0) (p388 ?x0) (p401 ?x0) (p408 ?x0)
   (p421 ?x0) (p432 ?x0) (p445 ?x0) (p454 ?x0) (p475 ?x0) (p491 ?x0)
   (p496 ?x0) (p502 ?x0) (p503 ?x0) (p504 ?x0) (p507 ?x0) (p517 ?x0)
   (p526 ?x0) (p550 ?x0) (p562 ?x0) (p563 ?x0) (p575 ?x0) (p584 ?x0)
   (p588 ?x0) (p599 ?x0) (p601 ?x0) (p607 ?x0) (p612 ?x0) (p617 ?x0)
   (p631 ?x0) (p640 ?x0) (p641 ?x0) (p647 ?x0) (p656 ?x0) (p663 ?x0)
   (p724 ?x0 ?x0) (p768 ?x0 ?x0) (p831 ?x0 ?x0) (p902 ?x0 ?x0)
   (p911 ?x0 ?x0) (p993 ?x0 ?x0)
   (not (p339 ?x0)) (not (p355 ?x0)) (not (p365 ?x0)) (not (p391 ?x0))
   (not (p397 ?x0)) (not (p398 ?x0)) (not (p402 ?x0)) (not (p406 ?x0))
   (not (p422 ?x0)) (not (p446 ?x0)) (not (p447 ?x0)) (not (p448 ?x0))
   (not (p451 ?x0)) (not (p456 ?x0)) (not (p472 ?x0)) (not (p473 ?x0))
   (not (p478 ?x0)) (not (p489 ?x0)) (not (p490 ?x0)) (not (p495 ?x0))
   (not (p516 ?x0)) (not (p518 ?x0)) (not (p524 ?x0)) (not (p525 ?x0))
   (not (p527 ?x0)) (not (p534 ?x0)) (not (p544 ?x0)) (not (p559 ?x0))
   (not (p561 ?x0)) (not (p615 ?x0)) (not (p624 ?x0)) (not (p629 ?x0))
   (not (p642 ?x0)) (not (p646 ?x0)) (not (p651 ?x0)) (not (p653 ?x0))
   (not (p720 ?x0 ?x0)) (not (p813 ?x0 ?x0)) (not (p824 ?x0 ?x0))
   (not (p892 ?x0 ?x0)) (not (p894 ?x0 ?x0)) (not (p926 ?x0 ?x0))
   (not (p931 ?x0 ?x0)) (not (p975 ?x0 ?x0)) (not (p994 ?x0 ?x0))))

(:action a22 :parameters (?x0) :precondition ..." }, { "heading": "A.5 PRECONDITION LEARNING WITH DYNAMICS REVERSED IN TIME", "text": "In the main text, we simplified the model by showing only the forward dynamics, i.e., the dynamics in the same direction as time. The forward dynamics can model the effects (add/delete) of the actions. However, the forward dynamics is insufficient for learning the preconditions of the actions.
The original CSAE paper (Asai & Muise, 2020) used an ad-hoc method that extracts the common bits of the current states.

In contrast, we add a network that uses the same BtL mechanism applied backward in time, i.e., it predicts the current state zi,0 from a successor state zi,1 and a one-hot action vector ai. We named the network REGRESS(zi,1, ai), alluding to the regression planning (Alcázar et al., 2013) literature.

In REGRESS, add-effects and delete-effects now correspond to positive preconditions and negative preconditions. A positive precondition (normal precondition) requires that a proposition is ⊤ prior to using an action. In contrast, a negative precondition requires that a proposition is ⊥ prior to using an action. While negative preconditions are (strictly speaking) outside the STRIPS formalism, they are commonly supported by the modern classical planners that participated in the recent competitions. To extract the preconditions from the network, we can use the same method used for extracting the effects from the progressive/forward dynamics.

The entire model thus looks as follows:

(encoder) zi,0, zi,1 = ENCODE(oi,0), ENCODE(oi,1)
(action parameters, uses NLMs) xi = PARAMS(zi,0, zi,1)
(parameter-bound subspace extraction) zi,0†, zi,1† = BIND(zi,0, xi), BIND(zi,1, xi)
(action assignment) ai = ACTION(zi,0†, zi,1†)
(bounded forward dynamics) z∼i,1† = APPLY(zi,0†, ai)
(bounded backward dynamics) z∼i,0† = REGRESS(zi,1†, ai)
(reflection to global forward dynamics) z∼i,1 = zi,0 − UNBIND(zi,0†, xi) + UNBIND(z∼i,1†, xi)
(reflection to global backward dynamics) z∼i,0 = zi,1 − UNBIND(zi,1†, xi) + UNBIND(z∼i,0†, xi)
(reconstructions) o∼i,0, o∼i,1 = DECODE(zi,0), DECODE(zi,1)
(reconstruction based on forward dynamics) o∼i,1 = DECODE(z∼i,1)
(reconstruction based on backward dynamics) o∼i,0 = DECODE(z∼i,0)

The total loss is ℓ(oi,0, o∼i,0) + ℓ(oi,1, o∼i,1) + ℓ(oi,1, o∼i,1) + ℓ(oi,0, o∼i,0) + ℓ(zi,1, z∼i,1) + ℓ(zi,1†, z∼i,1†) + ℓ(zi,0, z∼i,0) + ℓ(zi,0†, z∼i,0†) + Reg, where the first two reconstruction terms use the directly decoded o∼ and the next two use the dynamics-based reconstructions.

B IMPLEMENTATION DETAIL

We based our code on the publicly available Latplan source code repository (https://github.com/guicho271828/latplan/), which is based on the Keras deep learning library. The repository hosts its own Genetic Algorithm based hyperparameter tuner, which we mention several times later.

B.1 INPUT DATA FORMAT AND LOSS FUNCTIONS

As we discussed in the earlier appendix sections, our final loss function consists of the 9 terms listed above. We first describe the input data format that is shared among the domains, then the loss functions defined on it.

Our input/output format o ∈ R^{O×F} consists of O objects (environment-dependent), each having F features. The F features consist of image-based features and coordinate/dimension-based features (Fig. 12). All image patches extracted from the observations are resized to a fixed height H, width W, and color channel C = 3. Each flattened object vector has size F = H × W × C + 4. The last 4 dimensions contain the center coordinates and the actual height/width before resizing. Out of the 9 terms in the total loss, the terms that apply to object vectors of this form are the four reconstruction losses ℓ(oi,0, o∼i,0) and ℓ(oi,1, o∼i,1) and their dynamics-based counterparts.

For this data format, the loss function consists of the mean square error of the image part and the squared sum of the coordinate/dimension parts.
We do not average the losses for the coordinates/dimensions, to avoid making the gradient minuscule. To further enhance this direction, we additionally have a coordinate loss amplifier λ tuned by GA, as it is often the case that the object location has more visual impact on the reconstruction. Note that, for tuning with the validation set and evaluation with the test set, we set λ = 1 in order to obtain consistent, comparable measurements; λ is altered only during training. Formally, for the i-th objects in the input o and the reconstruction o∼, we define the loss as follows. These losses are averaged over the object and batch dimensions.

ℓ(oi, o∼i) = (1/HWC) ||oi,1..HWC − o∼i,1..HWC||²₂ + λ ||oi,HWC..F − o∼i,HWC..F||²₂

We call the remaining losses, except the regularization terms, the “latent dynamics loss”. They operate on the binary latent data activated by BinaryConcrete: ℓ(zi,1, z∼i,1), ℓ(zi,1†, z∼i,1†), ℓ(zi,0, z∼i,0), ℓ(zi,0†, z∼i,0†). Note, however, that during training all values used for the loss calculation are still continuous due to BinaryConcrete’s annealing. This means we cannot use Binary Cross Entropy (BCE), the standard loss function for binary classification, because the “training data” is also a noisy probability value. The role of these losses is also in fact symmetric — as we discussed in the previous appendix sections, they are not only there to obtain accurate dynamics, but also to shape the state representation toward a cube-like graph. While several candidate loss functions could be considered, we adapted Symmetric Cross Entropy (Wang et al., 2019), designed for noisy labels, which simply applies BCE in both directions. Formally, given BCE(z, z∼) = −Σi (zi log z∼i + (1 − zi) log(1 − z∼i)),

ℓ(z, z∼) = BCE(z, z∼) + BCE(z∼, z).

The remaining regularization losses include the KL divergence for the Discrete VAEs, as well as the L1 regularization of the latent vectors zi,0, zi,1, which was proven useful in Asai & Kajino (2019).

Finally, we define the magnitude and the warmup of each loss. The magnitude multiplies each loss and is tuned by the GA tuner. Similar to λ, the values are set to 1 during evaluation. We have α for the L1 regularization, β for the KL divergence, and γ for the latent dynamics loss.

The warmup mechanism works by setting these values to 0 until the training reaches a certain epoch, defined by a ratio r relative to the total number of epochs. We used warmups rα, rγ for α, γ, as well as r_rec and r_dyn for the main reconstruction loss ℓ(oi,0, o∼i,0) + ℓ(oi,1, o∼i,1) and the dynamics-based reconstruction loss.

The motivation behind these magnitudes and warmups is to balance the speed of convergence of the various parts of the network. Depending on the hyperparameters (depth and width of the layers), the network occasionally ignores the dynamics completely, falling into something similar to a mode collapse (though the mechanism is very different): the effect predicted by the dynamics is empty, e.g., the forward dynamics produces the same state as the current state.

Another failure mode is that the dynamics loss is too strong, because its BCE can become too large compared to the reconstruction loss. As a result, the network learns a perfect but meaningless latent space dynamics which does not produce correct reconstructions. Tuning the warmups and balancing the losses addressed these issues.
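A NumPy sketch of these two losses (shapes illustrative; the clipping epsilon is our addition for numerical safety):

import numpy as np

H = W = 4; C = 3; F = H * W * C + 4; lam = 1.0; eps = 1e-7

def object_loss(o, o_rec):
    img = np.mean((o[..., :H*W*C] - o_rec[..., :H*W*C]) ** 2, axis=-1)   # averaged
    coord = np.sum((o[..., H*W*C:] - o_rec[..., H*W*C:]) ** 2, axis=-1)  # not averaged
    return np.mean(img + lam * coord)        # mean over objects / batch

def bce(z, z_rec):
    z_rec = np.clip(z_rec, eps, 1 - eps)
    return -np.sum(z * np.log(z_rec) + (1 - z) * np.log(1 - z_rec), axis=-1)

def latent_dynamics_loss(z, z_pred):         # Symmetric Cross Entropy
    return np.mean(bce(z, z_pred) + bce(z_pred, z))

o = np.random.rand(5, F)
print(object_loss(o, o))                     # 0.0 for a perfect reconstruction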
}, { "heading": "B.2 NETWORK DETAIL", "text": "FOSAE++ consists of 6 networks: ENCODE, PARAMS, ACTION, APPLY, REGRESS, DECODE. Note that BIND and UNBIND are weight-less operations." }, { "heading": "B.2.1 ENCODER", "text": "ENCODE can be divided into a trivial continuous feature extraction phase (pre-encoder) and the actual FOSAE encoder. The feature extractor is a simple 1D pointwise convolution over the objects, i.e., the same as applying the same Dense / Fully-Connected (FC) layer to each individual object. Its depth and hidden layer width are tuned by the GA tuner. All activations except the last Binary Concrete are Rectified Linear Units (Fukushima, 1980; Nair & Hinton, 2010). The architecture is illustrated in Fig. 12.

We should note that we assign the same number of predicates to every arity: P/1 = P/2 = . . . = P/N. In this Network Detail section, we denote by P = P/1 = . . . = P/N the hyperparameter that specifies this number, although P was already used in the main text for the total number of predicates (i.e., Σn P/n)." }, { "heading": "B.2.2 PARAMS", "text": "PARAMS consists of multiple NLM layers, as depicted in Fig. 14. All hidden activations are ReLU, and the width Q of PARAMS is tuned by the GA tuner.

[Figure 14: PARAMS network using NLMs. NLM layers (ReLU, BatchNorm, dropout 0.1) map the multi-arity predicate tensors (shapes O×P, O×O×P, O×O×O×P) to continuous logits of shapes O×Q, O×O×Q, O×O×O×Q, and finally to O×#a scores; after a transpose to #a×O, an #a-way Gumbel-Softmax selects the attended objects arg1, arg2, arg3.]" }, { "heading": "B.2.3 ACTION, APPLY / REGRESS", "text": "Since ACTION takes parameter-bounded representations (zi,0†, zi,1†) whose objects are already selected and appropriately reordered by BIND, we simply apply an MLP whose last layer has output size A and is activated by Gumbel-Softmax. This results in selecting one action, represented by a one-hot vector. Before applying the MLP, we flatten and concatenate the tuple of vectors, each of size #a × ⋯ × #a × P. The depth and the hidden layer width of the MLP are automatically tuned.

For APPLY and REGRESS, we similarly flatten the input and directly apply the BtL structure described in the earlier sections. The EFFECT network in z∼i,1 = BC(BN(zi,0) + EFFECT(ai)) is a single linear layer without bias combined with a batch normalization. In other words, with the weight matrix W,

z∼i,1 = BC(BN(zi,0) + BN(W ai))." }, { "heading": "B.2.4 DECODER", "text": "The decoder consists of NLMs followed by a post-decoder that shares the same width and depth as the pre-encoder. In Blocksworld, we additionally made the decoder a Bayesian Neural Network to absorb the uncertainty in the reconstruction. Details are available in (Sec. D.2)." }, { "heading": "B.3 HYPERPARAMETER TUNER", "text": "The tuning system assumes that a hyperparameter configuration is a vector/dict of categorical/finite candidates. The tuner is a textbook integer GA with uniform crossover and point mutation (see the sketch below). Assume that a hyperparameter configuration is represented by H values. A new configuration p = {p1, · · · , pH} is created from two parents q = {q1, · · · , qH} and r = {r1, · · · , rH} by ∀i; pi = RandomChoice(qi, ri), and then a single value is randomly mutated, i.e., for m = RandomChoice(1..H), pm ← RandomChoice(ValidValuesOf(pm) \ {pm}). It stores all evaluated configurations and never re-evaluates the same configuration.
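A self-contained Python sketch of this crossover/mutation step; the configuration space below is made up for illustration:

import random

SPACE = {"width": [100, 300, 600], "depth": [1, 2, 3], "lr": [1e-3, 1e-4]}

def child(q, r):
    p = {k: random.choice([q[k], r[k]]) for k in SPACE}        # uniform crossover
    m = random.choice(list(SPACE))                             # point mutation
    p[m] = random.choice([v for v in SPACE[m] if v != p[m]])
    return p

q = {"width": 100, "depth": 2, "lr": 1e-3}
r = {"width": 600, "depth": 1, "lr": 1e-4}
print(child(q, r))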
At the beginning of the tuning process, it bootstraps the initial population with a certain number of configurations. New configurations are evaluated by the validation loss and then pushed into a sorted list. A certain number of best-performing configurations in the list are considered the “live” population. In each iteration, it selects two parents from the live population by inverse weighted random sampling of the score, preferring smaller validation losses but occasionally selecting a second-tier parent. Non-performing configurations “die” by being pushed away from the top as the algorithm finds better configurations. The evaluation and the insertion into the queue are asynchronous, and all processes can run in parallel." }, { "heading": "C TRAINING DETAILS AND HYPERPARAMETERS", "text": "We trained multiple network configurations on a distributed compute cluster equipped with Tesla K80 GPUs and Xeon E5-2600 v4 CPUs, which is rather old hardware. The list of hyperparameters is shown in Table 3.

In all experiments, we used a total of 5000 state transitions (10000 states) from the training environments. The details of data collection for each domain are available in the later sections. This dataset is divided into training/validation/test sets (90%:5%:5%). The Genetic Algorithm hyperparameter tuner uses the validation loss as its evaluation metric.

We set a limit of 1500 total runs for each environment, with an initial population of 100, and ran at most 100 processes in parallel for each environment. As an additional trick, to avoid testing unpromising candidates (e.g., those with diverging loss), the epoch parameter is forced to be 50 in the first 100 configurations so that these runs finish quickly. The rest of the runs use these initial populations as parents, but replace the epoch with an appropriate value selected from the candidates." }, { "heading": "C.1 REPRODUCIBILITY, TRAINING CURVES ON 3 RUNS", "text": "In Fig. 15, we show 3 training runs on each environment with the same hyperparameter configuration found by our tuner. The configuration reproduces nearly identical, stable curves across the 3 runs." }, { "heading": "D DOMAIN-WISE DETAILS", "text": "" }, { "heading": "D.1 SOKOBAN", "text": "We generated 10000 transitions for each training problem using Dijkstra search from the initial state. We shuffled the resulting 50000 transitions, subsampled 5000 of them, and stored them in a single archive. The rendering and other data are obtained from the PDDLGym library (Silver & Chitnis, 2020). Each tile is resized to 16×16 pixels, and the tile ordering is also shuffled.

In order to make the training data dimensions consistent and convenient for GPU-based training, we performed a so-called Random Object Masking, which removes a certain number of randomly selected objects from each state. The idea is similar to the masking-based image augmentation commonly used in the computer vision literature, but the purpose is different: ours places more emphasis on having a consistent number of objects in each state. For example, Sokoban training problem 0 (leftmost in Fig. 16) has 49 tiles, while problem 1 (the second picture) has 72 tiles. In the combined archive, the number of objects is set to the smallest among the problems.

We also removed certain tiles that cannot be reached by the player. For example, in problem 0, the three floors in the top left corner cannot be reached by the player.
Similarly, in problem 1, the bottom right corner is not reachable by the player. We performed a simple custom reachability analysis using the meta-information obtained from the PDDLGym library. This helped reduce the dataset size and thus the training time.

Finally, during the dataset merging, we accounted for the potential location bias caused by the differences in map size. For example, if we preserved the original x, y locations, the tiles would tend to be biased around (0, 0), and the locations around (x, y) = (12, 12) (in tiles) would never be occupied by any tile. To address the issue, for each state pair, we relocated the entire environment by a value selected uniformly at random from a certain range. The range is decided by the maximum dimensions over all training problems, i.e., 12×12. For example, a state pair in problem 4 (which has a 10×9 map) will be shifted by a random value between 0..(12 − 10) on the x-axis and 0..(12 − 9) on the y-axis." }, { "heading": "D.2 BLOCKSWORLD", "text": "We generated 5000 random state transitions using the Photorealistic-Blocksworld dataset (Asai, 2018), which in turn is based on the CLEVR (Johnson et al., 2017) dataset generator. It uses the Blender 3D rendering engine to produce realistic effects such as metallic reflections, surface materials, and shadows. The objects are extracted from the information available in the generator metadata. We cropped the image regions suggested by the metadata, resized them into 32×32×3 image patches, and stored them in an archive. The sizes of the objects reported by the metadata may vary and are noisy due to the camera jitter, the object locations, and the heuristics used for extracting the objects. This is the case even for objects that are not moved by the random actions.

Each transition is generated by sampling the current state and then randomly moving a block. To sample a state, we first generate a set of block configurations (color, material, shape, size), then place them randomly on a straight line in the 3D environment without collisions. When we move a block, we select the set of blocks with nothing on top of them, choose one randomly, pick a new horizontal location, and place it at the lowest non-colliding height. We ensure that the block is always moved, i.e., it is not stacked on top of the same block, and not placed on the floor if it is already on the floor.

In Blocksworld, we noticed that the same conceptual symbolic action (such as move) may have a nondeterministic outcome in the image space even though each individual concept is discrete and deterministic. For example, move may relocate a block onto the floor, but the resulting position is chosen randomly during the dataset generation, i.e., the block can be placed anywhere on the floor.

To model this uncertainty in our framework, we used Bayesian Neural Network layers in the decoder for the Blocksworld domain. The final output o ∈ R^{O×F} is produced by two NLMs with output feature size F, producing the mean and standard-deviation vectors µ, σ ∈ R^{O×F}, and the output reconstruction is generated by random sampling: o = µ + σ · ε, where ε is a random noise vector following the normal distribution N(0, 1). During testing (e.g., visualization), the random sampling is disabled and we use the value µ for rendering.
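A NumPy sketch of this reparameterized decoding step (shapes illustrative; the two NLM heads are stubbed with random arrays):

import numpy as np

O, F = 5, 16
rng = np.random.default_rng(0)
mu = rng.normal(size=(O, F))                 # stub for the NLM mean head
sigma = np.abs(rng.normal(size=(O, F)))      # stub for the NLM deviation head

def decode(train=True):
    eps = rng.standard_normal((O, F)) if train else 0.0   # eps ~ N(0, 1)
    return mu + sigma * eps                  # training: sample; testing: return mu

print(decode(train=False).shape)             # (5, 16)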
D.3 8-PUZZLE

The 8-puzzle domain generator is directly included in the Latplan code base. It uses the MNIST dataset for its tiles, and the tile size is set to 16. Similarly to the other domains, we generated 5000 random state transitions using the generator." }, { "heading": "E PDDL GENERATION AND PLANNING EXPERIMENTS", "text": "E.1 EXAMPLE OUTPUT: 8-PUZZLE" } ]
2,020
GROUNDING PLANNABLE LIFTED ACTION MODELS
SP:940f5374980f33ee94784370eccd403e49c99ac3
[ "The paper introduces a novel decentralized algorithm (LEAD) incorporated with compression that achieves linear convergence rate in strongly convex setting. The main idea is to apply and communicate the compression of an auxiliary variable instead of the primal or dual iterates. Convergence analysis is provided for both deterministic and stochastic variants. Experiments shows the state-of-the-art performance. ", "This paper introduces a novel algorithm for decentralized optimization when nodes can only communicate a compressed signal with their neighbors. Unlike most decentralized methods with compression that are inspired by primal methods (DGD type methods), this paper introduces a new primal-dual algorithm with compression. The proposed method's main idea is borrowed from the NIDS algorithm, which converges linearly when the local loss functions are smooth and strongly convex. As the proposed LEAD method is based on primal-dual methods, it succeeds in improving the sublinear rate of primal-based methods. To the best of my knowledge, this is the first decentralized method that achieves a linear convergence rate in the setting that nodes use compressed signals. " ]
Communication compression has become a key strategy to speed up distributed optimization. However, existing decentralized algorithms with compression mainly focus on compressing DGD-type algorithms. They are unsatisfactory in terms of convergence rate, stability, and the capability to handle heterogeneous data. Motivated by primal-dual algorithms, this paper proposes the first LinEAr convergent Decentralized algorithm with compression, LEAD. Our theory describes the coupled dynamics of the inexact primal and dual update as well as compression error, and we provide the first consensus error bound in such settings without assuming bounded gradients. Experiments on convex problems validate our theoretical analysis, and empirical study on deep neural nets shows that LEAD is applicable to non-convex problems.
[ { "affiliations": [], "name": "Xiaorui Liu" }, { "affiliations": [], "name": "Yao Li" }, { "affiliations": [], "name": "Rongrong Wang" }, { "affiliations": [], "name": "Jiliang Tang" }, { "affiliations": [], "name": "Ming Yan" } ]
[ { "authors": [ "Dan Alistarh", "Demjan Grubic", "Jerry Li", "Ryota Tomioka", "Milan Vojnovic" ], "title": "QSGD: Communication-efficient sgd via gradient quantization and encoding", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Jeremy Bernstein", "Yu-Xiang Wang", "Kamyar Azizzadenesheli", "Animashree Anandkumar" ], "title": "SIGNSGD: compressed optimisation for non-convex problems", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Ruggero Carli", "Fabio Fagnani", "Paolo Frasca", "Sandro Zampieri" ], "title": "Gossip consensus algorithms via quantized communication", "venue": null, "year": 2010 }, { "authors": [ "Sai Praneeth Karimireddy", "Quentin Rebjock", "Sebastian Urban Stich", "Martin Jaggi" ], "title": "Error feedback fixes SignSGD and other gradient compression schemes", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Anastasia Koloskova", "Sebastian U. Stich", "Martin Jaggi" ], "title": "Decentralized stochastic optimization and gossip algorithms with compressed communication", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Anastasia Koloskova", "Tao Lin", "Sebastian U Stich", "Martin Jaggi" ], "title": "Decentralized deep learning with arbitrary communication compression", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Yao Li", "Ming Yan" ], "title": "On linear convergence of two decentralized algorithms", "venue": "arXiv preprint arXiv:1906.07225,", "year": 2019 }, { "authors": [ "Zhi Li", "Wei Shi", "Ming Yan" ], "title": "A decentralized proximal-gradient method with network independent step-sizes and separated convergence rates", "venue": "IEEE Transactions on Signal Processing,", "year": 2019 }, { "authors": [ "Xiangru Lian", "Ce Zhang", "Huan Zhang", "Cho-Jui Hsieh", "Wei Zhang", "Ji Liu" ], "title": "Can decentralized algorithms outperform centralized algorithms? 
a case study for decentralized parallel stochastic gradient descent", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Qing Ling", "Wei Shi", "Gang Wu", "Alejandro Ribeiro" ], "title": "DLM: Decentralized linearized alternating direction method of multipliers", "venue": "IEEE Transactions on Signal Processing,", "year": 2015 }, { "authors": [ "Xiaorui Liu", "Yao Li", "Jiliang Tang", "Ming Yan" ], "title": "A double residual compression algorithm for efficient distributed learning", "venue": "The 23rd International Conference on Artificial Intelligence and Statistics,", "year": 2020 }, { "authors": [ "Yucheng Lu", "Christopher De Sa" ], "title": "Moniqua: Modulo quantized communication in decentralized SGD", "venue": "In Proceedings of the 37th International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Sindri Magnússon", "Hossein Shokri-Ghadikolaei", "Na Li" ], "title": "On maintaining linear convergence of distributed learning and optimization under limited communication", "venue": "IEEE Transactions on Signal Processing,", "year": 2020 }, { "authors": [ "Konstantin Mishchenko", "Eduard Gorbunov", "Martin Takáč", "Peter Richtárik" ], "title": "Distributed learning with compressed gradient differences", "venue": "arXiv preprint arXiv:1901.09269,", "year": 2019 }, { "authors": [ "Joao FC Mota", "Joao MF Xavier", "Pedro MQ Aguiar", "Markus Püschel" ], "title": "D-ADMM: A communication-efficient distributed algorithm for separable optimization", "venue": "IEEE Transactions on Signal Processing,", "year": 2013 }, { "authors": [ "Angelia Nedic", "Asuman Ozdaglar" ], "title": "Distributed subgradient methods for multi-agent optimization", "venue": "IEEE Transactions on Automatic Control,", "year": 2009 }, { "authors": [ "Angelia Nedic", "Alex Olshevsky", "Wei Shi" ], "title": "Achieving geometric convergence for distributed optimization over time-varying graphs", "venue": "SIAM Journal on Optimization,", "year": 2017 }, { "authors": [ "Yurii Nesterov" ], "title": "Introductory lectures on convex optimization: A basic course, volume 87", "venue": "Springer Science & Business Media,", "year": 2013 }, { "authors": [ "Shi Pu", "Angelia Nedić" ], "title": "Distributed stochastic gradient tracking methods", "venue": "Mathematical Programming,", "year": 2020 }, { "authors": [ "Amirhossein Reisizadeh", "Aryan Mokhtari", "Hamed Hassani", "Ramtin Pedarsani" ], "title": "An exact quantized decentralized gradient descent algorithm", "venue": "IEEE Transactions on Signal Processing,", "year": 2019 }, { "authors": [ "Amirhossein Reisizadeh", "Hossein Taheri", "Aryan Mokhtari", "Hamed Hassani", "Ramtin Pedarsani" ], "title": "Robust and communication-efficient collaborative learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Frank Seide", "Hao Fu", "Jasha Droppo", "Gang Li", "Dong Yu" ], "title": "1-bit stochastic gradient descent and application to data-parallel distributed training of speech DNNs", "venue": null, "year": 2014 }, { "authors": [ "Wei Shi", "Qing Ling", "Gang Wu", "Wotao Yin" ], "title": "EXTRA: An exact first-order algorithm for decentralized consensus optimization", "venue": "SIAM Journal on Optimization,", "year": 2015 }, { "authors": [ "Sebastian U. 
Stich", "Jean-Baptiste Cordonnier", "Martin Jaggi" ], "title": "Sparsified SGD with memory", "venue": "In Proceedings of the 32nd International Conference on Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Hanlin Tang", "Shaoduo Gan", "Ce Zhang", "Tong Zhang", "Ji Liu" ], "title": "Communication compression for decentralized training", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Hanlin Tang", "Xiangru Lian", "Shuang Qiu", "Lei Yuan", "Ce Zhang", "Tong Zhang", "Ji Liu" ], "title": "Deepsqueeze: Decentralization meets error-compensated compression", "venue": "CoRR, abs/1907.07346,", "year": 2019 }, { "authors": [ "Hanlin Tang", "Chen Yu", "Xiangru Lian", "Tong Zhang", "Ji Liu" ], "title": "DoubleSqueeze: Parallel stochastic gradient descent with double-pass error-compensated compression", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "John Tsitsiklis", "Dimitri Bertsekas", "Michael Athans" ], "title": "Distributed asynchronous deterministic and stochastic gradient optimization algorithms", "venue": "IEEE transactions on automatic control,", "year": 1986 }, { "authors": [ "Jiaxiang Wu", "Weidong Huang", "Junzhou Huang", "Tong Zhang" ], "title": "Error compensated quantized SGD and its applications to large-scale distributed optimization", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Lin Xiao", "Stephen Boyd" ], "title": "Fast linear iterations for distributed averaging", "venue": "Systems & Control Letters,", "year": 2004 }, { "authors": [ "Jinming Xu", "Ye Tian", "Ying Sun", "Gesualdo Scutari" ], "title": "Accelerated primal-dual algorithms for distributed smooth convex optimization over networks", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2020 }, { "authors": [ "Kun Yuan", "Qing Ling", "Wotao Yin" ], "title": "On the convergence of decentralized gradient descent", "venue": "SIAM Journal on Optimization,", "year": 2016 }, { "authors": [ "Kun Yuan", "Bicheng Ying", "Xiaochuan Zhao", "Ali H Sayed" ], "title": "Exact diffusion for distributed optimization and learning—part i: Algorithm development", "venue": "IEEE Transactions on Signal Processing,", "year": 2018 }, { "authors": [ "Kun Yuan", "Wei Xu", "Qing Ling" ], "title": "Can primal methods outperform primal-dual methods in decentralized dynamic optimization", "venue": "arXiv preprint arXiv:2003.00816,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Distributed optimization solves the following optimization problem\nx∗ := argmin x∈Rd\n[ f(x) := 1\nn n∑ i=1 fi(x) ]\n(1)\nwith n computing agents and a communication network. Each fi(x) : Rd → R is a local objective function of agent i and typically defined on the data Di settled at that agent. The data distributions {Di} can be heterogeneous depending on the applications such as in federated learning. The variable x ∈ Rd often represents model parameters in machine learning. A distributed optimization algorithm seeks an optimal solution that minimizes the overall objective function f(x) collectively. According to the communication topology, existing algorithms can be conceptually categorized into centralized and decentralized ones. Specifically, centralized algorithms require global communication between agents (through central agents or parameter servers). While decentralized algorithms only require local communication between connected agents and are more widely applicable than centralized ones. In both paradigms, the computation can be relatively fast with powerful computing devices; efficient communication is the key to improve algorithm efficiency and system scalability, especially when the network bandwidth is limited.\nIn recent years, various communication compression techniques, such as quantization and sparsification, have been developed to reduce communication costs. Notably, extensive studies (Seide et al., 2014; Alistarh et al., 2017; Bernstein et al., 2018; Stich et al., 2018; Karimireddy et al., 2019; Mishchenko et al., 2019; Tang et al., 2019b; Liu et al., 2020) have utilized gradient compression to significantly boost communication efficiency for centralized optimization. They enable efficient large-scale optimization while maintaining comparable convergence rates and practical performance with their non-compressed counterparts. This great success has suggested the potential and significance of communication compression in decentralized algorithms.\nWhile extensive attention has been paid to centralized optimization, communication compression is relatively less studied in decentralized algorithms because the algorithm design and analysis are\nmore challenging in order to cover general communication topologies. There are recent efforts trying to push this research direction. For instance, DCD-SGD and ECD-SGD (Tang et al., 2018a) introduce difference compression and extrapolation compression to reduce model compression error. (Reisizadeh et al., 2019a;b) introduce QDGD and QuanTimed-DSGD to achieve exact convergence with small stepsize. DeepSqueeze (Tang et al., 2019a) directly compresses the local model and compensates the compression error in the next iteration. CHOCO-SGD (Koloskova et al., 2019; 2020) presents a novel quantized gossip algorithm that reduces compression error by difference compression and preserves the model average. Nevertheless, most existing works focus on the compression of primal-only algorithms, i.e., reduce to DGD (Nedic & Ozdaglar, 2009; Yuan et al., 2016) or P-DSGD (Lian et al., 2017). They are unsatisfying in terms of convergence rate, stability, and the capability to handle heterogeneous data. 
Part of the reason is that they inherit the drawback of DGD-type algorithms, whose convergence is slow in heterogeneous data scenarios where the data distributions are significantly different from agent to agent.

In the literature on decentralized optimization, it has been proved that primal-dual algorithms can achieve faster convergence rates and better support heterogeneous data (Ling et al., 2015; Shi et al., 2015; Li et al., 2019; Yuan et al., 2020). However, it is unknown whether communication compression is feasible for primal-dual algorithms and how fast the convergence can be with compression. In this paper, we attempt to bridge this gap by investigating communication compression for primal-dual decentralized algorithms. Our major contributions can be summarized as:

• We delineate two key challenges in the algorithm design for communication compression in decentralized optimization, i.e., data heterogeneity and compression error, and motivated by primal-dual algorithms, we propose a novel decentralized algorithm with compression, LEAD.

• We prove that for LEAD, a constant stepsize in the range (0, 2/(µ + L)] is sufficient to ensure linear convergence for strongly convex and smooth objective functions. To the best of our knowledge, LEAD is the first linearly convergent decentralized algorithm with compression. Moreover, LEAD provably works with unbiased compression of arbitrary precision.

• We further prove that if the stochastic gradient is used, LEAD converges linearly to the O(σ²) neighborhood of the optimum with a constant stepsize. LEAD is also able to achieve exact convergence to the optimum with a diminishing stepsize.

• Extensive experiments on convex problems validate our theoretical analyses, and the empirical study on training deep neural nets shows that LEAD is applicable to nonconvex problems. LEAD achieves state-of-the-art computation and communication efficiency in all experiments and significantly outperforms the baselines on heterogeneous data. Moreover, LEAD is robust to parameter settings and needs minor effort for parameter tuning." }, { "heading": "2 RELATED WORKS", "text": "Decentralized optimization can be traced back to the work by Tsitsiklis et al. (1986). DGD (Nedic & Ozdaglar, 2009) is the most classical decentralized algorithm. It is intuitive and simple but converges slowly due to the diminishing stepsize that is needed to obtain the optimal solution (Yuan et al., 2016). Its stochastic version D-PSGD (Lian et al., 2017) has been shown effective for training nonconvex deep learning models. Algorithms based on primal-dual formulations or gradient tracking have been proposed to eliminate the convergence bias of DGD-type algorithms and improve the convergence rate, such as D-ADMM (Mota et al., 2013), DLM (Ling et al., 2015), EXTRA (Shi et al., 2015), NIDS (Li et al., 2019), D2 (Tang et al., 2018b), Exact Diffusion (Yuan et al., 2018), OPTRA (Xu et al., 2020), DIGing (Nedic et al., 2017), GSGT (Pu & Nedić, 2020), etc.

Recently, communication compression was applied to decentralized settings by Tang et al. (2018a). It proposes two algorithms, i.e., DCD-SGD and ECD-SGD, which require compression of high accuracy and are not stable under aggressive compression. Reisizadeh et al. (2019a;b) introduce QDGD and QuanTimed-DSGD to achieve exact convergence with a small stepsize, and the convergence is slow. DeepSqueeze (Tang et al., 2019a) compensates for the compression error in the compression of the next iteration.
Motivated by quantized average consensus algorithms, such as (Carli et al., 2010), the quantized gossip algorithm CHOCO-Gossip (Koloskova et al., 2019) converges linearly to the consensual solution. Combining CHOCO-Gossip and D-PSGD leads to a decentralized algorithm with compression, CHOCO-SGD, which converges sublinearly under the strong convexity and gradient boundedness assumptions. Its nonconvex variant is further analyzed in (Koloskova et al., 2020). A new compression scheme using the modulo operation is introduced in (Lu & De Sa, 2020) for decentralized optimization. A general algorithmic framework aiming to maintain the linear convergence of distributed optimization under compressed communication is considered in (Magnússon et al., 2020). It requires a contractive property that is not satisfied by many decentralized algorithms, including the algorithm in this paper." }, { "heading": "3 ALGORITHM", "text": "We first introduce the notations and definitions used in this work. We use bold upper-case letters such as X to denote matrices and bold lower-case letters such as x to denote vectors. Let 1 and 0 be the vectors with all ones and zeros, respectively; their dimensions will be provided when necessary. Given two matrices X, Y ∈ R^{n×d}, we define their inner product as ⟨X, Y⟩ = tr(XᵀY) and the norm as ‖X‖ = √⟨X, X⟩. We further define ⟨X, Y⟩_P = tr(XᵀPY) and ‖X‖_P = √⟨X, X⟩_P for any given symmetric positive semidefinite matrix P ∈ R^{n×n}. For simplicity, we will mostly use the matrix notation in this work. For instance, each agent i holds an individual estimate x_i ∈ R^d of the global variable x ∈ R^d. Let X^k and ∇F(X^k) be the collections of {x_i^k}_{i=1}^n and {∇f_i(x_i^k)}_{i=1}^n, defined as
$$X^k = [x_1^k, \dots, x_n^k]^\top \in \mathbb{R}^{n\times d}, \qquad \nabla F(X^k) = [\nabla f_1(x_1^k), \dots, \nabla f_n(x_n^k)]^\top \in \mathbb{R}^{n\times d}. \qquad (2)$$
We use ∇F(X^k; ξ^k) to denote the stochastic approximation of ∇F(X^k). With these notations, the update X^{k+1} = X^k − η∇F(X^k; ξ^k) means that x_i^{k+1} = x_i^k − η∇f_i(x_i^k; ξ_i^k) for all i. In this paper, we need the average of all rows in X^k and ∇F(X^k), so we define X̄^k = (1ᵀX^k)/n and ∇F̄(X^k) = (1ᵀ∇F(X^k))/n. They are row vectors, and we will take a transpose if we need a column vector. The pseudoinverse of a matrix M is denoted as M†. The largest, i-th largest, and smallest nonzero eigenvalues of a symmetric matrix M are λ_max(M), λ_i(M), and λ_min(M).

Assumption 1 (Mixing matrix). The connected network G = {V, E} consists of a node set V = {1, 2, ..., n} and an undirected edge set E. The primitive symmetric doubly-stochastic matrix W = [w_ij] ∈ R^{n×n} encodes the network structure such that w_ij = 0 if nodes i and j are not connected and cannot exchange information.

Assumption 1 implies that −1 < λ_n(W) ≤ λ_{n−1}(W) ≤ ··· ≤ λ_2(W) < λ_1(W) = 1 and W1 = 1 (Xiao & Boyd, 2004; Shi et al., 2015). The matrix multiplication X^{k+1} = WX^k describes that agent i takes a weighted sum from its neighbors and itself, i.e., x_i^{k+1} = Σ_{j∈N_i∪{i}} w_ij x_j^k, where N_i denotes the neighbors of agent i." }, { "heading": "3.1 THE PROPOSED ALGORITHM", "text": "The proposed algorithm LEAD for solving problem (1) is shown in Alg. 1 in matrix notation for conciseness. We will refer to its line numbers in the analysis. A complete algorithm description from the agent's perspective can be found in Appendix A. The motivation behind Alg. 1 is to achieve two goals: (a) consensus (x_i^k − (X̄^k)ᵀ → 0) and (b) convergence ((X̄^k)ᵀ → x^∗).
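As a concrete illustration of Assumption 1 and the averaging step X^{k+1} = WX^k, the following sketch builds the ring mixing matrix used later in the experiments (each agent averages itself and its two 1-hop neighbors with weight 1/3). It is an illustration of ours, not code from the paper.

```python
import numpy as np

def ring_mixing_matrix(n, w=1.0 / 3):
    """Symmetric doubly-stochastic mixing matrix W for a ring topology:
    w_ij = 1/3 for j in {i-1, i, i+1} (mod n) and 0 otherwise."""
    W = np.zeros((n, n))
    for i in range(n):
        for j in (i - 1, i, i + 1):
            W[i, j % n] = w
    return W

W = ring_mixing_matrix(8)
assert np.allclose(W.sum(axis=1), 1.0) and np.allclose(W, W.T)

# One averaging step X^{k+1} = W X^k: each row (agent) moves toward
# the weighted average of its neighborhood, driving consensus.
X = np.random.default_rng(1).normal(size=(8, 5))
X_next = W @ X
```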
We first discuss how goal (a) leads to goal (b) and then explain how LEAD fulfills goal (a).

In essence, LEAD runs an approximate SGD globally and reduces to exact SGD under consensus. One key property of LEAD is 1_{n×1}ᵀ D^k = 0, regardless of the compression error in Ŷ^k. It holds because, for the initialization, we require D^1 = (I − W)Z for some Z ∈ R^{n×d}, e.g., D^1 = 0_{n×d}, and the update of D^k ensures D^k ∈ Range(I − W) for all k with 1_{n×1}ᵀ(I − W) = 0, as we will explain later. Therefore, multiplying (1/n)1_{n×1}ᵀ on both sides of Line 7 leads to a global average view of Alg. 1:
$$\bar{X}^{k+1} = \bar{X}^k - \eta \overline{\nabla F}(X^k; \xi^k), \qquad (3)$$
which doesn't contain the compression error. Note that this is an approximate SGD step because, as shown in (2), the gradient ∇F(X^k; ξ^k) is not evaluated on a globally synchronized model X̄^k. However, if the solution converges to the consensus solution, i.e., x_i^k − (X̄^k)ᵀ → 0, then E_{ξ^k}[∇F̄(X^k; ξ^k) − ∇f(X̄^k; ξ^k)] → 0 and (3) gradually reduces to exact SGD.

Algorithm 1 LEAD
Input: stepsize η, parameters (α, γ), X^0, H^1, D^1 = (I − W)Z for any Z
Output: X^K or (1/n) Σ_{i=1}^n X_i^K
1: H_w^1 = WH^1
2: X^1 = X^0 − η∇F(X^0; ξ^0)
3: for k = 1, 2, ..., K − 1 do
4:   Y^k = X^k − η∇F(X^k; ξ^k) − ηD^k
5:   Ŷ^k, Ŷ_w^k, H^{k+1}, H_w^{k+1} = COMM(Y^k, H^k, H_w^k)
6:   D^{k+1} = D^k + (γ/(2η))(Ŷ^k − Ŷ_w^k)
7:   X^{k+1} = X^k − η∇F(X^k; ξ^k) − ηD^{k+1}
8: end for
9: procedure COMM(Y, H, H_w)
10:  Q = COMPRESS(Y − H)
11:  Ŷ = H + Q
12:  Ŷ_w = H_w + WQ
13:  H = (1 − α)H + αŶ
14:  H_w = (1 − α)H_w + αŶ_w
15:  Return: Ŷ, Ŷ_w, H, H_w
16: end procedure

With the establishment of how consensus leads to convergence, the obstacle becomes how to achieve consensus under local communication and compression challenges. It requires addressing two issues, i.e., data heterogeneity and compression error. To deal with these issues, existing algorithms, such as DCD-SGD, ECD-SGD, QDGD, DeepSqueeze, Moniqua, and CHOCO-SGD, need a diminishing, or constant but small, stepsize depending on the total number of iterations. However, these choices unavoidably cause slower convergence and bring in the difficulty of parameter tuning. In contrast, LEAD takes a different way to solve these issues, as explained below.

Data heterogeneity. It is common in distributed settings that there exists data heterogeneity among agents, especially in real-world applications where different agents collect data from different scenarios. In other words, we generally have f_i(x) ≠ f_j(x) for i ≠ j. The optimality condition of problem (1) gives 1_{n×1}ᵀ∇F(X^∗) = 0, where X^∗ = [x^∗, ..., x^∗] is a consensual and optimal solution. The data heterogeneity and the optimality condition imply that there exist at least two agents i and j such that ∇f_i(x^∗) ≠ 0 and ∇f_j(x^∗) ≠ 0. As a result, a simple D-PSGD algorithm cannot converge to the consensual and optimal solution, since X^∗ ≠ WX^∗ − ηE_ξ∇F(X^∗; ξ) even when the stochastic gradient variance is zero.

Gradient correction. Primal-dual algorithms and gradient tracking algorithms are able to converge much faster than DGD-type algorithms by handling the data heterogeneity issue, as introduced in Section 2. Specifically, LEAD is motivated by the design of the primal-dual algorithm NIDS (Li et al., 2019), and the relation becomes clear if we consider the two-step reformulation of NIDS adopted in (Li & Yan, 2019):
$$D^{k+1} = D^k + \frac{I - W}{2\eta}\big(X^k - \eta\nabla F(X^k) - \eta D^k\big), \qquad (4)$$
$$X^{k+1} = X^k - \eta\nabla F(X^k) - \eta D^{k+1}, \qquad (5)$$
where X^k and D^k represent the primal and dual variables, respectively. The dual variable D^k plays the role of gradient correction.
As k → ∞, we expect D^k → −∇F(X^∗), and X^k will converge to X^∗ via the update in (5), since D^{k+1} corrects the nonzero gradient ∇F(X^k) asymptotically. The key design of Alg. 1 is to apply compression to the auxiliary variable defined as Y^k = X^k − η∇F(X^k) − ηD^k. This design ensures that the dual variable D^k lies in Range(I − W), which is essential for convergence. Moreover, it achieves implicit error compensation, as we will explain later. To stabilize the algorithm with the inexact dual update, we introduce a parameter γ to control the stepsize of the dual update. Therefore, if we ignore the details of the compression, Alg. 1 can be concisely written as
$$Y^k = X^k - \eta\nabla F(X^k; \xi^k) - \eta D^k, \qquad (6)$$
$$D^{k+1} = D^k + \frac{\gamma}{2\eta}(I - W)\hat{Y}^k, \qquad (7)$$
$$X^{k+1} = X^k - \eta\nabla F(X^k; \xi^k) - \eta D^{k+1}, \qquad (8)$$
where Ŷ^k represents the compression of Y^k and ∇F(X^k; ξ^k) denotes the stochastic gradient.

Nevertheless, how to compress the communication and how fast a convergence rate we can attain under compression error are unknown. In the following, we propose to carefully control the compression error by difference compression and error compensation such that the inexact dual update (Line 6) and primal update (Line 7) still guarantee convergence, as proved in Section 4.

Compression error. Different from existing works, which typically compress the primal variable X^k or its difference, LEAD first constructs an intermediate variable Y^k and applies compression to obtain its coarse representation Ŷ^k, as shown in the procedure COMM(Y, H, H_w):

• Compress the difference between Y and the state variable H as Q;
• Q is encoded into a low-bit representation, which enables the efficient local communication step Ŷ_w = H_w + WQ. It is the only communication step in each iteration.
• Each agent recovers its estimate Ŷ by Ŷ = H + Q, and we have Ŷ_w = WŶ.
• The states H and H_w are updated based on Ŷ and Ŷ_w, respectively. We have H_w = WH.

By this procedure, we expect that when both Y^k and H^k converge to X^∗, the compression error vanishes asymptotically due to the assumption we make on the compression operator in Assumption 2.

Remark 1. Note that difference compression is also applied in DCD-PSGD (Tang et al., 2018a) and CHOCO-SGD (Koloskova et al., 2019), but their state update is a simple integration of the compressed difference. We find this update is usually too aggressive and causes instability, as shown in our experiments. Therefore, we adopt a momentum update H = (1 − α)H + αŶ motivated by DIANA (Mishchenko et al., 2019), which reduces the compression error for gradient compression in centralized optimization.

Implicit error compensation. On the other hand, even if the compression error exists, LEAD essentially compensates for the error in the inexact dual update (Line 6), making the algorithm more stable and robust. To illustrate how it works, let E^k = Ŷ^k − Y^k denote the compression error and e_i^k be its i-th row. The update of D^k gives
$$D^{k+1} = D^k + \frac{\gamma}{2\eta}(\hat{Y}^k - \hat{Y}_w^k) = D^k + \frac{\gamma}{2\eta}(I - W)Y^k + \frac{\gamma}{2\eta}(E^k - WE^k),$$
where −WE^k indicates that agent i spreads the total compression error −Σ_{j∈N_i∪{i}} w_{ji} e_i^k = −e_i^k to all agents, and E^k indicates that each agent compensates for this error locally by adding e_i^k back. This error compensation also explains why the global view in (3) doesn't involve the compression error.

Remark 2. Note that in LEAD, the compression error is compensated into the model X^{k+1} through Lines 6 and 7 such that the gradient computation in the next iteration is aware of the compression error.
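The updates (6)-(8) together with the COMM procedure can be summarized in a short simulation sketch. It follows our reading of Alg. 1 (all agents are simulated with row-stacked matrices, and `compress` stands for any unbiased operator such as the quantization of Section 5); this is a sketch under those assumptions, not the authors' implementation.

```python
import numpy as np

def lead_step(X, D, H, Hw, W, stoch_grad, eta, alpha, gamma, compress):
    """One LEAD iteration (Lines 4-7 of Alg. 1) on agent states
    X, D, H, Hw of shape (n, d)."""
    G = stoch_grad(X)                       # same sample reused in Lines 4 and 7
    Y = X - eta * G - eta * D               # Line 4: auxiliary variable
    Q = compress(Y - H)                     # Line 10: compress the difference
    Y_hat = H + Q                           # Line 11: recover coarse estimate
    Yw_hat = Hw + W @ Q                     # Line 12: the only communication step
    H_new = (1 - alpha) * H + alpha * Y_hat       # Line 13: momentum state update
    Hw_new = (1 - alpha) * Hw + alpha * Yw_hat    # Line 14: keeps Hw = W H
    D_new = D + gamma / (2 * eta) * (Y_hat - Yw_hat)  # Line 6: inexact dual update
    X_new = X - eta * G - eta * D_new       # Line 7: primal update
    return X_new, D_new, H_new, Hw_new

# Initialization per Alg. 1: D^1 = (I - W)Z (e.g., zeros) keeps D in
# Range(I - W); Hw^1 = W H^1 maintains Hw = W H for all iterations.
```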
This has a subtle but important difference from the error compensation or error feedback in (Seide et al., 2014; Wu et al., 2018; Stich et al., 2018; Karimireddy et al., 2019; Tang et al., 2019b; Liu et al., 2020; Tang et al., 2019a), where the error is stored in memory and only compensated after the gradient computation and before the compression.

Remark 3. The proposed algorithm, LEAD in Alg. 1, recovers NIDS (Li et al., 2019), D2 (Tang et al., 2018b), and Exact Diffusion (Yuan et al., 2018). These connections are established in Appendix B." }, { "heading": "4 THEORETICAL ANALYSIS", "text": "In this section, we show the convergence rate of the proposed algorithm LEAD. Before presenting the main theorem, we make some assumptions, which are commonly used in the analysis of decentralized optimization algorithms. All proofs are provided in Appendix E.

Assumption 2 (Unbiased and C-contracted operator). The compression operator Q: R^d → R^d is unbiased, i.e., EQ(x) = x, and there exists C ≥ 0 such that E‖x − Q(x)‖₂² ≤ C‖x‖₂² for all x ∈ R^d.

Assumption 3 (Stochastic gradient). The stochastic gradient ∇f_i(x; ξ) is unbiased, i.e., E_ξ∇f_i(x; ξ) = ∇f_i(x), and the stochastic gradient variance is bounded: E_ξ‖∇f_i(x; ξ) − ∇f_i(x)‖₂² ≤ σ_i² for all i ∈ [n]. Denote σ² = (1/n)Σ_{i=1}^n σ_i².

Assumption 4. Each f_i is L-smooth and µ-strongly convex with L ≥ µ > 0, i.e., for i = 1, 2, ..., n and ∀x, y ∈ R^d, we have
$$f_i(y) + \langle\nabla f_i(y), x - y\rangle + \frac{\mu}{2}\|x - y\|^2 \le f_i(x) \le f_i(y) + \langle\nabla f_i(y), x - y\rangle + \frac{L}{2}\|x - y\|^2.$$

Theorem 1 (Constant stepsize). Let {X^k, H^k, D^k} be the sequence generated from Alg. 1 and X^∗ be the optimal solution with D^∗ = −∇F(X^∗). Under Assumptions 1-4, for any constant stepsize η ∈ (0, 2/(µ + L)], if the compression parameters α and γ satisfy
$$\gamma \in \Big(0,\ \min\Big\{\frac{2}{(3C + 1)\beta},\ \frac{2\mu\eta(2 - \mu\eta)}{[2 - \mu\eta(2 - \mu\eta)]C\beta}\Big\}\Big), \qquad (9)$$
$$\alpha \in \Big[\frac{C\beta\gamma}{2(1 + C)},\ \frac{1}{a_1}\min\Big\{\frac{2 - \beta\gamma}{4 - \beta\gamma},\ \mu\eta(2 - \mu\eta)\Big\}\Big], \qquad (10)$$
with β := λ_max(I − W), then, in total expectation, we have
$$\frac{1}{n}\mathbb{E}L^{k+1} \le \rho\,\frac{1}{n}\mathbb{E}L^k + \eta^2\sigma^2, \qquad (11)$$
where
$$L^k := (1 - a_1\alpha)\|X^k - X^\ast\|^2 + (2\eta^2/\gamma)\,\mathbb{E}\|D^k - D^\ast\|^2_{(I-W)^\dagger} + a_1\|H^k - X^\ast\|^2,$$
$$\rho := \max\Big\{\frac{1 - \mu\eta(2 - \mu\eta)}{1 - a_1\alpha},\ 1 - \frac{\gamma}{2\lambda_{\max}((I - W)^\dagger)},\ 1 - \alpha\Big\} < 1, \qquad a_1 := \frac{4(1 + C)}{C\beta\gamma + 2}.$$
The result holds for C → 0.

Corollary 1 (Complexity bounds). Define the condition numbers of the objective function and the communication graph as κ_f = L/µ and κ_g = λ_max(I − W)/λ_min⁺(I − W), respectively. Under the same setting as in Theorem 1, we can choose η = 1/L, γ = min{1/(Cβκ_f), 1/((1 + 3C)β)}, and α = O(1/((1 + C)κ_f)) such that
$$\rho = \max\Big\{1 - O\Big(\frac{1}{(1 + C)\kappa_f}\Big),\ 1 - O\Big(\frac{1}{(1 + C)\kappa_g}\Big),\ 1 - O\Big(\frac{1}{C\kappa_f\kappa_g}\Big)\Big\}.$$
With full gradient (i.e., σ = 0), we obtain the following complexity bounds:

• LEAD converges to the ε-accurate solution with iteration complexity O(((1 + C)(κ_f + κ_g) + Cκ_fκ_g) log(1/ε)).
• When C = 0 (i.e., there is no compression), we obtain ρ = max{1 − O(1/κ_f), 1 − O(1/κ_g)} and the iteration complexity O((κ_f + κ_g) log(1/ε)). This exactly recovers the convergence rate of NIDS (Li et al., 2019).
• When C ≤ (κ_f + κ_g)/(κ_fκ_g + κ_f + κ_g), the asymptotic complexity is O((κ_f + κ_g) log(1/ε)), which also recovers that of NIDS (Li et al., 2019) and indicates that the compression doesn't harm the convergence in this case.
• With C = 0 (or C ≤ (κ_f + κ_g)/(κ_fκ_g + κ_f + κ_g)) and a fully connected communication graph (i.e., W = 11ᵀ/n), we have β = 1 and κ_g = 1. Therefore, we obtain ρ = 1 − O(1/κ_f) and the complexity bound O(κ_f log(1/ε)). This recovers the convergence rate of gradient descent (Nesterov, 2013).

Remark 4.
Under the setting in Theorem 1, LEAD converges linearly to the O(σ²) neighborhood of the optimum, and converges linearly exactly to the optimum if the full gradient is used, i.e., σ = 0. The linear convergence of LEAD holds when η < 2/L, but we omit the proof.

Remark 5 (Arbitrary compression precision). For any η ∈ (0, 2/(µ + L)], based on the compression-related constant C and the network-related constant β, we can select γ and α in certain ranges to achieve convergence. This suggests that LEAD supports unbiased compression with arbitrary precision, i.e., any C > 0.

Corollary 2 (Consensus error). Under the same setting as in Theorem 1, let x̄^k = (1/n)Σ_{i=1}^n x_i^k be the averaged model and H^0 = H^1; then all agents achieve consensus at the rate
$$\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\|x_i^k - \bar{x}^k\|^2 \le \frac{2L^0}{n}\rho^k + \frac{2\sigma^2}{1 - \rho}\eta^2, \qquad (12)$$
where ρ is defined as in Corollary 1 with appropriate parameter settings.

Theorem 2 (Diminishing stepsize). Let {X^k, H^k, D^k} be the sequence generated from Alg. 1 and X^∗ be the optimal solution with D^∗ = −∇F(X^∗). Under Assumptions 1-4, if η_k = 2θ_5/(θ_3θ_4θ_5 k + 2) and γ_k = θ_4η_k, then by taking α_k = Cβγ_k/(2(1 + C)), in total expectation we have
$$\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\|x_i^k - x^\ast\|^2 \lesssim O\Big(\frac{1}{k}\Big), \qquad (13)$$
where θ_1, θ_2, θ_3, θ_4, and θ_5 are constants defined in the proof. The complexity bound for arriving at the ε-accurate solution is O(1/ε).

Remark 6. Compared with CHOCO-SGD, LEAD requires unbiased compression, and its convergence under biased compression has not been investigated yet. The analysis of CHOCO-SGD relies on the bounded gradient assumption, i.e., ‖∇f_i(x)‖² ≤ G, which is restrictive because it conflicts with strong convexity, while LEAD doesn't need this assumption. Moreover, the theorem for CHOCO-SGD requires a specific choice of γ, while LEAD only requires γ to be within a rather large range. This may explain the advantages of LEAD over CHOCO-SGD in terms of robustness to parameter settings." }, { "heading": "5 NUMERICAL EXPERIMENT", "text": "We consider three machine learning problems: ℓ₂-regularized linear regression, logistic regression, and deep neural networks. The proposed LEAD is compared with QDGD (Reisizadeh et al., 2019a), DeepSqueeze (Tang et al., 2019a), CHOCO-SGD (Koloskova et al., 2019), and two non-compressed algorithms, DGD (Yuan et al., 2016) and NIDS (Li et al., 2019).

Setup. We consider eight machines connected in a ring topology network. Each agent can only exchange information with its two 1-hop neighbors. The mixing weight is simply set as 1/3. For compression, we use the unbiased b-bit quantization method with the ∞-norm
$$Q_\infty(x) := \big(\|x\|_\infty\, 2^{-(b-1)} \operatorname{sign}(x)\big) \cdot \Big\lfloor \frac{2^{(b-1)}|x|}{\|x\|_\infty} + u \Big\rfloor, \qquad (14)$$
where · is the Hadamard product, |x| is the elementwise absolute value of x, and u is a random vector uniformly distributed in [0, 1]^d. Only sign(x), the norm ‖x‖_∞, and the integers in the bracket need to be transmitted. Note that this quantization method is similar to the quantization used in QSGD (Alistarh et al., 2017) and CHOCO-SGD (Koloskova et al., 2019), but we use the ∞-norm scaling instead of the 2-norm. This small change brings a significant improvement in compression precision, as justified both theoretically and empirically in Appendix C. In this section, we choose 2-bit quantization and quantize the data blockwise (block size = 512).

For all experiments, we tune the stepsize η from {0.01, 0.05, 0.1, 0.5}. For QDGD, CHOCO-SGD, and DeepSqueeze, γ is tuned from {0.01, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0}. Note that different notations are used in their original papers.
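For concreteness, here is a short sketch of the ∞-norm b-bit quantizer in (14); the dither u makes it unbiased, and the function name and interface are our own choices.

```python
import numpy as np

def quantize_inf(x, b=2, rng=np.random.default_rng()):
    """Unbiased b-bit quantization with infinity-norm scaling, Eq. (14).
    Only sign(x), ||x||_inf, and the small integer levels are sent."""
    scale = np.linalg.norm(x, np.inf)
    if scale == 0.0:
        return np.zeros_like(x)
    u = rng.uniform(0.0, 1.0, size=x.shape)              # dither in [0, 1]^d
    levels = np.floor(2.0 ** (b - 1) * np.abs(x) / scale + u)
    return (scale * 2.0 ** -(b - 1)) * np.sign(x) * levels

# Blockwise use as in the experiments: apply the quantizer to chunks of
# 512 entries so that each block carries its own infinity norm.
```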
Here we uniformly denote the stepsize as η and the additional parameter in these algorithms as γ for simplicity. For LEAD, we simply fix α = 0.5 and γ = 1.0 in all experiments, since we find LEAD is robust to parameter settings, as validated in the parameter sensitivity analysis in Appendix D.1. This indicates the minor effort needed for tuning LEAD. Detailed parameter settings for all experiments are summarized in Appendix D.3.

Linear regression. We consider the problem: f(x) = Σ_{i=1}^n (‖A_i x − b_i‖² + λ‖x‖²). The data matrices A_i ∈ R^{200×200} and the true solution x′ are randomly synthesized. The values b_i are generated by adding Gaussian noise to A_i x′. We let λ = 0.1 and denote the optimal solution of the linear regression problem by x^∗. We use the full-batch gradient to exclude the impact of gradient variance. The performance is shown in Fig. 1. The distance to x^∗ in Fig. 1a and the consensus error in Fig. 1c verify that LEAD converges exponentially to the optimal consensual solution. It significantly outperforms most baselines and matches NIDS well under the same number of iterations. Fig. 1b demonstrates the benefit of compression when considering the communication bits. Fig. 1d shows that the compression error vanishes for both LEAD and CHOCO-SGD, while the compression error remains large for QDGD and DeepSqueeze because they directly compress the local models.

Logistic regression. We further consider a logistic regression problem on the MNIST dataset. The regularization parameter is 10⁻⁴. We consider both homogeneous and heterogeneous data settings. In the homogeneous setting, the data samples are randomly shuffled before being uniformly partitioned among all agents, so the data distributions of the agents are very similar. In the heterogeneous setting, the samples are first sorted by their labels and then partitioned among agents. Due to the space limit, we mainly present the results in the heterogeneous setting here and defer the homogeneous setting to Appendix D.2. The results using the full-batch gradient and the mini-batch gradient (the mini-batch size is 512 for each agent) are shown in Fig. 2 and Fig. 3, respectively, and both settings show the faster convergence and higher precision of LEAD.

Neural network. We empirically study the performance of LEAD in optimizing deep neural networks by training AlexNet (240 MB) on the CIFAR10 dataset. The mini-batch size is 64 for each agent. Both the homogeneous and heterogeneous cases are shown in Fig. 4. In the homogeneous case, CHOCO-SGD, DeepSqueeze, and LEAD perform similarly and outperform the non-compressed variants in terms of communication efficiency, but CHOCO-SGD and DeepSqueeze need more effort for parameter tuning because their convergence is sensitive to the setting of γ. In the heterogeneous case, LEAD achieves the fastest and most stable convergence. Note that in this setting, sufficient information exchange is more important for convergence because the models of different agents move in significantly diverse directions. In this case, DGD only converges with a smaller stepsize, and its communication-compressed variants, including QDGD, DeepSqueeze, and CHOCO-SGD, diverge in all parameter settings we tried.

In summary, our experiments verify our theoretical analysis and show that LEAD is able to handle data heterogeneity very well. Furthermore, the performance of LEAD is robust to parameter settings and needs less effort for parameter tuning, which is critical in real-world applications."
}, { "heading": "6 CONCLUSION", "text": "In this paper, we investigate the communication compression in decentralized optimization. Motivated by primal-dual algorithms, a novel decentralized algorithm with compression, LEAD, is proposed to achieve faster convergence rate and to better handle heterogeneous data while enjoying the benefit of efficient communication. The nontrivial analyses on the coupled dynamics of inexact primal and dual updates as well as compression error establish the linear convergence of LEAD when full gradient is used and the linear convergence to the O(σ2) neighborhood of the optimum when stochastic gradient is used. Extensive experiments validate the theoretical analysis and demonstrate the state-of-the-art efficiency and robustness of LEAD. LEAD is also applicable to non-convex problems as empirically verified in the neural network experiments but we leave the non-convex analysis as the future work." }, { "heading": "ACKNOWLEDGEMENTS", "text": "Xiaorui Liu and Dr. Jiliang Tang are supported by the National Science Foundation (NSF) under grant numbers CNS-1815636, IIS-1928278, IIS-1714741, IIS-1845081, IIS-1907704, and IIS1955285. Yao Li and Dr. Ming Yan are supported by NSF grant DMS-2012439 and Facebook Faculty Research Award (Systems for ML). Dr. Rongrong Wang is supported by NSF grant CCF1909523." }, { "heading": "Contents of Appendix", "text": "" }, { "heading": "A LEAD in agent’s perspective 14", "text": "" }, { "heading": "B Connections with exiting works 14", "text": "" }, { "heading": "C Compression method 15", "text": "C.1 p-norm b-bits quantization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15\nC.2 Compression error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16" }, { "heading": "D Experiments 17", "text": "D.1 Parameter sensitivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17\nD.2 Experiments in homogeneous setting . . . . . . . . . . . . . . . . . . . . . . . . . 17\nD.3 Parameter settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18" }, { "heading": "E Proofs of the theorems 19", "text": "E.1 Illustrative flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19\nE.2 Two central Lemmas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20\nE.3 Proof of Lemma 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20\nE.4 Proof of Lemma 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22\nE.5 Proof of Theorem 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23\nE.6 Proof of Theorem 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28" }, { "heading": "A LEAD IN AGENT’S PERSPECTIVE", "text": "In the main paper, we described the algorithm with matrix notations for concision. Here we further provide a complete algorithm description from the agents’ perspective.\nAlgorithm 2 LEAD in Agent’s Perspective input: stepsize η, compression parameters (α, γ), initial values x0i , h1i , zi, ∀i ∈ {1, 2, . . . , n} output: xKi , ∀i ∈ {1, 2, . . . , n} or ∑n i=1 x K i\nn\n1: for each agent i ∈ {1, 2, . . . , n} do 2: d1i = zi − ∑ j∈Ni∪{i} wijzj\n3: (hw)1i = ∑ j∈Ni∪{i} wij(hw) 1 j 4: x1i = x 0 i − η∇fi(x0i ; ξ0i ) 5: end for 6: for k = 1, 2, . . . ,K − 1 do in parallel for all agents i ∈ {1, 2, . . . 
, n} 7: compute ∇fi(xki ; ξki ) B Gradient computation 8: yki = x k i − η∇fi(xki ; ξki )− ηdki 9: qki = Compress(y k i − hki ) B Compression\n10: ŷki = h k i + q k i 11: for neighbors j ∈ Ni do 12: Send qki and receive q k j B Communication 13: end for 14: (ŷw) k i = (hw) k i + ∑ j∈Ni∪{i} wijq k j 15: hk+1i = (1− α)hki + αŷki 16: (hw) k+1 i = (1− α)(hw) k i + α(ŷw) k i 17: dk+1i = d k i + γ 2η ( ŷki − (ŷw) k i\n) 18: xk+1i = x k i − η∇fi(xki ; ξki )− ηd k+1 i B Model update 19: end for" }, { "heading": "B CONNECTIONS WITH EXITING WORKS", "text": "The non-compressed variant of LEAD in Alg. 1 recovers NIDS (Li et al., 2019), D2 (Tang et al., 2018b) and Exact Diffusion (Yuan et al., 2018) as shown in Proposition 1. In Corollary 3, we show that the convergence rate of LEAD exactly recovers the rate of NIDS whenC = 0, γ = 1 and σ = 0. Proposition 1 (Connection to NIDS, D2 and Exact Diffusion). When there is no communication compression (i.e., Ŷk = Yk) and γ = 1, Alg. 1 recovers D2:\nXk+1 = I+W\n2\n( 2Xk −Xk−1 − η∇F(Xk; ξk) + η∇F(Xk−1; ξk−1) ) . (15)\nFurthermore, if the stochastic estimator of the gradient∇F(Xk; ξk) is replaced by the full gradient, it recovers NIDS and Exact Diffusion with specific settings. Corollary 3 (Consistency with NIDS). When C = 0 (no communication compression), γ = 1 and σ = 0 (full gradient), LEAD has the convergence consistent with NIDS with η ∈ (0, 2/(µ+ L)]:\nLk+1 ≤ max { 1− µ(2η − µη2), 1− 1\n2λmax((I−W)†)\n} Lk. (16)\nSee the proof in E.5.\nProof of Proposition 1. Let γ = 1 and Ŷk = Yk. Combing Lines 4 and 6 of Alg. 1 gives\nDk+1 = Dk + I−W 2η (Xk − η∇F(Xk; ξk)− ηDk). (17)\nBased on Line 7, we can represent ηDk from the previous iteration as\nηDk = Xk−1 −Xk − η∇F(Xk−1; ξk−1). (18)\nEliminating both Dk and Dk+1 by substituting (17)-(18) into Line 7, we obtain\nXk+1 = Xk − η∇F(Xk; ξk)− ( ηDk +\nI−W 2\n(Xk − η∇F(Xk; ξk)− ηDk) ) (from (17))\n= I+W 2 (Xk − η∇F(Xk; ξk))− I+W 2 ηDk\n= I+W 2 (Xk − η∇F(Xk; ξk))− I+W 2 (Xk−1 −Xk − η∇F(Xk−1; ξk−1)) (from (18))\n= I+W\n2 (2Xk −Xk−1 − η∇F(Xk; ξk) + η∇F(Xk−1; ξk−1)), (19)\nwhich is exactly D2. It also recovers Exact Diffusion with A = I+W2 and M = ηI in Eq. (97) of (Yuan et al., 2018)." }, { "heading": "C COMPRESSION METHOD", "text": "" }, { "heading": "C.1 P-NORM B-BITS QUANTIZATION", "text": "Theorem 3 (p-norm b-bit quantization). Let us define the quantization operator as\nQp(x) := ( ‖x‖p sign(x)2−(b−1) ) · ⌊ 2b−1|x| ‖x‖p + u ⌋ (20)\nwhere · is the Hadamard product, |x| is the elementwise absolute value and u is a random dither vector uniformly distributed in [0, 1]d. Qp(x) is unbiased, i.e., EQp(x) = x, and the compression variance is upper bounded by\nE‖x−Qp(x)‖2 ≤ 1\n4 ‖ sign(x)2−(b−1)‖2‖x‖2p, (21)\nwhich suggests that ∞-norm provides the smallest upper bound for the compression variance due to ‖x‖p ≤ ‖x‖q,∀x if 1 ≤ q ≤ p ≤ ∞. Remark 7. For the compressor defined in (20), we have the following the compression constant\nC = sup x ‖ sign(x)2−(b−1)‖2‖x‖2p 4‖x‖2 .\nProof. Let denote v = ‖x‖p sign(x)2−(b−1), s = 2 b−1|x| ‖x‖p , s1 = ⌊ 2b−1|x| ‖x‖p ⌋ and s2 = ⌈ 2b−1|x| ‖x‖p ⌉ . We can rewrite x as x = s · v. For any coordinate i such that si = (s1)i, we have Qp(xi) = (s1)ivi with probability 1. Hence EQp(x)i = sivi = xi and\nE(xi −Qp(x)i)2 = (xi − sivi)2 = 0.\nFor any coordinate i such that si 6= (s1)i, we have (s2)i − (s1)i = 1 and Qp(x)i satisfies\nQp(x)i = { (s1)ivi, w.p. (s2)i − si, (s2)ivi, w.p. 
si − (s1)i.\nThus, we derive\nEQp(x)i = vi(s1)i(s2 − s)i + vi(s2)i(s− s1)i = visi(s2 − s1)i = visi = xi,\nand\nE[xi −Qp(x)i]2 = (xi − vi(s1)i)2(s2 − s)i + (xi − vi(s2)i)2(s− s1)i = (s2 − s1)ix2i + ( (s1)i(s2)i(s1 − s2)i + si((s2)2i − (s1)2i ) ) v2i − 2si(s2 − s1)ixivi\n= x2i + ( − (s1)i(s2)i + si(s2 + s1)i ) v2i − 2sixivi\n= (xi − sivi)2 + ( − (s1)i(s2)i + si(s2 + s1)i − s2i ) v2i\n= (xi − sivi)2 + (s2 − s)i(s− s1)iv2i = (s2 − s)i(s− s1)iv2i\n≤ 1 4 v2i .\nConsidering both cases, we have EQ(x) = x and E‖x−Qp(x)‖2 = ∑\n{si=(s1)i}\nE[xi −Qp(x)i]2 + ∑\n{si 6=(s1)i}\nE[xi −Qp(x)i]2\n≤ 0 + 1 4 ∑ {si 6=(s1)i} v2i\n≤ 1 4 ‖v‖2 = 1\n4 ‖ sign(x)2−(b−1)‖2‖x‖2p." }, { "heading": "C.2 COMPRESSION ERROR", "text": "To verify Theorem 3, we compare the compression error of the quantization method defined in (20) with different norms (p = 1, 2, 3, . . . , 6,∞). Specifically, we uniformly generate 100 random vectors in R10000 and compute the average compression error. The result shown in Figure 5 verifies our proof in Theorem 3 that the compression error decreases when p increases. This suggests that ∞-norm provides the best compression precision under the same bit constraint. Under similar setting, we also compare the compression error with other popular compression methods, such as top-k and random-k sparsification. The x-axes represents the average bits needed to represent each element of the vector. The result is showed in Fig. 6. Note that intuitively top-k methods should perform better than random-k method, but the top-k method needs extra bits to transmitted the index while random-k method can avoid this by using the same random seed. Therefore, top-k method doesn’t outperform random-k too much under the same communication budget. The result in Fig. 6 suggests that∞-norm b-bits quantization provides significantly better compression precision than others under the same bit constraint." }, { "heading": "D EXPERIMENTS", "text": "" }, { "heading": "D.1 PARAMETER SENSITIVITY", "text": "In the linear regression problem, the convergence of LEAD under different parameter settings of α and γ are tested. The result showed in Figure 7 indicates that LEAD performs well in most settings and is robust to the parameter setting. Therefore, in this paper, we simply set α = 0.5 and γ = 1.0 for LEAD in all experiment, which indicates the minor effort needed for parameter tuning." }, { "heading": "D.2 EXPERIMENTS IN HOMOGENEOUS SETTING", "text": "The experiments on logistic regression problem in homogeneous case are showed in Fig. 8 and Fig. 9. It shows that DeepSqueeze, CHOCO-SGD and LEAD converges similarly while Deep-\nSqueeze and CHOCO-SGD require to tune a smaller γ for convergence as showed in the parameter setting in Section D.3. Generally, a smaller γ decreases the model propagation between agents since γ changes the effective mixing matrix and this may cause slower convergence. However, in the setting where data from different agents are very similar, the models move to close directions such that the convergence is not affected too much." }, { "heading": "D.3 PARAMETER SETTINGS", "text": "The best parameter settings we search for all algorithms and experiments are summarized in Tables 1– 4. QDGD and DeepSqueeze are more sensitive to γ and CHOCO-SGD is slight more robust. LEAD is most robust to parameter settings and it works well for the setting α = 0.5 and γ = 1.0 in all experiments in this paper." 
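As a side note to the compression study in Appendix C.2, the reported comparison across p-norms can be reproduced in a few lines. The sketch below implements the general p-norm quantizer (20) and measures the average error for several p under the same uniform-vector setup; it is our own code, not the authors', and the exact error values will differ from Figure 5.

```python
import numpy as np

def quantize_p(x, b=2, p=2, rng=np.random.default_rng(0)):
    """Eq. (20): unbiased b-bit quantization with p-norm scaling."""
    scale = np.linalg.norm(x, p)
    u = rng.uniform(size=x.shape)
    return (scale * 2.0 ** -(b - 1)) * np.sign(x) * np.floor(
        2.0 ** (b - 1) * np.abs(x) / scale + u)

rng = np.random.default_rng(0)
xs = rng.uniform(size=(100, 10000))       # 100 random vectors in R^10000
for p in (1, 2, 6, np.inf):
    err = np.mean([np.sum((x - quantize_p(x, p=p)) ** 2) for x in xs])
    print(f"p = {p}: mean squared error = {err:.4f}")  # decreases as p grows
```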
}, { "heading": "E PROOFS OF THE THEOREMS", "text": "E.1 ILLUSTRATIVE FLOW\nThe following flow graph depicts the relation between iterative variables and clarifies the range of conditional expectation. {Gk}∞k=0 and {Fk}∞k=0 are two σ−algebras generated by the gradient sampling and the stochastic compression respectively. They satisfy\nG0 ⊂ F0 ⊂ G1 ⊂ F1 ⊂ · · · ⊂ Gk ⊂ Fk ⊂ · · ·\n(X1,D1,H1) (X2,D2,H2) (X3,D3,H3) (Xk,Dk,Hk) · · ·\nY1 Y2 Yk−1 Yk\nF0 F1 Fk−2 Fk−1\n∇F(X1;ξ1)∈G0 ∇F(X2;ξ2)∈G1 ··· ∇F(Xk;ξk)∈Gk−1E 1\n1st round\nE2\n···\nEk−1\n(k−1)th round\n⊂ ··· ⊂\nThe solid and dashed arrows in the top flow illustrate the dynamics of the algorithm, while in the bottom, the arrows stand for the relation between successive F-σ-algebras. The downward arrows\ndetermine the range of F-σ-algebras. E.g., up to Ek, all random variables are in Fk−1 and up to ∇F(Xk; ξk), all random variables are in Gk−1 with Gk−1 ⊂ Fk−1. Throughout the appendix, without specification, E is the expectation conditioned on the corresponding stochastic estimators given the context." }, { "heading": "E.2 TWO CENTRAL LEMMAS", "text": "Lemma 1 (Fundamental equality). Let X∗ be the optimal solution, D∗ := −∇F(X∗) and Ek denote the compression error in the kth iteration, that is Ek = Qk − (Yk − Hk) = Ŷk − Yk. From Alg. 1, we have\n‖Xk+1 −X∗‖2 + (η2/γ)‖Dk+1 −D∗‖2M =‖Xk −X∗‖2 + (η2/γ)‖Dk −D∗‖2M − (η2/γ)‖Dk+1 −Dk‖2M − η2‖Dk+1 −D∗‖2\n− 2η〈Xk −X∗,∇F(Xk; ξk)−∇F(X∗)〉+ η2‖∇F(Xk; ξk)−∇F(X∗)‖2 + 2η〈Ek,Dk+1 −D∗〉,\nwhere M := 2(I−W)†− γI and γ < 2/λmax(I−W) ensures the positive definiteness of M over range(I−W). Lemma 2 (State inequality). Let the same assumptions in Lemma 1 hold. From Alg. 1, if we take the expectation over the compression operator conditioned on the k-th iteration, we have\nE‖Hk+1 −X∗‖2 ≤ (1− α)‖Hk −X∗‖2 + αE‖Xk+1 −X∗‖2 + αη2E‖Dk+1 −Dk‖2\n+ 2αη2\nγ E‖Dk+1 −Dk‖2M + α2E‖Ek‖2 − αγE‖Ek‖2I−W − α(1− α)‖Yk −Hk‖2." }, { "heading": "E.3 PROOF OF LEMMA 1", "text": "Before proving Lemma 1, we let Ek = Ŷk −Yk and introduce the following three Lemmas. Lemma 3. Let X∗ be the consensus solution. Then, from Line 4-7 of Alg. 1, we obtain\nI−W 2η\n(Xk+1 −X∗) = ( I\nγ − I−W 2\n) (Dk+1 −Dk)− I−W\n2η Ek. (22)\nProof. From the iterations in Alg. 1, we have\nDk+1 = Dk + γ\n2η (I−W)Ŷk (from Line 6)\n= Dk + γ\n2η (I−W)(Yk +Ek)\n= Dk + γ\n2η (I−W)(Xk − η∇F(Xk; ξk)− ηDk +Ek) (from Line 4)\n= Dk + γ\n2η (I−W)(Xk − η∇F(Xk; ξk)− ηDk+1 −X∗ + η(Dk+1 −Dk) +Ek)\n= Dk + γ 2η (I−W)(Xk+1 −X∗) + γ 2 (I−W)(Dk+1 −Dk) + γ 2η (I−W)Ek,\nwhere the fourth equality holds due to (I−W)X∗ = 0 and the last equality comes from Line 7 of Alg. 1. Rewriting this equality, and we obtain (22).\nLemma 4. Let D∗ = −∇F(X∗) ∈ span{I−W}, we have\n〈Xk+1 −X∗,Dk+1 −Dk〉 =η γ ‖Dk+1 −Dk‖2M − 〈Ek,Dk+1 −Dk〉, (23) 〈Xk+1 −X∗,Dk+1 −D∗〉 =η γ 〈Dk+1 −Dk,Dk+1 −D∗〉M − 〈Ek,Dk+1 −D∗〉, (24)\nwhere M = 2(I−W)† − γI and γ < 2/λmax(I−W) ensures the positive definiteness of M over span{I−W}.\nProof. Since Dk+1 ∈ span{I−W} for any k, we have\n〈Xk+1 −X∗,Dk+1 −Dk〉 =〈(I−W)(Xk+1 −X∗), (I−W)†(Dk+1 −Dk)〉\n=\n〈 η\nγ (2I− γ(I−W))(Dk+1 −Dk)− (I−W)Ek, (I−W)†(Dk+1 −Dk)\n〉 (from (22))\n=\n〈 η\nγ (2(I−W)† − γI\n) (Dk+1 −Dk)−Ek,Dk+1 −Dk 〉 = η\nγ ‖Dk+1 −Dk‖2M − 〈Ek,Dk+1 −Dk〉.\nSimilarly, we have\n〈Xk+1 −X∗,Dk+1 −D∗〉 =〈(I−W)(Xk+1 −X∗), (I−W)†(Dk+1 −D∗)〉\n=\n〈 η\nγ (2I− γ(I−W))(Dk+1 −Dk)− (I−W)Ek, (I−W)†(Dk+1 −D∗) 〉 = 〈 η\nγ (2(I−W)† − I)(Dk+1 −Dk)−Ek,Dk+1 −D∗ 〉 = η\nγ 〈Dk+1 −Dk,Dk+1 −D∗〉M − 〈Ek,Dk+1 −D∗〉.\nTo make sure that M is positive definite over span{I−W}, we need γ < 2/λmax(I−W).\nLemma 5. 
Taking the expectation conditioned on the compression in the kth iteration, we have 2ηE〈Ek,Dk+1 −D∗〉 = 2ηE 〈 Ek,Dk + γ\n2η (I−W)Yk + γ 2η (I−W)Ek −D∗ 〉 = γE〈Ek, (I−W)Ek〉 = γE‖Ek‖2I−W,\n2ηE〈Ek,Dk+1 −Dk〉 = 2ηE 〈 Ek, γ\n2η (I−W)Yk + γ 2η (I−W)Ek 〉 = γE〈Ek, (I−W)Ek〉 = γE‖Ek‖2I−W.\nProof. The proof is straightforward and omitted here.\nProof of Lemma 1. From Alg. 1, we have\n2η〈Xk −X∗,∇F(Xk; ξk)−∇F(X∗)〉 =2〈Xk −X∗, η∇F(Xk; ξk)− η∇F(X∗)〉 =2〈Xk −X∗,Xk −Xk+1 − η(Dk+1 −D∗)〉 (from Line 7) =2〈Xk −X∗,Xk −Xk+1〉 − 2η〈Xk −X∗,Dk+1 −D∗〉 =2〈Xk −X∗,Xk −Xk+1〉 − 2η〈Xk −Xk+1,Dk+1 −D∗〉 − 2η〈Xk+1 −X∗,Dk+1 −D∗〉 =2〈Xk −X∗ − η(Dk+1 −D∗),Xk −Xk+1〉 − 2η〈Xk+1 −X∗,Dk+1 −D∗〉 =2〈Xk+1 −X∗ + η(∇F(Xk; ξk)−∇F(X∗)),Xk −Xk+1〉 − 2η〈Xk+1 −X∗,Dk+1 −D∗〉 (from Line 7) =2〈Xk+1 −X∗,Xk −Xk+1〉+ 2η〈∇F(Xk; ξk)−∇F(X∗),Xk −Xk+1〉 − 2η〈Xk+1 −X∗,Dk+1 −D∗〉. (25)\nThen we consider the terms on the right hand side of (25) separately. Using 2〈A − B,B − C〉 = ‖A−C‖2 − ‖B−C‖2 − ‖A−B‖2, we have\n2〈Xk+1 −X∗,Xk −Xk+1〉 =2〈X∗ −Xk+1,Xk+1 −Xk〉 =‖Xk −X∗‖2 − ‖Xk+1 −Xk‖2 − ‖Xk+1 −X∗‖2. (26)\nUsing 2〈A,B〉 = ‖A‖2 + ‖B‖2 − ‖A−B‖2, we have\n2η〈∇F(Xk; ξk)−∇F(X∗),Xk −Xk+1〉 =η2‖∇F(Xk; ξk)−∇F(X∗)‖2 + ‖Xk −Xk+1‖2 − ‖Xk −Xk+1 − η(∇F(Xk; ξk)−∇F(X∗))‖2\n=η2‖∇F(Xk; ξk)−∇F(X∗)‖2 + ‖Xk −Xk+1‖2 − η2‖Dk+1 −D∗‖2. (from Line 7) (27)\nCombining (25), (26), (27), and (23), we obtain\n2η〈Xk −X∗,∇F(Xk; ξk)−∇F(X∗)〉 = ‖Xk −X∗‖2 − ‖Xk+1 −Xk‖2 − ‖Xk+1 −X∗‖2︸ ︷︷ ︸\n2〈Xk+1−X∗,Xk−Xk+1〉\n+ η2‖∇F(Xk; ξk)−∇F(X∗)‖2 + ‖Xk −Xk+1‖2 − η2‖Dk+1 −D∗‖2︸ ︷︷ ︸ 2η〈∇F(Xk;ξk)−∇F(X∗),Xk−Xk+1〉\n− (2η2 γ 〈Dk+1 −Dk,Dk+1 −D∗〉M − 2η〈Ek,Dk+1 −D∗〉 ) ︸ ︷︷ ︸\n2η〈Xk+1−X∗,Dk+1−D∗〉\n=‖Xk −X∗‖2 − ‖Xk+1 −Xk‖2 − ‖Xk+1 −X∗‖2\n+ η2‖∇F(Xk; ξk)−∇F(X∗)‖2 + ‖Xk −Xk+1‖2 − η2‖Dk+1 −D∗‖2\n+ η2\nγ\n( ‖Dk −D∗‖2M − ‖Dk+1 −D∗‖2M − ‖Dk+1 −Dk‖2M ) ︸ ︷︷ ︸\n−2〈Dk+1−Dk,Dk+1−D∗〉M\n+2η〈Ek,Dk+1 −D∗〉,\nwhere the last equality holds because\n2〈Dk −Dk+1,Dk+1 −D∗〉M =‖Dk −D∗‖2M − ‖Dk+1 −D∗‖2M − ‖Dk+1 −Dk‖2M.\nThus, we reformulate it as\n‖Xk+1 −X∗‖2 + η 2\nγ ‖Dk+1 −D∗‖2M\n=‖Xk −X∗‖2 + η 2\nγ ‖Dk −D∗‖2M −\nη2\nγ ‖Dk+1 −Dk‖2M − η2‖Dk+1 −D∗‖2\n− 2η〈Xk −X∗,∇F(Xk; ξk)−∇F(X∗)〉+ η2‖∇F(Xk; ξk)−∇F(X∗)‖2 + 2η〈Ek,Dk+1 −D∗〉,\nwhich completes the proof." }, { "heading": "E.4 PROOF OF LEMMA 2", "text": "Proof of Lemma 2. From Alg. 1, we take the expectation conditioned on kth compression and obtain\nE‖Hk+1 −X∗‖2\n=E‖(1− α)(Hk −X∗) + α(Yk −X∗) + αEk‖2 (from Line 13) =‖(1− α)(Hk −X∗) + α(Yk −X∗)‖2 + α2E‖Ek‖2\n=(1− α)‖Hk −X∗‖2 + α‖Yk −X∗‖2 − α(1− α)‖Hk −Yk‖2 + α2E‖Ek‖2. (28)\nIn the second equality, we used the unbiasedness of the compression, i.e., EEk = 0. The last equality holds because of\n‖(1− α)A+ αB‖2 = (1− α)‖A‖2 + α‖B‖2 − α(1− α)‖A−B‖2.\nIn addition, by taking the conditional expectation on the compression, we have\n‖Yk −X∗‖2 =‖Xk − η∇F(Xk; ξk)− ηDk −X∗‖2 (from Line 4) =E‖Xk+1 + ηDk+1 − ηDk −X∗‖2 (from Line 7) =E‖Xk+1 −X∗‖2 + η2E‖Dk+1 −Dk‖2 + 2ηE〈Xk+1 −X∗,Dk+1 −Dk〉 =E‖Xk+1 −X∗‖2 + η2E‖Dk+1 −Dk‖2\n+ 2η2\nγ E‖Dk+1 −Dk‖2M − 2ηE〈Ek,Dk+1 −Dk〉. (from (23))\n=E‖Xk+1 −X∗‖2 + η2E‖Dk+1 −Dk‖2\n+ 2η2\nγ E‖Dk+1 −Dk‖2M − γE‖Ek‖2I−W. (from Line 6) (29)\nCombing the above two equations (28) and (29) together, we have\nE‖Hk+1 −X∗‖2\n≤(1− α)‖Hk −X∗‖2 + αE‖Xk+1 −X∗‖2 + αη2E‖Dk+1 −Dk‖2 + 2αη 2\nγ E‖Dk+1 −Dk‖2M\n− αγE‖Ek‖2I−W + α2E‖Ek‖2 − α(1− α)‖Yk −Hk‖2, (30) which completes the proof." }, { "heading": "E.5 PROOF OF THEOREM 1", "text": "Proof of Theorem 1. 
Combining Lemmas 1, 2, and 5, we have the expectation conditioned on the compression satisfying\nE‖Xk+1 −X∗‖2 + η 2\nγ E‖Dk+1 −D∗‖2M + a1E‖Hk+1 −X∗‖2\n≤‖Xk −X∗‖2 + η 2\nγ ‖Dk −D∗‖2M −\nη2\nγ E‖Dk+1 −Dk‖2M − η2E‖Dk+1 −D∗‖2\n− 2η〈Xk −X∗,∇F(Xk; ξk)−∇F(X∗)〉+ η2‖∇F(Xk; ξk)−∇F(X∗)‖2 + γE‖Ek‖2I−W + a1(1− α)‖Hk −X∗‖2 + a1αE‖Xk+1 −X∗‖2 + a1αη2E‖Dk+1 −Dk‖2\n+ 2a1αη\n2\nγ E‖Dk+1 −Dk‖2M + a1α2E‖Ek‖2 − a1αγE‖Ek‖2I−W − a1α(1− α)‖Yk −Hk‖2\n= ‖Xk −X∗‖2 − 2η〈Xk −X∗,∇F(Xk; ξk)−∇F(X∗)〉+ η2‖∇F(Xk; ξk)−∇F(X∗)‖2︸ ︷︷ ︸ A\n+ a1αE‖Xk+1 −X∗‖2 + η2\nγ ‖Dk −D∗‖2M − η2E‖Dk+1 −D∗‖2\n+ a1(1− α)‖Hk −X∗‖2−(1− 2a1α) η2\nγ E‖Dk+1 −Dk‖2M + a1αη2E‖Dk+1 −Dk‖2︸ ︷︷ ︸\nB\n+ a1α 2E‖Ek‖2 + (1− a1α)γE‖Ek‖2I−W − a1α(1− α)‖Yk −Hk‖2︸ ︷︷ ︸\nC\n, (31)\nwhere a1 is a non-negative number to be determined. Then we deal with the three terms on the right hand side separately. We want the terms B and C to be nonpositive. First, we consider B. Note that Dk ∈ Range(I−W). If we want B ≤ 0, then, we need 1−2a1α > 0, i.e., a1α < 1/2. Therefore we have\nB =− (1− 2a1α) η2\nγ E‖Dk+1 −Dk‖2M + a1αη2E‖Dk+1 −Dk‖2 ≤ ( a1α−\n(1− 2a1α)λn−1(M) γ\n) η2E‖Dk+1 −Dk‖2,\nwhere λn−1(M) > 0 is the second smallest eigenvalue of M. It means that we also need\na1α+ (2a1α− 1)λn−1(M)\nγ ≤ 0,\nwhich is equivalent to\na1α ≤ λn−1(M)\nγ + 2λn−1(M) < 1/2. (32)\nThen we look at C. We have C =a1α2E‖Ek‖2 + (1− a1α)γE‖Ek‖2I−W − a1α(1− α)‖Yk −Hk‖2\n≤((1− a1α)βγ + a1α2)E‖Ek‖2 − a1α(1− α)‖Yk −Hk‖2\n≤C((1− a1α)βγ + a1α2)‖Yk −Hk‖2 − a1α(1− α)‖Yk −Hk‖2\nBecause we have 1− a1α > 1/2, so we need C((1− a1α)βγ + a1α2)− a1α(1− α) = (1 + C)a1α2 − a1(Cβγ + 1)α+ Cβγ ≤ 0. (33)\nThat is\nα ≥ a1(Cβγ + 1)− √ a21(Cβγ + 1)\n2 − 4(1 + C)Ca1βγ 2(1 + C)a1 =: α0, (34)\nα ≤ a1(Cβγ + 1) + √ a21(Cβγ + 1)\n2 − 4(1 + C)Ca1βγ 2(1 + C)a1 =: α1. (35)\nNext, we look at A. Firstly, by the bounded variance assumption, we have the expectation conditioned on the gradient sampling in kth iteration satisfying\nE‖Xk −X∗‖2 − 2ηE〈Xk −X∗,∇F(Xk; ξk)−∇F(X∗)〉+ η2E‖∇F(Xk; ξk)−∇F(X∗)‖2\n≤‖Xk −X∗‖2 − 2η〈Xk −X∗,∇F(Xk)−∇F(X∗)〉+ η2‖∇F(Xk)−∇F(X∗)‖2 + nη2σ2\nThen with the smoothness and strong convexity from Assumptions 4, we have the co-coercivity of ∇gi(x) with gi(x) := fi(x)− u2 ‖x‖ 2 2, which gives\n〈Xk −X∗,∇F(Xk)−∇F(X∗)〉 ≥ µL µ+ L ‖Xk −X∗‖2 + 1 µ+ L ‖∇F(Xk)−∇F(X∗)‖2.\nWhen η ≤ 2/(µ+ L), we have\n〈Xk −X∗,∇F(Xk)−∇F(X∗)〉\n= ( 1− η(µ+ L)\n2\n) 〈Xk −X∗,∇F(Xk)−∇F(X∗)〉+ η(µ+ L)\n2 〈Xk −X∗,∇F(Xk)−∇F(X∗)〉\n≥ ( µ− ηµ(µ+ L)\n2 + ηµL 2\n) ‖Xk −X∗‖2 + η\n2 ‖∇F(Xk)−∇F(X∗)‖2\n=µ ( 1− ηµ\n2\n) ‖Xk −X∗‖2 + η\n2 ‖∇F(Xk)−∇F(X∗)‖2.\nTherefore, we obtain\n− 2η〈Xk −X∗,∇F(Xk)−∇F(X∗)〉 ≤ − η2‖∇F(Xk)−∇F(X∗)‖2 − µ(2η − µη2)‖Xk −X∗‖2. (36)\nConditioned on the kthe iteration, (i.e., conditioned on the gradient sampling in kth iteration), the inequality (31) becomes\nE‖Xk+1 −X∗‖2 + η 2\nγ E‖Dk+1 −D∗‖2M + a1E‖Hk+1 −X∗‖2\n≤ ( 1− µ(2η − µη2) ) ‖Xk −X∗‖2 + a1αE‖Xk+1 −X∗‖2\n+ η2\nγ ‖Dk −D∗‖2M − η2E‖Dk+1 −D∗‖2 + a1(1− α)‖Hk −X∗‖2 + nη2σ2, (37)\nif the step size satisfies η ≤ 2µ+L . 
Rewriting (37), we have\n(1− a1α)E‖Xk+1 −X∗‖2 + η2\nγ E‖Dk+1 −D∗‖2M + η2E‖Dk+1 −D∗‖2 + a1E‖Hk+1 −X∗‖2\n≤ ( 1− µ(2η − µη2) ) ‖Xk −X∗‖2 + η 2\nγ ‖Dk −D∗‖2M + a1(1− α)‖Hk −X∗‖2 + nη2σ2,\n(38)\nand thus\n(1− a1α)E‖Xk+1 −X∗‖2 + η2\nγ E‖Dk+1 −D∗‖2M+γI + a1E‖Hk+1 −X∗‖2\n≤ ( 1− µ(2η − µη2) ) ‖Xk −X∗‖2 + η 2\nγ ‖Dk −D∗‖2M + a1(1− α)‖Hk −X∗‖2 + nη2σ2.\n(39)\nWith the definition of Lk in (12), we have\nELk+1 ≤ ρLk + nη2σ2, (40)\nwith\nρ = max\n{ 1− µ(2η − µη2)\n1− a1α ,\nλmax(M)\nγ + λmax(M) , 1− α\n} .\nwhere λmax(M) = 2λmax((I−W)†)− γ.\nRecall all the conditions on the parameters a1, α, and γ to make sure that ρ < 1:\na1α ≤ λn−1(M)\nγ + 2λn−1(M) , (41)\na1α ≤ µ(2η − µη2), (42) α ≥ a1(Cβγ + 1)− √ a21(Cβγ + 1)\n2 − 4(1 + C)Ca1βγ 2(1 + C)a1 =: α0, (43)\nα ≤ a1(Cβγ + 1) + √ a21(Cβγ + 1)\n2 − 4(1 + C)Ca1βγ 2(1 + C)a1 =: α1. (44)\nIn the following, we show that there exist parameters that satisfy these conditions.\nSince we can choose any a1, we let\na1 = 4(1 + C)\nCβγ + 2 ,\nsuch that\na21(Cβγ + 1) 2 − 4(1 + C)Ca1βγ = a21.\nThen we have\nα0 = Cβγ\n2(1 + C) → 0, as γ → 0,\nα1 = Cβγ + 2 2(1 + C) → 1 1 + C , as γ → 0.\nConditions (43) and (44) show a1α ∈ [ 2Cβγ\nCβγ + 2 , 2\n] → [0, 2], if C = 0 or γ → 0.\nHence in order to make (41) and (42) satisfied, it’s sufficient to make\n2Cβγ\nCβγ + 2 ≤ min\n{ λn−1(M)\nγ + 2λn−1(M) , µ(2η − µη2)\n} = min { 2 β − γ 4 β − γ , µ(2η − µη2) } . (45)\nwhere we use λn−1(M) = 2λmax(I−W) − γ = 2 β − γ.\nWhen C > 0, the condition (45) is equivalent to\nγ ≤ min\n{ (3C + 1)− √ (3C + 1)2 − 4C Cβ , 2µη(2− µη) [2− µη(2− µη)]Cβ } . (46)\nThe first term can be simplified using (3C + 1)− √ (3C + 1)2 − 4C Cβ ≥ 2 (3C + 1)β due to √ 1− x ≤ 1− x2 when x ∈ (0, 1).\nTherefore, for a given stepsize η, if we choose γ ∈ ( 0,min { 2 (3C + 1)β , 2µη(2− µη) [2− µη(2− µη)]Cβ }) and\nα ∈ [ Cβγ\n2(1 + C) ,min {Cβγ + 2 2(1 + C) , 2− βγ 4− βγ Cβγ + 2 4(1 + C) , µη(2− µη)Cβγ + 2 4(1 + C) }] ,\nthen, all conditions (41)-(44) hold.\nNote that γ < 2(3C+1)β implies γ < 2 β , which ensures the positive definiteness of M over span{I− W} in Lemma 4. Note that η ≤ 2µ+L ensures\nµη(2− µη)Cβγ + 2 4(1 + C) ≤ Cβγ + 2 2(1 + C) . (47)\nSo, we can simplify the bound for α as α ∈ [ Cβγ\n2(1 + C) ,min {2− βγ 4− βγ Cβγ + 2 4(1 + C) , µη(2− µη)Cβγ + 2 4(1 + C) }] .\nLastly, taking the total expectation on both sides of (40) and using tower property, we complete the proof for C > 0.\nProof of Corollary 1. Let’s first define κf = Lµ and κg = λmax(I−W) λ+min(I−W) = λmax(I −W)λmax((I − W)†).\nWe can choose the stepsize η = 1L such that the upper bound of γ is\nγupper =min { 2 (3C + 1)β ,\n2 κf ( 2− 1κf ) [ 2− 1κf ( 2− 1κf )] Cβ , 2 β } ≥ min { 2 (3C + 1)β , 1 κfCβ } ,\ndue to x(2−x)2−x(2−x) ≥ x 2−x ≥ x when x ∈ (0, 1).\nHence we can take γ = min{ 1(3C+1)β , 1 κfCβ }.\nThe bound of α is α ∈ [ Cβγ\n2(1 + C) ,min { 2− βγ 4− βγ Cβγ + 2 4(1 + C) , 1 κf (2− 1 κf ) Cβγ + 2 4(1 + C) }] When γ is chosen as 1κfCβ , pick\nα = Cβγ\n2(1 + C) =\n1\n2(1 + C)κf . (48)\nWhen 1(3C+1)β ≤ 1 κfCβ , the upper bound of α is\nαupper = min { 2− βγ 4− βγ Cβγ + 2 4(1 + C) , 1 κf (2− 1 κf ) Cβγ + 2 4(1 + C) } = min { 6C + 1\n12C + 3 , 1 κf (2− 1 κf )\n} 7C + 2\n4(C + 1)(3C + 1) ≥ min { 6C + 1\n12C + 3 , 1\nκf\n} 7C + 2\n4(C + 1)(3C + 1) .\nIn this case, we pick\nα = min\n{ 6C + 1\n12C + 3 , 1\nκf\n} 7C + 2\n4(C + 1)(3C + 1) . (49)\nNote α = O (\n1 (1+C)κf ) since 6C+112C+3 is lower bounded by 1 3 . Hence in both cases (Eq. (48) and\nEq. 
(49)), α = O (\n1 (1+C)κf\n) , and the third term of ρ is upper bounded by\n1− α ≤ max { 1− 1\n2(1 + C)κf , 1−min\n{ 6C + 1\n12C + 3 , 1\nκf\n} 7C + 2\n4(1 + C)(3C + 1) } In two cases of γ, the second term of ρ becomes\n1− γ 2λmax((I−W)†) = max\n{ 1− 1\n2Cκfκg , 1− 1 (1 + 3C)κg\n}\nBefore analysing the first term of ρ, we look at a1α in two cases of γ. When γ = 1κfCβ ,\na1α = 2Cβγ\nCβγ + 2 =\n2 2κf + 1 ≤ 1 κf .\nWhen γ = 1(3C+1)β ,\na1α = min\n{ 6C + 1\n(12C + 3) , 1\nκf } ≤ 1 κf .\nIn both cases, a1α ≤ 1κf . Therefore, the first term of ρ becomes\n1− µη(2− µη) 1− a1α ≤ 1− 1κf (2− 1 κf ) 1− 1κf = 1− 1− 1κf κf − 1 = 1− 1 κf .\nTo summarize, we have ρ ≤ 1−min { 1\nκf ,\n1\n2Cκfκg ,\n1\n(1 + 3C)κg ,\n1\n2(1 + C)κf ,min\n{ 6C + 1\n12C + 3 , 1\nκf\n} 7C + 2\n4(1 + C)(3C + 1)\n}\nand therefore\nρ = max { 1−O ( 1 (1 + C)κf ) , 1−O ( 1 (1 + C)κg ) , 1−O ( 1 Cκfκg )} .\nWith full-gradient (i.e., σ = 0), we get −accuracy solution with the total number of iterations\nk ≥ Õ((1 + C)(κf + κg) + Cκfκg).\nWhen C = 0, i.e., there is no compression, the iteration complexity recovers that of NIDS, Õ (κf + κg) .\nWhen C ≤ κf+κgκfκg+κf+κg , the complexity is improved to that of NIDS, i.e., the compression doesn’t harm the convergence in terms of the order of the coefficients.\nProof of Corollary 2. Note that (xk)> = Xk and 1n×1X∗ = X∗, then n∑ i=1 E‖xki − xk‖2 = E ∥∥Xk − 1n×1Xk∥∥2\n= E ∥∥Xk −X∗ +X∗ − 1n×1Xk∥∥2\n= E ∥∥∥∥Xk −X∗ − 1n×11>n×1n (Xk −X∗) ∥∥∥∥ ≤ E‖Xk −X∗‖2\n≤ ρEL k−1 + nη2σ2(1− ρ)−1\n1− a1α\n≤ 2ρkL0 + 2nη 2σ2\n1− ρ . (50)\nThe last inequality holds because we have a1α ≤ 1/2.\nProof of Corollary 3. From the proof of Theorem 1, when C = 0, we can set γ = 1, α = 1, and a1 = 0. Plug those values into ρ, and we obtain the convergence rate for NIDS." }, { "heading": "E.6 PROOF OF THEOREM 2", "text": "Proof of Theorem 2. In order to get exact convergence, we pick diminishing step-size, set α = Cβγ 2(1+C) , a1α = 2Cβγk Cβγk+2 , θ1 = 12λmax((I−W)†) and θ2 = Cβ 2(1+C) , then\nρk = max\n{ 1− µηk(2− µηk)− a1α\n1− a1α , 1− θ1γk, 1− θ2γk } If we further pick diminishing ηk and γk such that µηk(2− µηk)− a1α ≥ a1α, then\nµηk(2− µηk)− a1α 1− a1α ≥ a1α 1− a1α = 2Cβγk 2− Cβγk ≥ Cβγk.\nNotice that Cβγ ≤ 23 since (3C + 1) − √ (3C + 1)2 − 4C is increasing in C > 0 with limit 23 at ∞. In this case we only need,\nγk ∈ ( 0,min { (3C + 1)−√(3C + 1)2 − 4C Cβ , 2µηk(2− µηk) [4− µηk(2− µηk)]Cβ , 2 β }) . (51)\nAnd ρk ≤ max {1− Cβγk, 1− θ1γk, 1− θ2γk} ≤ 1− θ3γk\nif θ3 = min{θ1, θ2} and note that θ2 ≤ Cβ. We define\nLk := (1− a1αk)‖Xk −X∗‖2 + (2η2k/γk)E‖Dk+1 −D∗‖2(I−W)† + a1‖H k −X∗‖2.\nHence ELk+1 ≤ (1− θ3γk)ELk + nσ2η2k.\nFrom a1α ≤ µηk(2−µηk)2 , we get\n4Cβγk Cβγk + 2 ≤ µηk(2− µηk).\nIf we pick γk = θ4ηk, then it’s sufficient to let\n2Cβθ4ηk ≤ µηk(2− µηk).\nHence if θ4 < µCβ and let η∗ = 2(µ−Cβθ4) µ2 , then ηk = γk θ4 ∈ (0, η∗) guarantees the above discussion and ELk+1 ≤ (1− θ3θ4ηk)ELk + nσ2η2k.\nSo far all restrictions for ηk are\nηk ≤ min { 2\nµ+ L , η∗ } and\nηk ≤ 1\nθ4 min\n{ (3C + 1)− √ (3C + 1)2 − 4C Cβ , 2 β }\nLet θ5 = min { 2 µ+L , η∗, (3C+1)− √ (3C+1)2−4C Cβθ4 , 2βθ4 } , ηk = 1Bk+A and D = max { AL0, 2nσ 2 θ3θ4 } ,\nwe claim that if we pick B = θ3θ42 and some A, by setting ηk = 2 θ3θ4k+2A , we get\nELk ≤ D Bk +A .\nInduction: When k = 0, it’s obvious. Suppose previous k inequalities hold. 
Then\nELk+1 ≤ ( 1− 2θ3θ4\nθ3θ4k + 2A\n) 2D\nθ3θ4k + 2A +\n4nσ2\n(θ3θ4k + 2A)2 .\nMultiply M := (θ3θ4k + θ3θ4 + 2A)(θ3θ4k + 2A)(2D)−1 on both sides, we get MELk+1 ≤ ( 1− 2θ3θ4\nθ3θ4k + 2A\n) (θ3θ4k + θ3θ4 + 2A) + 4nσ2(θ3θ4k + θ3θ4 + 2A)\n2D(θ3θ4k + 2A)\n= 2D(θ3θ4k + 2A− 2θ3θ4)(θ3θ4k + θ3θ4 + 2A) + 4nσ2(θ3θ4k + θ3θ4 + 2A)\n2D(θ3θ4k + 2A)\n= 2D(θ3θ4k + 2A) 2 + 4nσ2(θ3θ4k + 2A)− 4Dθ3θ4(θ3θ4k + 2A) + 2Dθ3θ4(θ3θ4k + 2A) 2D(θ3θ4k + 2A)\n+ −4D(θ3θ4)2 + 4nσ2θ3θ4\n2D(θ3θ4k + 2A)\n≤θ3θ4k + 2A.\nHence ELk+1 ≤ 2D\nθ3θ4(k + 1) + 2A\nThis induction holds for any A such that ηk is feasible, i.e.\nη0 = 1\nA ≤ θ5.\nHere we summarize the definition of constant numbers:\nθ1 = 1\n2λmax((I−W)†) , θ2 =\nCβ\n2(1 + C) , (52) θ3 = min{θ1, θ2}, θ4 ∈ ( 0, µ\nCβ\n) , η∗ =\n2(µ− Cβθ4) µ2 , (53)\nθ5 = min\n{ 2\nµ+ L , η∗,\n(3C + 1)− √ (3C + 1)2 − 4C\nCβθ4 ,\n2\nβθ4\n} . (54)\nTherefore, let A = 1θ5 and ηk = 2θ5 θ3θ4θ5k+2 , we get\n1 n ELk ≤\n2max {\n1 nL\n0, 2σ 2θ5\nθ3θ4 } θ3θ4θ5k + 2 .\nSince 1− a1αk ≥ 1/2, we complete the proof." } ]
2021
null
SP:c0924c1c4d4132e6d80e24103c243780438f8a89
[ "This paper introduces an approach called action guidance, made to address issues in more standard applications of reward shaping. The main idea of their approach is that there are two different kinds of agents, one (auxiliary agents) that learn from shaped reward functions alone and the other (main agent(s)) that learn only from the actual sparse rewards. The authors made use of a simplified RTS domain and demonstrated that their approach outperformed a more naive shaped reward approach. In addition they demonstrated an ablation study on positive learning optimization. ", "The paper introduces an approach for learning policies across multiple MDPs and using those policies to improve learning performance on the task that the agent designer cares about. The approach assumes that a set of MDPs are provided to the learning agent, and that all of the MDPs have the same underlying task but with different reward densities (i.e., some of these MDPs have shaped rewards, and thus are faster to learn from). The approach operates by training the main agent to imitate the actions chosen by the other agents that are trained on the MDPs with shaped reward functions." ]
Training agents using Reinforcement Learning in games with sparse rewards is a challenging problem, since large amounts of exploration are required to retrieve even the first reward. To tackle this problem, a common approach is to use reward shaping to help exploration. However, an important drawback of reward shaping is that agents sometimes learn to optimize the shaped reward instead of the true objective. In this paper, we present a novel technique that we call action guidance that successfully trains agents to eventually optimize the true objective in games with sparse rewards while maintaining most of the sample efficiency that comes with reward shaping. We evaluate our approach in a simplified real-time strategy (RTS) game simulator called μRTS.
[ { "affiliations": [], "name": "SHAPED REWARDS" } ]
[ { "authors": [ "Pieter Abbeel", "Andrew Y Ng" ], "title": "Apprenticeship learning via inverse reinforcement learning", "venue": "In Proceedings of the twenty-first international conference on Machine learning,", "year": 2004 }, { "authors": [ "Marcin Andrychowicz", "Filip Wolski", "Alex Ray", "Jonas Schneider", "Rachel Fong", "Peter Welinder", "Bob McGrew", "Josh Tobin", "OpenAI Pieter Abbeel", "Wojciech Zaremba" ], "title": "Hindsight experience replay", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Michael Bain", "Claude Sammut" ], "title": "A framework for behavioural cloning", "venue": "In Machine Intelligence", "year": 1995 }, { "authors": [ "Marc Bellemare", "Sriram Srinivasan", "Georg Ostrovski", "Tom Schaul", "David Saxton", "Remi Munos" ], "title": "Unifying count-based exploration and intrinsic motivation", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Yuri Burda", "Harri Edwards", "Deepak Pathak", "Amos Storkey", "Trevor Darrell", "Alexei A. Efros" ], "title": "Large-scale study of curiosity-driven learning", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Thomas Degris", "Martha White", "Richard S Sutton" ], "title": "Off-policy actor-critic", "venue": "arXiv preprint arXiv:1205.4839,", "year": 2012 }, { "authors": [ "Thomas G Dietterich" ], "title": "Hierarchical reinforcement learning with the maxq value function decomposition", "venue": "Journal of artificial intelligence research,", "year": 2000 }, { "authors": [ "Logan Engstrom", "Andrew Ilyas", "Shibani Santurkar", "Dimitris Tsipras", "Firdaus Janoos", "Larry Rudolph", "Aleksander Madry" ], "title": "Implementation matters in deep rl: A case study on ppo and trpo", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Rein Houthooft", "Xi Chen", "Yan Duan", "John Schulman", "Filip De Turck", "Pieter Abbeel" ], "title": "Curiositydriven exploration in deep reinforcement learning via bayesian neural networks. 
2016", "venue": null, "year": 2016 }, { "authors": [ "Shengyi Huang", "Santiago Ontañón" ], "title": "A closer look at invalid action masking in policy gradient algorithms", "venue": "arXiv preprint arXiv:2006.14171,", "year": 2020 }, { "authors": [ "Shengyi Huang", "Santiago Ontañón" ], "title": "Comparing observation and action representations for deep reinforcement learning in μrts", "venue": null, "year": 2019 }, { "authors": [ "Anssi Kanervisto", "Christian Scheller", "Ville Hautamäki" ], "title": "Action space shaping in deep reinforcement learning", "venue": "arXiv preprint arXiv:2004.00980,", "year": 2020 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Sergey Levine", "Aviral Kumar", "George Tucker", "Justin Fu" ], "title": "Offline reinforcement learning: Tutorial, review, and perspectives on open problems", "venue": "arXiv preprint arXiv:2005.01643,", "year": 2020 }, { "authors": [ "Manuel Lopes", "Tobias Lang", "Marc Toussaint", "Pierre-Yves Oudeyer" ], "title": "Exploration in modelbased reinforcement learning by empirically estimating learning progress", "venue": "In Advances in neural information processing systems,", "year": 2012 }, { "authors": [ "Vinod Nair", "Geoffrey E Hinton" ], "title": "Rectified linear units improve restricted boltzmann machines", "venue": "In Proceedings of the 27th international conference on machine learning", "year": 2010 }, { "authors": [ "Sanmit Narvekar", "Peter Stone" ], "title": "Learning curriculum policies for reinforcement learning", "venue": "arXiv preprint arXiv:1812.00285,", "year": 2018 }, { "authors": [ "Andrew Y Ng", "Daishi Harada", "Stuart Russell" ], "title": "Policy invariance under reward transformations: Theory and application to reward shaping", "venue": null, "year": 1999 }, { "authors": [ "Zhen-Jia Pang", "Ruo-Ze Liu", "Zhou-Yu Meng", "Yi Zhang", "Yang Yu", "Tong Lu" ], "title": "On reinforcement learning for full-length game of starcraft", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Deepak Pathak", "Pulkit Agrawal", "Alexei A. 
Efros", "Trevor Darrell" ], "title": "Curiosity-driven exploration by self-supervised prediction", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Martin Riedmiller", "Roland Hafner", "Thomas Lampe", "Michael Neunert", "Jonas Degrave", "Tom Van de Wiele", "Volodymyr Mnih", "Nicolas Heess", "Jost Tobias Springenberg" ], "title": "Learning by playingsolving sparse reward tasks from scratch", "venue": "arXiv preprint arXiv:1802.10567,", "year": 2018 }, { "authors": [ "Tom Schaul", "Daniel Horgan", "Karol Gregor", "David Silver" ], "title": "Universal value function approximators", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "John Schulman", "Philipp Moritz", "Sergey Levine", "Michael Jordan", "Pieter Abbeel" ], "title": "Highdimensional continuous control using generalized advantage estimation", "venue": "arXiv preprint arXiv:1506.02438,", "year": 2015 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "Richard S Sutton", "Andrew G Barto" ], "title": "Reinforcement learning: An introduction", "venue": "MIT press,", "year": 2018 }, { "authors": [ "Maxwell Svetlik", "Matteo Leonetti", "Jivko Sinapov", "Rishi Shah", "Nick Walker", "Peter Stone" ], "title": "Automatic curriculum graph generation for reinforcement learning agents", "venue": "In Thirty-First AAAI Conference on Artificial Intelligence,", "year": 2017 }, { "authors": [ "Matthew E. Taylor", "Peter Stone", "Yaxin Liu" ], "title": "Transfer learning via inter-task mappings for temporal difference learning", "venue": "J. Mach. Learn. Res.,", "year": 2007 }, { "authors": [ "Oriol Vinyals", "Timo Ewalds", "Sergey Bartunov", "Petko Georgiev", "Alexander Sasha Vezhnevets", "Michelle Yeo", "Alireza Makhzani", "Heinrich Küttler", "John Agapiou", "Julian Schrittwieser" ], "title": "Starcraft ii: A new challenge for reinforcement learning", "venue": "arXiv preprint arXiv:1708.04782,", "year": 2017 }, { "authors": [ "Oriol Vinyals", "Igor Babuschkin", "Wojciech M Czarnecki", "Michaël Mathieu", "Andrew Dudzik", "Junyoung Chung", "David H Choi", "Richard Powell", "Timo Ewalds", "Petko Georgiev" ], "title": "Grandmaster level in starcraft ii using multi-agent reinforcement learning", "venue": null, "year": 2019 }, { "authors": [ "Ziyu Wang", "Victor Bapst", "Nicolas Heess", "Volodymyr Mnih", "Remi Munos", "Koray Kavukcuoglu", "Nando de Freitas" ], "title": "Sample efficient actor-critic with experience", "venue": "replay. arXiv preprint arXiv:1611.01224,", "year": 2016 }, { "authors": [ "Deheng Ye", "Zhao Liu", "Mingfei Sun", "Bei Shi", "Peilin Zhao", "Hao Wu", "Hongsheng Yu", "Shaojie Yang", "Xipeng Wu", "Qingwei Guo" ], "title": "Mastering complex control in moba games with deep reinforcement learning", "venue": "In AAAI,", "year": 2020 } ]
[ { "heading": null, "text": "Training agents using Reinforcement Learning with sparse rewards is often difficult (Pathak et al., 2017). First, due to the sparsity of the reward, the agent often spends the majority of the training time doing inefficient exploration and sometimes not even reaching the first sparse reward during the entirety of its training. Second, even if the agents have successfully retrieved some sparse rewards, performing proper credit assignment is challenging among complex sequences of actions that have led to theses sparse rewards. Reward shaping (Ng et al., 1999) is a widely-used technique designed to mitigate this problem. It works by providing intermediate rewards that lead the agent towards the sparse rewards, which are the true objective. For example, the sparse reward for a game of Chess is naturally +1 for winning, -1 for losing, and 0 for drawing, while a possible shaped reward might be +1 for every enemy piece the agent takes. One of the critical drawbacks for reward shaping is that the agent sometimes learns to optimize for the shaped reward instead of the real objective. Using the Chess example, the agent might learn to take as many enemy pieces as possible while still losing the game. A good shaped reward achieves a nice balance between letting the agent find the sparse reward and being too shaped (so the agent learns to just maximize the shaped reward), but this balance can be difficult to find.\nIn this paper, we present a novel technique called action guidance that successfully trains the agent to eventually optimize over sparse rewards while maintaining most of the sample efficiency that comes with reward shaping. It works by constructing a main policy that only learns from the sparse reward function RM and some auxiliary policies that learn from the shaped reward function RA1 , RA2 , . . . , RAn . During training, we use the same rollouts to train the main and auxiliary policies and initially set a high-probability of the main policy to take action guidance from the auxiliary policies, that is, the main policy will execute actions sampled from the auxiliary policies. Then the main policy and auxiliary policies are updated via off-policy policy gradient. As the training goes on, the main policy will get more independent and execute more actions sampled from its own policy. Auxiliary policies learn from shaped rewards and therefore make the training sampleefficient, while the main policy learns from the original sparse reward and therefore makes sure that the agents will eventually optimize over the true objective. We can see action guidance as combining reward shaping to train auxiliary policies interlieaved with a sort of imitation learning to guide the main policy from these auxiliary policies.\nWe examine action guidance in the context of a real-time strategy (RTS) game simulator called µRTS for three sparse rewards tasks of varying difficulty. For each task, we compare the performance of training agents with the sparse reward function RM, a shaped reward function RA1 , and action guidance with a singular auxiliary policy learning from RA1 . The main highlights are:\nAction guidance is sample-efficient. Since the auxiliary policy learns from RA1 and the main policy takes action guidance from the auxiliary policy during the initial stage of training, the main policy is more likely to discover the first sparse reward more quickly and learn more efficiently. 
Empirically, action guidance reaches almost the same level of sample efficiency as reward shaping in all three tasks tested.
The true objective is being optimized. During the course of training, the main policy has never seen the shaped rewards. This ensures that the main policy, which is the agent we are really interested in, is always optimizing against the true objective and is less biased by the shaped rewards. As an example, Figure 1 shows that the main policy trained with action guidance eventually learns to win the game as fast as possible, even though it has only learned from the match outcome reward (+1 for winning, -1 for losing, and 0 for drawing). In contrast, the agents trained with reward shaping learn more diverse sets of behaviors, which result in high shaped reward.
To support further research in this field, we make our source code available at GitHub1, as well as all the metrics, logs, and recorded videos2.
1 https://github.com/anonymous-research-code/action-guidance
2 Blinded for peer review" }, { "heading": "1 RELATED WORK", "text": "In this section, we briefly summarize popular techniques proposed to address the challenge of sparse rewards.
Reward Shaping. Reward shaping is a common technique where the human designer uses domain knowledge to define additional intermediate rewards for the agents. Ng et al. (1999) show that a slightly more restricted form of state-based reward shaping has better theoretical properties for preserving the optimal policy.
Transfer and Curriculum Learning. Sometimes learning the target tasks with sparse rewards is too challenging, and it is preferable to learn some easier tasks first. Transfer learning leverages this idea by training agents on some easier source tasks and then transferring the knowledge through value functions (Taylor et al., 2007) or reward shaping (Svetlik et al., 2017). Curriculum learning further extends transfer learning by automatically designing and choosing a full sequence of source tasks (i.e., a curriculum) (Narvekar & Stone, 2018).
Imitation Learning. Alternatively, it is possible to directly provide examples of human demonstrations or expert replays for the agents to mimic via Behavior Cloning (BC) (Bain & Sammut, 1995), which uses supervised learning to learn a policy given the state-action pairs from expert replays. In contrast, Inverse Reinforcement Learning (IRL) (Abbeel & Ng, 2004) recovers a reward function from expert demonstrations that is then used to train agents.
Curiosity-driven Learning. Curiosity-driven learning seeks to design intrinsic reward functions (Burda et al., 2019) using metrics such as prediction errors (Houthooft et al., 2016) and "visit counts" (Bellemare et al., 2016; Lopes et al., 2012). These intrinsic rewards encourage the agents to explore unseen states.
Goal-oriented Learning. In certain tasks, it is possible to describe a goal state and use it in conjunction with the current state as input (Schaul et al., 2015). Hindsight experience replay (HER) (Andrychowicz et al., 2017) develops better utilization of existing data in experience replay by replaying each episode with different goals. HER has been shown to be an effective technique in sparse-reward tasks.
Hierarchical Reinforcement Learning (HRL). If the target task is difficult to learn directly, it is also possible to hierarchically structure the task using experts' knowledge and train hierarchical agents, which generally involves a main policy that learns abstract goals, time, and actions, as well as auxiliary policies that learn primitive actions and specific goals (Dietterich, 2000).
HRL is especially popular in RTS games with combinatorial action spaces (Pang et al., 2019; Ye et al., 2020).
The most closely related work is perhaps Scheduled Auxiliary Control (SAC-X) (Riedmiller et al., 2018), which is an HRL algorithm that trains auxiliary policies to perform primitive actions with shaped rewards and a main policy to schedule the use of auxiliary policies with sparse rewards. However, our approach differs in the treatment of the main policy. Instead of learning to schedule auxiliary policies, our main policy learns to act in the entire action space by taking action guidance from the auxiliary policies. There are two intuitive benefits to our approach, since our main policy learns in the full action space. First, during policy evaluation our main policy does not have to commit to a particular auxiliary policy to perform actions for a fixed number of time steps, as is usually done in SAC-X. Second, learning in the full action space means the main policy is less likely to suffer from the definition of hand-crafted sub-tasks, which could be incomplete or biased." }, { "heading": "2 BACKGROUND", "text": "We consider the Reinforcement Learning problem in a Markov Decision Process (MDP) denoted as (S, A, P, ρ0, r, γ, T), where S is the state space, A is the discrete action space, P : S × A × S → [0, 1] is the state transition probability, ρ0 : S → [0, 1] is the initial state distribution, r : S × A → R is the reward function, γ is the discount factor, and T is the maximum episode length. A stochastic policy πθ : S × A → [0, 1], parameterized by a parameter vector θ, assigns a probability value to an action given a state. The goal is to maximize the expected discounted return of the policy:

$$\mathbb{E}_{\tau}\left[\sum_{t=0}^{T-1} \gamma^{t} r_{t}\right],$$

where τ is the trajectory $(s_0, a_0, r_0, s_1, \dots, s_{T-1}, a_{T-1}, r_{T-1})$ and $s_0 \sim \rho_0$, $s_t \sim P(\cdot \mid s_{t-1}, a_{t-1})$, $a_t \sim \pi_\theta(\cdot \mid s_t)$, $r_t = r(s_t, a_t)$.
Policy Gradient Algorithms. The core idea behind policy gradient algorithms is to obtain the policy gradient ∇θJ of the expected discounted return with respect to the policy parameter θ. Doing gradient ascent θ = θ + ∇θJ therefore maximizes the expected discounted reward. Earlier work proposes the following policy gradient estimate of the objective J (Sutton & Barto, 2018):

$$g_{\mathrm{policy},\theta} = \mathbb{E}_{\tau \sim \pi_\theta}\left[\sum_{t=0}^{T-1} \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, G_t\right],$$

where $G_t = \sum_{k=0}^{\infty} \gamma^{k} r_{t+k}$ denotes the discounted return following time t. This gradient estimate, however, suffers from large variance (Sutton & Barto, 2018), and the following gradient estimate is suggested instead:

$$g_{\mathrm{policy},\theta} = \mathbb{E}_{\tau}\left[\nabla_\theta \sum_{t=0}^{T-1} \log \pi_\theta(a_t \mid s_t)\, A(\tau, V, t)\right],$$

where A(τ, V, t) is the General Advantage Estimation (GAE) (Schulman et al., 2015), which measures "how good a_t is compared to the usual actions", and V : S → R is the state-value function." },
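To make the advantage term A(τ, V, t) concrete, the following is a minimal sketch of how GAE advantages can be computed from one rollout; the function and argument names (compute_gae, rewards, values, lam) are our own illustration and not identifiers from this paper's codebase.

```python
import numpy as np

def compute_gae(rewards, values, gamma=0.99, lam=0.95):
    """Generalized Advantage Estimation (Schulman et al., 2015).

    rewards: length-T array of rewards from one rollout.
    values:  length-(T+1) array of value estimates, where values[T]
             bootstraps the final state (0 if the episode terminated).
    """
    T = len(rewards)
    advantages = np.zeros(T)
    gae = 0.0
    # Accumulate exponentially weighted TD residuals, walking backwards.
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        gae = delta + gamma * lam * gae
        advantages[t] = gae
    return advantages
```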
{ "heading": "3 ACTION GUIDANCE", "text": "The key idea behind action guidance is to create a main policy that trains on the sparse rewards, together with some auxiliary policies that are trained on shaped rewards. During the initial stages of training, the main policy has a high probability of taking action guidance from the auxiliary policies; that is, the main policy can execute actions sampled from the auxiliary policies rather than from its own policy. As training goes on, this probability decreases, and the main policy executes more actions sampled from its own policy. During training, the main and auxiliary policies are updated via off-policy policy gradient. Our use of auxiliary policies makes the training sample-efficient, and our use of the main policy, which only sees its own sparse reward, makes sure that the agent will eventually optimize over the true objective of sparse rewards. In a way, action guidance can be seen as training agents using shaped rewards, while having the main policy learn by imitating them.
Specifically, let us define M as the MDP that the main policy learns from and A = {A1, A2, ..., Ak} as a set of auxiliary MDPs that the auxiliary policies learn from. In our construction, M and A share the same state, observation, and action space. However, the reward function for M is RM, which is the sparse reward function, and the reward functions for A are RA1, ..., RAk, which are the shaped reward functions. For each of these MDPs E ∈ S = {M} ∪ A above, let us initialize a policy πθE parameterized by parameters θE, respectively. Furthermore, let us use πS = {πθE | E ∈ S} to denote the set of these initialized policies.
At each timestep t, let us use some exploration strategy S that selects a policy πb ∈ πS to sample an action at given st. At the end of the episode, each policy πθ ∈ πS can be updated via its off-policy policy gradient (Degris et al., 2012; Levine et al., 2020):

$$\mathbb{E}_{\tau \sim \pi_{\theta_b}}\left[\left(\prod_{t=0}^{T-1} \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_b}(a_t \mid s_t)}\right) \sum_{t=0}^{T-1} \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, A(\tau, V, t)\right] \quad (1)$$

When πθ = πθb, the gradient in Equation 1 reduces to the on-policy policy gradient update for πθ. Otherwise, it is an off-policy policy gradient update for πθ." }, { "heading": "3.1 PRACTICAL ALGORITHM", "text": "The gradient in Equation 1 is unbiased, but its product of importance sampling ratios $\prod_{t=0}^{T-1} \pi_\theta(a_t \mid s_t)/\pi_{\theta_b}(a_t \mid s_t)$ is known to cause high variance (Wang et al., 2016). In practice, we clip the gradient the same way as Proximal Policy Optimization (PPO) (Schulman et al., 2017):

$$L^{CLIP}(\theta) = \mathbb{E}_{\tau \sim \pi_{\theta_b}}\left[\sum_{t=0}^{T-1} \nabla_\theta \min\Big(\rho_t(\theta)\, A(\tau, V, t),\ \mathrm{clip}\big(\rho_t(\theta), \varepsilon\big)\, A(\tau, V, t)\Big)\right] \quad (2)$$

$$\rho_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_b}(a_t \mid s_t)}, \qquad \mathrm{clip}\big(\rho_t(\theta), \varepsilon\big) = \begin{cases} 1-\varepsilon & \text{if } \rho_t(\theta) < 1-\varepsilon \\ 1+\varepsilon & \text{if } \rho_t(\theta) > 1+\varepsilon \\ \rho_t(\theta) & \text{otherwise} \end{cases}$$

During the optimization phase, the agent also learns the value function and maximizes the policy's entropy. We therefore optimize the following joint objective for each πθ ∈ πS:

$$L(\theta) = L^{CLIP}(\theta) - c_1 L^{VF}(\theta) + c_2 S[\pi_{\theta_b}], \quad (3)$$

where c1, c2 are coefficients, S is an entropy bonus, and L^{VF} is the squared-error loss for the value function associated with πθ, as done by Schulman et al. (2017). Although action guidance can be configured to leverage multiple auxiliary policies that learn diversified reward functions, we only use one auxiliary policy for simplicity of experiments. In addition, we use ϵ-greedy as the exploration strategy S for determining the behavior policy. That is, at each timestep t, the behavior policy is selected to be πθM with probability 1 − ϵ and πθD for some D ∈ A with probability ϵ (note that this ϵ is different from the clipping coefficient ε of PPO). Additionally, ϵ is set to a constant ϵstart = 0.95 for some number of time steps (e.g. 800,000), which we refer to as the shift period (the time it takes to start "shifting" focus away from the auxiliary policies); it then decays linearly to ϵend over some number of time steps (e.g. 1,000,000), which we refer to as the adaptation period (the time it takes for the main policy to fully "adapt" and become more independent). Lastly, we include pseudocode for action guidance in Algorithm 1 in the Appendix." },
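As a concrete illustration of Section 3.1, here is a minimal PyTorch-style sketch of the clipped objective of Equation 2 together with the ϵ schedule used to select the behavior policy; all names are our own, the value and entropy terms of Equation 3 are omitted, and the paper's actual implementation may differ.

```python
import torch

def clipped_pg_loss(new_log_probs, behavior_log_probs, advantages, clip_coef=0.2):
    # Per-step importance ratio between the policy being updated and the
    # behavior policy that collected the rollout (Equation 2).
    ratio = torch.exp(new_log_probs - behavior_log_probs)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_coef, 1 + clip_coef) * advantages
    return -torch.min(unclipped, clipped).mean()

def guidance_epsilon(step, shift=800_000, adaptation=1_000_000,
                     eps_start=0.95, eps_end=0.0):
    # Constant during the shift period, then linear decay over the adaptation
    # period, after which the main policy acts on its own (up to eps_end).
    if step < shift:
        return eps_start
    frac = min(1.0, (step - shift) / adaptation)
    return eps_start + frac * (eps_end - eps_start)
```

At each environment step, the main policy would then act as behavior policy with probability 1 − guidance_epsilon(step), and an auxiliary policy otherwise.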
}, { "heading": "3.2 POSITIVE LEARNING OPTIMIZATION", "text": "During our initial experiments, we found the main policy sometimes did not learn useful policies. Our hypothesis is that this was because the main policy is updated with too many trajectories with zero reward. Doing a large quantities of updates of these zero-reward trajectories actually causes the policy to converge prematurely, which is manifested by having low entropy in the action probability distribution.\nTo mitigate this issue of having too many zero-reward trajectories, we use a preliminary code-level optimization called Positive Learning Optimization (PLO). After collecting the rollouts, PLO works by skipping the gradient update for πθE ∈ πS and its value function if the rollouts contains no reward according to RE . Intuitively, PLO makes sure that the main policy learns from meaningful experience that is associated with positive rewards. To confirm its effectiveness, we provide an ablation study of PLO in the experiment section." }, { "heading": "4 EVALUATION", "text": "We use µRTS3 as our testbed, which is a minimalistic RTS game maintaining the core features that make RTS games challenging from an AI point of view: simultaneous and durative actions, large branching factors and real-time decision making. To interface with µRTS, we use gymmicrorts4 (Huang & Ontañón, 2020) to conduct our experiments. The details of gym-microrts as a RL interface can be found at Appendix A.1." }, { "heading": "4.1 TASKS DESCRIPTION", "text": "We examine the three following sparse reward tasks with a range of difficulties. For each task, we compare the performance of training agents with the sparse reward function RM, a shaped reward function RA1 , and action guidance with a single auxiliary policy learning from RA1 . Here are the descriptions of these environments and their reward functions.\n1. LearnToAttack: In this task, the agent’s objective is to learn move to the other side of the map where the enemy units live and start attacking them. ItsRM gives a +1 reward for each valid attack action the agent issues. This is of sparse reward because the action space is so large: the agent could have build a barracks or produce a unit; it is unlikely that the agents will by chance issue lots of moving actions (out of 6 action types) with correct directions (out of 4 directions) and then start attacking. ItsRA1 gives the difference between previous and current Euclidean distance between the enemy base and its closet unit owned by the agent as the shaped reward in addition to RM.\n2. ProduceCombatUnits: In this task, the agent’s objective is to learn to build as many combat units as possible. Its RM gives a +1 reward for each combat unit the agent produces. This is a more challenging task because the agent needs to learn 1) harvest resources, 2) produce barracks, 3) produce combat units once enough resources are gathered, 4) move produced combat units out of the way so as to not block the production of new combat units. Its RA1 gives +1 for constructing every building (e.g. barracks), +1 for harvesting resources, +1 for returning resources, and +7 for each combat unit it produces.\n3https://github.com/santiontanon/microrts 4https://github.com/vwxyzjn/gym-microrts\n3. DefeatRandomEnemy: In this task, the agent’s objective is to defeat a biased random bot of which the attack, harvest and return actions have 5 times the probability of other actions. 
{ "heading": "4 EVALUATION", "text": "We use µRTS3 as our testbed, which is a minimalistic RTS game maintaining the core features that make RTS games challenging from an AI point of view: simultaneous and durative actions, large branching factors, and real-time decision making. To interface with µRTS, we use gym-microrts4 (Huang & Ontañón, 2020) to conduct our experiments. The details of gym-microrts as an RL interface can be found in Appendix A.1.
3 https://github.com/santiontanon/microrts
4 https://github.com/vwxyzjn/gym-microrts" }, { "heading": "4.1 TASKS DESCRIPTION", "text": "We examine the following three sparse-reward tasks with a range of difficulties. For each task, we compare the performance of training agents with the sparse reward function RM, a shaped reward function RA1, and action guidance with a single auxiliary policy learning from RA1. Here are the descriptions of these environments and their reward functions.
1. LearnToAttack: In this task, the agent's objective is to learn to move to the other side of the map, where the enemy units live, and start attacking them. Its RM gives a +1 reward for each valid attack action the agent issues. The reward is sparse because the action space is so large: the agent could instead build a barracks or produce a unit, and it is unlikely that the agent will by chance issue many move actions (out of 6 action types) with the correct directions (out of 4 directions) and then start attacking. Its RA1 gives, in addition to RM, a shaped reward equal to the difference between the previous and current Euclidean distance between the enemy base and the closest unit owned by the agent.
2. ProduceCombatUnits: In this task, the agent's objective is to learn to build as many combat units as possible. Its RM gives a +1 reward for each combat unit the agent produces. This is a more challenging task because the agent needs to learn to 1) harvest resources, 2) produce barracks, 3) produce combat units once enough resources are gathered, and 4) move produced combat units out of the way so as to not block the production of new combat units. Its RA1 gives +1 for constructing every building (e.g. barracks), +1 for harvesting resources, +1 for returning resources, and +7 for each combat unit it produces.
3. DefeatRandomEnemy: In this task, the agent's objective is to defeat a biased random bot whose attack, harvest, and return actions have 5 times the probability of its other actions. Additionally, the bot is subject to the same gym-microrts limitation (see Appendix A.2) as the agents used in our experiments. Its RM gives the match outcome as the reward (-1 on a loss, 0 on a draw, and +1 on a win). This is the most difficult task we examined because the agent is subject to the full complexity of the game, being required to make both macro-decisions (e.g. deciding the high-level strategies to win the game) and micro-decisions (e.g. deciding which enemy units to attack). In comparison, its RA1 gives +5 for winning, +1 for harvesting one resource, +1 for returning resources, +1 for producing one worker, +0.2 for constructing every building, +1 for each valid attack action it issues, +7 for each combat unit it produces, and +(0.2 * d), where d is the difference between the previous and current Euclidean distance between the enemy base and the closest unit owned by the agent." }, { "heading": "4.2 AGENT SETUP", "text": "We use PPO (Schulman et al., 2017) as the base DRL algorithm to incorporate action guidance. The details of the implementation, neural network architecture, hyperparameters, proper handling of µRTS's action space, and invalid action masking (Huang & Ontañón, 2020) can be found in Appendix B. We compared the following strategies:
1. Sparse reward (first baseline). This agent is trained with PPO on RM for each task.
2. Shaped reward (second baseline). This agent is trained with PPO on RA1 for each task.
3. Action guidance - long adaptation. The agent is trained with PPO + action guidance with shift = 2,000,000 time steps, adaptation = 7,000,000 time steps, and ϵend = 0.0.
4. Action guidance - short adaptation. The agent is trained with PPO + action guidance with shift = 800,000 time steps, adaptation = 1,000,000 time steps, and ϵend = 0.0.
5. Action guidance - mixed policy. The agent is trained with PPO + action guidance with shift = 2,000,000 time steps, adaptation = 2,000,000 time steps, and ϵend = 0.5. We call this agent "mixed policy" because it will eventually have a 50% chance to sample actions from the main policy and a 50% chance to sample actions from the auxiliary policy; it effectively has a mixture of agents making decisions jointly. The five strategies are summarized in the configuration sketch below.
Although it would be desirable to add SAC-X to the list of strategies compared, it was not designed to handle domains with large discrete action spaces. Lastly, we also toggle the PLO option for the action guidance - long adaptation, action guidance - short adaptation, action guidance - mixed policy, and sparse reward training strategies for a preliminary ablation study." },
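For reference, the hyperparameters of the compared strategies can be written out as configurations; the dictionary below is our own shorthand for the values listed above, not code from the paper.

```python
strategies = {
    "sparse_reward":       {"reward": "R_M"},
    "shaped_reward":       {"reward": "R_A1"},
    "ag_long_adaptation":  {"shift": 2_000_000, "adaptation": 7_000_000, "eps_end": 0.0},
    "ag_short_adaptation": {"shift":   800_000, "adaptation": 1_000_000, "eps_end": 0.0},
    "ag_mixed_policy":     {"shift": 2_000_000, "adaptation": 2_000_000, "eps_end": 0.5},
}
```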
{ "heading": "4.3 EXPERIMENTAL RESULTS", "text": "Each of the 6 strategies is evaluated on the 3 tasks with 10 random seeds. We report the results in Table 1. From here on, we use the term "sparse return" to denote the episodic return according to RM, and "shaped return" the episodic return according to RA1. All the learning curves can be found in Appendix C. Below are our observations.
Action guidance is almost as sample-efficient as reward shaping. Since the auxiliary policy learns from RA1 and the main policy takes a lot of action guidance from the auxiliary policy during the shift period, the main policy is more likely to discover the first sparse reward more quickly and learn more efficiently. As an example, Figure 2 demonstrates such sample efficiency in ProduceCombatUnits, where the agents trained with sparse reward struggle to obtain the very first reward. In comparison, most agents trained with action guidance are able to learn almost as fast as the agents trained with shaped reward.
Action guidance eventually optimizes the sparse reward. This is perhaps the most important contribution of our paper. Action guidance eventually optimizes the main policy over the true objective, rather than over the shaped rewards. Using the ProduceCombatUnits task as an example, the agent trained with shaped reward would only start producing combat units once all the resources have been harvested, probably because the +1 rewards for harvesting and returning resources are easy to obtain, and therefore the agents exploit them first. Only after these resources are exhausted would the agents start searching for other sources of rewards and then learn to produce combat units.
In contrast, the main policy of action guidance - short adaptation w/ PLO is initially guided by the shaped reward agent during the shift period. During the adaptation period, we find that the main policy starts to optimize against the real objective by producing the first combat unit as soon as possible. This disrupts the behavior learned from the auxiliary policy and thus causes a visible degradation in the main policy's performance between 1M and 2M timesteps, as shown in Figure 2. As the adaptation period comes to an end, the main policy becomes fully independent and learns to produce combat units and harvest resources concurrently. This behavior matches the common pattern observed in professional RTS game players and is obviously more desirable because, should the enemy attack early, the agent will have enough combat units to defend.
In the DefeatRandomEnemy task, the agents trained with shaped rewards learn a variety of behaviors; some of them learn to do a worker rush, while others learn to focus heavily on harvesting resources and producing units. This is likely because the agents can obtain similar levels of shaped rewards despite having diverse sets of behaviors. In comparison, the main policy of action guidance - long adaptation w/ PLO starts optimizing the sparse reward once the shift period ends; it almost always learns to do a worker rush, which is an efficient way to win against a random enemy, as shown in Figure 1.
The hyper-parameters adaptation and shift matter. Although the agents trained with action guidance - short adaptation w/ PLO learn the more desirable behavior, they perform considerably worse in the harder task of DefeatRandomEnemy. This suggests that the harder the task is, the longer the adaptation period should perhaps be set. However, in ProduceCombatUnits, agents trained with action guidance - long adaptation w/ PLO exhibit the same category of behavior as agents trained with shaped reward, where the agent only starts producing combat units once all the resources have been harvested. A reasonable explanation is that a longer adaptation period gives more guidance to the main policy for consistently finding the sparse reward, but it also imposes more bias on how the task should be accomplished; a shorter adaptation period gives less guidance but increases the likelihood that the main policy finds better ways to optimize the sparse rewards.
Positive Learning Optimization results show large variance. We found PLO to be an interesting, yet only sometimes effective, optimization for stabilizing the performance of agents trained with action guidance. However, the results show large variance: PLO either significantly helps the agents or makes them much worse.
As a motivating example, Figure 2 showcases the actual sparse return of 10 seeds in ProduceCombatUnits, where agents trained with action guidance - short adaptation and PLO seem to always converge, while agents trained without PLO only sometimes converge. However, PLO actually hurts the performance of action guidance - long adaptation in ProduceCombatUnits by producing a few degenerate runs, as shown in Figure 2. It is also worth noting that PLO does not help the sparse reward agent at all, suggesting that PLO is an optimization somewhat unique to action guidance.
Action guidance - mixed policy is viable. According to Table 1, agents trained with action guidance - mixed policy, with or without PLO, perform relatively well in all three tasks examined. This is an interesting discovery because it suggests that action guidance could go both ways: the auxiliary policies could also benefit from the learned policies of the main policy. An alternative perspective is to consider the main policy and the auxiliary policies as a whole entity that mixes different reward functions, making joint decisions and collaborating to accomplish common goals." }, { "heading": "5 CONCLUSIONS", "text": "In this paper, we present a novel technique called action guidance that successfully trains the agent to eventually optimize over sparse rewards yet does not lose the sample efficiency that comes with reward shaping, effectively getting the best of both worlds. Our experiments with DefeatRandomEnemy in particular show that it is possible to train a main policy on the full game of µRTS using only the match outcome reward, which suggests action guidance could serve as a promising alternative to the training paradigm of AlphaStar (Vinyals et al., 2019), which uses supervised learning with human replay data to bootstrap an agent. As part of our future work, we would like to scale up the approach to defeat stronger opponents." } ]
2020
null
SP:d197f9ea345b135b417400d791002f18baad39e7
[ "The authors of this manuscript propose an unsupervised learning framework for 3D segmentation of biomedical images. Specifically, the proposed method learns effective representations for 3D patches using variational autoencoder (VAE) with a hyperbolic latent space. Its main contribution lies at that it introduces a new unsupervised learning framework including hyperbolic convolutional VAE and hierarchical triplet loss. This work conducts experiments on toy dataset, the Brain Tumor Segmentation dataset, and cryo-EM data. The experiment demonstrates competitive performance of the proposed method.", "The paper considers learning hyperbolic representations for unsupervised 3D segmentation. Since the general task of producing annotations for 3D data can be expensive (e.g. for segmentation in dense voxel grids), this is an important problem. The paper proposes to learn hierarchical data structures (e.g. 3D biomedical images) with a hyperbolic variational autoencoder. The paper adapts different metric learning approaches, such as triplet loss and computing a Frechet mean on Riemannian manifolds for clustering. " ]
There exists a need for unsupervised 3D segmentation on complex volumetric data, particularly when annotation ability is limited or discovery of new categories is desired. Using the observation that much of 3D volumetric data is innately hierarchical, we propose learning effective representations of 3D patches for unsupervised segmentation through a variational autoencoder (VAE) with a hyperbolic latent space and a proposed gyroplane convolutional layer, which better models the underlying hierarchical structure within a 3D image. We also introduce a hierarchical triplet loss and multi-scale patch sampling scheme to embed relationships across varying levels of granularity. We demonstrate the effectiveness of our hyperbolic representations for unsupervised 3D segmentation on a hierarchical toy dataset, the BraTS whole tumor dataset, and cryogenic electron microscopy data.
[]
[ { "authors": [ "Gregor Bachmann", "Gary Becigneul", "Octavian-Eugen Ganea" ], "title": "Constant curvature graph convolutional networks", "venue": "arXiv preprint arXiv:1911.05076,", "year": 2019 }, { "authors": [ "Spyridon Bakas", "Hamed Akbari", "Aristeidis Sotiras", "Michel Bilello", "Martin Rozycki", "Justin S Kirby", "John B Freymann", "Keyvan Farahani", "Christos Davatzikos" ], "title": "Advancing the cancer genome atlas glioma mri collections with expert segmentation labels and radiomic features", "venue": "Scientific data,", "year": 2017 }, { "authors": [ "Spyridon Bakas", "Mauricio Reyes", "Andras Jakab", "Stefan Bauer", "Markus Rempfler", "Alessandro Crimi", "Russell Takeshi Shinohara", "Christoph Berger", "Sung Min Ha", "Martin Rozycki" ], "title": "Identifying the best machine learning algorithms for brain tumor segmentation, progression assessment, and overall survival prediction in the brats challenge", "venue": "arXiv preprint arXiv:1811.02629,", "year": 2018 }, { "authors": [ "Gary Becigneul", "Octavian-Eugen Ganea" ], "title": "Riemannian adaptive optimization methods", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Mathilde Caron", "Piotr Bojanowski", "Armand Joulin", "Matthijs Douze" ], "title": "Deep clustering for unsupervised learning of visual features", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Ines Chami", "Rex Ying", "Christopher Re", "Jure Leskovic" ], "title": "Hyperbolic graph convolutional neural networks", "venue": "Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Jianxu Chen", "Lin Yang", "Yizhe Zhang", "Mark Alber", "Danny Z Chen" ], "title": "Combining fully convolutional and recurrent neural networks for 3d biomedical image segmentation", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Özgün Çiçek", "Ahmed Abdulkadir", "Soeren S Lienkamp", "Thomas Brox", "Olaf Ronneberger" ], "title": "3d unet: learning dense volumetric segmentation from sparse annotation", "venue": "In International conference on medical image computing and computer-assisted intervention,", "year": 2016 }, { "authors": [ "Adrian V Dalca", "John Guttag", "Mert R Sabuncu" ], "title": "Anatomical priors in convolutional networks for unsupervised biomedical segmentation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Qi Dou", "Lequan Yu", "Hao Chen", "Yueming Jin", "Xin Yang", "Jing Qin", "Pheng-Ann Heng" ], "title": "3d deeply supervised network for automated segmentation of volumetric medical images", "venue": "Medical image analysis,", "year": 2017 }, { "authors": [ "Octavian-Eugen Ganea", "Gary Becigneul", "Thomas Hoffmann" ], "title": "Hyperbolic neural networks", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Albert Gu", "Frederic Sala", "Beliz Gunel", "Christopher Re" ], "title": "Learning mixed-curvature representations in products of model spaces", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Shir Gur", "Lior Wolf", "Lior Golgher", "Pablo Blinder" ], "title": "Unsupervised microvascular image segmentation using an active contours mimicking neural network", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Mohammad Hesam Hesamian", 
"Wenjing Jia", "Xiangjian He", "Paul Kennedy" ], "title": "Deep learning techniques for medical image segmentation: Achievements and challenges", "venue": "Journal of digital imaging,", "year": 2019 }, { "authors": [ "Xu Ji", "João F Henriques", "Andrea Vedaldi" ], "title": "Invariant information clustering for unsupervised image classification and segmentation", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Zeyu Jiang", "Changxing Ding", "Minfeng Liu", "Dacheng Tao" ], "title": "Two-stage cascaded u-net: 1st place solution to brats challenge 2019 segmentation task", "venue": "In International MICCAI Brainlesion Workshop,", "year": 2019 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Titinunt Kitrungrotsakul", "Xian-Hua Han", "Yutaro Iwamoto", "Lanfen Lin", "Amir Hossein Foruzan", "Wei Xiong", "Yen-Wei Chen" ], "title": "Vesselnet: A deep convolutional neural network with multi pathways for robust hepatic vessel segmentation", "venue": "Computerized Medical Imaging and Graphics,", "year": 2019 }, { "authors": [ "Harold W Kuhn" ], "title": "The hungarian method for the assignment problem", "venue": "Naval research logistics quarterly,", "year": 1955 }, { "authors": [ "Yann LeCun", "Corinna Cortes", "CJ Burges" ], "title": "Mnist handwritten digit database", "venue": "ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist,", "year": 2010 }, { "authors": [ "Aaron Lou", "Isay Katsman", "Qingxuan Jiang", "Serge Belongie", "Ser-Nam Lim", "Christopher De Sa" ], "title": "Differentiating through the fr\\’echet mean", "venue": "arXiv preprint arXiv:2003.00335,", "year": 2020 }, { "authors": [ "Emile Mathieu", "Charline Le Lan", "Chris J. 
Maddison", "Yee Whye Tee Ryota Tomioka" ], "title": "Continuous hierarchical representations with poincaré variational auto-encoders", "venue": "Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Bjoern H Menze", "Andras Jakab", "Stefan Bauer", "Jayashree Kalpathy-Cramer", "Keyvan Farahani", "Justin Kirby", "Yuliya Burren", "Nicole Porz", "Johannes Slotboom", "Roland Wiest" ], "title": "The multimodal brain tumor image segmentation benchmark (brats)", "venue": "IEEE transactions on medical imaging,", "year": 1993 }, { "authors": [ "Takayasu Moriya", "Holger R Roth", "Shota Nakamura", "Hirohisa Oda", "Kai Nagara", "Masahiro Oda", "Kensaku Mori" ], "title": "Unsupervised segmentation of 3d medical images based on clustering and deep representation learning", "venue": "In Medical Imaging", "year": 2018 }, { "authors": [ "Yoshihiro Nagano", "Shoichiro Yamaguchi", "Yasuhiro Fujita", "Masanori Koyama" ], "title": "A wrapped normal distribution on hyperbolic space for gradient-based learning", "venue": "Proceedings of Machine Learning Research,", "year": 2019 }, { "authors": [ "Jakub Nalepa", "Michal Myller", "Yasuteru Imai", "Ken ichi Honda", "Tomomi Takeda", "Marek Antoniak" ], "title": "Unsupervised segmentation of hyperspectral images using 3d convolutional autoencoders", "venue": "IEEE Geoscience and Remote Sensing Letters,", "year": 2020 }, { "authors": [ "Maximilian Nickel", "Douwe Kiela" ], "title": "Learning continuous hierarchies in the lorentz model of hyperbolic geometry", "venue": "arXiv preprint arXiv:1806.03417,", "year": 2018 }, { "authors": [ "Maximillian Nickel", "Douwe Kiela" ], "title": "Poincaré embeddings for learning hierarchical representations", "venue": "In Advances in neural information processing systems,", "year": 2019 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed", "Daan Wierstra" ], "title": "Stochastic backpropagation and approximate inference in deep generative models", "venue": "arXiv preprint arXiv:1401.4082,", "year": 2014 }, { "authors": [ "Rik Sarkar" ], "title": "Low distortion delaunay embedding of trees in hyperbolic plane", "venue": "In International Symposium on Graph Drawing,", "year": 2011 }, { "authors": [ "Abraham A Ungar" ], "title": "Hyperbolic trigonometry and its application in the poincaré ball model of hyperbolic geometry", "venue": "Computers & Mathematics with Applications,", "year": 2001 }, { "authors": [ "Abraham Albert Ungar" ], "title": "A gyrovector space approach to hyperbolic geometry", "venue": "Synthesis Lectures on Mathematics and Statistics,", "year": 2008 }, { "authors": [ "Zhenlin Xu", "Marc Niethammer" ], "title": "Deepatlas: Joint semi-supervised learning of image registration and segmentation", "venue": "In International Conference on Medical Image Computing and ComputerAssisted Intervention,", "year": 2019 }, { "authors": [ "Jianwei Yang", "Devi Parikh", "Dhruv Batra" ], "title": "Joint unsupervised learning of deep representations and image clusters", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Amy Zhao", "Guha Balakrishnan", "Fredo Durand", "John V Guttag", "Adrian V Dalca" ], "title": "Data augmentation using learned transformations for one-shot medical image segmentation", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2019 }, { "authors": [ "Hao Zheng", "Yizhe Zhang", "Lin Yang", "Peixian Liang", "Zhuo Zhao", "Chaoli Wang", "Danny Z Chen" ], 
"title": "A new ensemble learning framework for 3d biomedical image segmentation", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "hyperplane. Ganea" ], "title": "2018) defined the gyroplane operator fa,p from this formulation by replacing each component with its hyperbolic equivalent", "venue": null, "year": 2018 } ]
[ { "heading": null, "text": "There exists a need for unsupervised 3D segmentation on complex volumetric data, particularly when annotation ability is limited or discovery of new categories is desired. Using the observation that much of 3D volumetric data is innately hierarchical, we propose learning effective representations of 3D patches for unsupervised segmentation through a variational autoencoder (VAE) with a hyperbolic latent space and a proposed gyroplane convolutional layer, which better models the underlying hierarchical structure within a 3D image. We also introduce a hierarchical triplet loss and multi-scale patch sampling scheme to embed relationships across varying levels of granularity. We demonstrate the effectiveness of our hyperbolic representations for unsupervised 3D segmentation on a hierarchical toy dataset, BraTS whole tumor dataset, and cryogenic electron microscopy data." }, { "heading": "1 INTRODUCTION", "text": "Recent advances in technology have greatly increased both the availability of 3D data, as well as the need to process and learn from 3D data. In particular, technologies such as magnetic resonance imaging and cryogenic electron microscopy (cryo-EM) have led to greater availability of 3D voxel data. Deep learning is a promising technique to do so, but producing annotations for 3D data can be extremely expensive, especially for richer tasks such as segmentation in dense voxel grids. In some cases, labels may also be impossible to produce due to the limitations of current knowledge, or may introduce bias if we want to conduct scientific discovery. Unsupervised learning, which does not require annotations, is a promising approach for overcoming these limitations.\nIn this work, we tackle the challenging problem of unsupervised segmentation on complex 3D voxel data by addressing the essential challenge of representation learning. We expand from prior literature in the hyperbolic domain that conducts classification in simple data to the task of segmentation in 3D images, which requires significantly more representation discriminability. In order to learn effective representations, we need to capture the structure of our input data. We observe that 3D images often have inherent hierarchical structure: as a biomedical example, a cryo-EM tomogram of a cell has a hierarchy that at the highest level comprises the entire cell; at a finer level comprises organelles such as the mitochondria and nucleus; and at an even finer level comprises sub-structures such as the nucleolus of a nucleus or proteins within organelles. For downstream analysis, we are typically interested in the unsupervised discovery and segmentation of structures spanning multiple levels of hierarchy. However, prior work on representation learning for unsupervised 3D segmentation does not explicitly model hierarchical structure between different regions of a 3D image. We argue that this hampers the ability to leverage hierarchical relationships to improve segmentation in complex 3D images.\nOur key insight is that we can utilize a hyperbolic embedding space to learn effective hierarchical representations of voxel regions in 3D images. Hyperbolic representations have been proposed as a continuous way to represent hierarchical data, as trees can be embedded in hyperbolic space with arbitrarily low error (Sarkar, 2011). 
These methods have shown promise for modeling data types such as natural language word taxonomies (Nickel & Kiela, 2017; 2018), graphs (Nickel & Kiela, 2017; Mathieu et al., 2019; Ovinnikov, 2019; Chami et al., 2019), as well as simple MNIST (LeCun et al., 2010) image data for classification (Mathieu et al., 2019). To the best of our knowledge, our work is the first to introduce learning hyperbolic representations to capture hierarchical structure among subregions of complex 3D images, and to utilize the learned hyperbolic representations to perform a complex computer vision task such as segmentation.
Our approach for learning hyperbolic representations of 3D voxel grid data is based on several key innovations. First, to handle larger and more complex 3D data such as biomedical images, we propose a hyperbolic 3D convolutional VAE along with a new gyroplane convolutional layer that respects hyperbolic geometry. Second, we enhance our VAE training objective with a novel self-supervised hierarchical triplet loss that helps our model learn hierarchical structure within the VAE's hyperbolic latent space. Finally, since our goal in segmentation is to learn hierarchy within voxel regions of 3D input, we present a multi-scale sampling scheme such that our 3D VAE can simultaneously embed hierarchical relationships across varying levels of granularity.
In summary, our key contributions are as follows:
• We introduce a hyperbolic 3D convolutional VAE with a novel gyroplane convolutional layer that scales the learning of hyperbolic representations to complex 3D data.
• We propose a multi-scale sampling scheme and hierarchical triplet loss in order to encode hierarchical structure in the latent space and perform 3D unsupervised segmentation.
• We demonstrate the effectiveness of our approach through experiments on a synthetic 3D toy dataset, the Brain Tumor Segmentation (BraTS) dataset (Menze et al., 2014; Bakas et al., 2017; 2018), and cryo-EM data." }, { "heading": "2 RELATED WORK", "text": "Segmentation on 3D voxel data Since 3D voxel grids are dense, computer vision tasks such as supervised segmentation are commonly performed using deep learning architectures with 3D convolutional layers (Chen et al., 2016; Dou et al., 2017; Hesamian et al., 2019; Zheng et al., 2019). However, due to the challenges of obtaining voxel-level segmentations in 3D, there has been significant effort in finding semi-supervised approaches, including using labels from only several fully annotated 2D slices of an input volume (Çiçek et al., 2016), using a smaller set of segmentations with joint segmentation and registration (Xu & Niethammer, 2019), and using one segmented input in conjunction with other unlabelled data (Zhao et al., 2019).
Unsupervised approaches for 3D segmentation are useful not only for further reducing the manual annotation effort required, but also for scientific discovery tasks where we lack sufficient knowledge to provide representative training examples for structures of interest. Moriya et al. (2018) extends to 3D data an iterative approach of feature learning followed by clustering (Yang et al., 2016). Nalepa et al. (2020) uses a 3D convolutional autoencoder architecture and performs clustering of the latent representations. Another approach, that of Dalca et al. (2018), uses a network pre-trained on manual segmentations from a separate dataset to perform unsupervised segmentation of 3D biomedical images.
However, this limits applicability to areas where we already have a dataset with manual annotations and makes it unsuitable for unbiased unsupervised discovery. Gur et al. (2019) and Kitrungrotsakul et al. (2019) developed unsupervised methods for 3D segmentation of vessel structures, but these are specialized and do not generalize to the segmentation of other structures. Beyond unsupervised 3D segmentation, there has been work such as Ji et al. (2019) that performs unsupervised 2D segmentation based on a mutual information objective, and Caron et al. (2018), which proposes using the clustered output of an encoder as pseudo-labels. While these methods can be applied to 2D slices of a 3D volume to perform 3D segmentation, they generally suffer limitations due to insufficient modeling of the 3D spatial information. None of the aforementioned approaches explicitly model hierarchical structure, which is the main focus of our work.\nHyperbolic representations A recent line of work has employed hyperbolic space to model hierarchical structure, with the intuition that tree structures can be naturally embedded into continuous hyperbolic space (Nickel & Kiela, 2017). Several works have proposed hyperbolic variational autoencoders (VAEs) as an unsupervised method to learn hyperbolic representations. Ovinnikov (2019) proposes a Wasserstein autoencoder on the Poincaré ball model of hyperbolic geometry. Nagano et al. (2019) proposes a VAE on the hyperboloid model of hyperbolic geometry where the last layer of the encoder is an exponential map, and derives a reparametrisable sampling scheme for the wrapped normal distribution, which they use for the prior and posterior. Mathieu et al. (2019)\nproposes a VAE on the Poincaré ball model of hyperbolic geometry. In addition to having the last layer of the encoder be an exponential map, Mathieu et al. (2019) also proposes to have the first layer of the decoder be the gyroplane layer proposed by Ganea et al. (2018) in order to better handle the geometry of the hyperbolic latent space, and applies their model to MNIST image classification. Our work differs by introducing an approach for learning hyperbolic representations that models the hierarchy between sub-volumes of complex 3D images, and uses a novel hierarchical triplet loss and sampling scheme to capture relationships among multiple levels of granularity in a given input.\nIn addition, a related field of study has sought to generalize traditional Euclidean neural networks or their components to non-Euclidean spaces. Ganea et al. (2018) proposes hyperbolic feed-forward and recurrent architectures based on the theory of gyrovector spaces. Building on this work, Chami et al. (2019) propose a hyperbolic graph convolutional network. Other works such as Bachmann et al. (2019); Becigneul & Ganea (2019); Gu et al. (2019) have also proposed learning with a product space of manifolds. Our work generalizes a layer of Ganea et al. (2018) in order to create and use a new hyperbolic convolutional layer, which we call the gyroplane convolutional layer." }, { "heading": "3 PRELIMINARIES", "text": "Hyperbolic Space Hyperbolic space is a non-Euclidean space with constant negative curvature. Curvature is a measure of the deviation of the geometry from a flat plane (Chami et al., 2019). There are five equivalent models of hyperbolic geometry. Following previous work (Mathieu et al., 2019; Ganea et al., 2018; Lou et al., 2020), we use the Poincaré ball model. 
Hyperbolic space can be considered the continuous version of trees (Nickel & Kiela, 2017), making it a natural choice for embedding hierarchical data. Trees can be embedded in the Poincaré ball with arbitrarily low error (Sarkar, 2011), and like the leaves of a tree, the area of a disc in the Poincaré ball increases exponentially with the radius. Unlike trees, hyperbolic space is smooth, permitting deep learning.
Poincaré ball model of hyperbolic geometry The Poincaré ball (of curvature c = −1) is the open ball of radius 1 centered at the origin, equipped with the metric tensor $g_p = (\lambda_x)^2 g_e$, where the conformal factor $\lambda_x = \frac{2}{1 - \|x\|^2}$ and $g_e$ is the Euclidean metric tensor (i.e., the usual dot product). Formally, this makes the Poincaré ball a Riemannian manifold. The distance $d_p$ between points on the Poincaré ball is given by:

$$d_p(x, y) = \cosh^{-1}\left(1 + 2\,\frac{\|x - y\|^2}{(1 - \|x\|^2)(1 - \|y\|^2)}\right) \quad (1)$$

The exponential and logarithm maps are a useful way to map from Euclidean space to the Poincaré ball and vice versa (in general, to map from a tangent space to a Riemannian manifold and vice versa). On the Poincaré ball, the exponential and logarithm maps have the closed forms

$$\exp_z(v) = z \oplus \left(\tanh\left(\frac{\lambda_z \|v\|}{2}\right) \frac{v}{\|v\|}\right), \qquad \log_z(y) = \frac{2}{\lambda_z} \tanh^{-1}\big(\|{-z} \oplus y\|\big)\, \frac{-z \oplus y}{\|{-z} \oplus y\|} \quad (2)$$

where ⊕ denotes Möbius addition, which was first introduced by Ungar (2001) as a way to define vector operations on hyperbolic space (see Appendix)." },
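To ground these formulas, below is a minimal numpy sketch of Möbius addition, the Poincaré distance of Equation 1, and the exponential map of Equation 2, all for curvature c = −1; it is an illustration under our own naming, not the authors' implementation.

```python
import numpy as np

def mobius_add(x, y):
    # Mobius addition x (+) y on the Poincare ball (curvature c = -1).
    xy = np.dot(x, y)
    x2, y2 = np.dot(x, x), np.dot(y, y)
    return ((1 + 2 * xy + y2) * x + (1 - x2) * y) / (1 + 2 * xy + x2 * y2)

def poincare_dist(x, y):
    # Equation (1): geodesic distance between two points in the ball.
    sq = np.sum((x - y) ** 2)
    return np.arccosh(1 + 2 * sq / ((1 - np.sum(x ** 2)) * (1 - np.sum(y ** 2))))

def exp_map(z, v):
    # Equation (2): maps a tangent vector v at z onto the ball.
    norm_v = np.linalg.norm(v)
    if norm_v == 0:
        return z
    lam_z = 2.0 / (1 - np.sum(z ** 2))
    return mobius_add(z, np.tanh(lam_z * norm_v / 2) * v / norm_v)
```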
{ "heading": "4 METHODS", "text": "In this section, we describe our approach for learning hyperbolic representations of subvolumes (3D patches) from 3D voxel grid data. We propose a model that comprises a 3D convolutional variational autoencoder (VAE) with a hyperbolic representation space and a new gyroplane convolutional layer, along with a novel hierarchical triplet loss and a multi-scale sampling scheme that facilitates learning hierarchical structure within the hyperbolic latent space. To produce segmentations, we cluster the learned hyperbolic representations. In Section 4.1, we describe our VAE framework as well as our proposed gyroplane convolutional layer and hierarchical triplet loss. In Section 4.2, we introduce our approach of hyperbolic clustering for segmentation." }, { "heading": "4.1 UNSUPERVISED HYPERBOLIC REPRESENTATION LEARNING", "text": "3D Hyperbolic VAE framework The VAE framework (Kingma & Welling, 2013; Rezende et al., 2014) is widely used for unsupervised representation learning, but requires new innovations to learn effective hierarchical representations of 3D image data. Our proposed hyperbolic VAE consists of a 3D convolutional encoder, which maps sampled 3D patches of the input volume into hyperbolic space and produces the parameters of the variational posterior, and a 3D convolutional decoder, which reconstructs the patch from sampled latent hyperbolic representations. The last layer of the encoder is an exponential map that ensures that the output is in hyperbolic space, and the first layer of the decoder is our proposed gyroplane convolutional layer, which maps hyperbolic space to Euclidean space. This ensures that both the encoder and decoder respect the hyperbolic geometry of the latent space. We use the wrapped normal distribution as our prior and posterior distribution (see Appendix). Figure 1 illustrates an overview of this VAE framework.
Our variational autoencoder takes as input a patch of fixed size m × m × m. This allows our model to learn representations of subvolumes that can subsequently be used to perform voxel-level segmentation of the whole 3D volume. To learn hierarchical structure in the 3D scene of each input, we generate training examples using a multi-scale sampling scheme that samples patches of size r × r × r, where r is randomly sampled. We use two sampling schemes, one for inputs of smaller size and one for inputs of larger size. In both schemes, for a given 3D volume, we sample i patch centers v_i uniformly.
In the sampling scheme for smaller inputs, we sample r ∼ U(ℓ_min, ℓ_max), where ℓ_min and ℓ_max are hyperparameters. The patch is then upsampled or downsampled to size m × m × m. For larger inputs, we observe that semantic changes tend to occur on a logarithmic scale, so we instead first sample e ∼ U(ℓ_min, ℓ_max) and then set r = 2^e. This sampling scheme is motivated by the intuition that for larger patches, a small change in r is less likely to correspond to a significant semantic difference.
Gyroplane convolutional layer Since $\mathbb{R}^n = \mathbb{R} \times \dots \times \mathbb{R}$, high-dimensional Euclidean spaces can be decomposed into a product of low-dimensional Euclidean spaces. An equivalent decomposition does not hold for arbitrary Riemannian manifolds, making it difficult to generalize the usual (Euclidean) convolutional layer to arbitrary Riemannian manifolds. For manifolds that are products of manifolds, we can generalize the usual convolution by replacing the Euclidean affine transformation with an affine transformation on the manifold. For the Poincaré ball, one analogue of the Euclidean affine transformation is the gyroplane operator $f_{a,p}$ (see Appendix). The details are as follows: for simplicity, suppose x is a 4D tensor containing elements of the Poincaré ball and our kernel size is k × k × k, with an odd value of k. Our gyroplane convolutional layer is defined as:

$$y_{r,s,t} = \sum_{\alpha = r - \lfloor k/2 \rfloor}^{r + \lfloor k/2 \rfloor}\ \sum_{\beta = s - \lfloor k/2 \rfloor}^{s + \lfloor k/2 \rfloor}\ \sum_{\gamma = t - \lfloor k/2 \rfloor}^{t + \lfloor k/2 \rfloor} f_{a,p}(x_{\alpha,\beta,\gamma}) \quad (3)$$

Our gyroplane convolutional layer can be extended in the same way as Euclidean convolutional layers to incorporate an even kernel size k, input and output channels, padding, stride, and dilation. Our model's encoder mean output (µ in Figure 1) can be interpreted as a product of Poincaré balls, justifying our definition and use of the gyroplane convolutional layer.
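The following sketch illustrates Equation 3 for a single output channel, reusing the mobius_add helper from the earlier sketch; the form of the gyroplane operator f_{a,p} follows our reading of Ganea et al. (2018), and the loop-based 'valid' convolution is written for clarity, not efficiency.

```python
import numpy as np

def gyroplane_logit(x, a, p):
    # Gyroplane operator f_{a,p}(x): a signed, conformally scaled distance
    # from x to the hyperbolic hyperplane with normal a and offset p
    # (our reading of Ganea et al., 2018, for curvature c = -1).
    z = mobius_add(-p, x)  # translate so the hyperplane passes the origin
    lam_p = 2.0 / (1 - np.sum(p ** 2))
    norm_a = np.linalg.norm(a)
    return lam_p * norm_a * np.arcsinh(
        2 * np.dot(z, a) / ((1 - np.sum(z ** 2)) * norm_a))

def gyroplane_conv3d(x, a, p, k):
    # Equation (3): sum gyroplane responses over each k x k x k neighborhood
    # of a (D, H, W, dim) grid of Poincare-ball points; 'valid' padding only.
    D, H, W, dim = x.shape
    r = k // 2
    y = np.zeros((D - 2 * r, H - 2 * r, W - 2 * r))
    for i in range(r, D - r):
        for j in range(r, H - r):
            for l in range(r, W - r):
                patch = x[i - r:i + r + 1, j - r:j + r + 1, l - r:l + r + 1]
                y[i - r, j - r, l - r] = sum(
                    gyroplane_logit(v, a, p) for v in patch.reshape(-1, dim))
    return y
```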
Completing the triplet, a negative child is a patch of size r_child × r_child × r_child centered at v_neg, where v_neg is sampled uniformly from the set of voxels w such that a patch of size r_child × r_child × r_child centered at w does not overlap with the anchor patch.

Our choice of positive and negative patches is motivated by the compositional hierarchy of 3D volumes. Our hierarchical triplet loss encourages the anchor patch and a sub-patch (positive child) to have similar representations, while encouraging the anchor patch and a distant patch (negative child) to have dissimilar representations. In hyperbolic space, this has the interpretation of belonging to the same hierarchy and belonging to different hierarchies, respectively. We learn hierarchy within a 3D image through this triplet loss.

The hierarchical triplet loss can be formulated with any dissimilarity measure d between the encoder outputs µ (see Figure 1) of the anchor µ_p, positive child µ_pos, and negative child µ_neg. For our model, we take d to be the Poincaré ball distance d_p and define our triplet loss with margin α as:

L_triplet(µ_p, µ_pos, µ_neg) := max(0, d_p(µ_p, µ_pos) − d_p(µ_p, µ_neg) + α) \quad (4)

This formulation can be extended to any metric space by taking the dissimilarity measure d to be the space's metric. In particular, for our ablations using a Euclidean latent space we take the dissimilarity measure d to be the Euclidean distance.

Optimization We optimize a loss function that can be decomposed as an evidence lower bound (ELBO) loss and our new hierarchical triplet loss that encourages the learning of hierarchical structure in the latent representations. The total loss can be formulated as L_total = L_ELBO + β L_triplet, where β is a hyperparameter that controls the strength of the triplet loss." }, { "heading": "4.2 SEGMENTATION BY CLUSTERING REPRESENTATIONS", "text": "Hyperbolic clustering In 3D segmentation, we seek to assign each voxel v a segmentation label s_v ∈ {1, . . . , n}, where n is the number of segmentation classes. We perform segmentation by clustering the representations of patches centered at each voxel. We first generate latent representations µ_v for each voxel v by running our trained VAE on a patch of fixed size p × p × p centered at v, upsampled or downsampled to the encoder input size m × m × m if necessary. We then cluster the µ_v into n clusters, and produce a segmentation by assigning each v the cluster label of µ_v. Clustering is done using a k-means algorithm that respects hyperbolic geometry, which we derive by replacing the Euclidean centroid and distance computations of classical k-means with their appropriate counterparts in Riemannian geometry, the Fréchet mean and manifold distance. We calculate the Fréchet mean using the algorithm of Lou et al. (2020)." }, { "heading": "5 EXPERIMENTS", "text": "Though our method applies to any 3D voxelized grid data, we evaluate on several biomedical datasets due to the availability of annotated 3D voxel data in the field. We evaluate our method quantitatively on both a synthetic 3D toy dataset simulating biological image data and the BraTS tumor segmentation dataset. Our biologically-inspired toy dataset allows quantitative evaluation of segmentation at multiple levels of hierarchy, while the BraTS dataset is a well-known benchmark for 3D MRI segmentation.
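Before turning to the datasets, a brief sketch (ours) of the hyperbolic k-means used in Section 4.2; it reuses poincare_dist, exp_map, and mobius_add from the earlier sketch, and for simplicity approximates each Fréchet mean by one tangent-space averaging step per iteration, whereas the paper uses the algorithm of Lou et al. (2020):

```python
import numpy as np

def log_map(z, y, eps=1e-9):
    # Logarithm map at z (the inverse of exp_map, Equation 2).
    lam = 2.0 / (1 - np.sum(z ** 2))
    m = mobius_add(-z, y)
    n = np.linalg.norm(m) + eps
    return (2.0 / lam) * np.arctanh(n) * m / n

def hyperbolic_kmeans(points, n_clusters, n_iter=50, seed=0):
    # points: (N, d) array of Poincare-ball embeddings. k-means with Poincare
    # distances; centroids move toward the Frechet mean of their members by
    # averaging log-mapped members in the tangent space at the centroid.
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), n_clusters, replace=False)].copy()
    labels = np.zeros(len(points), dtype=int)
    for _ in range(n_iter):
        labels = np.array([np.argmin([poincare_dist(x, c) for c in centroids])
                           for x in points])
        for j in range(n_clusters):
            members = points[labels == j]
            if len(members):
                v = np.mean([log_map(centroids[j], x) for x in members], axis=0)
                centroids[j] = exp_map(centroids[j], v)
    return labels, centroids
```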
Beyond these benchmarks, we also demonstrate the use of unsupervised segmentation for discovering new biological features in real-world cryo-EM data.

For all models, the encoder of our variational autoencoder comprises four 3D convolutional layers with kernel size 5 of increasing filter depth {16, 32, 64, 128}. The decoder is of the same structure, except with decreasing filter depth and a gyroplane convolutional layer as the initial layer. We use β = 1e3 as the weighting factor between L_ELBO and L_triplet and α = 0.2 as the triplet margin, and train the model using the Adam optimizer (Kingma & Ba, 2014). We fix the representation dimension to be d = 2. For training on the toy dataset, we sample 3D patch sizes uniformly, and for BraTS and the cryo-EM dataset we sample using an exponential scale (see Section 4.1). For inference, we obtain the latent representations of 5 × 5 × 5 patches densely across the full volume, and then perform hyperbolic k-means clustering, where the number of clusters k is a hyperparameter that controls the granularity of the segmentation. For quantitative evaluation, we then use the Hungarian algorithm (Kuhn, 1955) to match each predicted segmentation class with a corresponding ground truth label." }, { "heading": "5.1 BIOLOGICALLY-INSPIRED TOY DATASET", "text": "Since most 3D image datasets are not annotated at multiple levels of hierarchy, we first generate a hierarchical toy dataset to enable more thorough evaluation of the effectiveness of our model for unsupervised 3D segmentation. We note that datasets such as ShapeNet (Chang et al., 2015) are unsuitable since they contain 3D shape models instead of 3D voxel grid data, which is the focus of our work. Our toy dataset is inspired by cryo-EM tomograms of cells. Each volume in our toy dataset contains multiple levels of hierarchy with objects at each level determined by texture and size. Figure 2 shows an example input volume with sampled slices shown.

Each 3D image of our toy dataset consists of a background and a large sphere which represents a cell, which we will refer to as Level 1 of the image's hierarchy. The large sphere contains a medium-size sphere and cube meant to represent cellular substructures such as vesicles, which we will refer to as Level 2. In turn, each of these shapes contains two smaller objects of the same shape in Level 3. The color, size, and location of each shape vary randomly. We also apply biologically realistic noise in the form of pink noise. More details can be found in the Appendix.

To measure the ability of our model to capture the hierarchy of the toy dataset, we separately evaluate on the three levels of hierarchy defined above and use the average class DICE score to compare segmentation performance. Since our model is unsupervised, segmentation classes are assigned to ground truth labels using the Hungarian algorithm. See results in Table 1 and Table 8.

Comparison with prior approaches Table 1 shows a quantitative comparison of our method with prior state-of-the-art 3D unsupervised, 2D unsupervised (which we extend to 3D), and semi-supervised models. As unsupervised 3D segmentation is a relatively unexplored field, we provide these baselines with different levels of supervision for additional reference. Çiçek et al. (2016) was trained with 2% of the ground truth slices in each of the xy, yz, and xz planes, and Zhao et al. (2019) was trained with one fully annotated atlas. Ji et al. (2019) was implemented using the authors' original code and extrapolated to 3D. For Nalepa et al.
(2020) and Moriya et al. (2018), we re-implemented their methods as the original code was unavailable. Our model performs significantly better at all levels of hierarchy compared to its unsupervised counterparts, and comparably to the semi-supervised approach of Zhao et al. (2019).

Ablation Table 8 presents ablation studies on the hierarchical toy dataset comparing our contributions: Euclidean vs. hyperbolic representations, the addition of our gyroplane convolutional layer, and the addition of our hierarchical triplet loss. The Base Euclidean configuration is the 3D convolutional VAE with Euclidean latent space, no gyroplane convolutional layer, and trained with just the ELBO loss. The Triplet Euclidean configuration adds the hierarchical triplet loss to the Base Euclidean configuration. The Base Hyperbolic configuration is the same as the Base Euclidean configuration except with hyperbolic latent space. The Triplet configuration is the hyperbolic analogue of the Euclidean Triplet configuration, and GyroConv configurations have the addition of the gyroplane convolutional layer.

Hyperbolic representations outperform their Euclidean counterparts in all experiments. We attribute this to the more efficient organization of hyperbolic representations. When we introduce the hierarchical triplet loss, performance improves significantly for our hyperbolic models, but performance for our Euclidean model does not improve as much, likely due to information loss in representing hierarchical input. Introducing the gyroplane convolutional layer shows clear improvement over our Base Hyperbolic model, which shows the benefit of having a layer that respects the geometry of the latent space. The combination of the triplet loss and gyroplane convolutional layer exhibits the most gain over the Base Hyperbolic model, but only small gains over the model with just the added triplet loss. This shows the importance of our triplet loss for learning effective hierarchical representations." }, { "heading": "5.2 BRAIN TUMOR SEGMENTATION CHALLENGE DATASET", "text": "The BraTS 2019 dataset is a public, well-established benchmark dataset containing 3D MRI scans of brain tumors along with per-voxel ground truth annotations of tumor segmentation masks. The scans are of dimension 200 × 200 × 155 and have four modalities; we use the FLAIR modality, which is the most commonly used one-modality input. We use the same evaluation metric as in the BraTS challenge, and compare DICE score on whole tumor (WT) segmentation, which is detectable solely from FLAIR. There are 259 high grade glioma (HGG) labelled training examples, which we split into 180 train examples, 39 validation examples, and 40 test examples. We do not use the official validation or test sets because the ground truth annotations for these sets are not publicly available. Table 3 shows the comparison of our results against prior work; we train all baselines on the specified data split for fair comparison. The only exception is the current state-of-the-art fully-supervised result (Jiang et al., 2019) in Table 3, which also uses all 4 modalities. We show this for reference as an upper bound; the reported number is trained on the full train set and evaluated on the BraTS test set.

Our best model performs significantly better than the unsupervised baselines, and in addition outperforms one 3D semi-supervised model.
This illustrates the ability of our hyperbolic latent representations to effectively capture the hierarchical structure in individual brain scans. We use a granular segmentation with three clusters for quantitative evaluation in order to capture the tumor, brain, and background, then use the Hungarian algorithm for assignment. In addition, we show qualitative results for our model (see Figure 3), which include byproduct segmentations from the same model with different numbers of clusters specified, showcasing additionally discovered features in the scan that could also be clinically useful." }, { "heading": "5.3 CRYOGENIC ELECTRON MICROSCOPY TOMOGRAMS", "text": "Finally, we show an example of unsupervised 3D segmentation in a real-world scenario where unsupervised discovery is important. Cryogenic electron microscopy is a technique that images cells at cryogenic temperatures with a beam of electrons. The value of each voxel is the electron density at that location, and is created through reconstruction from tilt slices of ±70 degrees from electron microscopy. Cryo-EM tomograms are a rich source of biological data, capturing many subcellular features that are unknown or unexplored. We train our model on three 512 × 512 × 250 cryo-EM tomograms of cells collected from a research laboratory, and run inference on a fourth tomogram. Figure 3 shows segmentations produced by our model on a mitochondrion from the evaluation tomogram, using the proposed hyperbolic embedding space vs. Euclidean embedding space, and at a coarse and a finer level of granularity. Unlike the Euclidean approach, the hyperbolic approach discovers a fine-grained class corresponding to small features on the mitochondrion, which may be macromolecular aggregates. As an example of performing unsupervised discovery with our model, the discovered features can now be investigated for their chemical identities and functions." }, { "heading": "6 CONCLUSION", "text": "We propose a method for learning hyperbolic representations of subvolumes in 3D voxel grid data, based on a hyperbolic 3D convolutional VAE with a new gyroplane convolutional layer that respects hyperbolic geometry. We enhance the VAE training objective with a self-supervised hierarchical triplet loss that facilitates learning hierarchical structure within the VAE's hyperbolic latent space, and a multi-scale sampling scheme. We demonstrate that hyperbolic clustering of learned voxel-level representations can be used to achieve state-of-the-art unsupervised 3D segmentation, on a hierarchical toy dataset and the BraTS dataset. We also illustrate the promise of using our model for unsupervised scientific discovery on an example of cryogenic electron microscopy data." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 RIEMANNIAN MANIFOLDS", "text": "In this section we give a more complete introduction to Riemannian manifolds, of which hyperbolic space is an example. Riemannian manifolds are spaces that locally resemble Euclidean space. To define this mathematically, we first introduce a manifold as a set of points M that locally resembles the Euclidean space R^n. Associated with each point x ∈ M is a vector space called the tangent space at x, denoted T_xM, which is the space of all directions a curve on the manifold M can tangentially pass through point x. A metric tensor g defines an inner product g_x on every tangent space, and a Riemannian manifold is a manifold M together with a metric tensor g.
Distance on a Riemannian manifold can be defined as follows. Let γ : [a, b] → M be a curve on the manifold M. The length of γ is defined to be L(γ) = \int_a^b |γ′(t)|_{γ(t)} \, dt. The distance between any two points x, y on the manifold is defined as d_M(x, y) = inf L(γ), where the inf is taken over all curves γ that begin at x and end at y. This distance makes M a metric space. The exponential map exp_x(v) : T_xM → M is a useful way to map vectors from the (Euclidean) tangent space to the manifold. The exponential map is defined as exp_x(v) = γ(1), where γ is the unique geodesic, the shortest possible curve between two points, starting at x with starting direction v. Intuitively, one can think of the exponential map as telling us how to travel one step starting from a point x on the manifold in the v direction. The logarithmic map log_x(y) : M → T_xM is the inverse of the exponential map, and maps points back to the Euclidean tangent space." }, { "heading": "A.2 GYROVECTOR OPERATIONS IN THE POINCARÉ BALL", "text": "Gyrovector operations were first introduced by Ungar (2008) to generalize the Euclidean theory of vector spaces to hyperbolic space. Mobius addition is the Poincaré ball analogue of vector addition in Euclidean spaces. The closed-form expression for Mobius addition on the Poincaré ball with negative curvature c is (Mathieu et al., 2019):

z \oplus_c y = \frac{(1 + 2c\langle z, y\rangle + c\|y\|^2)\,z + (1 - c\|z\|^2)\,y}{1 + 2c\langle z, y\rangle + c^2\|z\|^2\|y\|^2} \quad (5)

As one might anticipate, when c = 0 we recover Euclidean vector addition. Additionally, the analogue of Euclidean vector subtraction is Mobius subtraction, which is defined as x \ominus_c y = x \oplus_c (−y), and the analogue of Euclidean scalar multiplication is Mobius scalar multiplication, which can be defined for a scalar r as (Ganea et al., 2018):

r \otimes_c x = \frac{1}{\sqrt{c}} \tanh\!\left(r \tanh^{-1}(\sqrt{c}\,\|x\|)\right) \frac{x}{\|x\|} \quad (6)

where we also recover Euclidean scalar multiplication when c = 0. In this paper, we only consider the Poincaré ball with fixed constant negative curvature c = 1, which allows us to drop the dependence on c." }, { "heading": "A.3 WRAPPED NORMAL DISTRIBUTION", "text": "The importance of the normal distribution in Euclidean space has led to many attempts to generalize the normal distribution to Riemannian manifolds. The wrapped normal distribution is one popular way to do this (Mathieu et al., 2019; Nagano et al., 2019). The wrapped normal distribution can be defined on an arbitrary Riemannian manifold as the push-forward measure obtained by mapping the normal distribution in Euclidean space along the manifold's exponential map. The probability density function of the wrapped normal with mean µ and covariance Σ is:

\mathcal{N}_P(z \mid \mu, \Sigma) = \mathcal{N}_E(\lambda_\mu(z) \mid 0, \Sigma) \left( \frac{d_p(\mu, z)}{\sinh(d_p(\mu, z))} \right) \quad (7)

where the subscripts P, E indicate whether the distribution is over the Poincaré ball or Euclidean space, respectively. To use the wrapped normal in a VAE, we require both a way to sample from the wrapped normal as well as a way to train its parameters. Mathieu et al. (2019) provides a reparametrization and sampling scheme for the wrapped normal on the Poincaré ball." }, { "heading": "A.4 GYROPLANE OPERATOR", "text": "The gyroplane layer can be thought of as a hyperbolic affine transformation, and is motivated by the fact that we can express a Euclidean affine transformation as ⟨a, z − p⟩ = sign(⟨a, z − p⟩) ‖a‖ d_E(z, H_{a,p}) (Ganea et al., 2018), where d_E is the Euclidean distance and H_{a,p} = {z ∈ R^n | ⟨a, z − p⟩ = 0}. H_{a,p} is called the decision hyperplane. Ganea et al.
(2018) defined the gyroplane operator f_{a,p} from this formulation by replacing each component with its hyperbolic equivalent:

f_{a,p}(z) = \mathrm{sign}\!\left( \langle a, \log_p(z) \rangle_p \right) \|a\|_p \, d_p(z, H_{a,p}) \quad (8)

where H_{a,p} is the hyperbolic decision boundary H_{a,p} = {z ∈ B | ⟨a, log_p(z)⟩ = 0}, and the distance to the hyperbolic decision boundary d_p(z, H_{a,p}) is

d_p(z, H_{a,p}) = \sinh^{-1}\!\left( \frac{2\,|\langle -p \oplus z, a \rangle|}{(1 - \|{-p} \oplus z\|^2)\,\|a\|} \right) \quad (9)" }, { "heading": "A.5 TOY DATASET", "text": "Our biologically-inspired toy dataset has 120 total volumes, which we split into 80 train examples, 20 validation examples, and 20 test examples. Each toy volume in our dataset is 50 × 50 × 50 and contains multiple levels of hierarchy.

The first level of hierarchy (Level 1) is an outer sphere centered in the volume of radius r ∼ N(25, 1). Using a cell analogy, this represents the outer cell. The second level (Level 2) consists of spheres (\"vesicles\") and cuboids (\"mitochondria\"), both of which are textured, hollow, and completely contained within the outer cell wall. The positions are randomly sampled, with radius r ∼ N(8, 0.5) and side length s ∼ 2 · N(8, 0.5). In the third level (Level 3) we introduce small spheres and cuboids (\"proteins\") in the vesicle spheres and mitochondria cuboids, respectively. The Level 3 proteins are randomly positioned with radius r ∼ N(2, 0.2) and side length s ∼ 2 · N(3, 0.15). Each instance of a shape with a particular size is also given its own unique texture to mimic the different organelles of the cell. The color of each object is chosen randomly, according to a standard normal distribution. We also apply pink noise with magnitude m = 0.25 to the volume, as it is commonly seen in biological data.

In addition, we have added a biologically-inspired toy dataset with irregular shapes for evaluating datasets with different characteristics. This dataset was created by applying smooth noise to the boundaries of each shape. Specifically, we generate smooth noise by first sampling random points in our voxel grid and random values according to a Gaussian distribution, and interpolating to obtain smooth noise. We then use this smooth noise function to perturb the points that fall within the interior of the three largest shapes. See an example of the dataset in Figure 4.

We demonstrate our method's performance in comparison to prior work on the aforementioned irregular dataset in Table 4, and an ablation study on the same irregular dataset in Table 5, both with error bars over four independent runs.

We note that in Table 4, our proposed method outperforms prior work significantly on the irregular dataset, following our initial observations from Table 1 to show state-of-the-art performance. We can see that while all methods show a slight decrease in performance, our method is still able to maintain the lead in performance as compared to prior work across all levels.

For ablations on the irregular toy dataset in Table 5, we find that our best models with hyperbolic latent space still outperform models with Euclidean latent space, as with our original toy dataset. We also demonstrate that the gyroplane convolutional layer and hierarchical triplet loss are both effective compared to the base hyperbolic configuration. However, despite it being effective compared to the base hyperbolic configuration, models with the hyperbolic hierarchical triplet loss performed less well across the board as compared to the original toy dataset.
We hypothesize that this is due to the specific challenges that the irregular dataset brings, for example, needing to recognize noisy instances of irregular shape as the same class in Levels 2 and 3. Therefore, our proposed gyroplane convolutional layer by itself is able to add more effective learning capacity, and shows significant improvement. The added hierarchical triplet loss performs less well on the irregular dataset than on our original toy dataset because in our multi-patch sampling method, each patch is sampled at random, capturing parts of the 3D input. Since the boundary of the shape changes in every image, learning with random sampling is more difficult for our hierarchical triplet loss. We do not see the same phenomenon for Level 1, since background/foreground segmentation is an easier task. We conclude that with the level of irregularity added to our dataset (see examples in Figure 4), the gyroplane convolutional layer with the hyperbolic latent space provides more effectiveness than the triplet loss.

We also note that in real-world datasets, such as in our work in cryogenic electron microscopy, the overall shapes of each class of object are similar, and do not contain such dramatic irregularity. For example, vesicles are almost-circular ellipses with only slight eccentricity (deformations with slight stretch), but without the distinctive irregularities and protrusions of our irregular dataset. Overall, our experiments demonstrate that different components of our method are useful for different scenarios, and that our method overall robustly outperforms prior work across data with different characteristics. All hyperbolic configurations of our method seen in Table 4 outperform past unsupervised methods, and our approach of learning hyperbolic representations of complex 3D data for segmentation is more effective than methods with canonical Euclidean representations.

Lastly, for runtime on the toy datasets, our implementations of the proposed models take five to eight hours to train on a single Titan RTX GPU for both Euclidean and hyperbolic variants. We note that for our current implementation, hyperbolic k-means clustering takes on the order of a few hours versus minutes for Euclidean k-means. However, this is because we are using our own unoptimized implementation based on recent research in fast Fréchet mean algorithms, and standard packages such as scikit-learn do not include hyperbolic k-means algorithms. The Euclidean k-means algorithms in these packages are heavily optimized with parallelization. We anticipate that such optimization would bring the hyperbolic k-means runtime to the order of the Euclidean k-means, as the computational complexity of the algorithms is similar in practice." }, { "heading": "A.6 BRATS DATASET", "text": "We also conduct an ablation study on the BraTS dataset with each of our added components, with error bars over four independent runs. Results are shown in Table 6. We can see that our best Hyperbolic model outperforms our best Euclidean model significantly. The addition of the triplet loss improved both Euclidean and Hyperbolic models, while the Hyperbolic models see more improvement due to the ability to encode hierarchy in the hyperbolic latent space.
Our gyroplane convolutional layer also improves performance, and both of our additions jointly improve upon our Hyperbolic baseline, showing the benefit of these added components for learning effective representations.

We include the average and 95th percentile Hausdorff distance as complementary evaluation metrics on the BraTS dataset. See Table 7. We show the performance of our method compared to other unsupervised baselines; our model outperforms all prior methods on both metrics." }, { "heading": "A.7 EVALUATION", "text": "We use the DICE score to quantitatively evaluate segmentation performance. The DICE score is defined as the following:

DICE = \frac{2TP}{2TP + FN + FP} \quad (10)

where TP is the number of true positives, FN is the number of false negatives, and FP is the number of false positives. For our toy dataset, we first assign predicted classes to ground truth labels using the Hungarian algorithm (Kuhn, 1955), then evaluate using the average class DICE score. For the BraTS dataset (Menze et al., 2014; Bakas et al., 2017; 2018), we evaluate DICE of the whole tumor segmentation following official evaluation guidelines.

We also use the Hausdorff distance to evaluate the worst-case performance of our model. For two sets of points A, B, the directed Hausdorff distance from A to B is defined as

h(A, B) = \max_{a \in A} \left\{ \min_{b \in B} d(a, b) \right\} \quad (11)

where d is any distance function. We take d to be the Euclidean distance. The Hausdorff distance is then defined to be

H(A, B) = \max \{ h(A, B), h(B, A) \} \quad (12)

The official BraTS evaluation uses the 95th percentile Hausdorff distance as a measure of model robustness (Bakas et al., 2018)." }, { "heading": "A.8 HYPERPARAMETERS", "text": "We use a single set of hyperparameters on all of our evaluation datasets, and these hyperparameters are not tuned on any of the evaluation datasets. In order to obtain a reasonable set of hyperparameters, we created a completely separate synthetic dataset on which we trained models and tuned hyperparameters. This synthetic dataset was created in a similar manner to our toy dataset; however, we designed it to have different and fewer objects, simpler nesting structure, no noise, and fewer textures. The application of this single set of hyperparameters to our evaluation datasets (our toy dataset, the BraTS dataset, and the cryogenic electron microscopy dataset) demonstrates the robustness of our approach.

With this synthetic dataset, we tuned over a range of hyperparameter values using its validation set. This includes the weight of the triplet loss β ∈ {10^−2, 10^−1, 1, 10^1, 10^2, 10^3, 10^4, 10^5}, with the final weight β = 10^3. The patch size for inference was tuned over the range p ∈ {5, 10, 15, 20, 40}, with the chosen size 5 × 5 × 5. The number of epochs was tuned over the range e ∈ {3, 5, 8, 10, 12, 15}, with a final epoch number of 8.

The BraTS 2019 dataset (Menze et al., 2014; Bakas et al., 2017; 2018) can be downloaded following directions from https://www.med.upenn.edu/cbica/brats2019/registration.html. We will release our toy dataset with the final code release." }, { "heading": "A.9 MULTI-PATCH SAMPLING", "text": "Our method is designed to model the compositional hierarchy of 3D data, where we often find visual substructures contained within other structures. Based on this idea, we sample triplets of 3D volume patches that capture this notion of hierarchical structure. Triplets are sampled through the following process: First, we sample a 3D patch of data to be the anchor element, and consider this to be the parent in the triplet.
Second, we sample a smaller patch of data that is completely contained within the parent patch, and consider this to be the positive child patch. Then, we sample a smaller patch of data that does not overlap with the anchor patch, and consider this to be the negative child patch. See Section 4.1 for further details on the sampling procedure. We input the (parent, positive child, negative child) tuples into our hierarchical triplet loss, where the loss encourages the anchor parent and positive child to have closer representations relative to the anchor and the negative child. See Figure 5 for an overview." }, { "heading": "A.10 LATENT DIMENSION ABLATION", "text": "In Section 5.1, Section 5.2, and Section 5.3, our experiments were all run with a latent dimension of two. To show the effect of higher latent space dimensions, we report an ablation study for both hyperbolic and Euclidean representations. As expected, the performance increases with dimension for our model with Euclidean latent space, but our model with hyperbolic latent space still outperforms the Euclidean model at all tested dimensions." } ]
2020
null
SP:70bb2ad8b8a46670e6ee60a6800656c4f2220ad0
[ "This paper considers the deep one-class classification problem. Some recent state of the art in this area is built upon self-supervised learning methods that are trained to predict the rotation applied to a training image, and then use the success of rotation prediction on test images as an outlier score. The paper observes that, while successful on standard benchmarks, this strategy is not robust to unexpected image rotations at test-time. Since, humans are (presumably) able to exhibit rotation invariance during test-time in 1-class classification, this is considered a flaw in existing methods. To rectify this flaw, the paper proposes to use an anomaly score which is the maximum over all possible rotation predictions. The results show that the proposed method outperforms prior approaches when exposed to novel rotations at test time. ", "This paper presents a one-class classifier robust to geometrically-transformed inputs (GROC). A conformity score is proposed that measures how strongly an input image agrees with one of the predefined in-class transformations. Experiments show that the proposed method works well on 3 datasets for out-of-class detection and produces similar scores for in-class images under different transformations." ]
Recent studies on one-class classification have achieved remarkable performance by employing a self-supervised classifier that predicts the geometric transformation applied to in-class images. However, they cannot identify in-class images at all when the input images are geometrically transformed (e.g., rotated images), because their classification-based in-class scores assume that input images always have a fixed viewpoint, similar to the images used for training. Pointing out that humans can easily recognize such transformed images as the same class, in this work we aim to propose a one-class classifier robust to geometrically-transformed inputs, named GROC. To this end, we introduce a conformity score which indicates how strongly an input image agrees with one of the predefined in-class transformations, then utilize the conformity score with our proposed agreement measures for one-class classification. Our extensive experiments demonstrate that GROC is able to accurately distinguish in-class images from out-of-class images regardless of whether the inputs are geometrically transformed or not, whereas the existing methods fail.
[]
[ { "authors": [ "Liron Bergman", "Yedid Hoshen" ], "title": "Classification-based anomaly detection for general data", "venue": "arXiv preprint arXiv:2005.02359,", "year": 2020 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "arXiv preprint arXiv:2002.05709,", "year": 2020 }, { "authors": [ "Izhak Golan", "Ran El-Yaniv" ], "title": "Deep anomaly detection using geometric transformations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Jean-Bastien Grill", "Florian Strub", "Florent Altché", "Corentin Tallec", "Pierre H Richemond", "Elena Buchatskaya", "Carl Doersch", "Bernardo Avila Pires", "Zhaohan Daniel Guo", "Mohammad Gheshlaghi Azar" ], "title": "Bootstrap your own latent: A new approach to self-supervised learning", "venue": "arXiv preprint arXiv:2006.07733,", "year": 2020 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Kaiming He", "Haoqi Fan", "Yuxin Wu", "Saining Xie", "Ross Girshick" ], "title": "Momentum contrast for unsupervised visual representation learning", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Dan Hendrycks", "Mantas Mazeika", "Saurav Kadavath", "Dawn Song" ], "title": "Using self-supervised learning can improve model robustness and uncertainty", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Dongha Lee", "Sehun Yu", "Hwanjo Yu" ], "title": "Multi-class data description for out-of-distribution detection", "venue": "In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2020 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "Sgdr: Stochastic gradient descent with warm restarts", "venue": "arXiv preprint arXiv:1608.03983,", "year": 2016 }, { "authors": [ "Yuval Netzer", "Tao Wang", "Adam Coates", "Alessandro Bissacco", "Bo Wu", "Andrew Y Ng" ], "title": "Reading digits in natural images with unsupervised feature learning", "venue": "NIPS Workshop on Deep Learning and Unsupervised Feature Learning,", "year": 2011 }, { "authors": [ "Emanuel Parzen" ], "title": "On estimation of a probability density function and mode", "venue": "The annals of mathematical statistics,", "year": 1962 }, { "authors": [ "Lukas Ruff", "Robert Vandermeulen", "Nico Goernitz", "Lucas Deecke", "Shoaib Ahmed Siddiqui", "Alexander Binder", "Emmanuel Müller", "Marius Kloft" ], "title": "Deep one-class classification", "venue": "In International conference on machine learning,", "year": 2018 }, { "authors": [ "Thomas Schlegl", "Philipp Seeböck", "Sebastian M Waldstein", "Ursula Schmidt-Erfurth", "Georg Langs" ], "title": "Unsupervised anomaly detection with generative adversarial networks to guide marker discovery", "venue": "In International conference on 
information processing in medical imaging,", "year": 2017 }, { "authors": [ "Bernhard Schölkopf", "Robert C Williamson", "Alex J Smola", "John Shawe-Taylor", "John C Platt" ], "title": "Support vector method for novelty detection", "venue": "In Advances in neural information processing systems,", "year": 2000 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: a simple way to prevent neural networks from overfitting", "venue": "The Journal of Machine Learning Research,", "year": 2014 }, { "authors": [ "David MJ Tax", "Robert PW Duin" ], "title": "Support vector data description", "venue": "Machine learning,", "year": 2004 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "arXiv preprint arXiv:1605.07146,", "year": 2016 }, { "authors": [ "Houssam Zenati", "Manon Romain", "Chuan-Sheng Foo", "Bruno Lecouat", "Vijay Chandrasekhar" ], "title": "Adversarially learned anomaly detection", "venue": "IEEE International Conference on Data Mining (ICDM),", "year": 2018 }, { "authors": [ "Bo Zong", "Qi Song", "Martin Renqiang Min", "Wei Cheng", "Cristian Lumezanu", "Daeki Cho", "Haifeng Chen" ], "title": "Deep autoencoding gaussian mixture model for unsupervised anomaly detection", "venue": "In International Conference on Learning Representations,", "year": 2018 } ]

We use WideResnet (Zagoruyko & Komodakis, 2016) as the backbone architecture. We adopt the training strategy for multi-label classification, proposed in (Hendrycks et al., 2019). During the training, we use the cosine annealing for scheduled learning (Loshchilov & Hutter, 2016) with initial learning rate 0.1 and Nesterov momentum. The dropout rate (Srivastava et al., 2014) is set to 0.3. All the self-supervised methods based on transformation classification also use the same
[ { "heading": "1 INTRODUCTION", "text": "One-class classification refers to the problem of identifying whether an input example belongs to a single target class (in-class) or any of novel classes (out-of-class). The main challenge of this task is that only in-class examples are available at training time. Thus, by using only positive examples, a model has to learn the decision boundary that distinguishes in-class examples from out-of-class examples, whose distribution is assumed to be unknown in practice. Early work on one-class classification mainly utilized kernel-based methods (Schölkopf et al., 2000; Tax & Duin, 2004) to find a hypersphere (or hyperplane) enclosing all training in-class examples, or density estimation techniques (Parzen, 1962) to measure the likelihood of an input example.\nIn the era of deep learning, numerous literature have tried to employ deep neural networks to effectively learn the high-dimensional data (e.g., images). Most of them aim to detect out-of-class examples based on density estimation, by adopting the architecture of autoencoders (Ruff et al., 2018; Zong et al., 2018) or generative adversarial networks (GANs) (Schlegl et al., 2017; Zenati et al., 2018). Nevertheless, their supervision is not useful enough to capture the semantic of highdimensional data for a target class, which eventually leads to the limited performance. Recently, there have been several attempts to make use of self-supervised learning (Golan & El-Yaniv, 2018; Hendrycks et al., 2019; Bergman & Hoshen, 2020) for more informative supervision on the target class, and made a major breakthrough to this problem. They build a self-labeled image set by applying a bunch of geometric transformations to training images, then train a classifier to accurately predict the transformation applied to original input images. This approach achieved the state-of-theart performance for one-class classification even without modeling the latent distribution of in-class examples for density estimation.\nHowever, all the aforementioned methods are quite vulnerable to spatial variances within the images, because they were developed based on the assumption that in-class (and out-of-class) images have a fixed viewpoint. In particular, the existing self-supervised methods do not work completely for the inputs with various viewpoints in that their capability of predicting the geometric transformation relies on the fixed viewpoint. Note that humans usually recognize that the images of a target object with different viewpoints belong to the same class; in this sense, the one-class classifiers also\nshould be robust to the viewpoint of input images. In other words, we need to make geometricallytransformed in-class images not to be identified as out-of-class, from the perspective that a geometric transformation (e.g., rotation & x,y-translation) does not change the semantic (i.e., object class) but the viewpoint.\nThe goal of our work is to propose an effective strategy that can circumvent the limitation of viewpoint sensitivity, without compromising the performance for the images with the fixed viewpoint. We first present several evaluation settings for validating the robustness to flexible viewpoints, artificially introduced by geometric transformations. Then, we describe our proposed solution, termed as GROC, which measures a conformity score indicating how confidently an input image matches with one of the predefined (anchor) in-class transformations. 
In this work, we offer two measures for the conformity score, which are the inner product similarity and the conditional likelihood, and show how they can be optimized using the training in-class images. The empirical experiments on the proposed evaluation scenarios show that GROC considerably outperforms all the other competing methods in terms of the robustness to geometric transformation." }, { "heading": "2 PRELIMINARIES", "text": "" }, { "heading": "2.1 PROBLEM FORMULATION", "text": "Let X be the set of all images, and let X_in ⊆ X and X_out = X \ X_in be the sets of all in-class and out-of-class images, respectively. Given training in-class data X^tr_in ⊆ X_in, we consider the one-class classification problem, which differentiates in-class and out-of-class data. The problem aims to build a classifier by using only the known in-class data for training. The classifier learns an in-class score function, S_in(x) : X → R, where a higher score indicates that the input x is more likely to be in X_in. Based on the score, the classifier determines whether the input is in-class or not." }, { "heading": "2.2 SELF-SUPERVISED LEARNING METHODS FOR ONE-CLASS CLASSIFICATION", "text": "Recently, self-supervised learning methods (Golan & El-Yaniv, 2018; Hendrycks et al., 2019; Bergman & Hoshen, 2020) have achieved the state-of-the-art performance in one-class classification. For self-supervised learning, they first create a self-labeled dataset and use it to train a multi-class classifier. Concretely, let T = {T_0, · · · , T_i, · · · , T_{K−1}} be a set of predefined (anchor) geometric transformations, where T_0(x) = x is the identity mapping and each transformation T_i is a composition of multiple unit transformations (i.e., rotation & x,y-translation). The self-labeled dataset consists of transformed images and their corresponding labels:

D_self = \{ (T_i(x), i) \mid x \in X^{tr}_{in}, \; 0 \le i < K \}, \quad (1)

where T_i(·) is the i-th transformation operator and its label i is the transformation id of T_i(·). Using the self-labeled dataset, these methods train a softmax classifier based on a multi-class classification loss (i.e., cross-entropy) for discrimination among the transformations. For one-class classification, they define an in-class score under the assumption that a well-trained classifier would better predict the transformation for in-class images than for out-of-class images. In the end, the in-class score for an unseen image x is defined by the sum of the softmax probabilities that its transformed images are correctly classified as their labels (Golan & El-Yaniv, 2018; Bergman & Hoshen, 2020):

S_{in}(x) = \sum_{i=0}^{K-1} p(y = i \mid T_i(x)), \quad (2)

where p(y = i | T_i(x)) is the softmax probability that T_i(x) is classified as the i-th transformation. The state-of-the-art method based on this self-supervised approach (Hendrycks et al., 2019) significantly improves the performance by formulating the classification task in a multi-label manner. Since each transformation is determined by the combination of unit transformations from three categories¹ (i.e., rotation, (horizontal) x-translation, and (vertical) y-translation), the unit transformations applied to an input image can be independently predicted for each category.
Thus, they adopt a softmax head for each transformation category, then train a classifier to predict the degree of transformations within each category. The final in-class score is also replaced with one that summarizes all the softmax heads, each of which is for the unit transformation applied to the input.

¹They build the set of transformations T by the combination of the following unit transformations: rotation ∈ {0°, 90°, 180°, 270°}, x-translation ∈ {−8, 0, +8}, and y-translation ∈ {−8, 0, +8}." }, { "heading": "3 METHOD", "text": "" }, { "heading": "3.1 MOTIVATION", "text": "The underlying concept of the self-supervised methods based on transformation classification is to learn discriminative features of in-class images, in order to classify the various viewpoints caused by the geometric transformations. The precondition for this approach is that the viewpoint of training images is always the same; otherwise, the classifier cannot be trained due to the inconsistent supervision. However, at test time, the input images can have different viewpoints from those appearing in the training images. We remark that the images of the same object with different viewpoints belong to the same class, as usually recognized by humans. In this sense, it is desired that in-class images with various viewpoints are identified as in-class, not out-of-class. That is, the robustness to geometric transformations should be considered for one-class classification.

[Figure 1: sea lion example, ImageNet synset n02077923]

In this respect, the existing self-supervised methods totally fail to compute effective in-class scores for inputs with various viewpoints. We observe that they produce an undesirable in-class score especially when the input image has the same (or similar) viewpoint represented by the anchor transformations T \ {T_0}. For example, suppose a classifier is trained on D_self with the transformations T of clockwise rotations {0°, 90°, 180°, 270°}. Given two images of sea lions x′ and x′′, let x′ have the same viewpoint as the training images and x′′ have the 90° rotated viewpoint, which is equivalent to T_1(x′). As illustrated in Figure 1, the softmax probability of each transformed image has a high value for the input x′, but the one for x′′ has a low value. Consequently, it cannot correctly identify x′′ as in-class, though it comes from the target class as well. We point out that setting the target label of each transformed image to the applied transformation is no longer valid when the input viewpoint is changed.

A straightforward solution for this challenge is augmenting the training dataset so that it can cover various viewpoints of in-class images. Unfortunately, the data augmentation technique is not applicable because it results in inconsistent supervision for the task of discriminating the viewpoints, which is the learning objective of the self-supervised methods. On the other hand, there exist several one-class classification methods (Ruff et al., 2018; Zong et al., 2018) that can adopt the data augmentation technique. However, they cannot achieve performance as high as that of the self-supervised methods, even in the case that all input images have a fixed viewpoint; this will be further discussed in Section 4. To sum up, we need to consider another strategy to develop a robust one-class classifier that works well even for input images having various viewpoints."
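To make this failure mode concrete, here is a minimal sketch (ours; rotations only, with `predict_probs` standing in for a trained softmax classifier) of the in-class score in Equation (2):

```python
import numpy as np

def rotate90(img, i):
    # Apply the i-th anchor rotation (i * 90 degrees) to an H x W x C image.
    return np.rot90(img, k=i, axes=(0, 1))

def self_supervised_in_class_score(img, predict_probs, K=4):
    # Equation (2): sum of the softmax probabilities that each transformed
    # image T_i(x) is classified as its own transformation id i.
    return sum(predict_probs(rotate90(img, i))[i] for i in range(K))
```

If the test image arrives already rotated by 90°, rotate90(img, i) actually realizes rotation (i + 1) mod 4, so every term in the sum queries the wrong label and the score collapses, which is exactly the failure described above.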
}, { "heading": "3.2 PROPOSED SETUPS", "text": "We first propose three evaluation setups for testing the robustness to various viewpoints: 1) fixed viewpoint, 2) anchor viewpoint, and 3) random viewpoint. We artificially introduce the spatial variance (i.e., the changes of the viewpoint) in test images by using the geometric transformations. Note that X te denotes the test data, which contains both in-class and out-of-class images. Fixed viewpoint setup. In this setup, we consider only the fixed viewpoint that is used for training, as done in previous work. We do not change the viewpoint of the original test images, X tefv = X te.\nAnchor viewpoint setup. This setup is designed for verifying the robustness to the viewpoints induced by the anchor transformations. We build a test dataset by X teav = {T (x)|T ∼ T ,x ∈ X te}, where T is randomly sampled from the set of the anchor transformations T for each image x. Random viewpoint setup. The random viewpoint setup further considers the geometric transformations that are not included in the set of anchor transformations. We first define the superset of T , denoted by T ∗, including a number of transformations with continuous degrees. A test dataset for this setup is built by X terv = {T (x)|T ∼ T ∗,x ∈ X te}, where T is sampled for each image x. As a preliminary result, we plot the in-class score distributions for in-class and out-of-class test images, computed by the state-of-the-art self-supervised method (Hendrycks et al., 2019). In Figure 2, we observe that the score distributions of in-class and out-of-class images in X tefv are clearly distinguishable, which supports the great performance for one-class classification. On the contrary, the two score distributions are almost overlapping with each other in cases of X teav and X terv , strongly indicating that they fail to figure out in-class images due to their various viewpoints.\nWe additionally investigate the performance drop for geometrically/non-geometrically transformed inputs. In Figure 2(d), it is obvious that geometric transformations make the self-supervised method totally malfunction, while non-geometric transformations (e.g., brightness, contrast, sharpness, and color temperature) hardly degrade the final performance for one-class classification." }, { "heading": "3.3 PROPOSED STRATEGY", "text": "To deal with the viewpoint sensitivity of the self-supervised methods, we note that the in-class images match better with the in-class transformations than the out-of-class images do, regardless of their viewpoints. Our proposed strategy, named as GROC, defines the in-class score by the sum of the conformity scores for K transformed images; Sconf (·; T ) calculates how conformable an input image is to the given set T . Formally, it is defined by the maximum similarity between the representation of an input image and that of each anchor transformation:\nSin(x) = K−1∑ i=0 Sconf (Ti(x); T ), where Sconf (x; T ) = max Tj∈T [sim (x, Tj)] . (3)\nThe foremost condition for GROC is that the representations of the anchor transformations should be discriminative, so that the similarity measure can effectively capture the viewpoint of input images. Note that the similarity between an image x and a transformation Tj , denoted by sim(x, Tj), can be defined in various ways. 
In the following subsections, we offer two similarity measures for the conformity score, respectively modeled by inner product similarity and conditional likelihood, and present how the representations of input images and anchor transformations are optimized." }, { "heading": "3.3.1 INNER PRODUCT SIMILARITY FOR CONFORMITY SCORE", "text": "To model the similarity measure for the conformity score, we use our encoder network f(·; θ) : X → R^d, which outputs the representation of an input image, and the weight matrix W ∈ R^{K×d}, which parameterizes the representations of the K anchor transformations. We first present GROC-IP, whose similarity measure is simply computed by the inner product of f(x; θ) and w_j:

sim(x, T_j) = f(x; θ)^⊤ w_j. \quad (4)

Based on the inner product similarity, the encoder network needs to map all in-class images with the same viewpoint close to their corresponding transformation vector, while keeping the transformation vectors far from each other. In other words, it has to extract discriminative features for classifying the input images according to their viewpoint; this has already been achieved by the conventional softmax classifier in a self-supervised manner.

Therefore, we adopt the optimization strategy provided by the existing self-supervised methods for one-class classification. After we build a softmax classifier by adding the linear classification layer of weights W on top of the encoder network f(·; θ), we train both the weight matrix and the network by using the cross-entropy loss with D_self. In case of GROC-IP, the conformity score becomes equivalent to the maximum logit value computed by the softmax classifier."
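Concretely, the GROC-IP score can be sketched as follows (ours; `encoder` is f(·; θ) and W is the trained classification weight matrix):

```python
import numpy as np

def groc_ip_in_class_score(img, transforms, encoder, W):
    # GROC-IP: Equation (3) with sim(x, T_j) = f(x; theta)^T w_j (Equation 4);
    # the conformity of each transformed view is its maximum logit.
    score = 0.0
    for T in transforms:                 # the K anchor transformations
        logits = W @ encoder(T(img))     # one logit per anchor transformation
        score += logits.max()
    return float(score)
```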
}, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 COMPETING METHODS", "text": "In our experiments, we consider a variety of approaches to one-class classification as the competing methods. We choose One-class SVM (OCSVM) (Schölkopf et al., 2000) and Deep SVDD (DSVDD) (Ruff et al., 2018) as the non-self-supervised methods. OCSVM is a classical kernelbased method for one-class classification, which finds a maximum-margin hyperplane the separates enclosing most of the training in-class examples. DSVDD, a deep learning variant of OCSVM, explicitly models the latent space in which training in-class examples gather to a specific center point.\nThe main competitors are the self-supervised methods based on transformation classification: Geometric Transformation (GT) (Golan & El-Yaniv, 2018) and Multi-labeled Geometric Transformation (MGT) (Hendrycks et al., 2019). The details of the methods are presented in Section 2.2. For the anchor transformations, GT adopts four transformation categories (i.e., horizontal flipping, x,y-translation, and rotation), while MGT excludes horizontal flipping from the above categories.\nThe last competing method is SimCLR (Chen et al., 2020) which learns the transformation-invariant representations of input images in a self-supervised manner. Since it is optimized to maximize the agreement among the images differently-transformed from a single image, it is capable of alleviating\nthe viewpoint sensitivity to some degree. Several recent work on representation learning based on this approach (Chen et al., 2020; Grill et al., 2020; He et al., 2020) showed the remarkable performance for a wide range of downstream tasks. Note that SimCLR is not originally designed for one-class classification, thus we tailor it for our task. We define the final in-class score by\nSin(x) = K−1∑ i=1 f(T0(x);θ) >f(Ti(x);θ) ‖f(T0(x);θ)‖2‖f(Ti(x);θ)‖2 . (7)\nFor its optimization, we use the set of the anchor transformations adopted by MGT. More details of SimCLR are provided in Appendix A." }, { "heading": "4.2 DATASETS", "text": "We validate the effectiveness of the proposed methods using three benchmark image datasets: CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009), and SVHN (Netzer et al., 2011). We scale the pixel values of all images to be in [−1, 1] as done in (Golan & El-Yaniv, 2018). Note that CIFAR100 has 20 super-classes and we use these super-classes rather than 100 full classes." }, { "heading": "4.3 EXPERIMENTAL SETTINGS", "text": "Following the experimental setting of the previous studies (Golan & El-Yaniv, 2018; Ruff et al., 2018; Zenati et al., 2018), we employ the one-vs-all evaluation scheme. For the dataset with C classes, we generate C one-class classification settings; the images of a target class are regarded as in-class data, and the other images belonging to C − 1 classes are regarded as out-of-class data. We use the Area Under the ROC curve (AUROC) as an evaluation metric.\nWe build the set of the anchor transformations by the combination of the following unit transformations: x-translation ∈ {−8, 0,+8}, y-translation ∈ {−8, 0,+8}, and rotation ∈ {0◦, 90◦, 180◦, 270◦}. As presented in Section 3.2, we evaluate our method by using the three proposed setups: fixed viewpoint, anchor viewpoint, and random viewpoint. For the random viewpoint setup, we build the test set by randomly sampling the transformation degrees for the x,y-translation in the range of [−8, 8], and for the rotation in the range of [0◦, 360◦]. 
In the case of SVHN, we exclude the x,y-translation because it may change the semantics (i.e., class label) of the images; the label of each image is determined by the digit in the middle of the image." }, { "heading": "4.4 EXPERIMENTAL RESULTS", "text": "" }, { "heading": "4.4.1 COMPARISONS WITH SELF-SUPERVISED METHODS", "text": "The experimental results of the self-supervised methods on the three setups are presented in Table 1. For each dataset, the original class name is replaced by its class id due to the limited space. In summary, our methods effectively overcome the limitation of viewpoint sensitivity without compromising performance in the fixed viewpoint setup. We analyze the results from various perspectives.

Fixed viewpoint setup. In this setup, the state-of-the-art competitor MGT consistently shows the best results, and our methods show results comparable to MGT. We also observe that the performance of SimCLR is not as good as those of the classification-based methods. Note that the classification-based methods aim to discriminate among the differently transformed images, whereas SimCLR tries to make them indistinguishable. From this observation, we can conclude that directly learning transformation-invariant visual features is less effective for identifying in-class and out-of-class images in one-class classification.

Anchor viewpoint setup. In the case that the input images have anchor viewpoints, the classification-based methods fail to distinguish in-class images from out-of-class images; their performances are even worse than a random guess, whose AUROC is 0.5. Because their capability of discriminating the geometric transformations depends on the fixed viewpoint, they do not work at all for input images with various viewpoints, as discussed in Section 3.1. In contrast, our methods considerably outperform all the competing methods while providing outstanding performance robust to changes of the viewpoint.
[Table 1: per-class AUROC on the fixed / anchor / random viewpoint setups; each setup lists the five compared self-supervised methods, with per-class rows for CIFAR-100 (classes 0-19) and SVHN (classes 0-9). Average rows: CIFAR-100: fixed 0.57 / 0.79 / 0.81 / 0.80 / 0.81, anchor 0.51 / 0.43 / 0.44 / 0.75 / 0.75, random 0.50 / 0.45 / 0.45 / 0.66 / 0.67; SVHN: fixed 0.71 / 0.90 / 0.93 / 0.93 / 0.92, anchor 0.71 / 0.33 / 0.28 / 0.93 / 0.92, random 0.58 / 0.40 / 0.42 / 0.76 / 0.75.]

These results show that our methods successfully identify the in-class images irrespective of their viewpoint, with the help of the conformity score that measures how confidently an input image matches one of the in-class transformations.
Interestingly, the performance of SimCLR is higher than that of the classification-based methods in this setup. This is because its learning objective, which encourages the multiple transformations of an input image to be similar to each other, makes the one-class classifier less affected by the viewpoint.

Random viewpoint setup. In the hardest setting, where the input images have random viewpoints, the classification-based methods cannot beat random guessing, similarly to the anchor viewpoint setup. Our methods perform the best for all the datasets, which strongly indicates robustness to changes of the viewpoint. In conclusion, both of our methods (i.e., GROC-IP and GROC-CL) are able to correctly classify images having diverse viewpoints into in-class or out-of-class, even for viewpoints that have not been seen during training." }, { "heading": "4.4.2 COMPARISONS WITH NON-SELF-SUPERVISED METHODS", "text": "Figure 3 presents the comparison results against the non-self-supervised methods on the three evaluation setups. Due to the limited space, we report the score averaged over the C in-class settings for each dataset. We also provide the results of DSVDD with the data augmentation technique, denoted by DSVDD+; the training in-class images are randomly augmented by T ∼ T*. For all the setups, the competing methods show poorer performance compared to our methods. Specifically, the data augmentation technique yields only limited performance gains in the anchor/random setups, and it even brings an adverse effect in the fixed setup. This implies that this simple approach is not sufficient to address the viewpoint sensitivity of one-class classifiers." }, { "heading": "4.4.3 FURTHER ANALYSIS", "text": "We also provide in-depth analyses of the performance of the self-supervised methods on the SVHN dataset. In the fixed viewpoint setup, we observe a distinct performance improvement of our GROC over MGT, especially for the cases of in-class 0, 1, and 8. Figure 4 shows the in-class score distributions obtained by GROC and MGT, where class 0 is set to in-class. Since the digit '0' has a symmetric shape, it is difficult for MGT to differentiate its transformations between rotations of 0° and 180° (or 90° and 270°). For this reason, as illustrated in the rightmost figure, MGT outputs relatively low scores for the images containing a single '0' but high scores for the images containing other digits around '0'. On the contrary, GROC produces similarly high in-class scores for these images (i.e., with or without other digits), which can be separated from the scores of out-of-class images. This helps GROC reduce the overlap between the in-class scores of in-class and out-of-class images, and as a result, it leads to higher AUROC compared to MGT.

On the other hand, for the cases of in-class 6 and 9, the performance of GROC slightly degrades because the 180° rotation of the out-of-class digit '9' is more likely to be conformable to the in-class digit '6', and vice versa. Nevertheless, under the assumption that the input images have various viewpoints, it is impossible even for humans to accurately figure out whether images that look like '6' (or '9') belong to in-class or out-of-class."
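As a usage note on the protocol of Section 4.3, the one-vs-all AUROC for one target class can be computed as below; the use of scikit-learn is our assumption, since the paper does not name an implementation.

import numpy as np
from sklearn.metrics import roc_auc_score

def one_vs_all_auroc(in_scores, class_labels, target_class):
    # in_scores: in-class scores S_in(x) for every test image
    # class_labels: ground-truth class ids; images of target_class count as in-class
    y_true = (np.asarray(class_labels) == target_class).astype(int)
    return roc_auc_score(y_true, np.asarray(in_scores))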
}, { "heading": "5 CONCLUSION", "text": "This paper proposes a novel one-class classification method robust to geometric transformations, which effectively addresses the challenge that in-class images cannot be correctly distinguished from out-of-class images when they have various viewpoints. We first present new evaluation setups that cover the diverse viewpoints by artificially introducing the spatial variance into test images. Then, we define the conformity-based in-class score so as to measure how strongly an input image is conformable to one of the anchor transformations, whose representations are optimized to be discriminative. The extensive experiments demonstrate that the proposed GROC keeps its outstand-\ning performance even in the anchor/random viewpoint setups where the input images have various viewpoints, whereas the state-of-the-art methods perform even worse than a random guessing." }, { "heading": "A SIMCLR", "text": "In the experiments, we slightly modified SimCLR (Chen et al., 2020) for the one-class classification task. The main idea of SimCLR is learning representations by maximizing the agreement among the images differently-transformed from the same image via a contrastive loss in the latent space. For the optimization of SimCLR, we use the set of the anchor transformations T that is adopted by the classification-based self-supervised method (i.e., MGT); this set is different from the one used by the original SimCLR. Let x ∈ X trin and x̃ = T (x) where T is a transformation operator randomly sampled from the set of the anchor transformations T . Given a batch B = {x1, · · · ,xN} ⊂ X trin , we define B̃ = {x̃1, x̃2, · · · , x̃2N−1, x̃2N} where x̃2k−1 and x̃2k are generated by applying different transformations to each image xk in the batch. The loss function for a pair of two differentlytransformed images (x̃2k−1, x̃2k) from the input image xk is defined as follows.\nl(x̃2k−1, x̃2k) = − log exp (sim (f (x̃2k−1;θ) , f (x̃2k;θ)) /τ)∑2N\ni=1 I[i 6= (2k − 1)]exp (sim (f (x̃2k−1;θ) , f (x̃i;θ)) /τ) , (8)\nwhereN is the number of images in a batch, f (·) is an encoder network including a projection layer, and sim(u,v) = u>v/‖u‖2‖v‖2 is the cosine similarity. It is worth noting that we include the projection layer in the encoder network in order to obtain the transformation-invariant representations, unlike the original version of SimCLR that discards the projection layer for their downstream tasks.\nIn the end, the objective function of SimCLR for a batch B is defined as\nLSimCLR = 1\n2N N∑ k=1 [l (x̃2k−1, x̃2k) + l (x̃2k, x̃2k−1)] . (9)\nB IMPLEMENTATION DETAILS\nWe choose a 16-4 WideResnet (Zagoruyko & Komodakis, 2016) as the backbone architecture. We adopt the training strategy for multi-label classification, proposed in (Hendrycks et al., 2019). During the training, we use the cosine annealing for scheduled learning (Loshchilov & Hutter, 2016) with initial learning rate 0.1 and Nesterov momentum. The dropout rate (Srivastava et al., 2014) is set to 0.3. All the self-supervised methods based on transformation classification also use the same backbone architecture and training hyperparameters with ours. For DSVDD, we use LeNet (LeCun et al., 1998) style network as described in the paper and implementation.2 For SimCLR, we employ the ResNet18 (He et al., 2016) with the fixed τ value of 0.5. For GROC-CL, the regularization coefficient ν for optimizing the conditional likelihoods is set to 0.0001." 
}, { "heading": "C EXPERIMENTAL RESULTS", "text": "In Table 2, we report the full comparison results with the non-self-supervised methods for one-class classification (summarized in Figure 3)." }, { "heading": "D THEORETICAL BACKGROUNDS FOR GROC-CL", "text": "GROC-CL basically utilizes the encoder network f that induces the latent space, where the similarity between the representation of an input image and that of each anchor transformation is modeled by an isotropic Gaussian distribution (conditioned on the transformation). Formally, the similarity between x and Tj can be described as sim (x, Tj) = log p (x|Tj) = logN ( f (x) |µj , σ2j I ) , where p (x|Tj) is the class-conditional (or transformation-conditional) probability. As discussed in Section 3.3, the representation of each anchor transformation should be distinguishable from the others’, which is the foremost condition for GROC, in order to effectively calculate the conformity score and identify in-class/out-of-class images from the score. In this sense, GROCCL can be understood from the perspective of Gaussian Discriminant Analysis (GDA). To this end,\n2https://github.com/lukasruff/Deep-SVDD-PyTorch\nwe optimize the encoder network by maximizing the posterior probability of a transformed image Tj(x) having the maximum similarity with the transformation j, which is denoted by p (Tj |Tj(x)). For simplicity, we assume that the prior probability for each class (or transformation) follows the Bernoulli distribution, i.e., p (Tj) = βj/ ∑ k βk.\np (Tj |Tj(x)) = p (Tj) p (Tj(x)|Tj)∑K−1\nk=0 p (Tk) p (Tj(x)|Tk)\n= exp ( − ( 2σ2j )−1 ‖f (Tj(x))− µj‖2 − log σdj + log βj)∑K−1\nk=0 exp ( − (2σ2k) −1 ‖f (Tj(x))− µk‖2 − log σdk + log βk )\n= exp (sim (Tj(x), Tj) + bj)∑K−1\nk=0 exp (sim (Tj(x), Tk) + bk) .\n(10)\nNote that taking the log of this equation becomes equivalent to the second term in Equation (6).\nIn addition, we need to force the empirical class-conditional distribution to follow the isotropic Gaussian distribution and also approximate the empirical class mean to the obtained class mean µj . Thus, we minimize the Kullback-Leibler (KL) divergence between the j-th empirical classconditional distributionPj and the corresponding Gaussian distributionN ( µj , σ 2 j I ) . The empirical class-conditional distribution for transformation j is defined as follows.\nPj = 1 |X trin | ∑\nx∈X trin\nδ (z− f (Tj(x))) , (11)\nwhere δ (·) is the Dirac measure. Finally, the KL divergence is obtained by\nKL ( Pj‖N ( µj , σ 2 j I )) = − ∫ 1 |X trin | ∑\nx∈X trin\nδ (z− f (Tj(x))) log [ 1( 2πσ2j )d/2 exp(−‖z− µj‖22σ2j )] dz\n+\n∫ 1 |X trin | ∑\nx∈X trin\nδ (z− f (Tj(x))) log 1 |X trin | ∑ x∈X trin δ (z− f (Tj(x))) dz = − 1|X trin | ∑ x∈X trin log [ 1( 2πσ2j )d/2 exp(−‖f (Tj(x))− µj‖22σ2j )] + log 1 |X trin |\n= 1 |X trin | ∑\nx∈X trin\n( ‖f (Tj(x))− µj‖2\n2σ2j + log σdj\n) + constant\n= − 1|X trin | ∑\nx∈X trin\nsim (Tj(x), Tj) + constant.\n(12)\nThe final form of the KL divergence is derived by using the definition of the Dirac measure. After the constant term is excluded, it becomes the same with the first term in Equation (6). 
Minimizing the KL term for all the classes (or transformations) matches the empirical class-conditional distributions with the isotropic Gaussian distributions.

[Table 2: full per-class AUROC comparison with the non-self-supervised methods on the fixed / anchor / random viewpoint setups (cf. Figure 3), with per-class rows for CIFAR-10 (classes 0-9), CIFAR-100 (classes 0-19), and SVHN (classes 0-9). Average rows: CIFAR-10: fixed 0.59 / 0.61 / 0.58 / 0.88 / 0.89, anchor 0.55 / 0.56 / 0.57 / 0.82 / 0.82, random 0.54 / 0.52 / 0.58 / 0.73 / 0.74; CIFAR-100: fixed 0.59 / 0.61 / 0.53 / 0.80 / 0.81, anchor 0.56 / 0.54 / 0.55 / 0.75 / 0.75, random 0.54 / 0.53 / 0.55 / 0.66 / 0.67; SVHN: fixed 0.51 / 0.55 / 0.51 / 0.93 / 0.92, anchor 0.50 / 0.50 / 0.51 / 0.93 / 0.92, random 0.50 / 0.51 / 0.52 / 0.76 / 0.75.]" } ]
2020
null
SP:fdf6eccb626f29ace14ead921e976448e2dd8bb8
[ "The Authors show that scaling factors with hand-crafted or learnable methods are not so important when training Binary Weight Networks (BWNs), while the change of weight signs is crucial. They make two observations: The weight signs of the primary binary sub-networks are determined and fixed at the early training stage. Binary kernels in the convolutional layers of final models tend to be centered on a limited number of fixed structural patterns. Based on these observations, they propose a new method called binary kernel quantization to further compress BWNs. ", "This paper proposes some interesting observations for training BWNs. 1: The scaling factors can be removed with batch normalization used. 2: The signs of the weights with large norms are determined and fixed at the early training stage. 3: The binary weight networks can be further compressed. Moreover, the authors provide some empirical visualizations and results to demonstrate its analysis. However, the paper seems to be incomplete and needs to be further improved. " ]
Binary Weight Networks (BWNs) have significantly lower computational and memory costs compared to their full-precision counterparts. To address the non-differentiable issue of BWNs, existing methods usually use the Straight-Through-Estimator (STE). In the optimization, they learn optimal binary weight outputs, represented as a combination of scaling factors and weight signs, to approximate 32-bit floating-point weight values, usually with a layer-wise quantization scheme. In this paper, we begin with an empirical study of training BWNs with STE under settings that use common techniques and tricks. We show that, in the context of using batch normalization after convolutional layers, adapting scaling factors with either hand-crafted or learnable methods brings marginal or no accuracy gain to the final model, while the change of weight signs is crucial in the training of BWNs. Furthermore, we observe two astonishing training phenomena. Firstly, the training of BWNs demonstrates the process of seeking primary binary sub-networks whose weight signs are determined and fixed at the early training stage, which is akin to recent findings on the lottery ticket hypothesis for efficient learning of sparse neural networks. Secondly, we find that binary kernels in the convolutional layers of final models tend to be centered on a limited number of the most frequent binary kernels, showing that binary weight networks may have the potential to be further compressed, which breaks the common wisdom that representing each weight with a single bit pushes quantization to the extreme of compression. To test this hypothesis, we additionally propose a binary kernel quantization method, and we call the resulting models Quantized Binary-Kernel Networks (QBNs). We hope these new experimental observations will provide new design insights to improve the training and broaden the usage of BWNs.
[]
[ { "authors": [ "Jan Achterhold", "Jan M Kohler", "Anke Schmeink", "Tim Genewein" ], "title": "Variational network quantization", "venue": null, "year": 2018 }, { "authors": [ "Milad Alizadeh", "Javier Fernandez-Marques", "Nicholas D Lane", "Yarin Gal" ], "title": "An empirical study of binary neural networks", "venue": "optimisation. In ICLR,", "year": 2019 }, { "authors": [ "Alexander G Anderson", "Cory P Berg" ], "title": "The high-dimensional geometry of binary neural networks. 2018", "venue": null, "year": 2018 }, { "authors": [ "Lei Jimmy Ba", "Rich Caruana" ], "title": "Do deep nets really need to be deep", "venue": "In NIPS,", "year": 2014 }, { "authors": [ "Yu Bai", "Yu-Xiang Wang", "Edo Liberty" ], "title": "Proxquant: Quantized neural networks via proximal operators. 2019", "venue": null, "year": 2019 }, { "authors": [ "Joseph Bethge", "Haojin Yang", "Marvin Bornstein", "Christoph Meinel" ], "title": "Back to simplicity: How to train accurate bnns from scratch", "venue": null, "year": 1812 }, { "authors": [ "Zhaowei Cai", "Xiaodong He", "Jian Sun", "Nuno Vasconcelos" ], "title": "Deep learning with low precision by half-wave gaussian quantization", "venue": null, "year": 2017 }, { "authors": [ "Shangyu Chen", "Wenya Wang", "Sinno Jialin Pan" ], "title": "Metaquant: Learning to quantize by learning to penetrate non-differentiable quantization", "venue": null, "year": 2019 }, { "authors": [ "Matthieu Courbariaux", "Yoshua Bengio", "Jean-Pierre David" ], "title": "Binaryconnect: Training deep neural networks with binary weights during propagations", "venue": "In NIPS,", "year": 2015 }, { "authors": [ "Matthieu Courbariaux", "Itay Hubara", "Daniel Soudry", "Ran El-Yaniv", "Yoshua Bengio" ], "title": "Binarized neural networks: Training neural networks with weights and activations constrained to +1 or -1", "venue": null, "year": 2016 }, { "authors": [ "Jonathan Frankle", "Michael Carbin" ], "title": "The lottery ticket hypothesis: Finding sparse, trainable neural networks. 2019", "venue": null, "year": 2019 }, { "authors": [ "Angus Galloway", "Graham W Taylor", "Medhat Moussa" ], "title": "Attacking binarized neural networks. 
2018", "venue": null, "year": 2018 }, { "authors": [ "Ross Girshick", "Jeff Donahue", "Trevor Darrell", "Jitendra Malik" ], "title": "Rich feature hierarchies for accurate object detection and semantic segmentation", "venue": "In CVPR,", "year": 2014 }, { "authors": [ "Yiwen Guo", "Anbang Yao", "Hao Zhao", "Yurong Chen" ], "title": "Network sketching: Exploiting binary structure in deep cnns", "venue": null, "year": 2017 }, { "authors": [ "Song Han", "Jeff Pool", "John Tran", "William J Dally" ], "title": "Learning both weights and connections for efficient neural networks", "venue": "In NIPS,", "year": 2015 }, { "authors": [ "Song Han", "Jeff Pool", "John Tran", "William J Dally" ], "title": "Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman", "venue": null, "year": 2016 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": null, "year": 2016 }, { "authors": [ "Koen Helwegen", "James Widdicombe", "Lukas Geiger", "Zechun Liu", "Kwang-Ting Cheng", "Roeland Nusselder" ], "title": "Latent weights do not exist: Rethinking binarized neural network", "venue": null, "year": 2019 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeff Dean" ], "title": "Distilling the knowledge in a neural network", "venue": "arXiv preprint arXiv:1503.02531,", "year": 2015 }, { "authors": [ "Lu Hou", "Quanming Yao", "James T Kwok" ], "title": "Loss-aware binarization of deep networks. 2017", "venue": null, "year": 2017 }, { "authors": [ "Andrew G Howard", "Menglong Zhu", "Bo Chen", "Dmitry Kalenichenko", "Weijun Wang", "Tobias Weyand", "Marco Andreetto", "Hartwig Adam" ], "title": "Mobilenets: Efficient convolutional neural networks for mobile vision applications", "venue": "arXiv preprint arXiv:1704.04861,", "year": 2017 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens Van Der Maaten", "Kilian Q Weinberger" ], "title": "Densely connected convolutional networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Itay Hubara", "Matthieu Courbariaux", "Daniel Soudry", "Ran El-Yaniv", "Yoshua Bengio" ], "title": "Quantized neural networks: Training neural networks with low precision weights and activations", "venue": "arXiv preprint arXiv:1609.07061,", "year": 2016 }, { "authors": [ "Dahyun Kim", "Pratap Kunal Singh", "Jonghyun Choi" ], "title": "Learning architectures for binary networks", "venue": "In ECCV,", "year": 2020 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Tech Report,", "year": 2009 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "In NIPS,", "year": 2012 }, { "authors": [ "Hao Li", "Asim Kadav", "Igor Durdanovic", "Hanan Samet", "Hans Peter Graf" ], "title": "Pruning filters for efficient convnets", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Xiaofan Lin", "Cong Zhao", "Wei Pan" ], "title": "Towards accurate binary convolutional neural network", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Zechun Liu", "Baoyuan Wu", "Wenhan Luo", "Xin Yang", "Wei Liu", "Kwang-Ting Cheng" ], "title": "Bi-real net: Enhancing the performance of 1-bit cnns with improved representational capability and advanced training", "venue": null, "year": 2018 }, { "authors": [ "Jonathan Long", "Evan 
Shelhamer", "Trevor Darrell" ], "title": "Fully convolutional networks for semantic segmentation", "venue": "In CVPR,", "year": 2015 }, { "authors": [ "Brais Martinez", "Jing Yang", "Adrian Bulat", "Georgios Tzimiropoulos" ], "title": "Training binary neural networks with real-to-binary convolutions", "venue": null, "year": 2020 }, { "authors": [ "Asit Mishra", "Debbie Marr" ], "title": "Apprentice: Using knowledge distillation techniques to improve low-precision network", "venue": null, "year": 2018 }, { "authors": [ "Haotong Qin", "Ruihao Gong", "Xianglong Liu", "Xiao Bai", "Jingkuan Song", "Nicu Sebe" ], "title": "Binary neural networks: A survey", "venue": "Pattern Recognition,", "year": 2020 }, { "authors": [ "Mohammad Rastegari", "Vicente Ordonez", "Joseph Redmon", "Ali Joseph" ], "title": "Xnor-net: Imagenet classification using binary convolutional neural networks", "venue": "In ECCV,", "year": 2016 }, { "authors": [ "Darabi Sajad", "Mouloud Belbahri", "Matthieu Courbariaux", "Vahid Partovi Nia" ], "title": "Regularized binary network training", "venue": null, "year": 2019 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Vivienne Sze", "Yu-Hsin Chen", "Tien-Ju Yang", "Joel Emer" ], "title": "Efficient processing of deep neural networks:a tutorial and survey", "venue": "Proceedings of the IEEE,", "year": 2017 }, { "authors": [ "Dongqing Zhang", "Jiaolong Yang", "Dongqiangzi Ye", "Gang Hua" ], "title": "Lq-nets: Learned quantization for highly accurate and compact deep neural networks", "venue": "In ECCV,", "year": 2018 }, { "authors": [ "Xiangyu Zhang", "Xinyu Zhou", "Mengxiao Lin", "Jian Sun" ], "title": "Shufflenet: An extremely efficient convolutional neural network for mobile devices", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Aojun Zhou", "Anbang Yao", "Yiwen Guo", "Lin Xu", "Yurong Chen" ], "title": "Incremental network quantization: Towards lossless cnns with low-precision weights", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "2016. Shilin Zhu", "Xin Dong", "Su Hao" ], "title": "Binary ensemble neural network: More bits per network", "venue": null, "year": 2016 }, { "authors": [ "2019. Barret Zoph", "Quoc V Le" ], "title": "Neural architecture search with reinforcement learning", "venue": "In ICLR,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Convolutional Neural Networks (CNNs) have achieved great success in many computer vision tasks such as image classification (Krizhevsky et al., 2012), object detection (Girshick et al., 2014) and semantic segmentation (Long et al., 2015). However, modern CNNs usually have large number of parameters, posing heavy costs on memory and computation. To ease their deployment in resourceconstrained environments, different types of neural network compression and acceleration techniques have been proposed in recent years, such as network pruning (Han et al., 2015; Li et al., 2017), network quantization (Hubara et al., 2016; Rastegari et al., 2016; Zhou et al., 2016), knowledge distillation (Ba & Caruana, 2014; Hinton et al., 2015), efficient CNN architecture engineering and searching (Howard et al., 2017; Zhang et al., 2018b; Zoph & Le, 2017).\nComparatively, network quantization is more commercially attractive as it can not only benefit specialized hardware accelerator designs (Sze et al., 2017), but also can be readily combined with other techniques to get further improved compression and acceleration performance (Mishra & Marr, 2018; Han et al., 2016; Zhou et al., 2017). Quantization methods aim to approximate fullprecision (32-bit floating-point) neural networks with low-precision (low-bit) ones. In particular, the extremely quantized models called Binarized Neural Networks (BNNs) (Courbariaux et al., 2015; 2016; Rastegari et al., 2016) which force the weights or even weights and activations to have 1-bit values (+1 and −1), bringing 32× reduction in model size and making costly 32-bit floating-point\nmultiplications can be replaced by much cheaper binary bit-wise operations. Because of this, how to train accurate BNNs either in a post-training manner or in a training from scratch manner has attracted increasing attention. However, training BNNs poses a non-differentiable issue as converting full-precision weights into binary values leads to zero gradients. To combat this issue, most existing methods use the Straight-Through-Estimator (STE). Although there are few attempts (Achterhold et al., 2018; Chen et al., 2019; Bai et al., 2019; Hou et al., 2017) to learn BNNs without STE by using proximal gradient methods or meta-learning methods, they suffer from worse accuracy and heavier parameter tuning compared to STE based methods. In STE based methods, full-precision weights are retained during training, and the gradients w.r.t. them and their binarized ones are assumed to be the same. In the forward pass of the training, the full-precision weights of the currently learnt model are quantized to binary values for predication loss calculation. In the backward pass, the gradients w.r.t. full-precision weights instead of binary ones are used for model update. To compensating for drastic information loss and training more accurate BNNs, most state of the art STE based methods follow the formulation of (Rastegari et al., 2016) in which the binary weights are represented as a combination of scaling factors and weight signs to approximate 32-bit floating-point weight values layer-by-layer, yet also present a lot of modifications. 
These modifications include, but are not limited to, expanding binary weights to have multiple binary bases (Lin et al., 2017; Guo et al., 2017), replacing hand-crafted scaling factors with learnable ones (Zhang et al., 2018a), making an ensemble of multiple binary models (Zhu et al., 2019), searching high-performance binary network architectures (Kim et al., 2020), and designing improved regularization objectives, optimizers and activation functions (Cai et al., 2017; Liu et al., 2018; Helwegen et al., 2019; Martinez et al., 2020).

There are also a few works trying to reach a better understanding of the training of BNNs with STE. In (Alizadeh et al., 2019), the authors evaluate some of the widely used tricks, showing that adapting the learning rate with a second-moment optimizer is crucial for training BNNs with STE-based methods, while other tricks such as weight and gradient clipping are less important. Bethge et al. (2019) shows that commonly used techniques such as hand-crafted scaling factors and custom gradients are also not crucial. Sajad et al. (2019) demonstrates that learnable scaling factors combined with a modified sign function can enhance the accuracy of BNNs. Anderson & Berg (2018) interprets why binary models can approximate their full-precision references in terms of high-dimensional geometry. Galloway et al. (2018) validates that BNNs have surprisingly improved robustness against some adversarial attacks compared to their full-precision counterparts. In this paper, we revisit the training of BNNs, particularly Binary Weight Networks (BWNs), with STE, but from a new perspective, exploring structural weight behaviors in training BWNs.

Our main contributions are summarized as follows:

• We use two popular methods (Rastegari et al., 2016) and (Zhang et al., 2018a) for an empirical study, showing that both hand-crafted and learnable scaling factors are not that important, while the change of weight signs plays the key role in the training of BWNs, under settings that use common techniques and tricks.

• More importantly, we observe two astonishing training phenomena: (1) the training of BWNs demonstrates the process of seeking primary binary sub-networks whose weight signs are determined and fixed at the early training stage, which is akin to recent findings of the lottery ticket hypothesis (Frankle & Carbin, 2019) for training sparse neural networks; (2) binary kernels in the convolutional layers (Conv layers) of final BWNs tend to be centered on a limited number of binary kernels, showing that binary weight networks may have the potential to be further compressed. This breaks the common understanding that representing each weight with a single bit pushes quantization to the extreme of compression.

• We propose a binary kernel quantization method to compress BWNs, bringing a new type of BWNs called Quantized Binary-Kernel Networks (QBNs)." }, { "heading": "2 AN EMPIRICAL STUDY ON UNDERSTANDING BWNS' TRAINING", "text": "In this section, we briefly describe the BWNs we use in experiments, implementation details, scaling factors in BWNs, full-precision weight norm, weight sign, and sub-networks in BWNs." }, { "heading": "2.1 DIFFERENT BINARY WEIGHT NETWORKS", "text": "BWNs generally denote networks with binary weights, of which several different variants exist. Overall, they use αB to replace the full-precision weight W, where B = sign(W) and α is chosen to minimize ||αB − W|| in either a learnable or an analytically calculated way; a minimal STE sketch of this formulation is given below.
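As a concrete reference for the STE forward/backward behavior described above, here is a minimal PyTorch sketch of binarizing latent weights with α·sign(W); it is a generic illustration under our own naming, not code from any of the cited works.

import torch

class BinarizeSTE(torch.autograd.Function):
    # Forward: replace full-precision weights by their signs.
    # Backward: pass the gradient straight through to the latent weights.
    @staticmethod
    def forward(ctx, w):
        return torch.sign(w)  # note: torch.sign maps 0 to 0; implementations often map it to +1
    @staticmethod
    def backward(ctx, grad_output):
        return grad_output

# usage inside a conv layer's forward pass (alpha may be fixed, learned, or computed):
# binary_w = alpha * BinarizeSTE.apply(self.weight)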
In the following experiments, we use the variant implemented in XNor-Net (Rastegari et al., 2016), denoted XNor-BWN, and the one implemented in LQ-Net (Zhang et al., 2018a), denoted LQ-BWN, which is the 1-bit weight, 32-bit activation version of LQ-Net. Other popular BWN methods like DoReFa-Net and BinaryConnect are similar to these two. Both XNor-BWN and LQ-BWN use the STE framework; XNor-BWN uses hand-crafted, calculated scaling factors, and LQ-BWN uses learnable scaling factors." }, { "heading": "2.2 IMPLEMENTATION DETAILS AND NOTATION", "text": "Quantization: We directly use the open-source code of the BWNs released by the authors, including XNor-BWN1 and LQ-BWN2.

Dataset and Network Structure: CIFAR-10 (Krizhevsky & Hinton, 2009) and ImageNet (Russakovsky et al.) are used in our experiments. We use VGG-7 (Simonyan & Zisserman, 2015) and ResNet-20 (He et al., 2016) on CIFAR-10, and ResNet-18 on ImageNet. The structures are the same as the original ones.

Hyper-parameters: We use the same training parameters for each network. Each network is trained for 200 epochs. The learning rate is set initially to 0.02 and divided by 10 at epochs 80 and 160. For random crop, we first zero-pad the image to 40×40 and randomly crop it to 32×32. For BWNs trained on ImageNet, each network is trained for 100 epochs. The initial learning rate is 0.1 and is decayed by a factor of 0.1 at epochs 30, 60, and 90. The image is rescaled to 256×256 and then randomly cropped to 224×224. No additional data augmentations are used. For all networks, weight decay of 4×10−5 is applied to all Conv layers.

Notations: In figures and tables, we use the following abbreviations for clearer expression. BN: BatchNormalization. LR: Learning Rate. WD: Weight Decay. SF: Scaling Factors. FP: Full-precision. VGG-7 XNor-BWN: a VGG-7 network using the binarization algorithm of XNor-BWN. ResNet-20 Baseline: a full-precision ResNet-20 using only data augmentation and weight decay without any additional tricks. Other network structures combined with certain methods are named analogously. Large weights, large-magnitude weights, and weights with larger norm all have the same meaning, indicating weights with relatively large absolute values." }, { "heading": "2.3 SCALING FACTORS", "text": "According to previous methods, scaling factors are one essential element in obtaining BWNs. However, according to our experiments and analysis, we find that scaling factors are not so important in training BWNs and can be removed without a drop in performance. Here we list four reasons why scaling factors are unimportant.

A simple proof: BN is common practice when training BWNs. It contains two operations, normalization and an affine transformation, as shown in Equation (1), where γ and β are the affine parameters of BN and ε = 5e−4 is used in PyTorch to avoid division by zero. A simple derivation demonstrates that BN can absorb scaling factors, as shown in Equation (2); a numeric check is sketched below. This holds during training when one scaling factor is applied to each output channel under the Conv-BN-ReLU structure.

$$x' = \mathrm{Normalize}(x) = \frac{x - \bar{x}}{\sqrt{\sigma^2 + \epsilon}}, \qquad y = \mathrm{Affine}(x') = \gamma x' + \beta \quad (1)$$

$$y_\alpha = \gamma \, \frac{\alpha x - \alpha \bar{x}}{\sqrt{\alpha^2 \sigma^2 + \epsilon}} + \beta \approx \gamma \, \frac{x - \bar{x}}{\sqrt{\sigma^2 + \epsilon}} + \beta = y \quad (2)$$

1 We use the code of DoReFa-Net to implement XNor-BWN, which is the same as the original implementation: https://github.com/tensorpack/tensorpack/tree/master/examples/DoReFa-Net
2 LQ-BWN is the 1-bit weight, 32-bit activation version of LQ-Nets: https://github.com/microsoft/LQ-Nets
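The absorption in Equation (2) can be checked numerically; below is a minimal sketch with synthetic channel data (the value of α here is arbitrary).

import torch

torch.manual_seed(0)
x = torch.randn(4096)      # synthetic pre-activations of one output channel
alpha, eps = 2.0, 5e-4     # a per-channel scaling factor and the epsilon of Eq. (1)

def normalize(v):
    return (v - v.mean()) / torch.sqrt(v.var(unbiased=False) + eps)

# scaling the channel by alpha barely changes the normalized output, per Eq. (2)
print((normalize(x) - normalize(alpha * x)).abs().max())  # tiny, on the order of 1e-3 or less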
Experimental Results: Second, we go directly to experimental results. As shown in Table 2 of Appendix B, we train different networks with and without scaling factors. The test accuracy on CIFAR-10 and the validation accuracy on ImageNet do not show a large difference between the two settings. We then fix the scaling factors of all layers to a certain value and magnify the learning rate according to the fixed scaling factors' magnitude. The performance does not change when fixing the scaling factors. Thus, we conclude that with a proper learning rate, scaling factors are not essential for training BWNs.

Compare learnable SF and γ in BN: LQ-BWN uses channel-wise scaling factors. From the experiments in Appendix C, we find that these channel-wise scaling factors have a high correlation with the γ of the BN following the corresponding binary Conv. This finding indicates that BN's γ can replace channel-wise SF to some extent.

Quantization Error Curve: Another purpose of scaling factors is to reduce the quantization error between full-precision weights and binary weights, according to a BNN survey (Qin et al., 2020). Using the experiments in Appendix D, we show that the quantization error is not actually reduced by scaling factors, but that weight decay helps with this reduction." }, { "heading": "2.4 WEIGHT NORM, WEIGHT SIGN, AND SUB-NETWORKS IN BWNS", "text": "We analyzed one essential element, scaling factors, in the previous section; the other essential element of BWNs is the weights' signs. In deterministic binarization methods, the full-precision weights' signs decide their binary weights' signs through a sign() function. In this section, we discuss the relationship between weight norm and weight sign, and how to find primary binary sub-networks in BWNs.

Weight Histogram: We visualize the full-precision weight distributions in different layers of different networks, as shown in Figure 1. Rather than a bi-modal distribution, they show a distribution centered around 0. This again proves that the actual distance, the so-called quantization error, is very large. There are also many weights close to zero that behave very unstably and change their signs under small perturbations. More experiments and visualizations are in Appendix E.

Flipping Weights' Signs: We flip the weights' signs at inference time according to the weights' full-precision norms, as shown in Figure 12 of Appendix G. In two experiments, we flip the weights with the largest norms and with the smallest norms, respectively. Even though the weights have the same norm after binarization, and the changed norm is the same for the same flipping percentage, there is still a very large gap between the two results. Flipping the weights with large full-precision magnitude causes a significant performance drop compared to flipping those close to zero. This reveals that the weights differ: some with small norm can tolerate sign flipping, while those with large norm cannot suffer sign changes, even though both kinds of weights have the same norm after binarization.

Tracing Large Weights: From the last experiment, we conclude that weights with large norm are vulnerable and important during inference; however, their function during training remains unclear. We therefore conduct two experiments tracing these large weights during training. We use "these large weights" to indicate the weights having larger magnitude/norm in the network that has already finished training.
One experiment traces these large weights' signs, to find when their signs become the same as those at the end of training. The other traces these large weights' indices, to find when they become the largest weights among all weights.

The results for VGG-7 are shown in Figure 3; the results for ResNet-20 (Figure 9) and ResNet-18 (Figure 10) are placed in Appendix F. We find that these large weights are mostly decided in the early training stage. The larger the final magnitude of a weight, the earlier its sign is decided and fixed. The same rule applies to magnitude: the weights that end up with large magnitudes already attain large magnitudes at the very early stage. Both curves follow a trend similar to that of the accuracy curve." }, { "heading": "2.5 PRIMARY BINARY SUB-NETWORKS IN BWNS", "text": "We find that there are weights with large norm that fix their signs in the early training stage. These weights are stable, yet vulnerable when their signs are inverted. We name these weights Primary Binary Sub-Networks. This idea is akin to the lottery ticket hypothesis (Frankle & Carbin, 2019), but the difference is that the weights of our primary binary sub-networks usually have fixed signs, and the rest of the BWN is not zeroed out as in pruned networks. The primary binary sub-networks have the same norm for each weight after binarization, but different importance. The lottery ticket work is based on full-precision network pruning and focuses on obtaining sparse networks via retraining, while ours is the meta idea that weights with larger norm are stable and sensitive to sign changes. We will show how we utilize this idea in the rest of the paper." }, { "heading": "2.6 BINARY-KERNEL DISTRIBUTION", "text": "Besides the centered distribution of full-precision weights in each layer, we find that there exists another distribution, that of the binary kernels in each layer. For a binary kernel of size 3×3, there are 2^9 = 512 possible kernels in total. For easier illustration, we use 0 to 511 to index these kernels, as shown in Figure 4 (a minimal indexing sketch is given at the end of this section). 3×3 kernels are widely used in common CNNs like VGG, ResNet, DenseNet (Huang et al., 2017), and MobileNet (Howard et al., 2017), except for the first Conv layer, which is usually not binarized. From Figure 4, we can find that certain binary kernels are favored across different layers and networks." }, { "heading": "3 QUANTIZED BINARY-KERNEL NETWORKS", "text": "In this section, we introduce Quantized Binary-Kernel Networks (QBNs). The previous sections yield several conclusions: 1. scaling factors are not essential to BWNs, which guides us not to concentrate on designing scaling factors, since good learning rates help in most cases; 2. weights with larger magnitude constitute the primary binary sub-networks in BWNs, and these large weights are stable but sensitive to sign changes, determined and fixed in the early training stage; 3. binary kernels are centered on a limited number of the most frequent binary kernels. All these conclusions lead us to propose a new compression algorithm that further compresses BWNs into a more structured and compact network; we name this algorithm Quantized Binary-Kernel Networks (QBNs). QBN essentially preserves the primary binary sub-networks to the greatest extent, while changing the signs of smaller weights and quantizing the less frequent kernels to the most frequent ones to save space."
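Since the 512 possible 3×3 binary kernels are indexed from 0 to 511 above, here is a minimal sketch of one such indexing; the exact bit order behind Figure 4 is not specified, so the order below is our arbitrary but consistent choice.

import numpy as np

def kernel_index(binary_kernel):
    # binary_kernel: a 3x3 array with entries in {-1, +1};
    # map +1 -> bit 1 and -1 -> bit 0, then read the 9 bits as an integer in [0, 511]
    bits = (np.asarray(binary_kernel).flatten() > 0).astype(np.int64)
    return int(bits @ (2 ** np.arange(9)))

print(kernel_index(np.ones((3, 3))))   # 511: the all +1 kernel
print(kernel_index(-np.ones((3, 3))))  # 0: the all -1 kernel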
}, { "heading": "3.1 ALGORITHM", "text": "Before training a QBN, we first train an ordinary VGG-7 XNor-BWN on Cifar-10 and extract its last Conv layer’s binary kernel distribution. This has been already done as shown in Figure.5. Then we sort these binary kernels according to their appearance frequency and select top 21, 22, ..., 28 frequent binary kernels. These kernels are called selected binary-kernels K0,1,...,28−1. In the rest of the paper, we use the selected binary kernels to indicate the kernels K0,1,...,28−1 in our algorithm. In our following experiments these selected binary kernels are extracted from one single VGG-7 BWN’s last Conv layer. After pre-processing these and obtaining K0,1,...,28−1, we start to train a QBN using Algorithm.1, which is written with python-style pseudocodes. We use the function where(A,B,C) from NumPy indicating that if the value satisfies A then equals to B otherwise equals to C.\nAlgorithm 1: QBN Parameters: Quantized kernel bit number p, selected kernels K0,1,...,2p−1, hyper-parameter\nthreshold ∆, weight input channel number I , output channel number O, scaling factors α = 0.05\nInput: W E =\nΣt=nt=1 |Wt| n\nfor w in W do if abs(w) > ∆E then\nw = sign(w) end\nend for i in range (I) do\nfor j in range (O) do for m in range (2p) do\nL2(m) = ||Wij −Km||2 end m∗ = argminm(L2(m)) Wij = αKm∗\nend end Return: W\nWe set scaling factors fixed to 0.05 when using default learning rate mentioned in experimental settings of Section.2.2. We use L2 norm to calculate the distance between the full-precision kernel Wij to the selected kernels Km, where the full-precision kernel will be replaced by the selected kernel whose distance to the full-precision kernel is the shortest one during forward." }, { "heading": "3.2 QBN EXPERIMENTS", "text": "We display our QBN experiments on in Table.1, where we use the same experiment settings mention in Section.2.2. Besides different networks and datasets are tested, we also use a different quantized bit on these networks to find how QBN can perform. When we use the quantized bit p < 9, we can use less than 9-bit number to represent the binary-kernel, this provides the compression ability of QBN. We use compressed ratio (CR) which is a number larger than 1 to show the ratio between the original BWNs and the compressed model’s parameters only including binarized layers. In this paper, we do not use 8-bit quantized binary kernels, which have a high computational cost and small compressed ratio." }, { "heading": "4 DISCUSSION ON QBN", "text": "In this section, we will discuss the experimental results of QBN and its potential usage, including model compression, kernel quantization strategies, the existence and transferability of the selected kernels, and other selection of binary-kernels." }, { "heading": "4.1 MODEL COMPRESSION", "text": "With the discovery that BWNs contain primary binary sub-networks, we can reduce the number of parameters to represent a binary-kernel by changing the small magnitude weights’ signs with bearable to the performance of BWNs. For VGG-7 on Cifar-10 and ResNet-18 on ImageNet, we can compress their parameters to an extremely small number by replacing the whole 512 types of 3×3 binary-kernel with fewer types of binary kernels from those 2k selected binary-kernels, and the compressed ratio can be higher than 5×. 
For ResNet-20 and ResNet-56, which are thinner and have small numbers of channels and parameters, the endurance for compression is low; the compressed ratio can reach 1.5× with a bearable accuracy drop (less than 3% on CIFAR-10). For more aggressive compression with very low-bit quantized binary kernels, the training stability of networks with few parameters, such as ResNet-20, drops due to their limited number of parameters. The experimental results are shown in Table 3 in Appendix H."
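To make Algorithm 1 concrete, here is a runnable NumPy sketch of the QBN quantization step; the vectorization over the channel loops and all variable names are ours, and in training this replacement would presumably be applied in the forward pass with STE gradients, following the BWN setup of Section 2.

import numpy as np

def qbn_quantize(W, selected, delta, alpha=0.05):
    # W: (O, I, 3, 3) full-precision conv weights
    # selected: (2**p, 3, 3) selected binary kernels K_0, ..., K_{2^p - 1}
    E = np.abs(W).mean()
    # binarize the primary binary sub-network (large-magnitude weights) first
    W = np.where(np.abs(W) > delta * E, np.sign(W), W)
    flat_w = W.reshape(-1, 9)                    # one row per 3x3 kernel
    flat_k = selected.reshape(-1, 9).astype(W.dtype)
    # squared L2 distance of every kernel to every selected kernel
    d2 = ((flat_w[:, None, :] - flat_k[None, :, :]) ** 2).sum(-1)
    nearest = d2.argmin(axis=1)                  # m* = argmin_m ||W_ij - K_m||_2
    return alpha * flat_k[nearest].reshape(W.shape)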
These experiments are all introduced in the Context as supplementary material to strength our main idea and make our contribution more convincing." }, { "heading": "B EXPERIMENTS ON TRAINING BWNS", "text": "In this appendix section, we display our experiments on training different networks using XNorBWN and LQ-BWN as shown in Table.2." }, { "heading": "C COMPARE LEARNABLE SF AND GAMMA IN BN", "text": "In original LQ-Net, they use an algorithm called ’Quantization Error Minimization’ to calculate channel-wise quantizers, which are learnable scaling factors for each channel of Conv layers in binary weight case. Similarly, there is γ in BatchNormalization layer, which also processes the preactivation values in a channel-wise manner. In LQ-BWN, we normalize both γ and scaling factors in such a channel-wise manner, and then plot the relation between these two values in Figure.6. As the figure shows, when the network goes deeper, two values behave highly correlated to each other, especially when the Conv layer is wide. Their high correlation between each other indicates the overlapping function of both.\n0.0 0.2 0.4 0.6 0.8 1.0 LQ-Net Scaling Factors\n0.0\n0.2\n0.4\n0.6\n0.8\n1.0\nBa tc\nhN or\nm G\nam m\na\nr=0.24\nVGG C1\n0.0 0.2 0.4 0.6 0.8 1.0 LQ-Net Scaling Factors\n0.0\n0.2\n0.4\n0.6\n0.8\n1.0\nBa tc\nhN or\nm G\nam m\na\nr=0.65\nVGG C3\n0.0 0.2 0.4 0.6 0.8 1.0 LQ-Net Scaling Factors\n0.0\n0.2\n0.4\n0.6\n0.8\n1.0\nBa tc\nhN or\nm G\nam m\na\nr=0.82\nVGG C5\n0.0 0.2 0.4 0.6 0.8 1.0 LQ-Net Scaling Factors\n0.0\n0.2\n0.4\n0.6\n0.8 1.0 Ba tc hN or m G am m a r=0.0013\nR-20 G1B2C1\n0.0 0.2 0.4 0.6 0.8 1.0 LQ-Net Scaling Factors\n0.0\n0.2\n0.4\n0.6\n0.8\n1.0\nBa tc\nhN or\nm G\nam m\na\nr=0.003\nR-20 G2B2C1\n0.0 0.2 0.4 0.6 0.8 1.0 LQ-Net Scaling Factors\n0.0\n0.2\n0.4\n0.6\n0.8\n1.0\nBa tc\nhN or\nm G\nam m\na\nr=0.076\nR-20 G3B2C1\n0.0 0.2 0.4 0.6 0.8 1.0 LQ-Net Scaling Factors\n0.0\n0.2\n0.4\n0.6\n0.8\n1.0\nBa tc\nhN or\nm G\nam m\na\nr=0.031\nR-18 G1B1C1\n0.0 0.2 0.4 0.6 0.8 1.0 LQ-Net Scaling Factors\n0.0\n0.2\n0.4\n0.6\n0.8\n1.0\nBa tc\nhN or\nm G\nam m\na\nr=0.072\nR-18 G2B1C1\n0.0 0.2 0.4 0.6 0.8 1.0 LQ-Net Scaling Factors\n0.0\n0.2\n0.4\n0.6\n0.8\n1.0\nBa tc\nhN or\nm G\nam m\na\nr=0.6\nR-18 G3B1C1\nFigure 6: The relation between the scaling factor of the LQ-BWN and the gamma in BatchNorm layer after the corresponding Conv layer channel by channel after normalizing their values. The X-axis is the scaling factor value and Y-axis is the gamma in the corresponding BatchNorm layer. The point scattered in the figure is a combination of two values after normalization. r in the legend indicates the correlation coefficient. Texts below the figures indicates the certain layer, VGG is VGG-7, R-20 is ResNet-20, R-18 is ResNet-18, C is Conv, G is Group, and B is Block. These abbreviations will also be used in the rest of the paper." }, { "heading": "D QUANTIZATION ERROR CURVE", "text": "In XNor-Net where XNor-BWN is first raised, the scaling factors are proposed to minimize the quantization error in a calculated deterministic way. In the BNN survey (Qin et al., 2020), the authors summarized several BWNs algorithms using ”Minimizing the Quantization Error” which has a common form as shown in Equation3 to design their methods. Thus we plot the quantization error curve of different networks as shown in Figure.7. We can find that in most case the quantization error between full-precision weights and binary weights is not minimized. 
Therefore, we are concerned that it may not be reasonable to use scaling factors to reduce the quantization error; going further, it may not even be necessary to reduce the quantization error.

J(b, α) = ||x − αb||^2,  α*, b* = argmin_{α,b} J(b, α)  (3)" }, { "heading": "E WEIGHT HISTOGRAM ON OTHER NETWORKS AND LAYERS", "text": "We visualize the full-precision weight histograms of different layers of ResNet-18 in Figure.8." }, { "heading": "F TRACING LARGE MAGNITUDE WEIGHTS", "text": "We trace the signs of the large-magnitude weights and the overlap of their norms with the positive weights for ResNet-20 in Figure.9 and for ResNet-18 in Figure.10. We also plot, for ResNet-18, the Hamming distance between the weights during training and the weights after training in Figure.11. The experimental contents are the same as those in the main text.
[Figure 8 consists of probability-density histograms of full-precision weight values for XNor-BWN R-18 (layers G0B1C2, G1B1C2, G2B1C2, G3B1C2 and G0B1C1, G1B1C1, G2B1C1, G3B1C1) and LQ-BWN R-18 (layers C0 and G3B1C2).]
Figure 8: Visualization of the full-precision weight histograms of ResNet-18. We choose the weights from the second Conv of the first Block of each group, and the first Conv layer Conv0.
[Figure 9 tracks, over 200 training epochs, the percentage of the top 0.01/0.05/0.1/0.2/0.3 large-magnitude weights that keep a positive sign, and the Hamming distance to the final weights, for XNor-BWN and LQ-BWN ResNet-20.]
Figure 9: Tracing of the weights' signs and the overlap of the weights' norms with positive weights for XNor-BWN and LQ-BWN ResNet-20, together with the Hamming distance between the weights during training and the weights after training." }, { "heading": "G FLIPPING AND PRE-FIXING LARGE MAGNITUDE WEIGHTS AND RETRAINING", "text": "We first flip 10% of the weights of a trained BWN, then fix its weights according to their magnitude and retrain it, as shown in Figure.12. The small learning rate used in retraining is 0.0002, the value applied during the ordinary 160th-to-200th-epoch training; the large learning rate reuses the original learning rate for training a BWN from scratch. We use three flipping strategies: flipping the largest-magnitude weights, random weights, and the smallest-magnitude weights." }, { "heading": "H MODEL COMPRESSION ADDITIONAL EXPERIMENTS", "text": "We conduct more experiments with different quantization bits (and, correspondingly, different compression ratios) on different networks and datasets, as shown in Table.3." }, { "heading": "I BINARY-KERNEL DISTRIBUTION IN EACH LAYER", "text": "We visualize the binary-kernel frequency of other layers and networks in Figure.13, summing the frequency of the top 2^p binary-kernels in each group of ResNet-18 and plotting it with a log-scale x-axis." },
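As a companion to the frequency statistics behind Figure.13 (and Appendices K and L), the following minimal sketch counts how often each 3×3 sign pattern occurs in a Conv layer and extracts the top 2^p of them. Encoding each binary kernel as a 9-bit integer is our own illustrative choice, not a detail stated in the paper.

```python
import torch
from collections import Counter

def kernel_frequencies(conv_weight):
    """Count the sign patterns of all 3x3 kernels in a Conv weight tensor
    of shape [out_ch, in_ch, 3, 3].  Each binary kernel is encoded as a
    9-bit integer (bit i = 1 if weight i is positive)."""
    signs = (conv_weight.reshape(-1, 9) > 0).long()
    powers = 2 ** torch.arange(9)
    codes = (signs * powers).sum(dim=1).tolist()
    return Counter(codes)

def top_kernels(counter, p):
    """Return the 2^p most frequent binary-kernel codes."""
    return [code for code, _ in counter.most_common(2 ** p)]

# toy usage on a random 'trained' layer
torch.manual_seed(0)
w = torch.randn(64, 32, 3, 3)
freq = kernel_frequencies(w)
print(top_kernels(freq, p=2))   # the 4 most frequent sign patterns
```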
{ "heading": "J TRAINING INSTABILITY ON NETWORKS WITH VERY FEW PARAMETERS", "text": "When training QBNs on ResNet-20 with low-bit quantization, the variance of the test accuracy is large, making training unstable. We report the standard deviation of the training accuracy over 5 runs on ResNet-20 in Table.4." }, { "heading": "K EXISTENCE AND TRANSFERABILITY OF THE SELECTED KERNELS", "text": "The selected binary-kernels do exist, and they transfer to other layers, networks, and datasets. We extract the percentage, over all kernels, of each layer's top 2^1, 2^2, ..., 2^7 frequent kernels and display them in Table.13. It shows that such an imbalanced distribution is very common among BWNs: a small number of binary-kernels account for a dominant percentage of all binary-kernels. This suggests that the QBN algorithm can adapt to a wide range of networks and datasets.
To prove the transferability of the selected binary-kernels, the experiments in Table.3 are all based on the same batch of selected kernels, extracted from the last Conv layer of a VGG-7 XNor-BWN without any additional tricks. The results on ResNet-20 and ResNet-56 on Cifar-10, and on ResNet-18 on ImageNet, with their residual structures and different layers, show that the selected kernels perform well. We also display how the selected binary-kernels used throughout our experiments (from the last Conv of VGG-7) appear in different layers of other networks, plotting Figure.14 to compare their frequencies.
We also apply QBNs to fine-tune pre-trained BWNs: when using relatively more quantization bits, the network can usually reach comparable performance without training from scratch. In our QBN algorithm the computational cost increases with the number of quantization bits, so directly fine-tuning a pre-trained BWN is attractive. The experiments in Appendix.M show that we can obtain a uniform 7-bit QBN in one epoch and a uniform 6-bit QBN in two epochs. This provides a more efficient way to train high-bit QBNs." }, { "heading": "L THE DEGREE OF AGGREGATION OF BINARY-KERNELS", "text": "We compute what percentage of a target network's top frequent binary-kernels is covered by our selected binary-kernels from the last Conv of XNor-BWN VGG-7. Using the first panel on the top left of Figure.14 as an example: with 1-bit quantization on the binary-kernels of the first 4 layers of XNor-BWN ResNet-18 (the first group of ResNet-18), our selected kernels cover 100% of XNor-BWN ResNet-18's top 2^1 frequent kernels. The percentage is weighted by the number of occurrences of each kernel in the layer, and 100% is the limit attained if we directly chose the most frequent kernels of this layer/group as the selected kernels. Even for the lowest percentage, which appears when using 5 bits, more than 80% of the kernels among the first group's top 2^5 frequent kernels can still be represented directly by our selected kernels." }, { "heading": "M FINE-TUNING WITH HIGHER QUANTIZATION BITS (6-BIT, 7-BIT)", "text": "Table.5 reports fine-tuning ResNet-18 on ImageNet for one epoch with relatively higher quantization bits. The pretrained model is the normal ResNet-18 BWN at the 90th epoch, and we list the number of epochs the fine-tuning requires. For the lower bits, the fine-tuned performance cannot reach that of training from scratch." },
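Building on the frequency-counting sketch above, the coverage statistic of Appendix.L could be computed as follows; the helper name and the assumption that kernels are compared by their 9-bit codes are ours.

```python
from collections import Counter

def coverage(target_freq: Counter, selected_codes: set, p: int) -> float:
    """Occurrence-weighted percentage of the target layer's top 2^p
    frequent binary-kernels that also appear among `selected_codes`.

    `target_freq` is a Counter of kernel codes (see the previous sketch);
    `selected_codes` is the set of codes of the selected kernels."""
    top = target_freq.most_common(2 ** p)
    covered = sum(n for code, n in top if code in selected_codes)
    total = sum(n for _, n in top)
    return 100.0 * covered / total

# e.g. coverage(freq_resnet18_group1, set(top_kernels(freq_vgg7_last, p=5)), p=5)
# where both frequency counters are hypothetical outputs of kernel_frequencies()
```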
}, { "heading": "N OTHER SELECTION OF BINARY-KERNELS", "text": "Here we list three reasons to illustrate why we use such a way to collect the binary-kernels. 1. If we regard QBNs as a clustering method given cluster numbers, these kernels with high appearance frequency are most likely the cluster centers. 2. We visualize these top frequent kernels as shown\nin Fig.15, and they appear similar to conventional spatial filters, including ”Moving Average Filter”, ”Point Detector”, ”Line Detector” and so on. 3. We use the less frequent binary-kernels as the selected kernels to test if our selection based on frequency is a good choice. We find that using those less frequent kernels, an accuracy drop can be observed in different experiments. Those less frequent kernels are used by inverting the order of top 128 frequent kernels, eg. for 22 kernels, they are 125-th, 126-th, 127-th, and 128-th frequent kernels. Given the experiment results in Table.6, we find that when using very low quantization bit, specifically less than or equal to 3, will significantly hurt the network when using those least frequent kernels. When the quantization bit increases, the difference in performances reduce. Thus we suggest to use those most frequent kernels if training a very low-bit QBN, or finetuning a pre-trained BWN. In other cases like training from scratch with quantization bit more than 3, the frequency of the selected kernels is not a strict deterministic factor.\nO INFLUENCE OF HYPER-PARAMETER ∆\nTo test the influence of the Hyper-Parameter ∆ in our algorithm, we use VGG-7 with all 5-bit quantization kernels and ResNet-56 with FP-9-6-2. We do not use ResNet-20 due to its instability training which we have discussed. The result is shown in Figure.16." }, { "heading": "P THE SELECTED KERNELS FROM OTHER CONV LAYERS", "text": "In previous experiments, our selected binary kernelsKm use the statistic information extracted from one single VGG-7’s last Conv layer and regard them as constant kernels. We further use different strategies to demonstrate that the selected binary kernels have a strong generalization ability to apply on other networks. In the following experiments, we use the selected binary kernels from the last\nlayer of ResNet-20, VGG-7’s 3rd and 5th Conv layers, and from the last layer of ResNet-18. We use these kernels from different strategy to other different networks. The result is shown in Table.7." } ]
2020
WEIGHTS HAVING STABLE SIGNS ARE IMPORTANT: FINDING PRIMARY SUBNETWORKS
SP:c343c46cd2f33ae06be87cf9b44fbdbd59f335cd
[ "**Overview**: The paper presents a simple regularizer term that aims to force a GAN to generate samples following a uniform distribution over different classes. The regularizer depends on a classifier that works well on an imbalanced or long-tailed dataset. The paper presents experiments on CIFAR-10 and LSUN that were synthetically long-tailed or imbalanced. The results show that the proposed term generates samples that follow a more uniform distribution over classes.", "The paper proposes a regularizer to force an unconditional GAN generator to produce samples that follow a uniform class distribution. To provide feedback to the generator about the class distribution over the generated images, the proposed method utilizes a pretrained classifier on the same (imbalanced) training dataset. Motivated by the exponential forgetting of earlier tasks in neural networks [1], the regularization term encourages the generator to increase the proportion of samples of an infrequent class after a certain number of iterations and vice versa. Empirical studies are performed to show the effectiveness of the regularization: 1) the paper shows that the proposed method enables generating samples with a uniform class distribution with a GAN trained on a dataset with a long-tailed class distribution and (2) that the method benefits in generating universal adversarial perturbations (UAPs) in the data-free scenario. " ]
Generative Adversarial Networks (GANs) have swiftly evolved to imitate increasingly complex image distributions. However, the majority of these developments focus on the performance of GANs on balanced datasets. We find that the existing GANs and their training regimes, which work well on balanced datasets, fail to be effective in the case of imbalanced (i.e. long-tailed) datasets. In this work we introduce a novel and theoretically motivated Class Balancing regularizer for training GANs. Our regularizer makes use of the knowledge from a pre-trained classifier to ensure balanced learning of all the classes in the dataset. This is achieved via modelling the effective class frequency based on the exponential forgetting observed in neural networks, and encouraging the GAN to focus on underrepresented classes. We demonstrate the utility of our contribution in two diverse scenarios: (i) learning representations for long-tailed distributions, where we achieve better performance than existing approaches, and (ii) generation of Universal Adversarial Perturbations (UAPs) in the data-free scenario for large-scale datasets, where we bridge the gap between data-driven and data-free approaches for crafting UAPs.
[ { "affiliations": [], "name": "BALANCING GAN" } ]
[ { "authors": [ "Andrew Brock", "Jeff Donahue", "Karen Simonyan" ], "title": "Large scale gan training for high fidelity natural image synthesis", "venue": "arXiv preprint arXiv:1809.11096,", "year": 2018 }, { "authors": [ "Kaidi Cao", "Colin Wei", "Adrien Gaidon", "Nikos Arechiga", "Tengyu Ma" ], "title": "Learning imbalanced datasets with label-distribution-aware margin loss", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Yin Cui", "Menglin Jia", "Tsung-Yi Lin", "Yang Song", "Serge Belongie" ], "title": "Class-balanced loss based on effective number of samples", "venue": null, "year": 2019 }, { "authors": [ "Harm De Vries", "Florian Strub", "Jérémie Mary", "Hugo Larochelle", "Olivier Pietquin", "Aaron C Courville" ], "title": "Modulating early visual processing by language", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "J. Deng", "W. Dong", "R. Socher", "L.-J. Li", "K. Li", "L. Fei-Fei" ], "title": "ImageNet: A Large-Scale Hierarchical Image Database", "venue": "In CVPR09,", "year": 2009 }, { "authors": [ "Robert Geirhos", "Patricia Rubisch", "Claudio Michaelis", "Matthias Bethge", "Felix A Wichmann", "Wieland Brendel" ], "title": "Imagenet-trained cnns are biased towards texture; increasing shape bias improves accuracy and robustness", "venue": "arXiv preprint arXiv:1811.12231,", "year": 2018 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Ishaan Gulrajani", "Faruk Ahmed", "Martin Arjovsky", "Vincent Dumoulin", "Aaron C Courville" ], "title": "Improved training of wasserstein gans", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Martin Heusel", "Hubert Ramsauer", "Thomas Unterthiner", "Bernhard Nessler", "Sepp Hochreiter" ], "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "arXiv preprint arXiv:1502.03167,", "year": 2015 }, { "authors": [ "Alexia Jolicoeur-Martineau" ], "title": "The relativistic discriminator: a key element missing from standard gan", "venue": "arXiv preprint arXiv:1807.00734,", "year": 2018 }, { "authors": [ "Alexia Jolicoeur-Martineau" ], "title": "On relativistic f-divergences", "venue": "arXiv preprint arXiv:1901.02474,", "year": 2019 }, { "authors": [ "James Kirkpatrick", "Razvan Pascanu", "Neil Rabinowitz", "Joel Veness", "Guillaume Desjardins", "Andrei A Rusu", "Kieran Milan", "John Quan", "Tiago Ramalho", "Agnieszka Grabska-Barwinska" ], "title": "Overcoming catastrophic forgetting in neural networks", "venue": "Proceedings of the national academy of sciences,", "year": 2017 }, { "authors": [ "Alexander Kolesnikov", "Lucas Beyer", "Xiaohua Zhai", "Joan Puigcerver", "Jessica Yung", "Sylvain Gelly", "Neil Houlsby" ], "title": "Large 
scale learning of general visual representations for transfer", "venue": null, "year": 1912 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Karol Kurach", "Mario Lučić", "Xiaohua Zhai", "Marcin Michalski", "Sylvain Gelly" ], "title": "A large-scale study on regularization and normalization in gans", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Tsung-Yi Lin", "Michael Maire", "Serge Belongie", "James Hays", "Pietro Perona", "Deva Ramanan", "Piotr Dollár", "C Lawrence Zitnick" ], "title": "Microsoft coco: Common objects in context", "venue": "In European conference on computer vision,", "year": 2014 }, { "authors": [ "Hong Liu", "Rongrong Ji", "Jie Li", "Baochang Zhang", "Yue Gao", "Yongjian Wu", "Feiyue Huang" ], "title": "Universal adversarial perturbation via prior driven uncertainty approximation", "venue": "In Proceedings of the IEEE International Conference on Computer Vision, pp. 2941–2949,", "year": 2019 }, { "authors": [ "Ziwei Liu", "Zhongqi Miao", "Xiaohang Zhan", "Jiayun Wang", "Boqing Gong", "Stella X. Yu" ], "title": "Largescale long-tailed recognition in an open world", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2019b", "year": 2019 }, { "authors": [ "Mario Lucic", "Michael Tschannen", "Marvin Ritter", "Xiaohua Zhai", "Olivier Bachem", "Sylvain Gelly" ], "title": "High-fidelity image generation with fewer labels", "venue": null, "year": 1903 }, { "authors": [ "Mehdi Mirza", "Simon Osindero" ], "title": "Conditional generative adversarial nets", "venue": null, "year": 2014 }, { "authors": [ "Takeru Miyato", "Masanori Koyama" ], "title": "cGANs with projection discriminator", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Takeru Miyato", "Toshiki Kataoka", "Masanori Koyama", "Yuichi Yoshida" ], "title": "Spectral normalization for generative adversarial networks", "venue": "arXiv preprint arXiv:1802.05957,", "year": 2018 }, { "authors": [ "Seyed-Mohsen Moosavi-Dezfooli", "Alhussein Fawzi", "Omar Fawzi", "Pascal Frossard" ], "title": "Universal adversarial perturbations", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "K.R. Mopuri", "A. Ganeshan", "R. Venkatesh Babu" ], "title": "Generalizable data-free objective for crafting universal adversarial perturbations", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2019 }, { "authors": [ "Konda Reddy Mopuri", "Utsav Garg", "R. Venkatesh Babu" ], "title": "Fast feature fool: A data independent approach to universal adversarial perturbations", "venue": "In Proceedings of the British Machine Vision Conference (BMVC),", "year": 2017 }, { "authors": [ "Konda Reddy Mopuri", "Utkarsh Ojha", "Utsav Garg", "R. Venkatesh Babu" ], "title": "Nag: Network for adversary generation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Konda Reddy Mopuri", "Phani Krishna Uppala", "R. 
Venkatesh Babu" ], "title": "Ask, acquire, and attack: Data-free uap generation using class impressions", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV), September 2018b", "year": 2018 }, { "authors": [ "Sankha Subhra Mullick", "Shounak Datta", "Swagatam Das" ], "title": "Generative adversarial minority oversampling", "venue": "In The IEEE International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Augustus Odena", "Christopher Olah", "Jonathon Shlens" ], "title": "Conditional image synthesis with auxiliary classifier gans", "venue": "In Proceedings of the 34th International Conference on Machine LearningVolume", "year": 2017 }, { "authors": [ "Omid Poursaeed", "Isay Katsman", "Bicheng Gao", "Serge Belongie" ], "title": "Generative adversarial perturbations", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Alec Radford", "Luke Metz", "Soumith Chintala" ], "title": "Unsupervised representation learning with deep convolutional generative adversarial networks", "venue": "arXiv preprint arXiv:1511.06434,", "year": 2015 }, { "authors": [ "Suman Ravuri", "Oriol Vinyals" ], "title": "Classification accuracy score for conditional generative models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Tim Salimans", "Ian Goodfellow", "Wojciech Zaremba", "Vicki Cheung", "Alec Radford", "Xi Chen" ], "title": "Improved techniques for training gans", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Shibani Santurkar", "Ludwig Schmidt", "Aleksander Madry" ], "title": "A classification-based study of covariate shift in gan distributions", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Akash Srivastava", "Lazar Valkov", "Chris Russell", "Michael U Gutmann", "Charles Sutton" ], "title": "Veegan: Reducing mode collapse in gans using implicit variational learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "arXiv preprint arXiv:1312.6199,", "year": 2013 }, { "authors": [ "Yu-Xiong Wang", "Deva Ramanan", "Martial Hebert" ], "title": "Learning to model the tail", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Dongxian Wu", "Yisen Wang", "Shu-Tao Xia", "James Bailey", "Xingjun Ma" ], "title": "Skip connections matter: On the transferability of adversarial examples generated with resnets", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Yan Wu", "Jeff Donahue", "David Balduzzi", "Karen Simonyan", "Timothy Lillicrap" ], "title": "Logan: Latent optimisation for generative adversarial networks", "venue": "arXiv preprint arXiv:1912.00953,", "year": 2019 }, { "authors": [ "Fisher Yu", "Yinda Zhang", "Shuran Song", "Ari Seff", "Jianxiong Xiao" ], "title": "Lsun: Construction of a large-scale image dataset using deep learning with humans", "venue": "in the loop. 
CoRR,", "year": 2015 }, { "authors": [ "Ning Yu", "Ke Li", "Peng Zhou", "Jitendra Malik", "Larry Davis", "Mario Fritz" ], "title": "Inclusive gan: Improving data and minority coverage in generative models", "venue": "arXiv preprint arXiv:2004.03355,", "year": 2020 }, { "authors": [ "Chaoning Zhang", "Philipp Benz", "Tooba Imtiaz", "In So Kweon" ], "title": "Understanding adversarial examples from the mutual influence of images and perturbations", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Han Zhang", "Zizhao Zhang", "Augustus Odena", "Honglak Lee" ], "title": "Consistency regularization for generative adversarial networks", "venue": "arXiv preprint arXiv:1910.12027,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Image Generation witnessed unprecedented success in recent years following the invention of Generative Adversarial Networks (GANs) by Goodfellow et al. (2014). GANs have improved significantly over time with the introduction of better architectures (Gulrajani et al., 2017; Radford et al., 2015), formulation of superior objective functions (Jolicoeur-Martineau, 2018; Arjovsky et al., 2017), and regularization techniques (Miyato et al., 2018). An important breakthrough for GANs has been the ability to effectively use the information of class conditioning for synthesizing images (Mirza & Osindero, 2014; Miyato & Koyama, 2018). Conditional GANs have been shown to scale to large datasets such as ImageNet (Deng et al., 2009) with 1000 classes (Miyato & Koyama, 2018).\nOne of the major issues with unconditional GANs has been their inability to produce balanced distributions over all the classes present in the dataset. This is seen as problem of missing modes in the generated distribution. A version of the missing modes problem, known as the ‘covariate shift’ problem was studied by Santurkar et al. (2018). One possible reason may be the absence of knowledge about the class distribution P (Y |X)1 of the generated samples during training. Conditional GANs on the other hand, do not suffer from this issue since the class label Y is supplied to the GAN during training. However, it has been recently found by Ravuri & Vinyals (2019) that despite being able to do well on metrics such as Inception Score (IS) (Salimans et al. (2016)) and Frèchet Inception Distance (FID) (Heusel et al., 2017), the samples generated from the state-of-the-art conditional GANs lack diversity in comparison to the underlying training datasets. Further, we observed that although conditional GANs work well in balanced case, they suffer performance degradation in the imbalanced case.\nIn order to address these shortcomings, we propose an orthogonal method (with respect to label conditioning) to induce the information about the class distribution P (Y |X) of generated samples in the GAN framework using a pre-trained classifier. We achieve this by tracking the class distribution of samples produced by the GAN using a pre-trained classifier. The regularizer utilizes the class distribution to penalize excessive generation of samples from the majority classes, thus enforcing\n1Here Y represents labels and X represents data.\nthe GAN to generate samples from minority classes. Our regularizer involves a novel method of modelling the forgetting of samples by GANs, based on the exponential forgetting observed in neural networks (Kirkpatrick et al. (2017)). We infer the implications of our regularizer by a theoretical bound and empirically verify the same.\nWe conduct empirical analysis of the proposed class balancing regularizer in two diverse and challenging scenarios:\n(i) Training GANs for image generation on long-tailed datasets: Generally, even in the long-tailed distribution tasks, the test set is balanced despite the imbalance in the training set. This is because it is important to develop Machine Learning systems that generalize well across all the support regions of the data distribution, avoiding undesired over-fitting to the majority (or head) classes. 
Hence, it is pertinent to train GANs that can faithfully represent all classes.

(ii) Transferring the knowledge from a learnt classifier (P (Y |Xt)) to a GAN trained on an arbitrary prior distribution P (Xp): This is a specific situation where samples from the target distribution Xt are unavailable; instead, discriminative feature knowledge is indirectly available in the form of a trained classifier (P (Y |Xt)). This is a perfect fit for crafting input-agnostic (universal) adversarial perturbations in the data-free scenario. We show that the proposed regularizer enables the generated samples not only to extract information about the target data with a trained classifier in the loop, but also to represent its support to a greater extent.

In summary, our contributions can be listed as follows:

• We propose a 'class-balancing' regularizer that makes use of the statistics (P (Y |X)) of generated samples to promote uniformity while sampling from an unconditional GAN. The effect of our regularizer is shown both theoretically (Section 3) and empirically (Section 4). • We show that our regularizer enables GANs to learn uniformly across classes even when the training distribution is long-tailed. We observe gains in FID and in the accuracy of a classifier trained on the generated samples. • We also show that by combining a pre-trained classifier (i.e. P (Y |Xt)) trained on a target dataset Xt with an arbitrary distribution P (Xp), our framework is capable of synthesizing novel samples related to the target dataset. We show that UAPs created on such novel samples generalize to real target data and hence lead to an effective data-free attack. This application is novel to our framework and cannot be realized by conditional GANs." }, { "heading": "2 BACKGROUND", "text": "" }, { "heading": "2.1 GENERATIVE ADVERSARIAL NETWORKS (GANS)", "text": "Generative Adversarial Networks (GANs) are formulated as a two-player game in which the discriminator D tries to classify images into two classes, real and fake, while the generator G tries to generate images (by transforming a noise vector z ∼ Pz) which fool the discriminator D into classifying them as real. The game can be formulated by the following objective:

min_G max_D E_{x∼Pr}[log(D(x))] + E_{z∼Pz}[log(1 − D(G(z)))]  (1)

The exact optimization for training D is computationally prohibitive in large networks, and the GAN is instead trained by alternating minimization of loss functions. Multiple loss functions have been proposed for stabilizing GAN training; in our work we use the relativistic loss function (Jolicoeur-Martineau, 2018), which is formulated as:

L_D^rel = −E_{(x,z)∼(Pr,Pz)}[log(σ(D(x) − D(G(z))))]  (2)

L_G^rel = −E_{(x,z)∼(Pr,Pz)}[log(σ(D(G(z)) − D(x)))]  (3)

This unconditional GAN formulation does not have any class conditioning and produces different numbers of samples from different classes (Santurkar et al., 2018). In other words, the generated distribution is not balanced (uniform) across the classes." },
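For reference, the relativistic losses of Equations (2) and (3) take only a few lines in PyTorch. The sketch below is a minimal illustration assuming `d_real` and `d_fake` are the raw (pre-sigmoid) discriminator outputs D(x) and D(G(z)) for a batch.

```python
import torch
import torch.nn.functional as F

def relativistic_d_loss(d_real, d_fake):
    # Eq. (2): -log(sigmoid(D(x) - D(G(z)))), averaged over the batch
    return F.softplus(-(d_real - d_fake)).mean()

def relativistic_g_loss(d_real, d_fake):
    # Eq. (3): -log(sigmoid(D(G(z)) - D(x))), averaged over the batch
    return F.softplus(-(d_fake - d_real)).mean()

# softplus(-t) equals -log(sigmoid(t)) and is numerically stabler than
# computing log(sigmoid(.)) directly.
```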
{ "heading": "2.2 CONDITIONAL GAN", "text": "The conditional GAN (Mirza & Osindero, 2014) generates images associated with an input label y using the following objective:

min_G max_D E_{x∼Pr}[log(D(x|y))] + E_{z∼Pz}[log(1 − D(G(z|y)))]  (4)

The Auxiliary Classifier GAN (ACGAN) (Odena et al., 2017) uses an auxiliary classifier for classification along with the normal discriminator to enforce high-confidence samples from the conditioned class y, whereas cGAN with projection (Miyato & Koyama, 2018) uses Conditional BatchNorm (De Vries et al., 2017) in the generator and a projection step in the discriminator to provide class information to the GAN. We refer to the latter method as cGAN in the subsequent sections.

Possible issue with conditional GANs in the long-tailed setting: The objective in Eq. (4) can be seen as learning a different G(z|y) and D(x|y) for each of the K classes. In this case the tail classes can suffer from poor generalization, as they have very few samples. In practice, parameters are shared among the different class generators, but class-specific parameters are also present in the form of Conditional BatchNorm. We find that the performance of conditional GANs degrades more than that of unconditional GANs in the long-tailed scenario (Section 4)." }, { "heading": "3 METHOD", "text": "In our method we introduce a pretrained classifier (C) to provide feedback to the generator about the label distribution P (Y ) over the generated images. The proposed regularizer is added to the generator loss and trained using backpropagation. We first describe how the class statistics are modelled in Section 3.1; the exact formulation of the regularizer and its theoretical properties are described in Section 3.2. An overview of our method is presented in Figure (1a)." }, { "heading": "3.1 CLASS STATISTICS FOR GAN", "text": "A GAN is a dynamic system in which the generator G has to continuously adapt itself so that it is able to fool the discriminator D. During training, the discriminator D updates itself, causing the objective for the generator G to change as well. This change in objective can be seen as the generator G learning a sequence of different tasks. In this context, we draw motivation from the seminal work on catastrophic forgetting in neural networks (Kirkpatrick et al., 2017), which shows that a neural network trained using SGD suffers from exponential forgetting of earlier tasks when trained on a new task. Based on this, we define the effective class frequency N̂_k^t of class k at cycle t as:

N̂_k^t = (1 − α) N̂_k^{t−1} + c_k^{t−1}  (5)

Here c_k^{t−1} is the number of samples of class k produced by the GAN in cycle (t − 1), where the class of a sample is determined by the pretrained classifier C. Although D is updated continuously, the update is slow and requires some iterations to change the form of D; hence we update the statistics after a certain number of iterations, which compose a cycle. The exponential forgetting factor α is set to 0.5 in all our experiments. We normalize the effective class frequency N̂_k^t to obtain the discrete effective class distribution:

N_k^t = N̂_k^t / Σ_k N̂_k^t  (6)" }, { "heading": "3.2 REGULARIZER FORMULATION", "text": "The regularizer objective is defined as the maximization of the term (L_reg) below:

max_{p̂} Σ_k p̂_k log(p̂_k) / N_k^t  (7)

where p̂ = (1/n) Σ_{i=1}^{n} C(G(z_i)). In other words, p̂ is the average softmax vector (obtained from the classifier C) over a batch of n samples, p̂_k is its kth component, corresponding to class k, and z_i is a random noise vector sampled from Pz. If the classifier C recognizes the samples confidently, with probability ≈ 1, then p̂_k can be seen as an approximation of the ratio of the number of samples belonging to class k to the total number of samples n in the batch. The N_k^t in the regularizer is obtained through the update rule of Section 3.1 and is a constant during backpropagation. We want to emphasize that the classifier C is not required to be trained on the same data as the GAN; instead it can be trained in other ways, such as semi-supervised learning, few-shot learning, etc. For instance, in Section 4.2 we show that a classifier trained in a semi-supervised scenario also enables the GAN to produce a balanced distribution. Hence our approach does not specifically need labelled data, in contrast to conditional GANs, which require a label for each image during training.
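A minimal sketch of the class-statistics update of Equations (5) and (6) is given below. The class and method names, and the choice to initialize the effective frequencies to ones, are our own assumptions; the per-cycle counts c_k are obtained by arg-max over the pre-trained classifier's outputs, as described above.

```python
import torch

class ClassStatistics:
    """Tracks the effective class distribution N_k^t of Eqs. (5)-(6)."""

    def __init__(self, num_classes, alpha=0.5):
        self.alpha = alpha
        self.n_hat = torch.ones(num_classes)    # effective class frequency (init is our choice)
        self.counts = torch.zeros(num_classes)  # samples seen in the current cycle

    def observe(self, classifier_probs):
        # accumulate c_k for the current cycle from classifier outputs
        labels = classifier_probs.argmax(dim=1)
        self.counts += torch.bincount(labels, minlength=len(self.counts)).float()

    def end_cycle(self):
        # Eq. (5): exponential forgetting of earlier cycles
        self.n_hat = (1 - self.alpha) * self.n_hat + self.counts
        self.counts.zero_()
        # Eq. (6): normalized effective class distribution N_k^t
        return self.n_hat / self.n_hat.sum()
```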
Proposition: The maximization of the proposed objective in (7) leads to the following bound on p̂_k:

p̂_k ≤ e^{−K(log(K)−1) · N_k^t / Σ_k N_k^t − 1}  (8)

where K is the number of distinct class labels produced by the classifier C. Please refer to Appendix Section A.1 for the proof.

Implications of the proposition: The bound on p̂_k decreases exponentially with the fraction of effective class frequency N_k^t / Σ_k N_k^t of class k. In the case of a balanced generated distribution, p̂_k = 1/K, which leads to the exponential average N_k^t = 1/K. Hence, given sufficient iterations, the p̂_k value will attain the upper bound, which signifies the tightness of the bound. To demonstrate the effect of the regularizer empirically, we construct two extreme-case examples based on the nature of the bound:

• If N_k^t ≫ N_i^t, ∀i ≠ k, then the bound on p̂_k approaches e^{−K(log(K)−1)−1}. Hence the network is expected to decrease the proportion of class-k samples. • If N_k^t ≪ N_i^t, ∀i ≠ k, then the bound on p̂_k is e^{−1}. Hence the network is expected to increase the proportion of class-k samples.

We verified the two extreme cases above by training an SNDCGAN (Miyato et al., 2018) (a DCGAN with spectral normalization) on CIFAR-10 and fixing N̂_k^t (the unnormalized version of N_k^t) across time steps, which we denote by Nk. We then initialize Nk to a very large value and to a very small value. The results in Figure (1b) show that the GAN increases the proportion of samples of class k in the case of a low Nk and decreases the proportion in the case of a large Nk. This shows the balancing behaviour of the proposed regularizer." }, { "heading": "3.3 COMBINING THE REGULARIZER AND GAN OBJECTIVE", "text": "The regularizer is combined with the generator loss in the following way:

Lg = −E_{(x,z)∼(Pr,Pz)}[log(σ(D(G(z)) − D(x)))] − λ L_reg  (9)

It has recently been shown (Jolicoeur-Martineau, 2019) that the first term of the loss leads to minimization of D_f(Pg, Pr), i.e., of a divergence between the real and generated data distributions, while the regularizer term ensures that the distribution of classes across generated samples is uniform. The combined objective provides insight into the working of the framework: the first term ensures that the generated images fall within the image distribution, and the second term ensures that the distribution of classes is uniform. As Pr comprises diverse samples from the majority classes, the first term ensures that Pg is similarly diverse; the second term ensures that the discriminative properties of all classes are present uniformly in the generated distribution, so that the minority classes benefit from the diversity within the majority classes. This is analogous to approaches that transfer knowledge from majority to minority classes for long-tailed classifier learning (Liu et al., 2019b; Wang et al., 2017)." },
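To make Sections 3.2 and 3.3 concrete, here is a sketch of the regularizer of Eq. (7) and the combined generator loss of Eq. (9). It assumes `classifier` is the frozen pre-trained classifier C, `N_t` is the effective class distribution from Section 3.1, and, following Appendix A.5.1, the regularizer is additionally normalized by the number of classes K; the epsilon for numerical stability is our addition.

```python
import torch
import torch.nn.functional as F

def class_balancing_reg(fake_images, classifier, N_t, eps=1e-8):
    """Eq. (7): sum_k p_hat_k * log(p_hat_k) / N_k^t (to be maximized).

    p_hat is the batch-averaged softmax of the frozen classifier C, so
    gradients flow into the generator through the fake images."""
    probs = torch.softmax(classifier(fake_images), dim=1)
    p_hat = probs.mean(dim=0)              # average softmax over the batch
    return (p_hat * torch.log(p_hat + eps) / N_t).sum()

def generator_loss(d_real, d_fake, fake_images, classifier, N_t, lam, K):
    # Eq. (9): relativistic term minus lambda * L_reg; the regularizer is
    # maximized, hence the minus sign, and is normalized by K (App. A.5.1).
    rel = F.softplus(-(d_fake - d_real)).mean()
    reg = class_balancing_reg(fake_images, classifier, N_t)
    return rel - (lam / K) * reg
```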
}, { "heading": "4 EXPERIMENTS", "text": "For evaluating the effectiveness of our balancing regularizer, we conduct two sets of experiments: (i) image generation from long-tailed distributions, and (ii) creating Universal Adversarial Perturbations in the data-free scenario. The goal of the first task is to generate high quality images across all classes and that of the second task is to craft UAPs when the attacker has no access (e.g. due to privacy) to the target data." }, { "heading": "4.1 IMAGE GENERATION FROM LONG-TAILED DISTRIBUTION", "text": "In this experiment we aim to learn a GAN over a long-tailed dataset, which are prevalent in the real world setting. An important aspect of this problem is that it requires to transfer the knowledge from majority classes to minority classes. Several works have focused on learning classifiers for longtailed distributions (Cao et al., 2019; Cui et al., 2019). Yet works focusing on Image Generation using long-tailed dataset are limited. Generative Minority Oversampling (GAMO) (Mullick et al., 2019) attempts to solve the problem by introducing a three player framework. We do not compare our results with GAMO as it is not trivial to extend GAMO to use schemes like Spectral Normalization, and ResGAN like architecture (Gulrajani et al., 2017) which impede fair comparison.\nDatasets: We performed our experiments on two datasets, CIFAR-10 and a subset of LSUN. The LSUN subset consists of 250k training images and 1.5k validation images. The LSUN subset is composed of 5 balanced classes; Santurkar et al. (2018) identify this subset to be a challenging case for GANs to generate uniform distribution of classes. The original CIFAR-10 dataset is composed of 50k training images and 10k validation images. We construct the long-tailed version of the imbalanced dataset by following the same procedure as Cao et al. (2019). Here, images are removed from training dataset to convert it to a long-tailed distribution while the validation set is kept unchanged. The imbalance ratio (ρ) determines the ratio of number of samples in most populated class to the least populated one: ρ = maxk{nk}/mink{nk}. More details can be found in Appendix A.2. Pre-Trained Classifier: An important component of our framework is the pre-trained classifier, a ResNet32 model trained using Deffered Reweighting (DRW) of loss (Cao et al., 2019) on longtailed versions of LSUN and CIFAR-10 datasets. Accuracy of the pre-trained classifiers and training details are present in Appendix A.3.\nGAN Architecture: We used the SNDCGAN architecture for experiments on CIFAR-10 with images of size of 32× 32 and SNResGAN (ResNet architecture with spectral normalization) structure\nfor experiments on LSUN dataset with images size of 64 × 64. For the conditional GAN baselines we conditioned the generator using Conditional BatchNorm. We compare our method to two widely used conditional GANs: ACGAN and cGAN. The other baseline we use is the unconditional GAN (SNDCGAN & SNResGAN) without our regularizer. All the GANs were trained with spectral normalization in the discriminator for stabilization (Miyato et al., 2018).\nTraining Setup: We train GANs with learning rate of 0.002 for both generator and discriminator. We used Adam optimizer with β1 = 0.5 and β2 = 0.999 for SNDCGAN and β1 = 0 and β2 = 0.999 for SNResGAN. We used a batch size of 256 and 1 discriminator update per generator update. 
As a sanity check, we use the FID values and visual inspection of images on the balanced dataset and verify the range of values from (Kurach et al., 2019). We update the statisticsN tk by update equation in Section 3.1 after every 2000 iterations. Further details are present in Appendix A.6.\nEvaluation We used the following evaluation metrics:\nKL Divergence from Uniform Distribution of labels: Labels for the generated samples are obtained by using the pre-trained classifier (trained on balanced data) as a proxy to annotator. Classification Accuracy (CA): We use the {(X,Y )} pairs from the GAN generated samples to train a ResNet32 classifier and validate it on real data. For the unconditional GANs the label Y is obtained from the classifier trained on long-tailed data. Note that this is similar to Classifier Accuracy Score (Ravuri & Vinyals, 2019). Frèchet Inception Distance (FID): It measures the 2-Wasserstein Distance on distributions obtained from Inception Network (Heusel et al., 2017). We use 10k samples from CIFAR-10 validation set and 10k (2k from each class) fixed random images from LSUN dataset for measuring FID.\nDiscussion of Results: We present our results in the following subsections: 1) Stability: In terms of stability we find that cGAN suffers from early collapse in case of high imbalance (ρ = 100) and stop improving under 10k iterations. Though we don’t claim about instability of cGANs in general, we emphasize that the same GAN which is stable in balanced scenario is unstable in case of long-tailed version of the same dataset. 2) Biased Distribution: Contrary to cGAN, we find that the distribution of classes generated by ACGAN, SNDCGAN and SNResGAN becomes imbalanced. The images obtained by sampling uniformly and labelling by annotator, suffers from a high KL divergence to the uniform distribution. This leads to some classes being almost absent from the distribution of generated samples as shown in Figure 2. In this case, in Table 1 we observe FID score just differs with small margin even if there is presence of large imbalance in class distribution. Our GAN produces class samples uniformly as is evident from the low KL Divergence. 3) Comparison with State-of-the-Art Methods: In this work we also find that classification accuracy is weakly correlated with FID score which is in agreement to (Ravuri & Vinyals, 2019). We achieve better classifier accuracy in all cases, better than cGAN which achieves state-of-the-art Classifier Accuracy Score (CAS). Our method shows minimal degradation in FID for each long-tailed case, in comparison to the corresponding balanced case. It is also able to achieve the best FID in 3 out of 4 long-tailed cases. Hence we expect that methods such as Consistency Regularization (Zhang et al., 2019), Latent Optimization (Wu et al., 2019b) etc. can be applied in conjunction with our method to further improve the quality of images. But in this work we specifically focused on techniques used to provide class information Y of an image X to the GAN. 
Dataset  | Method                 | ρ = 100: FID (↓) | ρ = 100: KLDiv (↓) | ρ = 10: FID (↓) | ρ = 10: KLDiv (↓)
CIFAR-10 | SNDCGAN                | 36.97 ± 0.20     | 0.31 ± 0.0         | 32.53 ± 0.06    | 0.14 ± 0.0
CIFAR-10 | Ours (Supervised)      | 32.93 ± 0.11     | 0.06 ± 0.0         | 30.48 ± 0.07    | 0.01 ± 0.0
CIFAR-10 | Ours (Semi-Supervised) | 33.32 ± 0.03     | 0.14 ± 0.0         | 30.37 ± 0.14    | 0.04 ± 0.0
LSUN     | SNResGAN               | 37.70 ± 0.10     | 0.68 ± 0.0         | 33.28 ± 0.02    | 0.29 ± 0.0
LSUN     | Ours (Supervised)      | 35.04 ± 0.19     | 0.06 ± 0.0         | 28.78 ± 0.01    | 0.01 ± 0.0
LSUN     | Ours (Semi-Supervised) | 35.95 ± 0.05     | 0.15 ± 0.0         | 30.96 ± 0.07    | 0.06 ± 0.0

Table 2: Comparison of results in the semi-supervised setting. The pretrained classifier used in our framework is fine-tuned with 0.1% of labelled data. The same classifier trained on the balanced dataset is used as the annotator for calculating the KL Divergence for all baselines.

Method   | FID (↓)      | KLDiv (↓)
SNResGAN | 30.05 ± 0.05 | 0.18 ± 0.0
ACGAN    | 69.90 ± 0.13 | 0.40 ± 0.0
cGAN     | 30.87 ± 0.06 | 0.09 ± 0.0
Ours     | 28.17 ± 0.06 | 0.11 ± 0.0

Table 3: Results on the long-tailed CIFAR-100 dataset with imbalance ratio = 10. FID is computed over 50k generated images; the KL divergence between the GAN's class distribution and the uniform distribution is given in the last column.

We also find that our method trained using SNResGAN performs similarly to the experiments of Table 1 on the long-tailed CIFAR-100 dataset: it achieves the best FID of 28.17 among all baselines and also achieves a balanced class distribution like cGAN. The results are summarized in Table 3, and the detailed experimental settings are present in Appendix A.6.1." }, { "heading": "4.2 SEMI-SUPERVISED CLASS-BALANCING GAN", "text": "In this section we show that the presence of a classifier in our framework is an advantage, as it allows classifiers trained through various sources to be used for providing feedback to the GAN. This feedback allows the GAN to generate class-balanced distributions even when the labels of the underlying long-tailed distribution are not known, which reduces the need for labelled data in our framework and shows its effectiveness over conditional GANs; it has been shown that the performance of conditional GANs deteriorates when they are used with limited labelled data (Lucic et al., 2019). We use a ResNet-50 model pretrained on ImageNet from BiT (Big Transfer) (Kolesnikov et al., 2019) and fine-tune it using 0.1% of the labelled data of the balanced training set (i.e. 5 images per class for CIFAR-10 and 50 images per class for the LSUN dataset); in all long-tailed cases this amount of data per class is present in the training set. A sketch of this fine-tuning step is given below.

We observe that with just 0.1% labelled data we obtain a significantly more balanced distribution, as seen by the low KL divergence in comparison to the unconditional GAN (Table 2), and we also achieve a better FID score than the unsupervised GAN. This application is unique to our framework, as conditional GANs explicitly require labels for the whole dataset during training. The experimental details are present in Appendix A.6." },
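A rough sketch of the few-label fine-tuning step referenced above: we substitute a torchvision ImageNet-pretrained ResNet-50 for the actual BiT checkpoint, so the weights, optimizer and schedule shown here are stand-in assumptions rather than the paper's recipe.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

def build_few_label_classifier(num_classes):
    model = resnet50(pretrained=True)   # stand-in for the BiT checkpoint
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

def finetune(model, labeled_loader, epochs=50, lr=1e-3):
    # labeled_loader holds the tiny labelled subset, e.g. 5 images/class
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in labeled_loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model
```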
{ "heading": "4.3 DATA-FREE UNIVERSAL ADVERSARIAL PERTURBATION", "text": "An adversarial perturbation (Szegedy et al., 2013) is a structured noise added to a benign data sample with the aim of confusing the machine learning model processing it, leading to an inaccurate inference. Universal Adversarial Perturbations (UAPs) (Moosavi-Dezfooli et al., 2017) are such noises that are input-agnostic and can fool the model when added to any data sample. These perturbations demonstrate transferability across different deep CNN models, posing a challenge to their deployability. Crafting UAPs requires the original training data on which the target deep model was trained; however, access to the dataset can be limited due to privacy restrictions. Attackers overcome this limitation via (i) formulating data-free objectives (e.g. Mopuri et al. (2017)), or (ii) using a proxy dataset composed of either arbitrary natural samples (e.g. Zhang et al. (2020)) or generated synthetic samples (e.g. Mopuri et al. (2018b)). GAN-inspired generative modelling of the UAPs for a given CNN classifier (Poursaeed et al. (2018); Mopuri et al. (2018a;b)) has been shown to capture these input-agnostic vulnerabilities. However, in the absence of the target training data, these models suffer from a lack of knowledge about the training distribution. Further, synthetic samples generated using existing methods (e.g. Mopuri et al. (2018b)) lack diversity and rely on an activation-maximization approach that is computationally expensive, since the optimization has to be performed separately for each batch of samples. To tackle this issue, we introduce an activation-maximization term into our GAN objective to combine the discriminative class knowledge (P (Y |Xt)) learnt by the classifier (C) trained on the target data (Xt) with an arbitrary prior distribution P (Xp). We present an overview of our approach in Figure 3.

In the absence of the target data on which the victim CNN classifier is trained, we first train a GAN on an arbitrary dataset. Through our regularizer, we encourage the GAN to generate samples from all the modes of the target data. This is achieved by incorporating the pre-trained CNN classifier into the optimization, as discussed in Section 3. Once the GAN is trained, we use the generated samples as a proxy for the target data when crafting UAPs. Since these samples represent the support of the target data modes, they bring in a useful prior about the target data, enabling the attacker to craft effective UAPs. In the UAP experiments we use the Comics dataset (Comics-Dataset) as the arbitrary prior P (Xp) and use the ResNet-18 (He et al., 2016) classifier trained on ImageNet (Deng et al., 2009) to impart class-specific features through the activation-maximization loss. However, the use of activation maximization (AM) alone with a GAN cannot encourage the GAN to learn the features of multiple target classes (i.e. modes); this issue is resolved by our regularizer, which encourages the GAN to learn the different modes. The final generator objective can then be written as:

Lg = L_G^rel − λ L_reg + L_AM  (10)

L_AM = E_{z∼Pz}[H(C(G(z)))]  (11)

where H(C(G(z))) is the entropy of the classifier output for the generated data. This application is unique to our framework and cannot be realized by other conditional GANs. We use a DCGAN architecture to generate 128 × 128 images using a prior distribution of comic images (Comics-Dataset). It has been found (Odena et al. (2017)) that generating a large number of classes is difficult for a single DCGAN, even with conditioning. However, with the proposed regularizer we are able to generate samples that are classified into a very diverse set of 968 ImageNet classes by the ResNet-18 classifier, whereas just using activation maximization with the GAN resulted in a limited set of 25 labels. We also find that diversity in classes helps considerably in improving the fooling rate, for which ablation results are presented in Table 5. The regularizer in each cycle encourages the GAN to shift its focus to the underrepresented classes. Due to the limited capacity of the DCGAN, it is bound to forget some classes because of this shift in focus; to mitigate this, we sample images from multiple cycles, with the exact details of the procedure described in Appendix A.7. This procedure was adequate for our experiments; to resolve the forgetting issue in the DCGAN more fundamentally, larger-capacity architectures such as BigGAN (Brock et al., 2018) can be used. The exact hyperparameters and architecture details are present in Appendix A.7.
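The entropy-based activation-maximization term of Eq. (11) and the combined objective of Eq. (10) can be sketched as follows, assuming `target_classifier` is the frozen victim model (e.g. ResNet-18) and the other two loss terms are computed as in the earlier sketches.

```python
import torch

def activation_maximization_loss(fake_images, target_classifier):
    """Eq. (11): mean entropy H(C(G(z))) of the target classifier's softmax
    outputs; minimizing it pushes generated samples towards confident,
    class-specific predictions."""
    probs = torch.softmax(target_classifier(fake_images), dim=1)
    return -(probs * torch.log(probs + 1e-8)).sum(dim=1).mean()

def uap_prior_generator_loss(rel_loss, reg_loss, am_loss, lam):
    # Eq. (10): L_g = L_G^rel - lambda * L_reg + L_AM
    return rel_loss - lam * reg_loss + am_loss
```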
UAP Generation and Results: We use Generative Adversarial Perturbations (Poursaeed et al., 2018), an off-the-shelf algorithm for training a generator G to craft UAPs. We also allow the gradients to flow to the deeper ResNet layers using the method introduced by Wu et al. (2019a), and we replace the ImageNet training data with the prior images generated by the GAN described above. We find that a single GAN trained with the ResNet-18 network is enough to generate effective priors for fooling several ImageNet models. For evaluation we follow existing works and limit the strength of the perturbation to ℓ∞ = 10. We report the Fooling Rate (FR), which is the percentage of data samples for which the addition of our UAP flips the predicted label. We use −log(H(C(x), y_x)) (i.e. the negative log of the cross entropy) as the fooling loss, as prescribed by Poursaeed et al. (2018), for all networks. The detailed results are presented in Table 4. Note that our data-free results are better not only than the existing data-free approaches, by a large margin, but also, by a considerable 2%, than the recent data-driven method of Zhang et al. (2020), which uses ImageNet training data. We provide a detailed comparison of the data used by the various approaches in Appendix A.7.

Method                          | VGG-16 | VGG-19 | ResNet-50 | ResNet-152 | Mean FR
GDUAP + P (Mopuri et al., 2019) | 64.95  | 52.49  | 56.70     | 44.23      | 53.89
PD-UA + P (Liu et al., 2019a)   | 70.69  | 64.98  | 63.50     | 46.39      | 60.69
AAA (Mopuri et al., 2018b)      | 71.59  | 72.84  | -         | 60.72      | 68.38
MI-ADV* (Zhang et al., 2020)    | 92.20  | 91.60  | -         | 79.90      | 87.90
Ours                            | 96.16  | 94.73  | 83.72     | 94.00      | 94.96
MI-ADV** (With ImageNet)        | 94.30  | 94.98  | -         | 90.08      | 93.12

Table 4: Comparison of our UAP performance (Fooling Rate) with the state-of-the-art approaches. The Mean FR is the mean over VGG-16, VGG-19 and ResNet-152, as those are provided by all other approaches. *These results use MSCOCO (Lin et al., 2014) as the prior distribution, which overlaps with the target ImageNet categories. **These results use the target ImageNet data itself (i.e. in the presence of the data on which the victim classifier is trained).

Prior          | Fooling Rate
Comics         | 49.66
GAN + AM       | 63.89
Ours           | 83.72
ImageNet Data* | 89.11

Table 5: Ablation on different priors for the ResNet-50 model. *For ImageNet we find that −H(C(x), y_x) (i.e. the negative cross entropy) is more effective, hence we report the better fooling rate." }, { "heading": "5 DISCUSSION", "text": "In this section we discuss some important aspects of our work:

• Our approach can be directly applied to semi-supervised GAN learning, as it decouples classifier learning from the data, which can enable learning on unlabeled data. • We would like to emphasize that the presence of a classifier in our framework is not a disadvantage. There has been significant progress in classification under semi-supervised learning and learning from long-tailed distributions, and we show in Section 4.2 that classifiers obtained from such methods can also be used in our framework. 
• We have noticed, on multiple occasions while training the GAN for the UAP application, that texture alone is transferred as a discriminative feature from the classifier. This may be due to the bias of classifiers towards texture (Geirhos et al., 2018), and image generation will improve as the classifiers improve. Nevertheless, it still serves as an effective prior on the modes (classes) of the underlying data distribution on which the classifiers are trained. • The class-balancing problem differs from the data-coverage problem (Yu et al., 2020; Srivastava et al., 2017), as the latter tends to make the generated distribution similar to the data distribution; training on long-tailed data can thus induce the GAN distribution to be long-tailed as well." }, { "heading": "6 CONCLUSION", "text": "In this paper, we propose a class-balancing regularizer to balance the class distribution of generated samples while training GANs. We present its implications in terms of a theoretical bound and a comprehensive experimental analysis in the case of long-tailed data distributions. We have also demonstrated the utility of our regularizer beyond the GAN framework, in crafting input-agnostic adversarial perturbations. The effectiveness of our contribution is exhibited through state-of-the-art performance both in training GANs on long-tailed data distributions and in crafting Universal Adversarial Perturbations in the data-free setting." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 PROOF OF THE PROPOSITION", "text": "Proposition: The proposed objective

max_{p̂} Σ_k p̂_k log(p̂_k) / N_k^t  (12)

leads to the following bound on p̂_k:

p̂_k ≤ e^{−K(log(K)−1) · N_k^t / Σ_k N_k^t − 1}  (13)

where K is the number of distinct class labels produced by the classifier C.

Proof: We maximize

Σ_k p̂_k log(p̂_k) / N_k^t  (14)

Introducing the probability constraint with a Lagrange multiplier λ:

L(p̂, λ) = Σ_k p̂_k log(p̂_k) / N_k^t − λ(Σ_k p̂_k − 1)  (15)

Solving the equations obtained by setting ∂L/∂p̂_k = 0:

1/N_k^t + log(p̂_k)/N_k^t − λ = 0  ⟹  p̂_k = e^{λN_k^t − 1}  (16)

Using the constraint ∂L/∂λ = 0, i.e. Σ_k p̂_k = 1, we get:

Σ_k e^{λN_k^t − 1} = 1  ⟹  Σ_k e^{λN_k^t} = e  (17)

We now divide both sides by K, where K is the number of distinct labels produced by the classifier, and apply Jensen's inequality for a concave function, ψ(Σ a_i x_i / Σ a_i) ≥ Σ a_i ψ(x_i) / Σ a_i, with ψ = log:

e/K = (1/K) Σ_k e^{λN_k^t}  ⟹  log(e/K) = log((1/K) Σ_k e^{λN_k^t}) ≥ (1/K) Σ_k λN_k^t  (18)

Rearranging, and then substituting the value of λ from (16) into the inequality:

K(1 − log(K)) ≥ λ Σ_k N_k^t  ⟹  K(1 − log(K)) ≥ (Σ_k N_k^t) (1 + log(p̂_k)) / N_k^t  (19)

Simplifying and exponentiating gives the result:

p̂_k ≤ e^{−K(log(K)−1) · N_k^t / Σ_k N_k^t − 1}  (20)

We observe that the penalizing factor K(log(K) − 1) increases with the number of classes K, which is advantageous, since a larger penalizing factor is needed precisely when N_k^t / Σ_k N_k^t becomes smaller, i.e. when the number of classes in the dataset is large." },
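The stationary point of Eq. (16) and the bound of Eq. (20) can be sanity-checked numerically. The short script below (our own verification, not part of the paper) solves the constraint of Eq. (17) for λ by bisection and confirms that the resulting p̂ is a probability vector satisfying the bound.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 10
N = rng.random(K) + 0.1   # arbitrary positive effective frequencies

# solve sum_k exp(lambda * N_k) = e for lambda by bisection (Eq. 17);
# the left-hand side is monotonically increasing in lambda since N_k > 0
lo, hi = -100.0, 100.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if np.exp(mid * N).sum() > np.e:
        hi = mid
    else:
        lo = mid
lam = 0.5 * (lo + hi)

p_hat = np.exp(lam * N - 1)                              # Eq. (16)
bound = np.exp(-K * (np.log(K) - 1) * N / N.sum() - 1)   # Eq. (20)

assert abs(p_hat.sum() - 1) < 1e-6    # probability constraint of Eq. (17)
assert np.all(p_hat <= bound + 1e-9)  # the bound holds for every class k
print(p_hat.round(4), bound.round(4))
```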
}, { "heading": "A.3 PRE TRAINED CLASSIFIER DETAILS", "text": "All the pre-trained classifiers used for Image generation experiments use a ResNet32(He et al., 2016) classifier. The classifier is trained using Deferred Re-weighting (DRW) scheme Cao et al. (2019); Cui et al. (2019) with effective number of samples. We use the open source code available at https://github.com/kaidic/LDAM-DRW. We use the same learning rate schedule of initial learning rate of 0.01 and multiplying by 0.01 at epoch 160 and 180. We train the models for 200 epochs and start reweighting at epoch 160. We give a summary of the validation accuracy of the models in the following table: The classifier obtained by training on the balanced scenario is used as an annotator\nfor obtaining class labels for GAN generated samples. We use the same ResNet32 (He et al., 2016) classifier with the same learning rate schedule as above with cross entropy loss to obtain Classifier Accuracy." }, { "heading": "A.4 ARCHITECTURE DETAILS FOR GAN", "text": "We use the SNDCGAN architecture for experiments on CIFAR-10 and SNResGAN architecture for experiments on LSUN dataset Gulrajani et al. (2017); Miyato et al. (2018). The notation for the architecture tables are as follows: m is the batch size, FC(dim in, dim out) is a fully connected Layer,\nCONV(channels in, channels out, kernel size, stride) is convolution layer, TCONV(chanels in, channel out, kernel size, stride) is the transpose convolution layer, BN is BatchNorm (Ioffe & Szegedy, 2015) Layer in case of unconditonal GANs and conditional BatchNorm in case of conditional GANs. LRelu is the leaky relu activation function and GSP is the Global Sum Pooling Layer. The DIS BLOCK(channels in, channels out, downsampling) and GEN BLOCK(channels in, channels out, upsampling) correspond to the Discriminator and Generator block used in the (Gulrajani et al., 2017). The architectures are presented in detail in Tables 7, 8, 9 and 10." }, { "heading": "A.5 HYPERPARAMETER CONFIGURATION (IMAGE GENERATION EXPERIMENTS)", "text": "" }, { "heading": "A.5.1 LAMBDA THE REGULARIZER COEFFECIENT", "text": "The λ hyperparameter is the only hyperparameter that we change across different imbalance scenarios. As the overall objective is composed of the two terms:\nLg = −E(x,z)∼(Pr,Pz)[log(σ(D(G(z))−D(x))]− λLreg (21) As the number of terms in the regularizer objective can increase with number of classes K. For making the regularizer term invariant of K and also keeping the scale of regularizer term similar to GAN loss, we normalize it by K. Then the loss is multiplied by λ. Hence the effective factor that gets multiplied with regularizer term is λK .\nThe presence of pre-trained classifier which provides labels for generated images makes it easy to determine the value of λ. Although the pre-trained classifier is trained on long-tailed data its label distribution is sufficient to provide a signal for balance in generated distribution. We use the KL Divergence of labels with respect to uniform distribution for 10k samples in validation stage to check for balance in distribution and choose λ accordingly. We use the FID implementation available here 2." }, { "heading": "A.5.2 OTHER HYPERPARMETERS", "text": "We update the effective class distribution periodically after 2k updates (i.e. each cycle defined in section 3 consists of 2k iteration). We find the algorithm performance to be stable for a large range of update frequency depicted in Figure 4. 
We also apply an Exponential Moving Average to the generator weights after 20k steps for better generalization. The hyperparameters are presented in detail in Table 12. Validation Step: We compute the FID on 10k generated samples after every 2k iterations and choose the checkpoint with the best FID for the final sampling and the FID calculation presented in Table 1. Convergence of Network: We find that our GAN + Regularizer setup also achieves a similar convergence in FID value to the GAN without the regularizer. We show the FID curves for the CIFAR-10 (Imbalance Ratio = 10) experiments in Figure 5." }, { "heading": "A.6 HYPERPARAMETERS FOR THE SEMI-SUPERVISED GAN ARCHITECTURE", "text": "We use an ImageNet and ImageNet-21k pre-trained model with a ResNet-50 architecture as the base model. The fine-tuning of the model on CIFAR-10 and LSUN has been done using the code of the notebook available at https://github.com/google-research/big_transfer/blob/master/colabs/big_transfer_pytorch.ipynb. The accuracy of the classifiers fine-tuned with 0.1% of labelled data, measured on the validation data, is 84.96% for CIFAR-10 and 82.40% for LSUN, respectively. The lambda (regularizer coefficient) values are present in the table below.\n\nThe training hyperparameters are the same as those presented in Table 12. Only in the case of the LSUN semi-supervised experiments do we use a batch size of 128, to fit into GPU memory." }, { "heading": "A.6.1 RESULTS ON CIFAR-100", "text": "In this section we show results on the CIFAR-100 dataset, which has 100 classes with 500 images per class. We use the SNResGAN architecture from Miyato & Koyama (2018), which is similar to the SNResGAN architecture used for the LSUN experiments. The architecture is used for generating 32×32 images. We use the same hyperparameters as in the LSUN experiments, listed in Table 12. We use a λ value of 0.5 for the CIFAR-100 experiments. The results in Table 3 show that our method of using GAN + Regularizer on long-tailed CIFAR-100 achieves the best FID and also has class balance similar to cGAN (conditional GAN). The labels for the samples generated by the GAN are obtained by a classifier trained on the balanced CIFAR-100 dataset. The KL divergence between the GAN label distribution and the uniform distribution is presented in Table 3. The classifier for obtaining class labels for the KL divergence evaluation is trained on balanced CIFAR-100 with the setup described in A.3 and serves as the annotator for all methods." }, { "heading": "A.7 UAP EXPERIMENTAL DETAILS", "text": "Dataset: We use the Comics dataset (Comics-Dataset), whereas the approach of Zhang et al. (2020) uses the COCO dataset. The COCO dataset overlaps with the ImageNet categories. The difference in the images used shows that our procedure does not require natural images for generating effective attacks. This increases the applicability of our method. We use a DCGAN architecture to generate 128×128 images, with the GAN described in Tables 15 and 16. In this experiment, we update the mode statistics after every epoch. Hyperparameters are present in Table 14. The images generated by our method are shown in Figure 10.\n\nSampling: We find that sampling in different cycles produces samples from diverse classes, due to our regularizer, as it enforces the learning of different underrepresented classes in different cycles. Hence we sample 1024 images each from the GAN checkpoints of the last 40 cycles to obtain the dataset for UAP generation.
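The multi-cycle sampling described above can be sketched as follows; the `generators` list, restored from the last cycles' checkpoints, and `z_dim` are assumptions rather than the authors' released code.

```python
import torch

@torch.no_grad()
def build_prior_dataset(generators, n_per_ckpt=1024, z_dim=128, device="cuda"):
    # Draw a fixed batch from each cycle's generator so that the different
    # underrepresented classes learned in different cycles are all represented.
    images = []
    for G in generators:  # e.g. generators from the last 40 cycles
        G = G.to(device).eval()
        z = torch.randn(n_per_ckpt, z_dim, device=device)
        images.append(G(z).cpu())
    return torch.cat(images)  # prior images for training the UAP generator
```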
This also shows that the regularizer is effective in shifting the distribution of the GAN to produce different modes, which is not possible with just Activation Maximization (AM). The increase in the number of diverse classes is shown in Figure 7.\n\nGenerative Adversarial Perturbations: We use the authors' PyTorch implementation of the algorithm to generate attacks. For ResNets we allow the gradients to pass through the skip connections using the method of Wu et al. (2019a) with α = 0.5. We train the algorithm for 20 epochs in each case except for VGG-16, for which we use an additional factor of 10 on the loss to make the fooling rate converge in 20 epochs." } ]
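For illustration, the fooling objective −log(H(C(x), y_x)) can be sketched as below. This is a hedged sketch, not the authors' implementation; the ℓ∞ budget of 10 is assumed to be in the 0–255 pixel scale, hence 10/255 for inputs in [0, 1].

```python
import torch
import torch.nn.functional as F

def fooling_loss(classifier, x, delta, eps=10.0 / 255.0):
    # Clamp the universal perturbation to the assumed l_inf budget.
    delta = delta.clamp(-eps, eps)
    with torch.no_grad():
        y_x = classifier(x).argmax(dim=1)             # labels on clean inputs
    ce = F.cross_entropy(classifier(x + delta), y_x)  # high CE means labels flip
    return -torch.log(ce + 1e-12)                     # minimizing -log(CE) maximizes CE
```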
2020
null
SP:d6ecb075f238cc67a6cc4f6b924e1b7b3eb69dfa
[ "This paper presents a novel method called Embed-SAD (as well as Input-SAD) to learn graph/node representations to disentangle structure and attribute information. Input-SAD is a simple baseline that tries to get structure-attribute disentanglements by individually processing graph structures and node attributes. For structure, the original node attibutes are replaced by out-degrees only, and passed to GNNs, while for attibutes, the node attibutes are passed to fully-connected networks. Embed-SAD is a more elaborate method to disentangle the GNN embeddings by posing two types of additional losses, i.e., the edge-reconstruction loss for structures, and the Noise-Contrastive Estimation (NCE) loss to maximize the mutual information against the structure-encoding vectors, in addition to the original loss for supervision. The paper also develops an interesting evaluation metric called SAD-Metric where node attibutes or graph structures are exclusively perturbed for each graph, and prediction for whether that perturbation is for structure or for attibutes made by the element-wise absolute differences between embedded vectors before and after the perturbation. This SAD-Metric can quantify the extent to which the obtained representation can detect which perturbation, that for structures or that for attibutes, is made for each sample graph. The experimental results also demonstrated that the structure-attibute disentanglement by Embed-SAD learning strategy actually improved the prediction performance of many off-the-shelf GNNs over many different graph- or node-level tasks.", "This paper focuses on disentangling embeddings of the structure and the attribute of graph. \bThe authors' key idea is that the structure and attribute information should be split in GNN. Based on this, the authors try to disentangle the structure embedding and the attribute embedding. With two different components, two different kinds of embeddings can be captured at the input stage. In addition, these two different kinds of embeddings can be obtained by reconstructing the edge and minimizing the mutual information. At last, the authors propose a metric to evaluate the disentanglement. The models in this paper outperform baselines in node classification and graph classification task. " ]
Graph Neural Networks (GNNs) learn effective node/graph representations by aggregating the attributes of neighboring nodes, which commonly derives a single representation mixing the information of graph structure and node attributes. However, these two kinds of information might be semantically inconsistent and could be useful for different tasks. In this paper, we aim at learning node/graph representations with Structure-Attribute Disentanglement (GraphSAD). We propose to disentangle graph structure and node attributes into two distinct sets of representations, and such disentanglement can be done in either the input or the embedding space. We further design a metric to quantify the extent of such a disentanglement. Extensive experiments on multiple datasets show that our approach can indeed disentangle the semantics of graph structure and node attributes, and it achieves superior performance on both node and graph classification tasks.
[]
[ { "authors": [ "Alexander A. Alemi", "Ian Fischer", "Joshua V. Dillon", "Kevin Murphy" ], "title": "Deep variational information bottleneck", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Mohamed Ishmael Belghazi", "Aristide Baratin", "Sai Rajeswar", "Sherjil Ozair", "Yoshua Bengio", "R. Devon Hjelm", "Aaron C. Courville" ], "title": "Mutual information neural estimation", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Yoshua Bengio", "Aaron Courville", "Pascal Vincent" ], "title": "Representation learning: A review and new perspectives", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2013 }, { "authors": [ "Antoine Bordes", "Nicolas Usunier", "Alberto Garcı́a-Durán", "Jason Weston", "Oksana Yakhnenko" ], "title": "Translating embeddings for modeling multi-relational data", "venue": "In Advances in Neural Information Processing Systems,", "year": 2013 }, { "authors": [ "Maurice Bruynooghe" ], "title": "Solving combinatorial search problems by intelligent backtracking", "venue": "Information Processing Letters,", "year": 1981 }, { "authors": [ "Shaosheng Cao", "Wei Lu", "Qiongkai Xu" ], "title": "Grarep: Learning graph representations with global structural information", "venue": "In ACM International Conference on Information and Knowledge Management,", "year": 2015 }, { "authors": [ "Bin Chen", "Robert P Sheridan", "Viktor Hornak", "Johannes H Voigt" ], "title": "Comparison of random forest and pipeline pilot naive bayes in prospective qsar predictions", "venue": "Journal of Chemical Information and Modeling,", "year": 2012 }, { "authors": [ "Hubie Chen", "Carla Gomes", "Bart Selman" ], "title": "Formal models of heavy-tailed behavior in combinatorial search", "venue": "In International Conference on Principles and Practice of Constraint Programming,", "year": 2001 }, { "authors": [ "Ming Chen", "Zhewei Wei", "Zengfeng Huang", "Bolin Ding", "Yaliang Li" ], "title": "Simple and deep graph convolutional networks", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Tian Qi Chen", "Xuechen Li", "Roger B. Grosse", "David Duvenaud" ], "title": "Isolating sources of disentanglement in variational autoencoders", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Xi Chen", "Yan Duan", "Rein Houthooft", "John Schulman", "Ilya Sutskever", "Pieter Abbeel" ], "title": "Infogan: Interpretable representation learning by information maximizing generative adversarial nets", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Cian Eastwood", "Christopher K.I. Williams" ], "title": "A framework for the quantitative evaluation of disentangled representations", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Justin Gilmer", "Samuel S. Schoenholz", "Patrick F. Riley", "Oriol Vinyals", "George E. 
Dahl" ], "title": "Neural message passing for quantum chemistry", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Aditya Grover", "Jure Leskovec" ], "title": "node2vec: Scalable feature learning for networks", "venue": "In ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2016 }, { "authors": [ "Michael Gutmann", "Aapo Hyvärinen" ], "title": "Noise-contrastive estimation: A new estimation principle for unnormalized statistical models", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2010 }, { "authors": [ "Michael Gutmann", "Aapo Hyvärinen" ], "title": "Noise-contrastive estimation of unnormalized statistical models, with applications to natural image statistics", "venue": "Journal of Machine Learning Research,", "year": 2012 }, { "authors": [ "William L. Hamilton", "Zhitao Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Irina Higgins", "Loı̈c Matthey", "Arka Pal", "Christopher Burgess", "Xavier Glorot", "Matthew Botvinick", "Shakir Mohamed", "Alexander Lerchner" ], "title": "beta-vae: Learning basic visual concepts with a constrained variational framework", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Geoffrey E Hinton", "Alex Krizhevsky", "Sida D Wang" ], "title": "Transforming auto-encoders", "venue": "In International Conference on Artificial Neural Networks,", "year": 2011 }, { "authors": [ "R. Devon Hjelm", "Alex Fedorov", "Samuel Lavoie-Marchildon", "Karan Grewal", "Philip Bachman", "Adam Trischler", "Yoshua Bengio" ], "title": "Learning deep representations by mutual information estimation and maximization", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Theofanis Karaletsos", "Serge J. Belongie", "Gunnar Rätsch" ], "title": "Bayesian unsupervised representation learning with oracle constraints", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Hyunjik Kim", "Andriy Mnih" ], "title": "Disentangling by factorising", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Thomas N. Kipf", "Max Welling" ], "title": "Variational graph auto-encoders", "venue": "CoRR, abs/1611.07308,", "year": 2016 }, { "authors": [ "Thomas N. 
Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Lars Kotthoff" ], "title": "Algorithm selection for combinatorial search problems: A survey", "venue": "In Data Mining and Constraint Programming", "year": 2016 }, { "authors": [ "Tejas D Kulkarni", "William F Whitney", "Pushmeet Kohli", "Josh Tenenbaum" ], "title": "Deep convolutional inverse graphics network", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Jianxin Ma", "Peng Cui", "Kun Kuang", "Xin Wang", "Wenwu Zhu" ], "title": "Disentangled graph convolutional networks", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Laurens Van Der Maaten", "Geoffrey Hinton" ], "title": "Visualizing data using t-sne", "venue": "Journal of Machine Learning Research,", "year": 2008 }, { "authors": [ "Bryan Perozzi", "Rami Al-Rfou", "Steven Skiena" ], "title": "Deepwalk: online learning of social representations", "venue": "In ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2014 }, { "authors": [ "Karl Ridgeway", "Michael C. Mozer" ], "title": "Learning deep disentangled embeddings with the f-statistic loss", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Yu Rong", "Wenbing Huang", "Tingyang Xu", "Junzhou Huang" ], "title": "Dropedge: Towards deep graph convolutional networks on node classification", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Franco Scarselli", "Marco Gori", "Ah Chung Tsoi", "Markus Hagenbuchner", "Gabriele Monfardini" ], "title": "The graph neural network model", "venue": "IEEE Transactions on Neural Networks,", "year": 2008 }, { "authors": [ "Prithviraj Sen", "Galileo Namata", "Mustafa Bilgic", "Lise Getoor", "Brian Galligher", "Tina Eliassi-Rad" ], "title": "Collective classification in network data", "venue": "AI Magazine,", "year": 2008 }, { "authors": [ "Oleksandr Shchur", "Maximilian Mumme", "Aleksandar Bojchevski", "Stephan Günnemann" ], "title": "Pitfalls of graph neural network evaluation", "venue": "arXiv preprint arXiv:1811.05868,", "year": 2018 }, { "authors": [ "Narayanaswamy Siddharth", "Brooks Paige", "Jan-Willem Van de Meent", "Alban Desmaison", "Noah Goodman", "Pushmeet Kohli", "Frank Wood", "Philip Torr" ], "title": "Learning disentangled representations with semi-supervised deep generative models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Zhiqing Sun", "Zhi-Hong Deng", "Jian-Yun Nie", "Jian Tang" ], "title": "Rotate: Knowledge graph embedding by relational rotation in complex space", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Chenhao Tan", "Lillian Lee", "Jie Tang", "Long Jiang", "Ming Zhou", "Ping Li" ], "title": "User-level sentiment analysis incorporating social networks", "venue": "In ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2011 }, { "authors": [ "Jian Tang", "Meng Qu", "Mingzhe Wang", "Ming Zhang", "Jun Yan", "Qiaozhu Mei" ], "title": "LINE: large-scale information network embedding", "venue": "In International Conference on World Wide Web,", "year": 2015 }, { "authors": [ "Naftali Tishby", "Noga Zaslavsky" ], "title": "Deep learning and the information bottleneck 
principle", "venue": "IEEE Information Theory Workshop,", "year": 2015 }, { "authors": [ "Naftali Tishby", "Fernando C Pereira", "William Bialek" ], "title": "The information bottleneck method", "venue": "arXiv preprint physics/0004057,", "year": 2000 }, { "authors": [ "Phi Vu Tran" ], "title": "Multi-task graph autoencoders", "venue": "arXiv preprint arXiv:1811.02798,", "year": 2018 }, { "authors": [ "Théo Trouillon", "Johannes Welbl", "Sebastian Riedel", "Éric Gaussier", "Guillaume Bouchard" ], "title": "Complex embeddings for simple link prediction", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Aäron van den Oord", "Yazhe Li", "Oriol Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": null, "year": 2018 }, { "authors": [ "Petar Velickovic", "Guillem Cucurull", "Arantxa Casanova", "Adriana Romero", "Pietro Liò", "Yoshua Bengio" ], "title": "Graph attention networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Vikas Verma", "Meng Qu", "Alex Lamb", "Yoshua Bengio", "Juho Kannala", "Jian Tang" ], "title": "Graphmix: Regularized training of graph neural networks for semi-supervised learning", "venue": null, "year": 1909 }, { "authors": [ "Daixin Wang", "Peng Cui", "Wenwu Zhu" ], "title": "Structural deep network embedding", "venue": "In ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2016 }, { "authors": [ "Zhenqin Wu", "Bharath Ramsundar", "Evan N Feinberg", "Joseph Gomes", "Caleb Geniesse", "Aneesh S Pappu", "Karl Leswing", "Vijay Pande" ], "title": "Moleculenet: A benchmark for molecular machine learning", "venue": "Chemical Science,", "year": 2018 }, { "authors": [ "Keyulu Xu", "Weihua Hu", "Jure Leskovec", "Stefanie Jegelka" ], "title": "How powerful are graph neural networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Qiang Yan", "Lianren Wu", "Lan Zheng" ], "title": "Social network based microblog user behavior analysis", "venue": "Physica A: Statistical Mechanics and Its Applications,", "year": 2013 }, { "authors": [ "Zhilin Yang", "William Cohen", "Ruslan Salakhudinov" ], "title": "Revisiting semi-supervised learning with graph embeddings", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Zhitao Ying", "Jiaxuan You", "Christopher Morris", "Xiang Ren", "William L. Hamilton", "Jure Leskovec" ], "title": "Hierarchical graph representation learning with differentiable pooling", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Guo Zhang", "Hao He", "Dina Katabi" ], "title": "Circuit-gnn: Graph neural networks for distributed circuit design", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Muhan Zhang", "Zhicheng Cui", "Marion Neumann", "Yixin Chen" ], "title": "An end-to-end deep learning architecture for graph classification", "venue": "In AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "GAT (Velickovic" ], "title": "2018) and GIN (Xu et al., 2019)) and evaluate various models’ performance on MoleculeNet, in which we set the number of attention heads as 5 for GAT. For all the three experiments, the Embed-SAD model outperforms other methods in terms of average test ROC-AUC. 
The Input-SAD model achieves superior performance on ClinTox and BACE datasets, and its overall performance is comparable with DisenGNN", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Representing nodes or entire graphs with informative low-dimensional feature vectors plays a crucial role in many real-world applications and domains, e.g. user analysis in social networks (Tan et al., 2011; Yan et al., 2013), relational inference in knowledge graphs (Bordes et al., 2013; Trouillon et al., 2016; Sun et al., 2019), molecular property prediction in drug/material discovery (Gilmer et al., 2017; Wu et al., 2018) and circuit response prediction in circuit design (Zhang et al., 2019). Recently, Graph Neural Networks (GNNs) (Kipf & Welling, 2017; Velickovic et al., 2018; Xu et al., 2019) have shown their superiority in many different tasks. In general, the essential idea of these methods is to learn effective node representations (or graph representations with an additional graph pooling) through aggregating the attributes of each node and its neighbors in an iterative and nonlinear way.\nFor an attributed graph, GNNs commonly encode the information of its graph structure and node attributes into a single representation. This might be problematic, since the semantic space of graph structure and node attributes might not be well aligned, and these two types of information could be useful for different tasks. For example, predicting the health condition of a user mainly depends on his/her profile information, and the social network does not provide too much meaningful information; in another case, the prediction of a user’s social class mainly relies on his/her social network structure. Therefore, a more reasonable solution is to disentangle these two types of information into two distinct sets of representations, and the importance of which can be further determined by downstream tasks. Such disentangled representation has been proved to be beneficial to model’s generalization ability and interpretability (Chen et al., 2016; Higgins et al., 2017; Alemi et al., 2017).\nRecently, DisenGNN (Ma et al., 2019) studied disentangled node representation learning by grouping the neighbors of each node to different channels, and each channel corresponds to a different latent factor. In other words, DisenGNN focuses on disentangling the various latent factors of graph structure. By contrast, our work intends to disentangle the representations of graph structure and node attributes, which is orthogonal to their work and also more general.\nIn this paper, we aim to learn node/graph representations with Structure-Attribute Disentanglement (GraphSAD). As a naive trial, we first attempt to conduct disentanglement in the input space, named as Input-SAD, which separates a graph into a structure and an attribute component and then encodes these two components respectively. However, since graph structure and node attributes are not completely independent, it is better to suppress the dependency of these two factors in the embedding space, instead of directly separating the input graph. Inspired by this fact, we propose to distill a\ngraph’s structure and attribute information into the distinct channels of embedding vectors, named as Embed-SAD. Concretely, for each node embedding, half of its elements capture the graph structure through edge reconstruction, and the other half extracts the attribute information by minimizing the mutual information with the structure counterpart and, at the same time, preserving semantic discriminability. 
In addition, we devise a metric to quantitatively evaluate a graph representation's structure-attribute disentanglement, denoted as SAD-Metric, which measures the sensitivity of a model when varying either the graph structure or the node attributes of an input graph.\n\nWe summarize our contributions as follows:\n\n• We study structure-attribute disentangled node/graph representation learning by separating graph structure and node attributes in either the input or the embedding space.\n• We design a quantitative metric to measure the extent of structure-attribute disentanglement, which is novel in its graph-specific data processing scheme.\n• By combining the proposed disentangling techniques with various GNNs, we empirically verify our method's superior performance on both node and graph classification benchmark datasets. Also, we analyze the disentangled graph representations via the proposed metric and qualitative visualization." }, { "heading": "2 PROBLEM DEFINITION AND PRELIMINARIES", "text": "" }, { "heading": "2.1 PROBLEM DEFINITION", "text": "We study learning node representations (e.g. social networks) or whole-graph representations (e.g. molecular graphs) of attributed graphs. Formally, we denote an attributed graph as $G = (\mathcal{V}, \mathcal{E}, \mathcal{A})$. $\mathcal{V}$ denotes the set of nodes. $\mathcal{E} = \{(u, v, t_{uv})\}$ is the set of edges, with $t_{uv}$ as the type of the edge connecting nodes u and v (e.g. different types of bonds in molecular graphs). $\mathcal{A} = \{A_v \mid v \in \mathcal{V}\}$ represents the set of node attributes.\n\nOur goal is to learn meaningful representations for each node or the whole graph. Existing GNNs typically mix both the graph structure and node attributes into a unified representation through neural message passing. However, in practice, these two types of information may encode different semantics and be useful for different tasks. Take prediction on social networks as an example. When predicting the social class of users, the graph structure plays a more important role than user attributes, while user attributes are definitely more informative than graph structure when forecasting users' health conditions. It is therefore desirable to disentangle the information of graph structure and node attributes into different sets of representations and use the downstream task to determine their importance. Specifically, we define our problem as follows:\n\nNode/Graph Representation Learning with Structure-Attribute Disentanglement. Given an attributed graph $G = (\mathcal{V}, \mathcal{E}, \mathcal{A})$, we aim to learn node (or whole-graph) representations by disentangling the semantics of the graph structure $S = \{\mathcal{V}, \mathcal{E}\}$ and the node attributes $\mathcal{A}$ into two distinct sets of representations, i.e. $z_v = [z_{v,S}, z_{v,A}]$ (or $z_G = [z_{G,S}, z_{G,A}]$). The importance of the two kinds of representations is further determined by the downstream task, such as node or graph classification." }, { "heading": "2.2 PRELIMINARIES", "text": "Graph Neural Networks (GNNs). A GNN maps each node $v \in \mathcal{V}$ to an embedding vector $z_v$ and also encodes the entire graph G as a vector $z_G$. For an L-layer GNN, the L-hop information surrounding each node is captured via a neighborhood aggregation mechanism. Formally, the l-th GNN layer can be defined as:\n\n$z_v^{(l)} = \text{COMBINE}^{(l)}\left(z_v^{(l-1)}, \text{AGGREGATE}^{(l)}\left(\left\{\left(z_v^{(l-1)}, z_u^{(l-1)}, t_{uv}\right) : u \in \mathcal{N}(v)\right\}\right)\right)$, (1)\n\nwhere $\mathcal{N}(v)$ is the set of node v's neighbors, $t_{uv}$ denotes the edge attribute, $z_v^{(l)}$ denotes the representation of v at the l-th layer, and $z_v^{(0)}$ is initialized by the node attribute $A_v$.
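To make Eq. 1 concrete, a message-passing layer with sum aggregation can be sketched as below. This is a simplified sketch, not the paper's exact architecture; edge attributes are assumed to be already embedded to the hidden dimension.

```python
import torch
import torch.nn as nn

class MPLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(3 * dim, dim)   # message from (z_v, z_u, t_uv)
        self.comb = nn.Linear(2 * dim, dim)  # COMBINE(z_v, aggregated messages)

    def forward(self, z, edge_index, edge_attr):
        src, dst = edge_index  # each edge u -> v stored as (src=u, dst=v)
        m = self.msg(torch.cat([z[dst], z[src], edge_attr], dim=-1))
        agg = torch.zeros_like(z).index_add_(0, dst, m)  # AGGREGATE over N(v)
        return torch.relu(self.comb(torch.cat([z, agg], dim=-1)))
```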
Using all the node embeddings in a graph, the entire graph's embedding can be derived by a permutation-invariant readout function:\n\n$z_G = \text{READOUT}(\{z_v \mid v \in \mathcal{V}\})$. (2)\n\nMutual Information Estimator. Mutual information (MI) quantifies the mutual dependency between two random variables. Some recent works (Belghazi et al., 2018; Hjelm et al., 2019) studied neural-network-based MI estimators. Among them, the Noise-Contrastive Estimation (NCE) (Gutmann & Hyvärinen, 2010; 2012) was first employed as a lower bound of MI by van den Oord et al. (2018), and we also adopt this estimator in our method for its effectiveness and concision. In practice, for two random variables $x_1$ and $x_2$, given one positive pair $(x_1^+, x_2^+) \sim p(x_1, x_2)$ and K distractors $(x_1^+, x_{2,j}) \sim p(x_1)p(x_2)$ ($j = 1, 2, \cdots, K$), the NCE estimation of MI is defined as:\n\n$I_{NCE}(x_1^+, x_2^+, \{x_{2,j}\}_{j=1}^{K}) = \log(K+1) + \log \frac{\exp(T(x_1^+, x_2^+))}{\exp(T(x_1^+, x_2^+)) + \sum_{j=1}^{K} \exp(T(x_1^+, x_{2,j}))}$, (3)\n\nwhere $T(\cdot, \cdot)$ is a parameterized discriminator function which outputs a scalar value for a pair of input samples, and its architecture is detailed in Sec. 5.1." }, { "heading": "3 LEARNING GRAPH REPRESENTATIONS WITH STRUCTURE-ATTRIBUTE DISENTANGLEMENT", "text": "" }, { "heading": "3.1 INPUT-SAD: STRUCTURE-ATTRIBUTE DISENTANGLEMENT FOR INPUTS", "text": "As an initial attempt, we seek to learn structure-attribute disentangled node/graph representations by separating a graph into a structure and an attribute component and then encoding them respectively, as shown in Fig. 1(a). Concretely, given an attributed graph $G = (\mathcal{V}, \mathcal{E}, \mathcal{A})$, these two components are constructed and encoded as follows.\n\nThe structure component extracts the graph structure and forms another graph $G_S = (\mathcal{V}_S, \mathcal{E}_S, \mathcal{A}_S)$, in which the node and edge sets remain unchanged, i.e. $\mathcal{V}_S = \mathcal{V}$, $\mathcal{E}_S = \mathcal{E}$, and the out-degree of each node serves as its attribute, i.e. $\mathcal{A}_S = \{d(v) \mid v \in \mathcal{V}_S\}$ ($d(\cdot)$ denotes the out-degree function). A GNN maps this component to a δ-dimensional embedding space:\n\n$(z_{\mathcal{V},S}, z_{G,S}) = \text{GNN}(\mathcal{V}_S, \mathcal{E}_S, \mathcal{A}_S)$, (4)\n\nwhere $z_{\mathcal{V},S} = \{z_{v,S} \mid v \in \mathcal{V}\} \in \mathbb{R}^{|\mathcal{V}| \times \delta}$ denotes the node embeddings derived only from the graph structure, and $z_{G,S} \in \mathbb{R}^{\delta}$ is the embedding of the entire structure component.\n\nThe attribute component is formed as a feature matrix $U \in \mathbb{R}^{|\mathcal{V}| \times D}$, where the feature vector $U_v \in \mathbb{R}^{D}$ is a D-dimensional embedding of the node attribute $A_v$. For this component, a fully-connected network and a readout function (e.g. mean pooling in our implementation) are used for encoding:\n\n$z_{\mathcal{V},A} = \text{FCN}(U), \quad z_{G,A} = \text{READOUT}(z_{\mathcal{V},A})$, (5)\n\nwhere $z_{\mathcal{V},A} = \{z_{v,A} \mid v \in \mathcal{V}\} \in \mathbb{R}^{|\mathcal{V}| \times \delta}$ denotes the attribute embeddings of the nodes in graph G, and $z_{G,A} \in \mathbb{R}^{\delta}$ embeds the whole attribute component.\n\nThe complete information of graph G is restored by concatenating the structure and attribute embeddings for each node and for the entire graph:\n\n$z_{\mathcal{V}} = \{[z_{v,S}, z_{v,A}] \mid v \in \mathcal{V}\} \in \mathbb{R}^{|\mathcal{V}| \times 2\delta}, \quad z_G = [z_{G,S}, z_{G,A}] \in \mathbb{R}^{2\delta}$, (6)\n\nwhere $[\cdot, \cdot]$ denotes the concatenation operation. Upon these concatenated node/graph embeddings, the prediction task (e.g. node/graph classification) is performed by a task-specific network C, which defines the supervised loss $\mathcal{L}_{sup}$ for model optimization:\n\n$\min_{\text{GNN}, \text{FCN}, C} \mathcal{L}_{sup}$. (7)" }, { "heading": "3.2 EMBED-SAD: STRUCTURE-ATTRIBUTE DISENTANGLEMENT FOR EMBEDDINGS", "text": "The explicit separation of an input graph into structure and attribute components forces the independent encoding of graph structure and node attributes. However, these two factors are not completely independent.
For example, in a social network, the social connections of a person can provide useful information about his/her character, and vice versa. Therefore, the representations derived by Input-SAD may not fully capture the structure and attribute information of a graph. To tackle this shortcoming, we seek to perform encoding on the original graph and distill its structure and attribute information into distinct channels of the embedding vectors, as illustrated in Fig. 1(b).\n\nFor an attributed graph $G = (\mathcal{V}, \mathcal{E}, \mathcal{A})$, a GNN is employed to map the graph to a 2δ-dimensional embedding space:\n\n$(z_{\mathcal{V}}, z_G) = \text{GNN}(\mathcal{V}, \mathcal{E}, \mathcal{A})$, (8)\n\nwhere $z_{\mathcal{V}} = \{z_v \mid v \in \mathcal{V}\} \in \mathbb{R}^{|\mathcal{V}| \times 2\delta}$ denotes the node embeddings, and $z_G \in \mathbb{R}^{2\delta}$ is the embedding of the entire graph. We further divide these embeddings into two channels:\n\n$z_G = [z_{G,S}, z_{G,A}], \quad z_v = [z_{v,S}, z_{v,A}], \ \forall v \in \mathcal{V}$, (9)\n\nwhere $z_{v,S}, z_{v,A}$ ($z_{G,S}, z_{G,A}$) $\in \mathbb{R}^{\delta}$ are the structure and attribute embeddings of node v (the whole graph G). In order to distill the structure and attribute information into the corresponding channels, we propose two learning schemes.\n\nLearning structure embedding by edge reconstruction. The node embeddings fully capturing the graph structure are supposed to be capable of reconstructing the edges of this graph. Specifically, using the structure embeddings of a pair of nodes, we expect to predict the existence and type of the edge between them, which defines the reconstruction constraint for learning structure embeddings:\n\n$\mathcal{L}_{recon} = -\mathbb{E}_{u \sim P_{\mathcal{V}}, v \sim P_{\mathcal{V}}} \sum_{i=0}^{N_t} \mathbb{1}_{[t_{uv} = i]} \cdot \log p(y = i \mid z_{u,S}, z_{v,S})$, (10)\n\nwhere $P_{\mathcal{V}}$ denotes the uniform distribution over $\mathcal{V}$, $\mathbb{1}_{[t_{uv} = i]}$ is the indicator function judging whether the edge (u, v) belongs to type i (i = 0 denotes there is no edge), $N_t$ is the number of different edge types, and $p(y \mid z_{u,S}, z_{v,S})$ is modeled by a neural network F, which is discussed in detail in Sec. B. This objective function relates to the ones proposed in VGAE (Kipf & Welling, 2016) and GraphSAGE (Hamilton et al., 2017), while it additionally constrains the reconstruction of the edge type, which enables the structure embeddings to adequately capture the information of the graph structure.\n\nLearning attribute embedding by mutual information (MI) minimization. Now that the structure embedding is obtained, we would like to derive the attribute embedding, which extracts the information of the node attributes and suppresses that of the graph structure. To achieve this goal, we employ a neural-network-based MI estimator (i.e. the NCE estimator in Sec. 2.2) to estimate and, simultaneously, minimize the dependency between the structure and attribute embeddings.\n\nSpecifically, we denote the structure and attribute latent factors as two random variables, $z_S$ and $z_A$, and regard the structure and attribute embedding of each node as a sample from the corresponding marginal distribution, i.e. $z_{v,S} \sim p(z_S)$, $z_{v,A} \sim p(z_A)$ ($v \in \mathcal{V}$). For computing the NCE estimation of MI, we define the structure and attribute embedding of the same node as a positive pair, i.e. $(z_{v,S}, z_{v,A}) \sim p(z_S, z_A)$ ($v \in \mathcal{V}$), and the embedding pair constituted by two different nodes serves as a distractor, i.e. $(z_{v,S}, z_{w,A}) \sim p(z_S)p(z_A)$ ($v \neq w$, $v, w \in \mathcal{V}$). With these notions, the estimated MI between the two latent factors is defined as:\n\n$I_{S,A} = \mathbb{E}_{v \sim P_{\mathcal{V}},\, w_j \sim P_{\mathcal{V} \setminus \{v\}}} I_{NCE}(z_{v,S}, z_{v,A}, \{z_{w_j,A}\}_{j=1}^{K})$, (11)\n\nAlgorithm 1: Evaluation procedure for SAD-Metric.\nInput: Evaluation set $D = \{G_i\}_{i=1}^{N}$, the model $\mathcal{M}$ to be evaluated.\nOutput: The evaluation score for SAD-Metric.
Initialize the counter: c = 0\nfor i = 1 to N do\n    $y_{G_i}$ ← RandomSample({0, 1})    # Sample a binary label\n    if $y_{G_i}$ = 0 then\n        $G'_i$ ← ModifyAttribute($G_i$)    # Modify the attribute factor of the graph\n    else\n        $G'_i$ ← ModifyStructure($G_i$)    # Modify the structure factor of the graph\n    $z_{G_i}$ ← $\mathcal{M}(G_i)$, $z_{G'_i}$ ← $\mathcal{M}(G'_i)$    # Infer graph embeddings\n    $\Delta z_{G_i} = |z_{G_i} - z_{G'_i}|$    # Compute the embedding difference\n    Predict the binary label $\hat{y}_{G_i} = \arg\max_y p(y \mid \Delta z_{G_i})$\n    if $\hat{y}_{G_i} = y_{G_i}$ then\n        c ← c + 1\nReturn Acc = c/N × 100%    # Prediction accuracy serves as the SAD-Metric\n\nIn Eq. 11, $P_{\mathcal{V}}$ and $P_{\mathcal{V} \setminus \{v\}}$ denote the uniform distribution over the node set with and without node v, and $I_{NCE}(\cdot, \cdot, \cdot)$ is the NCE estimation function defined in Eq. 3. Through minimizing this estimated MI, both the linear and nonlinear dependency between the structure and attribute embeddings can be suppressed, which facilitates structure-attribute disentangled node/graph representations. Note that such a learning mechanism relates to the information bottleneck (IB) principle (Tishby et al., 2000; Tishby & Zaslavsky, 2015; Alemi et al., 2017), while, compared with IB, the proposed approach intends to separate two different sources of information instead of pursuing the maximal compression of the input.\n\nModel optimization. We perform the prediction task (e.g. node/graph classification) by appending a task-specific network C upon the disentangled node/graph embeddings (i.e. $z_{\mathcal{V}}$ or $z_G$), which defines a supervised loss $\mathcal{L}_{sup}$ intended to be minimized. This supervised task also guarantees that the meaningful semantic information is not eliminated by the MI minimization scheme. For structure-attribute disentanglement, on one hand, the reconstruction loss $\mathcal{L}_{recon}$ is minimized to distill the information of the graph structure into the structure embeddings. On the other hand, the MI minimization is conducted in an adversarial manner, in which the discriminator T (defined in Eq. 3) is trained to maximize $I_{S,A}$, while the GNN encoder seeks to minimize that term. The overall objective is:\n\n$\min_{\text{GNN}, F, C} \max_{T} \mathcal{L}_{sup} + \lambda_1 \mathcal{L}_{recon} + \lambda_2 I_{S,A}$, (12)\n\nwhere $\lambda_1$ and $\lambda_2$ are the trade-off parameters balancing the different objectives." }, { "heading": "3.3 SAD-METRIC: STRUCTURE-ATTRIBUTE DISENTANGLEMENT METRIC", "text": "In order to quantify the extent of structure-attribute disentanglement achieved by various models, we devise a classifier-based metric to measure the learnt graph representations. Inspired by a previous work (Higgins et al., 2017), we focus on two desired properties of the disentangled representations: (1) independence: a representation vector is expected to be divided into several channels whose interdependency is as low as possible; (2) interpretability: each of these channels corresponds to a single latent factor of the data.\n\nTo derive such a metric, for a given graph G, we first sample a binary label $y_G$ from the uniform distribution over {0, 1}. According to this label, either the graph's structure or its attributes are modified while the other factor is kept fixed ($y_G = 0$: fix G's structure; $y_G = 1$: fix G's attributes), which forms the counterpart graph G′. In practice, we modify a graph's structure by randomly dropping one of its edges (Rong et al., 2019), and the graph's attributes are modified by randomly altering the attribute of a node (implementation details are stated in Sec. B). Using the model to be evaluated, the embeddings of graphs G and G′ (i.e. $z_G$ and $z_{G'}$) are inferred, and we denote the absolute difference between these two embeddings as $\Delta z_G$ (i.e. $\Delta z_G = |z_G - z_{G'}|$).
When the structure and attribute information are disentangled in the graph embeddings (i.e. independence and interpretability hold), the elements corresponding to the fixed factor should possess lower values in $\Delta z_G$, which makes it easier to predict $y_G$ with a low-capacity classifier (e.g. a linear classifier) upon $\Delta z_G$. Based on this fact, we employ the prediction accuracy of $y_G$ on a set of graphs as the structure-attribute disentanglement metric (SAD-Metric). The whole evaluation procedure is summarized in Algorithm 1.\n\nIn order to measure the extent of structure-attribute disentanglement in node embeddings, we further design a node-centric metric, named node-SAD-Metric. The detailed definition and experimental results for this metric are presented in Sec. E." }, { "heading": "3.4 THEORETICAL ANALYSIS", "text": "In this section, we theoretically illustrate that the disentanglement of the structure and attribute representations is able to ease the burden of model optimization by shrinking the solution space.\n\nFor an attributed graph $G = (\mathcal{V}, \mathcal{E}, \mathcal{A})$, we can regard each type of node attribute as an attribute node, which transforms graph G into another form, $G = (\mathcal{V}, \mathcal{V}_A, \mathcal{E}, \mathcal{E}_A)$ ($\mathcal{V}_A$: the set of all attribute nodes, $\mathcal{E}_A$: the edges connecting normal and attribute nodes), as shown in Fig. 2(a). This graph can be divided into two parts: (1) a bipartite graph reflecting the attribute information, $G_A = (\mathcal{V}, \mathcal{V}_A, \mathcal{E}_A)$ (Fig. 2(b)), and (2) an unattributed graph depicting the graph structure, $G_S = (\mathcal{V}, \mathcal{E})$ (Fig. 2(c)). We first give the definitions of a topological space and the spaces for graphs $G_A$ and $G_S$.\n\nDefinition 1. A topological space $T = (X, \mathcal{N}(x))$ is composed of a set X and a neighborhood function $\mathcal{N}(x)$ mapping each $x \in X$ to a subset of X.\n\nDefinition 2. The topological space for graph $G_A$ is $T_A = (\mathcal{V} \cup \mathcal{V}_A, \mathcal{N}_A(v))$, and the topological space for graph $G_S$ is $T_S = (\mathcal{V}, \mathcal{N}_S(v))$.\n\nWe consider two ways of graph embedding. The first way embeds graphs $G_A$ and $G_S$ into a common embedding space, i.e. learning a function $f : T_A \times T_S \to Z$, while the second way embeds the two graphs into separate embedding spaces, i.e. learning a function $\tilde{f} : T_A \times T_S \to Z_A \times Z_S$. When the dimension of the embedding space is identical in these two ways, we propose that the dimension of the solution space of function $\tilde{f}$ is smaller.\n\nProposition 1. If it holds that $\dim(Z) = \dim(Z_A) + \dim(Z_S)$, we have $\dim(\tilde{f}) \le \dim(f)$.\n\nThe detailed proof of this proposition is provided in Sec. A. Proposition 1 tells us that the structure-attribute disentanglement of graph representations narrows the solution space of the model, which enables us to train the graph encoder more effectively." }, { "heading": "4 RELATED WORK", "text": "Graph Representation Learning. The early efforts towards learning low-dimensional embeddings of nodes/graphs focused on optimizing the objectives induced by random walk statistics (Perozzi et al., 2014; Tang et al., 2015; Grover & Leskovec, 2016) or matrix factorization (Cao et al., 2015; Wang et al., 2016). By contrast, Graph Neural Networks (GNNs) (Scarselli et al., 2008) derive embedding vectors via a neighborhood aggregation mechanism. Gilmer et al. (2017) suggested that most GNNs perform a Message Passing and a Readout phase, and different techniques (Kipf & Welling, 2017; Hamilton et al., 2017; Velickovic et al., 2018; Ying et al., 2018; Zhang et al., 2018; Xu et al., 2019) have been explored to enhance the effectiveness of these two phases.
Unlike these methods, which mix the information of graph structure and node attributes into a single representation, our approach aims to disentangle these two factors in node/graph representations.\n\nLearning Disentangled Representations. A disentangled representation is expected to separate the distinct and informative factors of variation in the data (Bengio et al., 2013). Some previous works sought to achieve this goal under the guidance of weak supervision (Hinton et al., 2011; Kulkarni et al., 2015; Siddharth et al., 2017). In another line of research, representation disentanglement is pursued by various unsupervised/self-supervised techniques (Chen et al., 2016; Higgins et al., 2017; Kim & Mnih, 2018; Chen et al., 2018). For graph-structured data, a recent work (Ma et al., 2019) disentangled the latent factors of graph structure via a neighborhood routing mechanism. The proposed structure-attribute disentanglement is orthogonal to their work and also more general.\n\nMeasuring Disentangled Representations. The quantitative measurement of representation disentanglement is essential to compare different disentangling algorithms. Given the true factors of variation, various disentanglement metrics have been designed based on classifiers (Karaletsos et al., 2016; Higgins et al., 2017; Kim & Mnih, 2018), mutual information estimation (Chen et al., 2018; Ridgeway & Mozer, 2018) or distribution entropy (Eastwood & Williams, 2018). The proposed SAD-Metric follows the embedding-based evaluation protocol of previous works, while it is novel in its data processing scheme, which is tailored to graph-structured data." }, { "heading": "5 EXPERIMENTS", "text": "" }, { "heading": "5.1 EXPERIMENTAL SETUP", "text": "Model configurations. For all approaches evaluated in Secs. 5.2 and 5.3, we equip them with a GCN (Kipf & Welling, 2017) (hidden units' dimension: 300, readout function: mean pooling) with two/three layers for node/graph classification, respectively. The performance with other GNNs, i.e. GraphSAGE (Hamilton et al., 2017), GAT (Velickovic et al., 2018) and GIN (Xu et al., 2019), is reported in Sec. 5.5. For the Input-SAD model, a two-layer fully-connected network (hidden units' dimension: 300, activation function: ReLU) is adopted to encode the attribute component of the graph. For the Embed-SAD model, the discriminator of the NCE estimator is built with an encoder-and-dot architecture: $T(x_1, x_2) = f(x_1)^\top f(x_2)$, where $f(\cdot)$ is modeled by two linear layers with a ReLU nonlinearity in between, and it projects the original feature vector to an inner-product space.\n\nTraining details. In the node classification experiments, an Adam optimizer (Kingma & Ba, 2015) (learning rate: $1 \times 10^{-3}$) is employed to train the model for 1000 epochs, and, for the graph classification tasks, we use an Adam optimizer (learning rate: $1 \times 10^{-3}$, batch size: 32) to perform optimization for 100 epochs. For the negative sampling in the NCE estimator (Eq. 3), we utilize all the nodes other than the selected one as negative samples (i.e. $K = |\mathcal{V}| - 1$), while, on three large networks (i.e. PubMed, Coauthor-CS and Coauthor-Physics), 3000 nodes serve as negative samples. Unless otherwise stated, the trade-off parameters $\lambda_1$ and $\lambda_2$ are set to 1 and 0.1 (a sensitivity analysis is in Sec. 5.5), which is determined by grid search on the validation sets of the three citation networks.\n\nPerformance comparisons. We combine the proposed Input-SAD and Embed-SAD models with four kinds of GNNs (GCN, GraphSAGE, GAT and GIN) to verify their effectiveness.
Furthermore, we compare our method with five existing approaches that seek to promote GNNs' representation learning capability, i.e. Multi-task (Tran, 2018), GraphMix (Verma et al., 2019), DropEdge (Rong et al., 2019), DisenGNN (Ma et al., 2019) and InitRes (Chen et al., 2020) (detailed settings are in Sec. B). Among them, Multi-task and DisenGNN relate to our method: the former performs link prediction and node/graph classification simultaneously; the latter disentangles node representations based on the different generative causes of edges." }, { "heading": "5.2 EXPERIMENTS OF NODE CLASSIFICATION", "text": "Datasets. We employ three citation networks (i.e. Cora, CiteSeer and PubMed (Sen et al., 2008)) and two larger coauthor networks (i.e. Coauthor-CS and Coauthor-Physics (Shchur et al., 2018)) for semi-supervised node classification (20 labeled nodes per category). The node attributes of the three citation networks are the bag-of-words representations of the documents, and the nodes in the two coauthor networks are featured by the paper keywords of each author's papers. Edge attributes are not included in these datasets. The details about the dataset statistics, dataset split and evaluation protocol are provided in Sec. C.\n\nResults. In Tab. 1, we report the performance of different methods on five standard node classification benchmarks. Based on a two-layer GCN model, the proposed Embed-SAD achieves the highest test accuracy on four of the five tasks among all approaches. The performance of Input-SAD is inferior on these tasks, which, we think, is mainly ascribed to its failure to fully capture the structure and attribute information of each node." }, { "heading": "5.3 EXPERIMENTS OF GRAPH CLASSIFICATION", "text": "Datasets. For graph classification, we adopt eight single-task/multi-task classification datasets in MoleculeNet (Wu et al., 2018), and, following previous findings (Chen et al., 2012), a scaffold split scheme is utilized for the dataset split. For each molecular graph sample, the type and chirality of an atom serve as the node attributes, and the type and direction of a bond constitute the edge attributes. For evaluation, five independent runs with distinct random seeds are conducted, and the average performance is reported. More dataset statistics are in Sec. C.\n\nResults. Tab. 2 shows the comparisons between our methods and existing techniques. Embed-SAD outperforms DisenGNN, a closely related work, on six of the eight datasets, and a 1.1% performance gain is achieved in terms of average ROC-AUC, which illustrates the effectiveness of structure-attribute disentanglement. Input-SAD performs well on this graph classification benchmark and obtains superior performance on the SIDER and BACE datasets, which, we think, is because the graph pooling operation is able to supplement the missing structure or attribute information of each node from other nodes." }, { "heading": "5.4 QUANTITATIVE EVALUATION OF STRUCTURE-ATTRIBUTE DISENTANGLEMENT", "text": "Evaluation details. We employ the same datasets and dataset split scheme as in Sec. 5.3. We use a GCN model with random parameters (random-GCN) and a GCN model pre-trained by graph classification (GCN-baseline) as baseline models. The three disentanglement models (DisenGNN, Input-SAD and Embed-SAD) also establish appropriate priors by performing graph classification on the training set. A linear binary classifier is built upon the model to be evaluated, and the classifier is trained with the graphs in the training set and evaluated on the test graphs.
All results are averaged over five independent runs.\n\nResults. We report the SAD-Metric scores of the five methods in Tab. 3. The poor performance of random-GCN shows that it can hardly disentangle the structure and attribute information of a graph, and the graph classification pre-training endows the GCN-baseline model with a better disentanglement capability. Compared to DisenGNN, the proposed Input-SAD and Embed-SAD models better disentangle a graph's structure and attribute information. The Embed-SAD model achieves the best disentanglement of graph representations on seven of the eight datasets." }, { "heading": "5.5 ANALYSIS", "text": "Effect of two objectives on Embed-SAD. In Tabs. 1 and 2, we analyze the effect of the reconstruction loss and the structure-attribute mutual information minimization. When removing either of these two objectives, the model's performance is impaired, which demonstrates their complementary relation.\n\nResults of different GNNs. In Fig. 3(a), we combine the three representation disentanglement techniques with four kinds of GNNs, and the average test ROC-AUC over the eight datasets of MoleculeNet is plotted (detailed results are in Sec. D). Embed-SAD performs best in all four configurations, and the performance of Input-SAD and DisenGNN is comparable.\n\nSensitivity of trade-off parameters $\lambda_1$, $\lambda_2$. Figs. 3(b), (c) show the performance of Embed-SAD on the citation networks using different trade-off weights. We can observe that stable performance gains are obtained when the values of $\lambda_1$ and $\lambda_2$ are around 1 and 0.1, respectively.\n\nVisualization. In Figs. 4(a), (b), (c), we visualize the absolute values of the correlations between the learnt graph embedding's elements on the BBBP dataset. Among the three models, Embed-SAD suppresses the correlation between the structure and attribute embeddings to the greatest extent. We further visualize the embedding distributions using t-SNE (Maaten & Hinton, 2008) in Figs. 4(d), (e). Both Input-SAD and Embed-SAD separate the two kinds of embeddings into distinct spaces, and the attribute embeddings possess stronger semantic discriminability compared to their structure counterparts." }, { "heading": "6 CONCLUSIONS AND FUTURE WORK", "text": "We study node/graph representation learning with Structure-Attribute Disentanglement (GraphSAD) in both the input and the embedding space. We further design a quantitative metric to measure such disentanglement. On node and graph classification benchmark datasets, we empirically verify our method's superior performance over existing techniques.\n\nOur future explorations will involve improving the learning manners for structure-attribute disentangled representations, evaluating the proposed models on more tasks (e.g. regression-based tasks) and disentangling graphs in other ways." }, { "heading": "A PROOF OF PROPOSITION 1", "text": "Considering the search for the embedding function as a combinatorial search problem (Bruynooghe, 1981; Chen et al., 2001; Kotthoff, 2016), we can derive the dimension of the solution space for the two types of embedding functions, i.e. $f : T_A \times T_S \to Z$ and $\tilde{f} : T_A \times T_S \to Z_A \times Z_S$, as follows:\n\n$\dim(f) = \binom{\dim(T_A) + \dim(T_S)}{\dim(Z)}$, (13)\n\n$\dim(\tilde{f}) = \binom{\dim(T_A)}{\dim(Z_A)} \cdot \binom{\dim(T_S)}{\dim(Z_S)}$. (14)\n\nUnder the condition that $\dim(Z) = \dim(Z_A) + \dim(Z_S)$, the basic property of combinations gives the following inequality:\n\n$\binom{\dim(T_A)}{\dim(Z_A)} \cdot \binom{\dim(T_S)}{\dim(Z_S)} \le \binom{\dim(T_A) + \dim(T_S)}{\dim(Z)}$, (15)\n\nwhich deduces that:\n\n$\dim(\tilde{f}) \le \dim(f)$. (16)
This conclusion shows that the solution space of the graph embedding function with structure-attribute disentanglement is narrower than that of the entangled counterpart." }, { "heading": "B MORE IMPLEMENTATION DETAILS", "text": "Modeling the conditional distribution for edge reconstruction. For the graphs that only provide the existence of edges (i.e. $N_t = 1$), we utilize an inner-product decoder for edge prediction:\n\n$p(y = 0 \mid z_{u,S}, z_{v,S}) = 1 - \sigma(z_{u,S}^\top z_{v,S}), \quad p(y = 1 \mid z_{u,S}, z_{v,S}) = \sigma(z_{u,S}^\top z_{v,S})$, (17)\n\nwhere $\sigma(\cdot)$ denotes the sigmoid function. For the graphs owning more than one type of edges (i.e. $N_t > 1$), we concatenate the embedding vectors of the two nodes and use a linear classifier to predict the type of the edge between them:\n\n$p(y \mid z_{u,S}, z_{v,S}) = F([z_{u,S}, z_{v,S}])$, (18)\n\nwhere $F : \mathbb{R}^{2\delta} \to \mathbb{R}^{N_t + 1}$ is the edge prediction function modeled by a linear classifier network.\n\nAttribute modification for SAD-Metric. To modify the attributes of a graph, we randomly select a node in this graph and alter its attribute. Specifically, for a graph sample (i.e. a molecule) in the datasets of MoleculeNet, we modify its attributes by randomly selecting an atom and resetting the atom's type and chirality to other valid values. Such a technique can be applied to any graph dataset in which node attributes are discretely represented.\n\nDetailed settings for the compared methods. In the experiments, we compare our method with five existing techniques that aim to improve graph representation learning, i.e. Multi-task (Tran, 2018), GraphMix (Verma et al., 2019), DropEdge (Rong et al., 2019), DisenGNN (Ma et al., 2019) and InitRes (Chen et al., 2020). The detailed settings of these approaches are as follows:\n\n• Multi-task. Node embeddings are employed for both node classification and link prediction, and the loss term for link prediction possesses a weight of 0.1.\n• GraphMix. In this approach, the parameter α controls the Beta distribution from which mixup ratios are sampled, and we fix this parameter as 1.0 in all experiments.\n• DropEdge. During training, 10% of the edges of each graph are randomly dropped to mitigate the over-fitting and over-smoothing problems.\n• DisenGNN. For each GNN layer, the feature vector of each node is divided into 5 channels, and 5 iterations of neighborhood routing are performed.\n• InitRes. This method constructs a residual connection from the initial node representations to each GNN layer. We set the residual ratio as 0.2, such that the final representation of each node retains at least 20% of the input feature." }, { "heading": "C MORE EXPERIMENTAL DETAILS", "text": "Citation networks. In Tab. 4, we provide the detailed information about the three citation networks, in which 20 labeled nodes per category are used for training, and there are 500 and 1000 nodes for validation and test, respectively. We adopt the standard dataset split proposed in Yang et al. (2016) for all experiments on the citation networks. All the reported results are averaged over 100 independent runs using different random seeds.\n\nCoauthor networks. The statistics of the two coauthor networks are listed in Tab. 4. Following Shchur et al. (2018), we use 20 labeled nodes per category as the training set, 30 nodes per category as the validation set, and the rest as the test set. The reported accuracy is averaged over 30 random train/validation/test splits, and, for each split, 50 independent runs are performed.\n\nMolecule datasets. In Tab. 5, we present the detailed statistics of the eight molecule datasets in MoleculeNet (Wu et al., 2018).
On each dataset, a model predicts one or more properties of various molecules, where the prediction of each property is a binary classification task." }, { "heading": "D MORE RESULTS ON MOLECULENET", "text": "In Tabs. 6, 7 and 8, we combine different techniques with three GNNs (i.e. GraphSAGE (Hamilton et al., 2017), GAT (Velickovic et al., 2018) and GIN (Xu et al., 2019)) and evaluate the various models’ performance on MoleculeNet, in which we set the number of attention heads as 5 for GAT. For all three experiments, the Embed-SAD model outperforms the other methods in terms of average test ROC-AUC. The Input-SAD model achieves superior performance on the ClinTox and BACE datasets, and its overall performance is comparable with DisenGNN." }, { "heading": "E NODE-SAD-METRIC: NODE-CENTRIC STRUCTURE-ATTRIBUTE DISENTANGLEMENT METRIC", "text": "E.1 DEFINITION\n\nThe node-SAD-Metric measures the extent of structure-attribute disentanglement in node embeddings. Similar to the graph-level SAD-Metric, this node-level disentanglement metric evaluates two properties of node embeddings, i.e. independence and interpretability (see Sec. 3.3 for detailed definitions). In this case, after structure and attribute modification, the nodes of a graph are classified into three types: (1) the nodes whose one-hop structure is modified, (2) the nodes whose attribute is modified and (3) the nodes without one-hop structure or attribute modification. In practice, we first randomly drop 20% of the edges in a graph and then randomly modify the attributes of 20% of the remaining nodes whose one-hop structure has not been modified. As in the SAD-Metric (Sec. 3.3), we employ the absolute difference between the node embeddings before and after the above modification to perform a three-way classification. Based on this embedding difference, a linear classifier is trained to classify which type of modification is performed on a node, and the prediction accuracy of this node classification task serves as the node-SAD-Metric.\n\nE.2 EXPERIMENTAL RESULTS\n\nSetups. As in Sec. 5.4, we use the MoleculeNet dataset (Wu et al., 2018) and the scaffold split scheme (Chen et al., 2012) for this experiment. Also, the settings of the five studied models are identical to those in Sec. 5.4. A linear classifier is trained with the graphs in the training split and evaluated on the graphs in the test split. All results are averaged over five independent runs using different random seeds.\nResults. Tab. 9 reports the structure-attribute disentanglement performance of five models in terms of the node-SAD-Metric. The random-GCN baseline poorly disentangles the structure and attribute information in node embeddings, and GCN-baseline performs better thanks to pre-training on the graph classification task. Among the three methods for disentangled representation learning, the Embed-SAD model achieves the best performance on seven of eight tasks." } ]
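The edge-reconstruction decoders of Eqs. (17) and (18) in Appendix B are compact enough to sketch directly. Below is a minimal PyTorch rendering; the tensor shapes, function and class names, and module layout are our own illustrative assumptions rather than the paper's released code.

```python
import torch

def edge_existence_prob(z_s: torch.Tensor, edges: torch.Tensor) -> torch.Tensor:
    """Inner-product decoder, Eq. (17): p(y=1 | z_u, z_v) = sigmoid(z_u^T z_v).

    z_s:   (num_nodes, delta) structure embeddings.
    edges: (num_edges, 2) long tensor of (u, v) node-index pairs.
    Returns a (num_edges,) tensor of edge-existence probabilities.
    """
    z_u, z_v = z_s[edges[:, 0]], z_s[edges[:, 1]]
    return torch.sigmoid((z_u * z_v).sum(dim=-1))

class EdgeTypeDecoder(torch.nn.Module):
    """Linear decoder, Eq. (18), for graphs with N_t > 1 edge types: concatenate
    the two endpoints' structure embeddings and classify over N_t + 1 types."""
    def __init__(self, delta: int, num_edge_types: int):
        super().__init__()
        self.classifier = torch.nn.Linear(2 * delta, num_edge_types + 1)

    def forward(self, z_s: torch.Tensor, edges: torch.Tensor) -> torch.Tensor:
        pair = torch.cat([z_s[edges[:, 0]], z_s[edges[:, 1]]], dim=-1)
        return self.classifier(pair)  # logits over the N_t + 1 edge types
```

Note that p(y = 0) in Eq. (17) needs no separate parameterization: it is simply one minus the decoder output.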
2020
GRAPHSAD: LEARNING GRAPH REPRESENTATIONS
SP:142a01056d20ddab91353b9d2ec07925f82d10ea
[ "The paper proposes a method to make neural networks more accurate and interpretable by replacing their final layers with a probabilistic decision tree. As a result, the network can produce a sequence of decisions that leads to the final classification result, given an input image. The method is trained with soft decisions by assigning probabilities to each leaf, which are associated with a single class. The tree decision hyperplanes are constructed automatically from the backbone networks final dense layer and finetuned. The fact that decisions are soft solves the differentiablility problem of decisions as in various other similar papers, cited or uncited (more below).", "Aim to improve the interpretability and the accuracy of the neural network, this paper takes a step further on the integration of NN with a decision tree. It will replace the final linear layer of the NN with a decision tree induced by pre-trained model weights. It takes advantage of both hard and soft decision trees and designs suitable tree supervision loss thereon. Extensive experiments verify the design choice of the proposed components. On both small-scale and large-scale datasets, it beats the decision tree counterparts. Also, on the aspects of generalization and interpretability, it shows the strength compared to NN." ]
Machine learning applications such as finance and medicine demand accurate and justifiable predictions, barring most deep learning methods from use. In response, previous work combines decision trees with deep learning, yielding models that (1) sacrifice interpretability for accuracy or (2) sacrifice accuracy for interpretability. We forgo this dilemma by jointly improving accuracy and interpretability using Neural-Backed Decision Trees (NBDTs). NBDTs replace a neural network’s final linear layer with a differentiable sequence of decisions and a surrogate loss. This forces the model to learn high-level concepts and lessens reliance on highly uncertain decisions, yielding (1) accuracy: NBDTs match or outperform modern neural networks on CIFAR, ImageNet and better generalize to unseen classes by up to 16%. Furthermore, our surrogate loss improves the original model’s accuracy by up to 2%. NBDTs also afford (2) interpretability: improving human trust by clearly identifying model mistakes and assisting in dataset debugging. Code and pretrained NBDTs are at github.com/alvinwan/neural-backed-decision-trees.
[ { "affiliations": [], "name": "Alvin Wan" }, { "affiliations": [], "name": "Lisa Dunlap" }, { "affiliations": [], "name": "Daniel Ho" }, { "affiliations": [], "name": "Jihan Yin" }, { "affiliations": [], "name": "Scott Lee" }, { "affiliations": [], "name": "Suzanne Petryk" }, { "affiliations": [], "name": "Sarah Adel Bargal" }, { "affiliations": [], "name": "Joseph E. Gonzalez" } ]
[ { "authors": [ "Karim Ahmed", "Mohammadharis Baig", "Lorenzo Torresani" ], "title": "Network of experts for large-scale image categorization", "venue": null, "year": 2016 }, { "authors": [ "Stephan Alaniz", "Zeynep Akata" ], "title": "XOC: explainable observer-classifier for explainable binary decisions", "venue": "CoRR, abs/1902.01780,", "year": 2019 }, { "authors": [ "Seungryul Baek", "Kwang In Kim", "Tae-Kyun Kim" ], "title": "Deep convolutional decision jungle for image classification", "venue": null, "year": 2003 }, { "authors": [ "Arunava Banerjee" ], "title": "Initializing neural networks using decision trees", "venue": null, "year": 1990 }, { "authors": [ "Arunava Banerjee" ], "title": "Initializing neural networks using decision trees", "venue": "In Proceedings of the International Workshop on Computational Learning and Natural Learning Systems,", "year": 1994 }, { "authors": [ "Ufuk Can Biçici", "Cem Keskin", "Lale Akarun" ], "title": "Conditional information gain networks", "venue": "24th International Conference on Pattern Recognition (ICPR),", "year": 2018 }, { "authors": [ "Olcay Boz" ], "title": "Converting a trained neural network to a decision tree dectext - decision tree extractor", "venue": "In ICMLA,", "year": 2000 }, { "authors": [ "Clemens-Alexander Brust", "Joachim Denzler" ], "title": "Integrating domain knowledge: using hierarchies to improve deep classifiers", "venue": "In Asian Conference on Pattern Recognition,", "year": 2019 }, { "authors": [ "Diogo V Carvalho", "Eduardo M Pereira", "Jaime S Cardoso" ], "title": "Machine learning interpretability", "venue": "A survey on methods and metrics. Electronics,", "year": 2019 }, { "authors": [ "Mark Craven", "Jude W Shavlik" ], "title": "Extracting tree-structured representations of trained networks. In Advances in neural information processing", "venue": null, "year": 1996 }, { "authors": [ "Mark W Craven", "Jude W Shavlik" ], "title": "Using sampling and queries to extract rules from trained neural networks. In Machine learning proceedings", "venue": null, "year": 1994 }, { "authors": [ "Darren Dancey", "David McLean", "Zuhair Bandar" ], "title": "Decision tree extraction from trained neural networks", "venue": null, "year": 2004 }, { "authors": [ "J. Deng", "W. Dong", "R. Socher", "L.-J. Li", "K. Li", "L. Fei-Fei" ], "title": "ImageNet: A Large-Scale Hierarchical Image Database", "venue": "In CVPR09,", "year": 2009 }, { "authors": [ "Jia Deng", "Jonathan Krause", "Alexander C Berg", "Li Fei-Fei" ], "title": "Hedging your bets: Optimizing accuracy-specificity trade-offs in large scale visual recognition", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2012 }, { "authors": [ "Finale Doshi-Velez", "Been Kim" ], "title": "Towards a rigorous science of interpretable machine learning", "venue": "arXiv preprint arXiv:1702.08608,", "year": 2017 }, { "authors": [ "Dheeru Dua", "Casey Graff" ], "title": "UCI machine learning repository, 2017", "venue": "URL http://archive. ics.uci.edu/ml", "year": 2017 }, { "authors": [ "Nicholas Frosst", "Geoffrey E. Hinton" ], "title": "Distilling a neural network into a soft decision", "venue": "tree. 
CoRR,", "year": 2017 }, { "authors": [ "Yanming Guo", "Yu Liu", "Erwin M Bakker", "Yuanhao Guo", "Michael S Lew" ], "title": "Cnn-rnn: a large-scale hierarchical image classification framework", "venue": "Multimedia Tools and Applications,", "year": 2018 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeff Dean" ], "title": "Distilling the knowledge in a neural network", "venue": "arXiv preprint arXiv:1503.02531,", "year": 2015 }, { "authors": [ "Kelli Humbird", "Luc Peterson", "Ryan McClarren" ], "title": "Deep neural network initialization with decision trees", "venue": "IEEE Transactions on Neural Networks and Learning Systems,", "year": 2018 }, { "authors": [ "Irena Ivanova", "Miroslav Kubat" ], "title": "Initialization of neural networks by means of decision trees", "venue": "Knowledge-Based Systems,", "year": 1995 }, { "authors": [ "Irena Ivanova", "Miroslav Kubat" ], "title": "Decision-tree based neural network (extended abstract)", "venue": "In Machine Learning:", "year": 1995 }, { "authors": [ "Cem Keskin", "Shahram Izadi" ], "title": "Splinenets: Continuous neural decision graphs", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Peter Kontschieder", "Madalina Fiterau", "Antonio Criminisi", "Samuel Rota Bulo" ], "title": "Deep neural decision forests", "venue": "In The IEEE International Conference on Computer Vision (ICCV),", "year": 2015 }, { "authors": [ "R. Krishnan", "G. Sivakumar", "P. Bhattacharya" ], "title": "Extracting decision trees from trained neural networks", "venue": "Pattern Recognition,", "year": 2009 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report,", "year": 2009 }, { "authors": [ "Ya Le", "Xuan Yang" ], "title": "Tiny imagenet visual recognition challenge", "venue": null, "year": 2015 }, { "authors": [ "Yann LeCun", "Corinna Cortes", "CJ Burges" ], "title": "Mnist handwritten digit database", "venue": "ATT Labs [Online]. Available: http://yann. lecun. com/exdb/mnist,", "year": 2010 }, { "authors": [ "Zachary Chase Lipton" ], "title": "The mythos of model interpretability", "venue": "arXiv preprint arXiv:1606.03490,", "year": 2016 }, { "authors": [ "SM Lundberg", "G Erion", "H Chen", "A DeGrave", "JM Prutkin", "B Nair", "R Katz", "J Himmelfarb", "N Bansal", "S-i Lee" ], "title": "From local explanations to global understanding with explainable ai for trees, nat", "venue": "mach. intell.,", "year": 2020 }, { "authors": [ "Mason McGill", "Pietro Perona" ], "title": "Deciding how to decide: Dynamic routing in artificial neural networks", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Abdul Arfat Mohammed", "Venkatesh Umaashankar" ], "title": "Effectiveness of hierarchical softmax in large scale classification tasks", "venue": "In 2018 International Conference on Advances in Computing, Communications and Informatics (ICACCI),", "year": 2018 }, { "authors": [ "Calvin Murdock", "Zhen Li", "Howard Zhou", "Tom Duerig" ], "title": "Blockout: Dynamic model selection for hierarchical deep networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Venkatesh N. Murthy", "Vivek Singh", "Terrence Chen", "R. 
Manmatha", "Dorin Comaniciu" ], "title": "Deep decision network for multi-class image classification", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Vitali Petsiuk", "Abir Das", "Kate Saenko" ], "title": "Rise: Randomized input sampling for explanation of black-box models", "venue": "In Proceedings of the British Machine Vision Conference (BMVC),", "year": 2018 }, { "authors": [ "F Poursabzi-Sangdeh", "D Goldstein", "J Hofman", "J Vaughan", "H Wallach" ], "title": "Manipulating and measuring model interpretability", "venue": "In MLConf,", "year": 2018 }, { "authors": [ "Joseph Redmon", "Ali Farhadi" ], "title": "Yolo9000: better, faster, stronger", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Marco Tulio Ribeiro", "Sameer Singh", "Carlos Guestrin" ], "title": "why should I trust you?”: Explaining the predictions of any classifier", "venue": "In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data", "year": 2016 }, { "authors": [ "Samuel Rota Bulo", "Peter Kontschieder" ], "title": "Neural decision forests for semantic image labelling", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2014 }, { "authors": [ "Anirban Roy", "Sinisa Todorovic" ], "title": "Monocular depth estimation using neural regression forest", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "C Rudin" ], "title": "Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. manuscript based on c. rudin please stop explaining black box machine learning models for high stakes decisions", "venue": "In Proceedings of NeurIPS 2018 Workshop on Critiquing and Correcting Trends in Learning,", "year": 2018 }, { "authors": [ "Ramprasaath R Selvaraju", "Michael Cogswell", "Abhishek Das", "Ramakrishna Vedantam", "Devi Parikh", "Dhruv Batra" ], "title": "Grad-cam: Visual explanations from deep networks via gradient-based localization", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Noam Shazeer", "Azalia Mirhoseini", "Krzysztof Maziarz", "Andy Davis", "Quoc Le", "Geoffrey Hinton", "Jeff Dean" ], "title": "Outrageously large neural networks: The sparsely-gated mixture-of-experts layer", "venue": "arXiv preprint arXiv:1701.06538,", "year": 2017 }, { "authors": [ "Carlos N Silla", "Alex A Freitas" ], "title": "A survey of hierarchical classification across different application domains", "venue": "Data Mining and Knowledge Discovery,", "year": 2011 }, { "authors": [ "Karen Simonyan", "Andrea Vedaldi", "Andrew Zisserman" ], "title": "Deep inside convolutional networks: Visualising image classification models and saliency maps", "venue": "arXiv preprint arXiv:1312.6034,", "year": 2013 }, { "authors": [ "Chapman Siu" ], "title": "Transferring tree ensembles to neural networks", "venue": "In Neural Information Processing,", "year": 2019 }, { "authors": [ "Jost Tobias Springenberg", "Alexey Dosovitskiy", "Thomas Brox", "Martin A. 
Riedmiller" ], "title": "Striving for simplicity: The all convolutional net", "venue": "CoRR, abs/1412.6806,", "year": 2014 }, { "authors": [ "Mukund Sundararajan", "Ankur Taly", "Qiqi Yan" ], "title": "Axiomatic attribution for deep networks", "venue": "International Conference on Machine Learning (ICML)", "year": 2017 }, { "authors": [ "Ryutaro Tanno", "Kai Arulkumaran", "Daniel C. Alexander", "Antonio Criminisi", "Aditya Nori" ], "title": "Adaptive neural trees, 2019", "venue": null, "year": 2019 }, { "authors": [ "Ravi Teja Mullapudi", "William R. Mark", "Noam Shazeer", "Kayvon Fatahalian" ], "title": "Hydranets: Specialized dynamic architectures for efficient inference", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Andreas Veit", "Serge Belongie" ], "title": "Convolutional networks with adaptive inference graphs", "venue": "In The European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Mike Wu", "M Hughes", "Sonali Parbhoo", "F Doshi-Velez" ], "title": "Beyond sparsity: Tree-based regularization of deep models for interpretability", "venue": "Neural Information Processing Systems (NIPS) Conference. Transparent and Interpretable Machine Learning in Safety Critical Environments (TIML) Workshop,", "year": 2017 }, { "authors": [ "Brandon Yang", "Gabriel Bender", "Quoc V Le", "Jiquan Ngiam" ], "title": "Condconv: Conditionally parameterized convolutions for efficient inference", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Matthew D Zeiler", "Rob Fergus" ], "title": "Visualizing and understanding convolutional networks", "venue": "In European Conference on Computer Vision (ECCV),", "year": 2014 }, { "authors": [ "Jianming Zhang", "Zhe Lin", "Jonathan Brandt", "Xiaohui Shen", "Stan Sclaroff" ], "title": "Top-down neural attention by excitation backprop", "venue": "In European Conference on Computer Vision (ECCV),", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Many computer vision applications (e.g. medical imaging and autonomous driving) require insight into the model’s decision process, complicating applications of deep learning which are traditionally black box. Recent efforts in explainable computer vision attempt to address this need and can be grouped into one of two categories: (1) saliency maps and (2) sequential decision processes. Saliency maps retroactively explain model predictions by identifying which pixels most affected the prediction. However, by focusing on the input, saliency maps fail to capture the model’s decision making process. For example, saliency offers no insight for a misclassification when the model is “looking” at the right object for the wrong reasons. Alternatively, we can gain insight into the model’s decision process by breaking up predictions into a sequence of smaller semantically meaningful decisions as in rule-based models like decision trees. However, existing efforts to fuse deep learning and decision trees suffer from (1) significant accuracy loss, relative to contemporary models (e.g., residual networks), (2) reduced interpretability due to accuracy optimizations (e.g., impure leaves and ensembles), and (3) tree structures that offer limited insight into the model’s credibility.\nTo address these, we propose Neural-Backed Decision Trees (NBDTs) to jointly improve both (1) accuracy and (2) interpretability of modern neural networks, utilizing decision rules that preserve (3) properties like sequential, discrete decisions; pure leaves; and non-ensembled predictions. These properties in unison enable unique insights, as we show. We acknowledge that there is no universally-accepted definition of interpretability (Lundberg et al., 2020; Doshi-Velez & Kim, 2017; Lipton, 2016), so to show interpretability, we adopt a definition offered by Poursabzi-Sangdeh et al. (2018): A model is interpretable if a human can validate its prediction, determining when the model has made a sizable mistake. We picked this definition for its importance to downstream benefits we can evaluate, specifically (1) model or dataset debugging and (2) improving human trust. To accomplish this, NBDTs replace the final linear layer of a neural network with a differentiable oblique decision tree and, unlike its predecessors (i.e. decision trees, hierarchical classifiers), uses a hierarchy derived from model parameters, does not employ a hierarchical softmax, and can be created from any existing classification neural network without architectural modifications. These improvements\n⇤denotes equal contribution\ntailor the hierarchy to the network rather than overfit to the feature space, lessens the decision tree’s reliance on highly uncertain decisions, and encourages accurate recognition of high-level concepts. These benefits culminate in joint improvement of accuracy and interpretability. Our contributions:\n1. We propose a tree supervision loss, yielding NBDTs that match/outperform and outgeneralize modern neural networks (WideResNet, EfficientNet) on ImageNet, TinyImageNet200, and CIFAR100. Our loss also improves the original model by up to 2%.\n2. We propose alternative hierarchies for oblique decision trees – induced hierarchies built using pre-trained neural network weights – that outperform both data-based hierarchies (e.g. built with information gain) and existing hierarchies (e.g. WordNet), in accuracy.\n3. 
We show NBDT explanations are more helpful to the user when identifying model mistakes, are preferred when using the model to assist in challenging classification tasks, and can be used to identify ambiguous ImageNet labels." }, { "heading": "2 RELATED WORKS", "text": "Saliency Maps. Numerous efforts (Springenberg et al., 2014; Zeiler & Fergus, 2014; Simonyan et al., 2013; Zhang et al., 2016; Selvaraju et al., 2017; Ribeiro et al., 2016; Petsiuk et al., 2018; Sundararajan et al., 2017) have explored the design of saliency maps identifying pixels that most influenced the model’s prediction. White-box techniques (Springenberg et al., 2014; Zeiler & Fergus, 2014; Simonyan et al., 2013; Selvaraju et al., 2017; Sundararajan et al., 2017) use the network’s parameters to determine salient image regions, and black-box techniques (Ribeiro et al., 2016; Petsiuk et al., 2018) determine pixel importance by measuring the prediction’s response to perturbed inputs. However, saliency does not explain the model’s decision process (e.g. Was the model confused early on, distinguishing between Animal and Vehicle? Or is it only confused between dog breeds?).\nTransfer to Explainable Models. Prior to the recent success of deep learning, decision trees were state-of-the-art on a wide variety of learning tasks and the gold standard for interpretability. Despite deep learning’s recency, study at the intersection of neural networks and decision trees dates back three decades, where neural networks were seeded with decision tree weights (Banerjee, 1990; 1994; Ivanova & Kubat, 1995a;b), and decision trees were created from neural network queries (Krishnan et al., 1999; Boz, 2000; Dancey et al., 2004; Craven & Shavlik, 1996; 1994), like distillation (Hinton et al., 2015). The modern analogs of both sets of work (Humbird et al., 2018; Siu, 2019; Frosst & Hinton, 2017) evaluate on feature-sparse, sample-sparse regimes such as the UCI datasets (Dua & Graff, 2017) or MNIST (LeCun et al., 2010) and perform poorly on standard image classification tasks.\nHybrid Models. Recent work produces hybrid decision tree and neural network models to scale up to datasets like CIFAR10 (Krizhevsky, 2009), CIFAR100 (Krizhevsky, 2009), TinyImageNet (Le & Yang, 2015), and ImageNet (Deng et al., 2009). One category of models organizes the neural network into a hierarchy, dynamically selecting branches to run inference (Veit & Belongie, 2018; McGill & Perona, 2017; Teja Mullapudi et al., 2018; Redmon & Farhadi, 2017; Murdock et al., 2016). However, these models use impure leaves, resulting in uninterpretable, stochastic paths. Other approaches fuse deep learning into each decision tree node: an entire neural network (Murthy et al., 2016), several layers (Murdock et al., 2016; Roy & Todorovic, 2016), a linear layer (Ahmed et al., 2016), or some other parameterization of neural network output (Kontschieder et al., 2015). These models see reduced interpretability by using k-way decisions with large k (via depth-2 trees) (Ahmed et al., 2016; Guo et al., 2018) or by employing an ensemble (Kontschieder et al., 2015; Ahmed et al., 2016), which is often referred to as a “black box” (Carvalho et al., 2019; Rudin, 2018).\nHierarchical Classification (Silla & Freitas, 2011). One set of approaches directly uses a pre-existing hierarchy over classes, such as WordNet (Redmon & Farhadi, 2017; Brust & Denzler, 2019; Deng et al.). However, conceptual similarity is not indicative of visual similarity. 
Other models build a hierarchy using the training set directly, via a classic data-dependent metric like Gini impurity (Alaniz & Akata, 2019) or information gain (Rota Bulo & Kontschieder, 2014; Biçici et al., 2018). These models are instead prone to overfitting, per (Tanno et al., 2019). Finally, several works introduce hierarchical surrogate losses (Wu et al., 2017; Deng et al., 2012), such as hierarchical softmax (Mohammed & Umaashankar, 2018), but as the authors note, these methods quickly suffer from major accuracy loss with more classes or higher-resolution images (e.g. beyond CIFAR10). We demonstrate that hierarchical classifiers attain higher accuracy without a hierarchical softmax." }, { "heading": "3 METHOD", "text": "Neural-Backed Decision Trees (NBDTs) replace a network’s final linear layer with a decision tree. Unlike classical decision trees or many hierarchical classifiers, NBDTs use path probabilities for inference (Sec 3.1) to tolerate highly-uncertain intermediate decisions, build a hierarchy from pretrained model weights (Sec 3.2 & 3.3) to lessen overfitting, and train with a hierarchical loss (Sec 3.4) to learn high-level decisions (e.g., Animal vs. Vehicle) significantly better." }, { "heading": "3.1 INFERENCE", "text": "Our NBDT first featurizes each sample using the neural network backbone; the backbone consists of all neural network layers before the final linear layer. Second, we run the final fully-connected layer as an oblique decision tree. However, (a) a classic decision tree cannot recover from a mistake early in the hierarchy and (b) just running a classic decision tree on neural features drops accuracy significantly, by up to 11% (Table 2). Thus, we present modified decision rules (Figure 1, B):\n\n1. Seed oblique decision rule weights with neural network weights. An oblique decision tree supports only binary decisions, using a hyperplane for each decision. Instead, we associate a weight vector n_i with each node. For leaf nodes, where i = k ∈ [1, K], each n_i = w_k is a row vector from the fully-connected layer’s weights W ∈ R^{D×K}. For all inner nodes, where i ∈ [K+1, N], find all leaves k ∈ L(i) in node i’s subtree and average their weights: n_i = ∑_{k∈L(i)} w_k / |L(i)|.\n\n2. Compute node probabilities. Child probabilities are given by softmaxed inner products. For each sample x and node i, compute the probability of each child j ∈ C(i) using p(j|i) = SOFTMAX(s_i)[j], where s_i = (⟨n_j, x⟩)_{j∈C(i)} collects the inner products between x and each child’s weight vector.\n\n3. Pick a leaf using path probabilities. Inspired by Deng et al. (2012), consider a leaf, its class k, and its path from the root P_k. The probability of each node i ∈ P_k traversing the next node in the path C_k(i) ∈ P_k ∩ C(i) is denoted p(C_k(i)|i). Then, the probability of the leaf and its class k is\n\np(k) = ∏_{i∈P_k} p(C_k(i)|i) (1)\n\nIn soft inference, the final class prediction k̂ is defined over these class probabilities,\n\nk̂ = argmax_k p(k) = argmax_k ∏_{i∈P_k} p(C_k(i)|i) (2)\n\nOur inference strategy has two benefits: (a) Since the architecture is unchanged, the fully-connected layer can be run regularly (Table 5) or as decision rules (Table 1), and (b) unlike decision trees and other conditionally-executed models (Tanno et al., 2019; Veit & Belongie, 2018), our method can recover from a mistake early in the hierarchy given sufficient uncertainty in the incorrect path (Figure 1 C, Appendix Table 7). This inference mode bests classic tree inference (Appendix C.2)." 
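To make the decision rules of Sec 3.1 concrete (and to preview the induced hierarchy of Sec 3.2, described next), here is a minimal NumPy/SciPy sketch: it induces a binary hierarchy by agglomerative clustering of the normalized fully-connected rows, computes node weights as subtree-leaf averages, and runs soft inference over path probabilities. The function names, the Ward linkage choice, and the recursive traversal are our illustrative assumptions; the authors' released code at github.com/alvinwan/neural-backed-decision-trees differs in detail.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

def induce_hierarchy(W):
    """Sec 3.2: agglomerative clustering on normalized class vectors w_k / ||w_k||_2.
    W: (K, D) rows of the final fully-connected layer. Returns the SciPy linkage
    matrix Z, whose row i merges clusters Z[i, 0] and Z[i, 1] into node K + i."""
    W_norm = W / np.linalg.norm(W, axis=1, keepdims=True)
    return linkage(W_norm, method="ward")

def node_weights_and_children(W, Z):
    """Leaf i < K keeps weight w_i; inner node i >= K averages its subtree's leaves."""
    K = W.shape[0]
    leaves, children = {i: [i] for i in range(K)}, {}
    for i, (a, b) in enumerate(Z[:, :2].astype(int)):
        node = K + i
        children[node] = [a, b]
        leaves[node] = leaves[a] + leaves[b]
    n = {i: W[leaves[i]].mean(axis=0) for i in leaves}  # n_i = mean of leaf weights
    return n, children, 2 * K - 2                       # last merged node is the root

def soft_inference(x, W, Z):
    """Sec 3.1: p(k) = product over the root-to-leaf path of softmaxed child scores."""
    n, children, root = node_weights_and_children(W, Z)
    K = W.shape[0]
    p = np.zeros(K)

    def descend(node, prob):
        if node < K:                 # leaf: record its accumulated path probability
            p[node] = prob
            return
        scores = np.array([n[c] @ x for c in children[node]])  # inner products
        soft = np.exp(scores - scores.max()); soft /= soft.sum()
        for c, pc in zip(children[node], soft):
            descend(c, prob * pc)

    descend(root, 1.0)
    return p.argmax(), p             # Eq. (2) prediction and Eq. (1) probabilities
```

Because every node is visited once, soft inference costs the same order of inner products as the original linear layer, which is why the fully-connected layer can also be run unmodified.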
}, { "heading": "3.2 BUILDING INDUCED HIERARCHIES", "text": "Existing decision-tree-based methods use (a) hierarchies built with data-dependent heuristics like information gain or (b) existing hierarchies like WordNet. However, the former overfits to the data, and the latter focuses on conceptual rather than visual similarity: For example, by virtue of being an animal, Bird is closer to Cat than to Plane, according to WordNet. However, the opposite is true for visual similarity: by virtue of being in the sky, Bird is more visually similar to Plane than to Cat. Thus, to prevent overfitting and reflect visual similarity, we build a hierarchy using model weights.\nOur hierarchy requires pre-trained model weights. Take row vectors wk : k 2 [1,K], each representing a class, from the fully-connected layer weights W . Then, run hierarchical agglomerative clustering on the normalized class representatives wk/kwkk2. Agglomerative clustering decides which nodes and groups of nodes are iteratively paired. As described in Sec 3.1, each leaf node’s weight is a row vector wk 2 W (Figure 2, Step B) and each inner node’s weight ni is the average of its leaf node’s weights (Figure 2, Step C). This hierarchy is the induced hierarchy (Figure 2)." }, { "heading": "3.3 LABELING DECISION NODES WITH WORDNET", "text": "WordNet is a hierarchy of nouns. To assign WordNet meaning to nodes, we compute the earliest common ancestor for all leaves in a subtree: For example, say Dog and Cat are two leaves that share a parent. To find WordNet meaning for the parent, find all ancestor concepts that Dog and Cat share, like Mammal, Animal, and Living Thing. The earliest shared ancestor is Mammal, so we assign Mammal to the parent of Dog and Cat. We repeat for all inner nodes.\nHowever, the WordNet corpus is lacking in concepts that are not themselves objects, like object attributes (e.g., Pencil and Wire are both cylindrical) and (b) abstract visual ideas like context (e.g., fish and boat are both aquatic). Many of these which are littered across our induced hierarchies (Appendix Figure 14). Despite this limitation, we use WordNet to assign meaning to intermediate decision nodes, with more sophisticated methods left to future work." }, { "heading": "3.4 FINE-TUNING WITH TREE SUPERVISION LOSS", "text": "Even though standard cross entropy loss separates representatives for each leaf, it is not trained to separate representatives for each inner node (Table 3, “None”). To amend this, we add a tree super-\nvision loss, a cross entropy loss over the class distribution of path probabilities Dnbdt = {p(k)}Kk=1 (Eq. 1) from Sec 3.1, with time-varying weights !t, t where t is the epoch count:\nL = t CROSSENTROPY(Dpred,Dlabel)| {z } Loriginal +!t CROSSENTROPY(Dnbdt,Dlabel)| {z } Lsoft\n(3)\nOur tree supervision loss Lsoft requires a pre-defined hierarchy. We find that (a) tree supervision loss damages learning speed early in training, when leaf weights are nonsensical. Thus, our tree supervision weight !t grows linearly from !0 = 0 to !T = 0.5 for CIFAR10, CIFAR100, and to !T = 5 for TinyImageNet, ImageNet; t 2 [0, 1] decays linearly over time. (b) We re-train where possible, fine-tuning with Lsoft only when the original model accuracy is not reproducible. 
(c) Unlike hierarchical softmax, our path-probability cross entropy loss L_soft disproportionately upweights decisions earlier in the hierarchy, encouraging accurate high-level decisions; this is reflected in our out-generalization of the baseline neural network by up to 16% on unseen classes (Table 6)." }, { "heading": "4 EXPERIMENTS", "text": "NBDTs obtain state-of-the-art results for interpretable models and match or outperform modern neural networks on image classification. We report results on different models (ResNet, WideResNet, EfficientNet) and datasets (CIFAR10, CIFAR100, TinyImageNet, ImageNet). We additionally conduct ablation studies to verify the hierarchy and loss designs, find that our training procedure improves the original neural network’s accuracy by up to 2%, and show that NBDTs improve generalization to unseen classes by up to 16%. All reported improvements are absolute." }, { "heading": "4.1 RESULTS", "text": "Small-scale Datasets. Our method (Table 1) matches or outperforms recent state-of-the-art neural networks. On CIFAR10 and TinyImageNet, NBDT accuracy falls within 0.15% of the baseline neural network. On CIFAR100, NBDT accuracy outperforms the baseline by ~1%.\nLarge-scale Dataset. On ImageNet (Table 3), NBDTs obtain 76.60% top-1 accuracy, outperforming the strongest competitor NofE by 15%. Note that we take the best competing results for any decision-tree-based method, but the strongest competitors hinder interpretability by using ensembles of models like a decision forest (DNDF, DCDJ) or by featuring shallow trees of only depth 2 (NofE)." }, { "heading": "4.2 ANALYSIS", "text": "Analyses show that our NBDT improvements are dominated by a significantly improved ability to distinguish higher-level concepts (e.g., Animal vs. Vehicle).\nComparison of Hierarchies. Table 2 shows that our induced hierarchies outperform alternatives. In particular, data-dependent hierarchies overfit, and the existing WordNet hierarchy focuses on conceptual rather than visual similarity.\nComparisons of Losses. Previous work suggests hierarchical softmax (Appendix C.1) is necessary for hierarchical classifiers. However, our results suggest otherwise: NBDTs trained with hierarchical softmax see ~3% less accuracy than with the tree supervision loss on TinyImageNet (Table 3).\nOriginal Neural Network. Per Sec 3.1, we can run the original neural network’s fully-connected layer normally, after training with the tree supervision loss. Using this, we find that the original neural network’s accuracy improves by up to 2% on CIFAR100 and TinyImageNet (Table 5).\nZero-Shot Superclass Generalization. We define a “superclass” to be the hypernym of several classes (e.g. Animal is a superclass of Cat and Dog). Using WordNet (per Sec 3.2), we (1) identify which superclasses each NBDT inner node is deciding between (e.g. Animal vs. Vehicle); (2) find unseen classes that belong to the same superclass, from a different dataset (e.g. pull Turtle images from ImageNet); and (3) evaluate the model to ensure the unseen class is classified into the correct superclass (e.g. ensure Turtle is classified as Animal). For an NBDT, this is straightforward: one of the inner nodes classifies Animal vs. Vehicle (Sec 3.3). For a standard neural network, we consider the superclass that the final prediction belongs to (i.e. when evaluating Animal vs. Vehicle on a Turtle image, the CIFAR-trained model may predict any CIFAR Animal class). See Appendix B.2 for details. Our NBDT consistently bests the original neural network by 8%+ (Table 6). 
When discerning Carnivore vs. Ungulate, NBDT outperforms the original neural network by 16%.\nMid-Training Hierarchy: We test NBDTs without using pre-trained weights, instead constructing hierarchies during training from the partially-trained network’s weights. The tree supervision loss with mid-training hierarchies reliably improves the original neural network’s accuracy, by up to ~0.6%, and the NBDT itself can match the original neural network’s accuracy (Table 4). However, this underperforms NBDT (Table 1), showing fully-trained weights are still preferred for hierarchy construction." }, { "heading": "5 INTERPRETABILITY", "text": "By breaking complex decisions into smaller intermediate decisions, decision trees provide insight into the decision process. However, when the intermediate decisions are themselves neural network predictions, extracting insight becomes more challenging. To address this, we adopt benchmarks and an interpretability definition offered by Poursabzi-Sangdeh et al. (2018): A model is interpretable if a human can validate its prediction, determining when the model has made a sizable mistake. To assess this, we adapt Poursabzi-Sangdeh et al. (2018)’s benchmarks to computer vision and show (a) humans can identify misclassifications with NBDT explanations more accurately than with saliency explanations (Sec 5.1), (b) a way to utilize NBDT’s entropy to identify ambiguous labels (Sec. 5.4), and (c) that humans prefer to agree with NBDT predictions when given a challenging image classification task (Sec. 5.2 & 5.3). Note that these analyses depend on three model properties that NBDT preserves: (1) discrete, sequential decisions, so that one path is selected; (2) pure leaves, so that one path picks one class; and (3) non-ensembled predictions, so that path-to-prediction attribution is discrete. In all surveys, we use CIFAR10-trained models with ResNet18 backbones." }, { "heading": "5.1 SURVEY: IDENTIFYING FAULTY MODEL PREDICTIONS", "text": "In this section we aim to answer a question posed in Poursabzi-Sangdeh et al. (2018): “How well can someone detect when the model has made a sizable mistake?” In this survey, each user is given 3 images, 2 of which are correctly classified and 1 of which is misclassified. Users must predict which image was incorrectly classified given (a) the model explanations but (b) not the final prediction. For saliency maps, this is a near-impossible task, as saliency usually highlights the main object in the image whether the prediction is wrong or right. However, hierarchical methods provide a sensible sequence of intermediate decisions that can be checked. This is reflected in the results: For each explainability technique, we collected 600 survey responses. When given saliency maps and class probabilities, only 87 predictions were correctly identified as wrong. In comparison, when given the NBDT series of predicted classes and child probabilities (e.g., “Animal (90%) → Mammal (95%)”, without the final leaf prediction), 237 images were correctly identified as wrong. Thus, respondents recognize mistakes from NBDT explanations nearly 3 times more accurately.\n\nAlthough NBDT provides more information than saliency maps about misclassification, a majority – the remaining 363 NBDT predictions – were not correctly identified. To explain this, we note that ~37% of all NBDT errors occur at the final binary decision, between two leaves; since we provide all decisions except the final one, these leaf errors would be impossible to distinguish." 
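Returning to the training objective of Sec 3.4: Eq. (3) is just a weighted sum of two cross-entropies with linearly scheduled coefficients. A minimal PyTorch sketch follows; the ω_T endpoint mirrors the CIFAR value quoted in Sec 3.4, while the function signature and the β_t = 1 − t form of the linear decay are our own assumptions.

```python
import torch
import torch.nn.functional as F

def tree_supervision_loss(logits, path_probs, labels, epoch, total_epochs,
                          omega_T: float = 0.5):
    """Eq. (3): L = beta_t * CE(D_pred, D_label) + omega_t * CE(D_nbdt, D_label).

    logits:     (N, K) usual class logits from the unchanged linear layer.
    path_probs: (N, K) NBDT path probabilities p(k) of Eq. (1), computed with
                differentiable ops so gradients flow into the backbone.
    omega_t grows linearly from 0 to omega_T; beta_t decays linearly to 0.
    """
    t = epoch / max(total_epochs - 1, 1)
    omega_t, beta_t = omega_T * t, 1.0 - t
    loss_original = F.cross_entropy(logits, labels)
    loss_soft = F.nll_loss(torch.log(path_probs + 1e-12), labels)
    return beta_t * loss_original + omega_t * loss_soft
```

Since both terms consume the same backbone features, the schedule lets ordinary cross entropy dominate early training and shifts weight to the path-probability term as leaf weights become meaningful.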
}, { "heading": "5.2 SURVEY: EXPLANATION-GUIDED IMAGE CLASSIFICATION", "text": "In this section we aim to answer a question posed in (Poursabzi-Sangdeh et al., 2018) “To what extent do people follow a model’s predictions when it is beneficial to do so?”. In this first survey, each user is asked to classify a severely blurred image (Fig 4). This survey affirms the problem’s difficulty, decimating human performance to not much more than guessing: 163 of 600 responses are correct (27.2% accuracy).\nIn the next survey, we offer the blurred image and two sets of predictions: (1) the original neural network’s predicted class and its saliency map, and (2) the NBDT predicted class and the sequence of decisions that led up to it (“Animal, Mammal, Cat”). For all examples, the two models predict different classes. In 30% of the examples, NBDT is right and the original model is wrong. In another 30%, the opposite is true. In the last 40%, both models are wrong. As shown in Fig. 4, the image is extremely blurry, so the user must rely on the models to inform their prediction. When offered model predictions, in this survey, 255 of 600 responses are correct (42.5% accuracy), a 15.3 point improvement over no model guidance. We observe that humans trust NBDT-explained prediction more often than the saliency-explained predictions. Out of 600 responses, 312 responses agreed with the NBDT’s prediction, 167 responses agreed with the base model’s prediction, and 119 responses disagreed with both model’s predictions. Note that a majority of user decisions (⇠ 80%) agreed with either model prediction, even though neither model prediction was correct in 40% of examples, showing our images were sufficiently blurred to force reliance on the models. Furthermore, 52% of responses agreed with NBDT (against saliency’s 28%), even though only 30% of NBDT predictions were correct, showing improvement in model trust." }, { "heading": "5.3 SURVEY: HUMAN-DIAGNOSED LEVEL OF TRUST", "text": "The explanation of an NBDT prediction is the visualization of the path traversed. We then compare these NBDT explanations to other explainability methods in human studies. Specifically, we ask participants to pick an expert to trust (Appendix, Figure 13), based on the expert’s explanation – a saliency map (ResNet18, GradCAM), a decision tree (NBDT), or neither. We only use samples where ResNet18 and NBDT predictions agree. Of 374 respondents that picked one method over the other, 65.9% prefer NBDT explanations; for misclassified samples, 73.5% prefer NBDT. This supports the previous survey’s results, showing humans trust NBDTs more than current saliency techniques when explicitly asked." }, { "heading": "5.4 ANALYSIS: IDENTIFYING FAULTY DATASET LABELS", "text": "There are several types of ambiguous labels (Figure 5), any of which could hurt model performance for an image classification dataset like ImageNet. To find these images, we use entropy in NBDT\ndecisions, which we find is a much stronger indicator of ambiguity than entropy in the original neural network prediction. The intuition is as follows: If all intermediate decisions have high certainty except for a few decisions, those decisions are deciding between multiple equally plausible cases. 
Using this intuition, we can identify ambiguous labels by finding samples with high “path entropy” – that is, highly disparate entropies for intermediate decisions on the NBDT prediction path.\nPer Figure 6, the highest “path entropy” samples in ImageNet contain multiple objects, where each object could plausibly be used for the image class. In contrast, samples that induce the highest entropy in the baseline neural network do not suggest ambiguous labels. This suggests NBDT entropy is more informative than that of a standard neural network." }, { "heading": "6 CONCLUSION", "text": "In this work, we propose Neural-Backed Decision Trees that see (1) improved accuracy: NBDTs out-generalize (16%+), improve (2%+), and match (0.15%) or outperform (1%+) state-of-the-art neural networks on CIFAR10, CIFAR100, TinyImageNet, and ImageNet. We also show (2) improved interpretability by drawing unique insights from our hierarchy, confirming that humans trust NBDTs over saliency maps, and illustrating how path entropy can be used to identify ambiguous labels. This challenges the conventional supposition of a dichotomy between accuracy and interpretability, paving the way for jointly accurate and interpretable models in real-world deployments." } ]
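The “path entropy” statistic of Sec 5.4 can be sketched in a few lines: compute the binary-decision entropy at every node on the predicted path and measure how disparate those entropies are. The paper does not pin down the exact disparity measure, so the max-minus-min choice below, like all names here, is our assumption.

```python
import numpy as np

def path_entropy(child_probs_on_path):
    """child_probs_on_path: list of per-node child distributions along the NBDT
    prediction path, e.g. [[0.98, 0.02], [0.51, 0.49], ...]. Returns the
    disparity of per-decision entropies; a high value flags samples where a
    few decisions were near-coin-flips amid otherwise confident ones (Sec 5.4).
    """
    ents = [-(p * np.log(p + 1e-12)).sum()
            for p in map(np.asarray, child_probs_on_path)]
    return max(ents) - min(ents)

# Rank a dataset's samples by path entropy to surface candidate ambiguous labels:
# flagged = sorted(samples, key=lambda s: path_entropy(s.path_dists), reverse=True)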
2021
NBDT: NEURAL-BACKED DECISION TREE
SP:21296aeb09e1d3d7ca0a729f1ab614f15b12960d
[ "The authors propose a parametric regularizer for estimating unobserved flows in networks, incorporating edge features and other side information. The parameters of the regularizer are learned by means of minimizing the empirical cross-validated MSE. Regularization is necessary because the basic problem, while convex, typically is under-constrained; resulting in a infinite space of solutions which match the observed data. ", "In this paper, the authors introduce a method for missing flow estimation. These method has potential to address some important applications in transportation, power systems and water management. One major difference compared with the previous work is that edge features are incorporated into the optimization process so that the model has a better chance at learning edge-specific patterns. The experimental results have shown some success of the proposed method in traffic and power datasets." ]
The flow estimation problem consists of predicting missing edge flows in a network (e.g., traffic, power, and water) based on partial observations. These missing flows depend both on the underlying physics (edge features and a flow conservation law) as well as the observed edge flows. This paper introduces an optimization framework for computing missing edge flows and solves the problem using bilevel optimization and deep learning. More specifically, we learn regularizers that depend on edge features (e.g., number of lanes in a road, resistance of a power line) using neural networks. Empirical results show that our method accurately predicts missing flows, outperforming the best baseline, and is able to capture relevant physical properties in traffic and power networks.
[ { "affiliations": [], "name": "Arlei Silva" }, { "affiliations": [], "name": "Furkan Kocayusufoglu" }, { "affiliations": [], "name": "Saber Jafarpour" }, { "affiliations": [], "name": "Francesco Bullo" } ]
[ { "authors": [ "Simon Arridge", "Peter Maass", "Ozan Öktem", "Carola-Bibiane Schönlieb" ], "title": "Solving inverse problems using data-driven models", "venue": "Acta Numerica,", "year": 2019 }, { "authors": [ "Mikhail Belkin", "Partha Niyogi", "Vikas Sindhwani" ], "title": "Manifold regularization: A geometric framework for learning from labeled and unlabeled examples", "venue": "Journal of Machine Learning Research,", "year": 2006 }, { "authors": [ "Yoshua Bengio" ], "title": "Gradient-based optimization of hyperparameters", "venue": "Neural computation,", "year": 1900 }, { "authors": [ "Léon Bottou", "Olivier Bousquet" ], "title": "The tradeoffs of large scale learning", "venue": "In Advances in neural information processing systems,", "year": 2008 }, { "authors": [ "Alberto Bressan", "Sunčica Čanić", "Mauro Garavello", "Michael Herty", "Benedetto Piccoli" ], "title": "Flows on networks: recent results and perspectives", "venue": "EMS Surveys in Mathematical Sciences,", "year": 2014 }, { "authors": [ "Tom Brown", "Jonas Hörsch", "David Schlachtberger" ], "title": "Pypsa: Python for power system analysis", "venue": "arXiv preprint arXiv:1707.09913,", "year": 2017 }, { "authors": [ "Giuseppe Maria Coclite", "Mauro Garavello", "Benedetto Piccoli" ], "title": "Traffic flow on a road network", "venue": "SIAM Journal on Mathematical Analysis,", "year": 2005 }, { "authors": [ "Benoı̂t Colson", "Patrice Marcotte", "Gilles Savard" ], "title": "An overview of bilevel optimization", "venue": "Annals of operations research,", "year": 2007 }, { "authors": [ "George Cybenko" ], "title": "Approximation by superpositions of a sigmoidal function", "venue": "Mathematics of control, signals and systems,", "year": 1989 }, { "authors": [ "Carlos F Daganzo" ], "title": "The cell transmission model: A dynamic representation of highway traffic consistent with the hydrodynamic theory", "venue": "Transportation Research Part B: Methodological,", "year": 1994 }, { "authors": [ "Carlos F Daganzo" ], "title": "The cell transmission model, part ii: network traffic", "venue": "Transportation Research Part B: Methodological,", "year": 1995 }, { "authors": [ "Michaël Defferrard", "Xavier Bresson", "Pierre Vandergheynst" ], "title": "Convolutional neural networks on graphs with fast localized spectral filtering", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Justin Domke" ], "title": "Generic methods for optimization-based modeling", "venue": "In Artificial Intelligence and Statistics, pp", "year": 2012 }, { "authors": [ "Florian Dörfler", "John W Simpson-Porco", "Francesco Bullo" ], "title": "Electrical networks and algebraic graph theory: Models, properties, and applications", "venue": "Proceedings of the IEEE,", "year": 2018 }, { "authors": [ "Heinz Werner Engl", "Martin Hanke", "Andreas Neubauer" ], "title": "Regularization of inverse problems, volume 375", "venue": "Springer Science & Business Media,", "year": 1996 }, { "authors": [ "L Franceschi", "M Donini", "P Frasconi", "M Pontil" ], "title": "Forward and reverse gradient-based hyperparameter optimization", "venue": "In ICML,", "year": 2017 }, { "authors": [ "L Franceschi", "P Frasconi", "S Salzo", "R Grazzi", "M Pontil" ], "title": "Bilevel programming for hyperparameter optimization and meta-learning", "venue": "In ICML,", "year": 2018 }, { "authors": [ "M. Garavello", "B. Piccoli" ], "title": "Traffic Flow on Networks: Conservation Laws Model. 
AIMS series on applied mathematics", "venue": "American Institute of Mathematical Sciences,", "year": 2006 }, { "authors": [ "Edward Grefenstette", "Brandon Amos", "Denis Yarats", "Phu Mon Htut", "Artem Molchanov", "Franziska Meier", "Douwe Kiela", "Kyunghyun Cho", "Soumith Chintala" ], "title": "Generalized inner loop metalearning", "venue": null, "year": 1910 }, { "authors": [ "David Hallac", "Jure Leskovec", "Stephen Boyd" ], "title": "Network lasso: Clustering and optimization in large graphs", "venue": "In Proceedings of the 21th ACM SIGKDD international conference on knowledge discovery and data mining,", "year": 2015 }, { "authors": [ "Will Hamilton", "Zhitao Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "David K Hammond", "Pierre Vandergheynst", "Rémi Gribonval" ], "title": "Wavelets on graphs via spectral graph theory", "venue": "Applied and Computational Harmonic Analysis,", "year": 2011 }, { "authors": [ "Jozef Hanc", "Slavomir Tuleja", "Martina Hancova" ], "title": "Symmetries and conservation laws: Consequences of noether’s theorem", "venue": "American Journal of Physics,", "year": 2004 }, { "authors": [ "Juan C Herrera", "Daniel B Work", "Ryan Herring", "Xuegang Jeff Ban", "Quinn Jacobson", "Alexandre M Bayen" ], "title": "Evaluation of traffic data obtained via GPS-enabled mobile phones: The mobile century field experiment", "venue": "Transportation Research Part C: Emerging Technologies,", "year": 2010 }, { "authors": [ "Jonas Hörsch", "Fabian Hofmann", "David Schlachtberger", "Tom Brown" ], "title": "Pypsa-eur: An open optimisation model of the european transmission system", "venue": "Energy Strategy Reviews,", "year": 2018 }, { "authors": [ "Junteng Jia", "Michael T. Schaub", "Santiago Segarra", "Austin R. Benson" ], "title": "Graph-based semisupervised and active learning for edge flows", "venue": "In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2019 }, { "authors": [ "Alexander Jung" ], "title": "On the duality between network flows and network lasso", "venue": "IEEE Signal Processing Letters,", "year": 2020 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Thomas N Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "arXiv preprint arXiv:1609.02907,", "year": 2016 }, { "authors": [ "Jan Larsen", "Lars Kai Hansen", "Claus Svarer", "M Ohlsson" ], "title": "Design and regularization of neural networks: the optimal use of a validation", "venue": "Neural Networks for Signal Processing VI. Proceedings of the 1996 IEEE Signal Processing Society Workshop,", "year": 1996 }, { "authors": [ "Yaguang Li", "Rose Yu", "Cyrus Shahabi", "Yan Liu" ], "title": "Diffusion convolutional recurrent neural network: Data-driven traffic forecasting", "venue": "arXiv preprint arXiv:1707.01926,", "year": 2017 }, { "authors": [ "Michael James Lighthill", "Gerald Beresford Whitham" ], "title": "On kinematic waves ii. a theory of traffic flow on long crowded roads", "venue": "Proceedings of the Royal Society of London. Series A. 
Mathematical and Physical Sciences,", "year": 1955 }, { "authors": [ "Zichao Long", "Yiping Lu", "Xianzhong Ma", "Bin Dong" ], "title": "Pde-net: Learning pdes from data", "venue": "In 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Jonathan Lorraine", "Paul Vicol", "David Duvenaud" ], "title": "Optimizing millions of hyperparameters by implicit differentiation", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2020 }, { "authors": [ "Dougal Maclaurin", "David Duvenaud", "Ryan Adams" ], "title": "Gradient-based hyperparameter optimization through reversible learning", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Fabian Pedregosa" ], "title": "Hyperparameter optimization with approximate gradient", "venue": "In Proceedings of the 33rd International Conference on International Conference on Machine Learning-Volume", "year": 2016 }, { "authors": [ "Christopher Rackauckas", "Yingbo Ma", "Julius Martensen", "Collin Warner", "Kirill Zubov", "Rohit Supekar", "Dominic Skinner", "Ali Ramadhan" ], "title": "Universal differential equations for scientific machine learning", "venue": null, "year": 2001 }, { "authors": [ "Maziar Raissi", "Paris Perdikaris", "George E Karniadakis" ], "title": "Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations", "venue": "Journal of Computational Physics,", "year": 2019 }, { "authors": [ "Paul I Richards" ], "title": "Shock waves on the highway", "venue": "Operations research,", "year": 1956 }, { "authors": [ "Albert Tarantola" ], "title": "Inverse problem theory and methods for model parameter estimation, volume 89", "venue": null, "year": 2005 }, { "authors": [ "Petar Veličković", "Guillem Cucurull", "Arantxa Casanova", "Adriana Romero", "Pietro Liò", "Yoshua Bengio" ], "title": "Graph attention networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Minjie Wang", "Lingfan Yu", "Da Zheng", "Quan Gan", "Yu Gai", "Zihao Ye", "Mufei Li", "Jinjing Zhou", "Qi Huang", "Chao Ma" ], "title": "Deep graph library: Towards efficient and scalable deep learning on graphs", "venue": "arXiv preprint arXiv:1909.01315,", "year": 2019 }, { "authors": [ "Daniel B Work", "Sébastien Blandin", "Olli-Pekka Tossavainen", "Benedetto Piccoli", "Alexandre M Bayen" ], "title": "A traffic model for velocity data assimilation", "venue": "Applied Mathematics Research eXpress,", "year": 2010 }, { "authors": [ "Keyulu Xu", "Weihua Hu", "Jure Leskovec", "Stefanie Jegelka" ], "title": "How powerful are graph neural networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Huaxiu Yao", "Xianfeng Tang", "Hua Wei", "Guanjie Zheng", "Zhenhui Li" ], "title": "Revisiting spatial-temporal similarity: A deep learning framework for traffic prediction", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Bing Yu", "Haoteng Yin", "Zhanxing Zhu" ], "title": "Spatio-temporal graph convolutional networks: a deep learning framework for traffic forecasting", "venue": "In Proceedings of the 27th International Joint Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Dengyong Zhou", "Olivier Bousquet", "Thomas N Lal", "Jason Weston", "Bernhard Schölkopf" ], "title": "Learning with local and global consistency", 
"venue": "In Advances in Neural Information Processing Systems,", "year": 2004 }, { "authors": [ "Xiaojin Zhu", "Zoubin Ghahramani", "John D Lafferty" ], "title": "Semi-supervised learning using gaussian fields and harmonic functions", "venue": "In Proceedings of the 20th International Conference on Machine learning", "year": 2003 }, { "authors": [], "title": "In our experiments, we compare GCN against MLP regularization functions. We have also applied the more popular non-spectral graph convolutional operator (Kipf & Welling, 2016) but preliminary results have shown that the Chebyshev operator achieves better performance in flow estimation. C EXTENDED EXPERIMENTAL SECTION", "venue": null, "year": 2016 }, { "authors": [ "stette" ], "title": "2019), a meta-learning framework that greatly facilitates the implementation of bilevel optimization algorithms by implicitly performing the reverse iterations for a list of optimization algorithms, including SGD. Moreover, our GCN implementation is based on the Deep Graph Library (DGL) (Wang et al., 2019). Hardware: We ran our experiments on a single machine with 4 NVIDIA GeForce RTX 2080 GPUs", "venue": null, "year": 2080 } ]
[ { "heading": "1 INTRODUCTION", "text": "In many applications, ranging from road traffic to supply chains to power networks, the dynamics of flows on edges of a graph is governed by physical laws/models (Bressan et al., 2014; Garavello & Piccoli, 2006). For instance, the LWR model describes equilibrium equations for road traffic Lighthill & Whitham (1955); Richards (1956). However, it is often difficult to fully observe flows in these applications and, as a result, they rely on off-the-shelf machine learning models to make predictions about missing flows (Li et al., 2017; Yu et al., 2018). A key limitation of these machine learning models is that they disregard the physics governing the flows. So, the question arises: can we combine physics and machine learning to make better flow predictions?\nThis paper investigates the problem of predicting missing edge flows based on partial observations and the underlying domain-specific physics defined by flow conservation and edge features (Jia et al., 2019). Edge flows depend on the graph topology due to a flow conservation law—i.e. the total inflow at every vertex is approximately its total out-flow. Moreover, the flow at an edge also depends on its features, which might regularize the space of possible flow distributions in the graph. Here, we propose a model that learns how to predict missing flows from data using bilevel optimization (Franceschi et al., 2017) and neural networks. More specifically, features are given as inputs to a neural network that produces edge flow regularizers. Weights of the network are then optimized via reverse-mode differentiation based on a flow estimation loss from multiple train-validation pairs.\nOur work falls under a broader effort towards incorporating physics knowledge to machine learning, which is relevant for natural sciences and engineering applications where data availability is limited (Rackauckas et al., 2020). Conservation laws (of energy, mass, momentum, charge, etc.) are\nessential to our understanding of the physical world. The classical Noether’s theorem shows that such laws arise from symmetries in nature (Hanc et al., 2004). However, flow estimation, which is an inverse problem (Tarantola, 2005; Arridge et al., 2019), is ill-posed under conservation alone. Regularization enables us to apply domain-knowledge in the solution of inverse problems.\nWe motivate our problem and evaluate its solutions using two application scenarios. The first is road traffic networks (Coclite et al., 2005), where vertices represent locations, edges are road segments, flows are counts of vehicles that traverse a segment and features include numbers of lanes and speed limits. The second scenario is electric power networks (Dörfler et al., 2018), where vertices represent power buses, edges are power lines, flows are amounts of power transmitted and edge features include resistances and lengths of lines. 
Irrigation channels, gas pipelines, blood circulation, supply chains, air traffic, and telecommunication networks are other examples of flow graphs.
Our contributions can be summarized as follows: (1) We introduce a missing flow estimation problem with applications in a broad class of flow graphs; (2) we propose a model for flow estimation that is able to learn the physics of flows by combining reverse-mode differentiation and neural networks; (3) we show that our model outperforms the best baseline by up to 18%; and (4) we provide evidence that our model learns interpretable physical properties, such as the role played by resistance in a power transmission network and by the number of lanes in a road traffic network." }, { "heading": "2 FLOW ESTIMATION PROBLEM", "text": "We introduce the flow estimation problem, which consists of inferring missing flows in a network based on a flow conservation law and edge features. We provide a list of symbols in the Appendix.
Flow Graph. Let G(V, E, X) be a flow graph with vertices V (n = |V|), edges E (m = |E|), and edge feature matrix X ∈ R^{m×d}, where X[e] are the features of edge e. A flow vector f ∈ R^m contains the (possibly noisy) flow f_e for each edge e ∈ E. In case G is directed, f ∈ R^m_+; otherwise, a flow is negative if it goes against the arbitrary orientation of its edge. We assume that flows are induced by the graph, and thus the total flow (in plus out) at each vertex is approximately conserved:
∑_{(v_i,u)∈E} f_{(v_i,u)} ≈ ∑_{(u,v_o)∈E} f_{(u,v_o)}, ∀u ∈ V
In the case of a road network, flow conservation implies that vehicles mostly remain on the road.
Flow Estimation Problem. Given a graph G(V, E, X) with partial flow observations f̂ ∈ R^{m′} for a subset E′ ⊆ E of edges (f̂_e is the flow for e ∈ E′, m′ = |E′| < m), predict flows for the edges in E \ E′.
In our road network example, partial vehicle counts f̂ might be measured by sensors placed at a few segments, and the goal is to estimate counts at the remaining segments. One would expect flows not to be fully conserved in most applications due to the existence of inputs and outputs, such as parking lots and power generators/consumers. In case these input and output values are known exactly, they can be easily incorporated into our problem as flow observations. Moreover, if they are known approximately, we can apply them as priors (as will be detailed in the next section). For the remainder of this paper, we assume that inputs and outputs are unknown and employ flow conservation as an approximation of the system. Thus, different from classical flow optimization problems, such as min-cost flow (Ahuja et al., 1988), we assume that flows are conserved only approximately.
Notice that our problem is similar to the one studied in Jia et al. (2019). However, while their definition also assumes flow conservation, it does not take into account edge features. We claim that these features play an important role in capturing the physics of flows. Our main contribution is a new model that is able to learn how to regularize flows based on edge features using neural networks." }, { "heading": "3 OUR APPROACH: PHYSICS+LEARNING", "text": "In this section, we introduce our approach for the flow estimation problem, which is summarized in Figure 1. We formulate flow estimation as an optimization problem (Section 3.1), where the interplay between the flow network topology and edge features is defined by the physics of flow graphs. Flow estimation is shown to be equivalent to a regularized least-squares problem (Section 3.2).
Moreover, we describe how the effect of edge features and the graph topology can be learned from data using bilevel optimization and neural networks in Section 3.3. Finally, we propose a reverse-mode differentiation algorithm for flow estimation in Section 3.4." }, { "heading": "3.1 FLOW ESTIMATION VIA OPTIMIZATION", "text": "The extent to which flow conservation holds for flows in a graph is known as divergence and can be measured using the oriented incidence matrix B ∈ R^{n×m} of G. The matrix is defined as follows: B_{ij} = 1 if ∃u such that e_j = (v_i, u) ∈ E, B_{ij} = −1 if ∃u such that e_j = (u, v_i) ∈ E, and B_{ij} = 0 otherwise. Given B and f, the divergence at a vertex u can be computed as:
(Bf)_u = ∑_{(v_i,u)∈E} f_{(v_i,u)} − ∑_{(u,v_o)∈E} f_{(u,v_o)} (1)
Thus, we can compute the total (squared) divergence in the graph as ‖Bf‖²₂ = fᵀBᵀBf = ∑_{u∈V} ((Bf)_u)². One could try to solve the flow estimation problem by minimizing ‖Bf‖²₂ while keeping the observed flows fixed; however, this problem is ill-posed, since there might be multiple solutions to the optimization. The standard approach in such a scenario is to resort to regularization. In particular, we apply a generic regularization function Φ with parameters Θ as follows:
f* = argmin_{f∈Ω} ‖Bf‖²₂ + Φ(f, X; f⁽⁰⁾; Θ) s.t. f_e = f̂_e, ∀e ∈ E′ (2)
where Ω is the domain of f, f⁽⁰⁾ ∈ R^m is a prior for flows, f_e (f̂_e) are the entries of f (f̂) for edge e, and the constraint guarantees that observed flows are not changed. Priors f⁽⁰⁾, not to be confused with observed flows f̂, should be set according to the application (e.g., as zero, based on a black-box model, or from historical data). Regarding the domain Ω, we consider Ω = R^m and Ω = R^m_+. The second case is relevant for directed graphs, when flows must follow edge orientations (e.g., traffic).
In Jia et al. (2019), the authors set Φ(f, X; f⁽⁰⁾; Θ) as λ²‖f‖²₂ for a regularization parameter λ, which implies a uniform zero prior with an L2 penalty over edges. We claim that the regularization function plays an important role in capturing the physics of flow graphs. As an example, for a power network, Φ should account for the resistance of the lines. Thus, we propose learning the regularization from data. Our approach is based on a least-squares formulation, which will be described next." }, { "heading": "3.2 REGULARIZED LEAST-SQUARES FORMULATION", "text": "The flow estimation problem can be viewed as an inverse problem (Tarantola, 2005). Let x ∈ R^{m−m′} be the vector of missing flows and H ∈ R^{m×(m−m′)} be a matrix such that H_{ij} = 1 if f_i maps to x_j (i.e., they are associated to the same edge), and H_{ij} = 0 otherwise. Moreover, let f̃ ∈ R^m be such that f̃_e = f̂_e if e ∈ E′ and f̃_e = 0 otherwise. Using this notation, we define flow estimation as BHx = −Bf̃ + ε, where BH is a forward operator, projecting x to a vector of vertex divergences, and −Bf̃ + ε is the observed data, capturing (negative) vertex divergences for observed flows. The error ε can be interpreted as noise in observations or some level of model misspecification.
We can also define a regularized least-squares problem with the goal of recovering the missing flows x:
x* = argmin_{x∈Ω′} ‖BHx + Bf̃‖²₂ + ‖x − x⁽⁰⁾‖²_{Q(X;Θ)} (3)
where Ω′ is a projection of the domain of f to the space of x, ‖x‖²_M = xᵀMx is the matrix-scaled norm of x, and x⁽⁰⁾ ∈ R^{m−m′} are priors for missing flows. The regularization function Φ(f, X; f⁽⁰⁾, Θ) has the form ‖x − x⁽⁰⁾‖²_{Q(X;Θ)}, where the matrix Q(X; Θ) is a function of parameters Θ and edge features X. A small code sketch of this construction is given below.
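To make the construction concrete, here is a minimal NumPy sketch, with helper names of our own choosing, that builds the oriented incidence matrix defined above and evaluates the inner objective of Eq. (3) for a diagonal regularizer q:

```python
import numpy as np

def incidence_matrix(n, edges):
    # Oriented incidence matrix B (n x m): following the definition above,
    # B[i, j] = 1 if vertex i is the source of edge j, -1 if it is the target.
    B = np.zeros((n, len(edges)))
    for j, (u, v) in enumerate(edges):
        B[u, j] = 1.0
        B[v, j] = -1.0
    return B

def inner_objective(x, B, H, f_tilde, q, x0):
    # Eq. (3): squared total divergence plus a diagonal Q-weighted penalty
    # on deviations of the missing flows x from their prior x0.
    divergence = B @ (H @ x + f_tilde)   # equals BHx + B f~
    residual = x - x0
    return divergence @ divergence + residual @ (q * residual)
```

Because the divergence term is squared, the arbitrary edge orientation only affects signs inside the norm, not the objective value.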
We focus on the case where Q(X; Θ) is non-negative and diagonal. Equation 3 has a Bayesian interpretation, with x being a maximum likelihood estimate under a Gaussian assumption, i.e., x ∼ N(x⁽⁰⁾, Q(X; Θ)⁻¹) and Bf̃ ∼ N(0, I) (Tarantola, 2005). Thus, Q(X; Θ) captures how much the prior estimates f⁽⁰⁾ should be trusted relative to the divergence observations. This allows the regularization function to adapt to different edges based on their features. For instance, in our road network example, Q(X; Θ) might place a lower weight on flow conservation for flows at a road segment with a small number of lanes, which is a possible traffic bottleneck.
Given the least-squares formulation described in this section, how do we model the regularization function Q and learn its parameters Θ? We would like Q to be expressive enough to capture complex physical properties of flows, while Θ should be computable accurately and efficiently. We will address these challenges in the remainder of this paper." }, { "heading": "3.3 BILEVEL OPTIMIZATION FOR META-LEARNING THE PHYSICS OF FLOWS", "text": "This section introduces a model for flow estimation that is able to learn the regularization function Q(X; Θ) in Equation 3 from data using bilevel optimization and neural networks.
Bilevel formulation. We learn the parameters Θ that determine the regularization function Q(X; Θ) using the following bilevel optimization formulation:
Θ* = argmin_Θ E[‖x̂ − x*‖²₂] (4)
s.t. x* = argmin_{x∈Ω′} ‖BHx + Bf̃‖²₂ + ‖x − x⁽⁰⁾‖²_{Q(X;Θ)} (5)
where the inner (lower) problem is the same as Equation 3 and the outer (upper) problem is the expected loss with respect to ground-truth flows x̂, which we estimate using cross-validation.
Notice that optimal values for the parameters Θ and missing flows x are both unknown in the bilevel optimization problem. The expectation in Equation 4 is a function of multiple instances of the inner problem (Equation 5). Each inner problem instance has an optimal solution x* that depends on the parameters Θ. In general, bilevel optimization is not only non-convex but also NP-hard (Colson et al., 2007). However, recent gradient-based solutions for bilevel optimization have been successfully applied to large-scale problems, such as hyper-parameter optimization and meta-learning (Franceschi et al., 2018; Lorraine et al., 2020). We will first describe how we model the function Q(X; Θ) and then discuss how this problem can be solved efficiently using reverse-mode differentiation.
We propose to model Q(X; Θ) using a neural network, where X are the inputs, Θ are learnable weights, and the outputs are the diagonal entries of the regularization matrix. This is a natural choice due to the expressive power of neural nets (Cybenko, 1989; Xu et al., 2018).
Multi-Layer Perceptron (MLP). An MLP-based Q(X; Θ) has the following form:
Q(X; Θ) = diag(MLP(X; Θ)) (6)
where MLP(X; Θ) ∈ R^{m−m′}. For instance, Q(X; Θ) can be a 2-layer MLP (see the sketch at the end of this subsection):
Q(X; Θ) = diag(a(b(XW⁽¹⁾)W⁽²⁾)) (7)
where Θ = {W⁽¹⁾, W⁽²⁾}, W⁽¹⁾ ∈ R^{d×h}, W⁽²⁾ ∈ R^{h×1}, h is the number of nodes in the hidden layer, both a and b are activation functions, and the bias is omitted for convenience.
Graph Neural Network (GNN). The MLP-based approach assumes that each entry [Q(X; Θ)]_{e,e} associated to an edge e is a function of its features X[e] only. However, we are also interested in how the entries [Q(X; Θ)]_{e,e} might depend on the features of the neighborhood of e in the flow graph topology. Thus, we also consider the case where Q(X; Θ) is a GNN, which is described in the Appendix.
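As one concrete instance of Eq. (7), the following PyTorch sketch implements the 2-layer MLP regularizer; the particular activations (ReLU for b, Softplus for a so that the diagonal of Q stays non-negative) are our assumptions rather than choices stated in the text:

```python
import torch.nn as nn

class MLPRegularizer(nn.Module):
    """Maps the (m - m') x d edge-feature matrix X to the diagonal of
    Q(X; Theta), one non-negative weight per unobserved edge."""
    def __init__(self, d, h):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d, h, bias=False),   # W^(1); Eq. (7) omits biases
            nn.ReLU(),                     # inner activation b
            nn.Linear(h, 1, bias=False),   # W^(2)
            nn.Softplus(),                 # outer activation a (non-negative output)
        )

    def forward(self, X):
        return self.net(X).squeeze(-1)     # shape: (m - m',)
```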
}, { "heading": "3.4 FLOW ESTIMATION ALGORITHM", "text": "We now focus on how to solve our bilevel optimization problem (Equations 4 and 5). Our solution applies gradient-based approaches (e.g., SGD (Bottou & Bousquet, 2008), Adam (Kingma & Ba, 2014)) and, for simplicity, our description will be based on the particular case of Gradient Descent and assume a zero prior (x(0) = 0). A key challenge in our problem is to efficiently approximate the gradient of the outer objective with respect to the parameters Θ, which, by the chain rule, depends on the gradient of the inner objective with respect to Θ.\nWe first introduce extra notation to describe the outer problem (Equation 4). Let (f̂k, ĝk) be one ofK train-validation folds, both containing ground-truth flow values, such that f̂k ∈ Rp and ĝk ∈ Rq . For each fold k, we apply the inner problem (Equation 5) to estimate missing flows xk. Estimates for all folds are concatenated into a single vector x = [x1;x2; . . . ;xK ] and the same for validation sets ĝ = [ĝ1; ĝ2; . . . ĝK ]. We define a matrixR ∈ Rq×(m−m\n′) such thatRij = 1 if prediction xj corresponds to validation flow ĝi and Rij = 0, otherwise. Using this representation, we can approximate the expectation in the outer objective as Ψ(x,Θ) = (1/K)||Rx − ĝ||22, where x depends implicitly on Θ. We also introduce ΥΘ(x) as the inner problem objective. Moreover, let Γj(xk,j−1,Θi−1) be one step of gradient descent for the value of xk at iteration j with learning rate β:\nΓj(xk,j−1,Θi−1) = xk,j−1 − β∇xΥΘ(xk,j)\n= xk,j−1 − 2β[HᵀkB ᵀ(BHkxk,j−1 +Bf̃k) + 2Qkxk,j−1]\nwhere Hk, Qk and f̃k are the matrix H , a sub-matrix of Q(X ; Θi−1) and the observed flows vector f̃ (see Section 3.2) for the specific fold k. We have assumed the domain (Ω′) of flows xk,j to be the set of real vectors. For non-negative flows, we add the appropriate proximal operator to Γj .\nOur algorithm applies Reverse-Mode Differentiation (RMD) (Domke, 2012; Franceschi et al., 2017) to estimate∇ΘΨ and optimizes Θ also using an iterative algorithm. The main idea of RMD is to first unroll and store a finite number of iterations for the inner problem x1,x2, . . .xJ and then reverse over those iterations to estimate∇ΘΨ, which is computed as follows:\n∇xJ ,ΘΨ(xJ ,Θi) = ∇xΨ(xJ ,Θi) J∑\nj=1 J∏ s=j+1 ∂Γs(xs−1,Θi) ∂xs−1 ∂Γj(xj−1,Θi) ∂Θ\nIn particular, our reverse iteration is based on the following equations:\n∇xΨ(xJ ,Θi) = (2/K)Rᵀ(RxJ − ĝ) ∂Γs(xs−1,Θi)\n∂xs−1 = I − 2β(HᵀBᵀBH + 2Q(X ; Θi))\n∂Γj(xj−1,Θi)\n∂Θ = −4β(∂Q(X ; Θi)/∂Θ)xj−1\nwhere ∂Q(X ; Θi)/∂Θ is the gradient of the regularization functionQ(X ; Θ) evaluated at Θi. In our case, this gradient is the same as the neural network gradients and is omitted here for convenience.\nAlgorithm 1 describes our RMD approach for flow estimation. It receives as inputs the flow network G(V, E ,X ), K train-validation folds {(f̂k, ĝk)}Kk=1, and also hyperparameters T , J , α, and β,\nAlgorithm 1 RMD Algorithm for Flow Estimation\nRequire: Flow network G(V, E ,X ), train-validation folds {(f̂k, ĝk)}Kk=1, number of outer iterations T and inner iterations J , learning rates α and β Ensure: Regularization parameters Θ 1: Initialize parameters Θ0 2: ĝ← [ĝ1; . . . ĝK ] 3: B ← incidence matrix of G 4: for outer iterations i = 1, . . . T do 5: Initialize missing flows xk,0 for all k 6: for inner iterations j = 1, . . . J do 7: for folds k = 1, . . .K do 8: xk,j ← xk,j−1 − 2β[HᵀkBᵀ(BHkxk,j−1 +Bf̃k) + 2Qkxk,j−1] 9: end for 10: xj ← [x1,j ; . . 
As discussed in the previous section, bilevel optimization is non-convex, and thus we cannot guarantee that Algorithm 1 will return a global optimum. In particular, the learning objective of our regularization function Q(X; Θ) is non-convex, since it is a neural network. However, the inner problem (Equation 5) in our formulation has a convex objective (least-squares). In Franceschi et al. (2018), the authors have shown that this property implies convergence. We also find that our algorithm often converges to a good estimate of the parameters in our experiments." }, { "heading": "4 EXPERIMENTS", "text": "We evaluate our approaches for the flow estimation problem using two real datasets and a representative set of baselines and metrics. Due to space limitations, we provide an extended version of this section, with more details on datasets, experimental settings, and additional results, in the Appendix." }, { "heading": "4.1 DATASETS", "text": "This section summarizes the datasets used in our evaluation. We normalize flow values to [0, 1] and map discrete features to real vector dimensions using one-hot encoding.
Traffic: Vertices represent locations and directed edges represent road segments between two locations in Los Angeles County, CA. Flows are daily average vehicle counts measured by sensors placed along highways in the year 2018. We assign each sensor to an edge in the graph based on proximity and other sensor attributes. Our road network covers the Los Angeles County area, with 5,749 vertices and 7,498 edges, of which 2,879 edges (38%) have sensors. The following features were mapped to an 18-dimensional vector: latitude-longitude coordinates, number of lanes, maximum speed, highway type (motorway, motorway link, trunk, etc.), in-degree, out-degree, and centrality (PageRank). The in-degree and centrality of an edge are computed based on its source vertex. Similarly, the out-degree of an edge is the out-degree of its target vertex.
Power: Vertices represent buses in Europe, undirected edges are power transmission lines, and edge flows measure the total active power (in MW) being transmitted through the lines. The dataset is obtained from PyPSA-Eur (Hörsch et al., 2018; Brown et al., 2017), an optimization model of the European power transmission system, which generates realistic power flows based on solutions of optimal linear power flow problems with historical production and consumption data. Default values were applied for the PyPSA-Eur settings.
The resulting graph has 2,048 vertices, 2,729 edges, and 14-dimensional feature vectors capturing resistance, reactance, length, number of parallel lines, nominal power, edge degree, etc. Please see the Appendix for more details." }, { "heading": "4.2 EXPERIMENTAL SETTINGS", "text": "Evaluation metrics: We apply Pearson's correlation (CORR), Mean Absolute Percentage Error (MAPE), Mean Absolute Error (MAE), and Root Mean Squared Error (RMSE) to compare ground-truth and predicted flows. These metrics are formally defined in the Appendix.
Baselines: Divergence minimization (Div) (Jia et al., 2019) maximizes flow conservation using a single regularization parameter λ, which we optimize using line search on a validation set of flows. Multi-Layer Perceptron (MLP) is a 2-layer neural network with ReLU activations for all layers that learns to predict flows based on edge features. Graph Convolutional Network (GCN) is a 2-layer graph neural network, also with ReLU activations and Chebyshev convolutions of degree 2, that learns to predict the flows using both edge features and the topology but disregarding flow conservation (Kipf & Welling, 2016; Defferrard et al., 2016). We also consider two hybrid baselines. MLP-Div applies the predictions from MLP as priors to Div. Similarly, predictions from GCN are used as priors for GCN-Div. For both hybrid models, we also optimize the parameter λ.
Our approaches: We consider three variations of Algorithm 1. However, one important modification is that we perform the reverse iterations for each fold, i.e., folds are treated as batches in SGD. Bil-MLP and Bil-GCN apply our reverse-mode differentiation approach using an MLP and a GCN as a regularizer, respectively. Moreover, both approaches use zero as the prior x⁽⁰⁾. Bil-GCN-Prior applies the GCN predictions as flow priors. The architectures of the neural nets are the same as those of the baselines." }, { "heading": "4.3 FLOW ESTIMATION ACCURACY", "text": "Table 1 compares our methods and the baselines in terms of several metrics using the Traffic and Power datasets. Values of CORR achieved by MLP and GCN for Traffic are missing because they were undefined: these methods generated predictions with zero variance for at least one of the train-test folds. All methods suffer from high MAPE errors for Power, which is due to an over-estimation of small flows. Bil-GCN achieves the best results on both datasets in terms of all metrics, with 6% and 18% lower RMSE than the best baseline for Traffic and Power, respectively. However, notice that Bil-MLP and Bil-GCN achieve very similar performance for Power, and Bil-GCN-Prior does not outperform our other methods. We also show scatter plots with the true vs. predicted flows for some of the best approaches in Figure 2. Traffic has proven to be the more challenging dataset, which can be explained, in part, by training data sparsity: only 38% of its edges are labeled." }, { "heading": "4.4 ANALYSIS OF REGULARIZERS", "text": "Figure 3 illustrates the regularization function learned by Bil-MLP. We focus on Bil-MLP because it can be analyzed independently of the topology. Figures 3a-3c show scatter plots where the x and y axes represent the value of the regularizer and the features, respectively. For Power, Bil-MLP captures the effect of resistance on flows (Fig. 3a). However, mostly only high values of resistance are affected, which is why few points are visible and also explains the good results for Div.
We did not find a significant correlation for other features, with the exception of reactance, which is related to resistance. For Traffic, the model learns how the number of lanes constrains the flow at a road segment (Fig. 3b). Results for the speed limit are more surprising: 45 mph roads are less regularized (Fig. 3c). This is evidence that regularization mostly affects traffic bottlenecks in highways with few lanes but a 65 mph speed limit. To further investigate this result, we also show the regularizers over the Traffic topology in Figure 3d. High regularization overlaps with well-known congested areas in Los Angeles, CA (e.g., Highway 5, Southeast). These results are strong evidence that our methods are able to learn the physics of flows in road traffic and power networks." }, { "heading": "5 RELATED WORK", "text": "Flow graphs are quite ubiquitous in engineering, biomedical, and social sciences. Two important properties of flow graphs are that their state space is defined by a graph topology and their dynamics are governed by the physics (or logic) of the problem of interest. We refer to Bressan et al. (2014) for a unified characterization of the mathematical treatment of flow graphs. Notice that these studies do not address the flow inference problem, and their applications to real data are limited (Herrera et al., 2010; Work et al., 2010). Moreover, we focus on long-term flows (e.g., daily vehicle traffic flows) and not on the dynamics. This simplifies the equations of our model to the conservation law.
Flow inference via divergence minimization was originally proposed in Jia et al. (2019). However, their work has not considered edge features and instead applied a single regularization parameter to the norm of the flow vector f in Equation 2. Our work leverages relevant edge features to learn the interplay between flow conservation and local predictions (priors). Thus, we generalize the formulation from Jia et al. (2019) to the case of a learnable regularization function Q(X; Θ). Our experiments show that the proposed approach achieves superior results on two datasets.
Flow optimization problems, such as min-cost flow, max-flow, and multi-commodity flow, have a long history in computer science (Ahuja et al., 1988; Ford Jr & Fulkerson, 2015). These problems impose flow conservation as a hard constraint, requiring full knowledge of source and sink vertices and noiseless flow observations. Our approach relaxes these requirements by minimizing the flow divergence (see Equation 2). Moreover, our problem does not assume edge capacities and costs.
The relationship between flow estimation and inverse problems is of particular interest due to the role played by regularization (Engl et al., 1996) in the solution of ill-posed problems. Recent work on inverse problems has also focused on learning to regularize based on data, and even on learning the forward operator as well; see Arridge et al. (2019) for a review. The use of the expression "learning the physics" is also popular in the context of the universal differential equation framework, which enables the incorporation of domain knowledge from scientific models into machine learning (Raissi et al., 2019; Long et al., 2018; Rackauckas et al., 2020).
Bilevel optimization in machine learning has been popularized due to its applications in hyperparameter optimization (Bengio, 2000; Larsen et al., 1996).
In the last decade, deep learning has motivated novel approaches able to optimize millions of hyperparameters using gradient-based schemes (Maclaurin et al., 2015; Lorraine et al., 2020; Pedregosa, 2016). Our flow estimation algorithm is based on reverse-mode differentiation, which is a scalable approach for bilevel optimization (Franceschi et al., 2017; Domke, 2012; Maclaurin et al., 2015). Another application of bilevel optimization quite related to ours is meta-learning (Franceschi et al., 2018; Grefenstette et al., 2019).
Our problem is also related to semi-supervised learning on graphs (Zhu et al., 2003; Belkin et al., 2006; Zhou et al., 2004), which is the inference of vertex labels given partial observations. These approaches can be applied to flow estimation via the line graph transformation (Jia et al., 2019). The duality between a recent approach for predicting vertex labels (Hallac et al., 2015) and min-cost flows was shown in Jung (2020). However, the same relation does not hold for flow estimation.
Graph neural network models, which generalize deep learning to graph data, have been shown to outperform traditional semi-supervised learning methods in many tasks (Kipf & Welling, 2016; Hamilton et al., 2017; Veličković et al., 2018). These models have also been applied to traffic forecasting (Li et al., 2017; Yu et al., 2018; Yao et al., 2019). Different from our approach, traditional GNNs do not conserve flows. We show that our models outperform GNNs at flow prediction. Moreover, we also apply GNNs as a regularization function in our model." }, { "heading": "6 CONCLUSIONS", "text": "We have introduced an approach for flow estimation on graphs by combining a conservation law and edge features. Our model learns the physics of flows from data by combining bilevel optimization and deep learning. Experiments using traffic and power networks have shown that the proposed model outperforms a set of baselines and learns interpretable physical properties of flow graphs.
While we have focused on learning a diagonal regularization matrix, we want to apply our framework to the case of a full matrix. We are also interested in combining different edge measurements in order to learn more complex physical laws, such as those described by the fundamental diagram in the LWR model (Lighthill & Whitham, 1955; Daganzo, 1994; 1995; Garavello & Piccoli, 2006)." }, { "heading": "ACKNOWLEDGEMENTS", "text": "Research partially funded by the grants NSF IIS #1817046 and DTRA #HDTRA1-19-1-0017." }, { "heading": "A TABLE OF SYMBOLS", "text": "Table 2 lists the main symbols used in our paper." }, { "heading": "B BILEVEL OPTIMIZATION WITH GRAPH NEURAL NETWORKS", "text": "This section is an extension of Section 3.3. Here, we consider the case where Q(X; Θ) is a GNN:
Q(X; Θ) = diag(GNN(X; Θ, G)) (8)
For instance, we apply a 2-layer spectral Graph Convolutional Network (GCN) with Chebyshev convolutions (Defferrard et al., 2016; Kipf & Welling, 2016; Hammond et al., 2011):
Q(X; Θ) = diag(ReLU(∑_{z′=1}^{Z′} T_{z′}(L̃) ReLU(∑_{z=1}^{Z} T_z(L̃) X W_z^{(1)}) W_{z′}^{(2)})) (9)
where L̃ = (2/λ_max)L − I, L is the normalized Laplacian of the undirected version of the line graph G′ of G, λ_max is the largest eigenvalue of L, T_z(L̃) is the Chebyshev polynomial of L̃ with order z, and W_z^{(i)} is the matrix of learnable weights for the z-th order polynomial at layer i. In a line graph, each vertex represents an edge of the undirected version of G, and two vertices are connected if their corresponding edges in G are adjacent. Moreover, L = I − D^{−1/2} A D^{−1/2}, where A and D are the adjacency and degree matrices of G′. Chebyshev polynomials are defined recursively, with T_z(y) = 2yT_{z−1}(y) − T_{z−2}(y) and T_1(y) = y. In our experiments, we compare GCN against MLP regularization functions. We have also applied the more popular non-spectral graph convolutional operator (Kipf & Welling, 2016), but preliminary results have shown that the Chebyshev operator achieves better performance in flow estimation. A short sketch of the Chebyshev recursion follows.
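For completeness, here is a minimal PyTorch sketch of the Chebyshev recursion used in Eq. (9); the convention T_0 = I (so that T_2(y) = 2y² − I) is our assumption, matching the standard definition:

```python
import torch

def chebyshev_terms(L_tilde, X, Z):
    # Stack T_1(L~)X, ..., T_Z(L~)X via T_z(y) = 2*y*T_{z-1}(y) - T_{z-2}(y),
    # with T_1(y) = y and T_0 = I. A Chebyshev convolution layer then sums
    # T_z(L~) X W_z over z, as in Eq. (9).
    terms = [L_tilde @ X]                               # T_1(L~) X
    if Z >= 2:
        terms.append(2.0 * (L_tilde @ terms[0]) - X)    # T_2(L~) X
    for _ in range(3, Z + 1):
        terms.append(2.0 * (L_tilde @ terms[-1]) - terms[-2])
    return terms
```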
" }, { "heading": "C EXTENDED EXPERIMENTAL SECTION", "text": "This section is an extension of Section 4.
C.1 MORE DETAILS ON DATASETS
Traffic: Flow data was collected from Caltrans (the California Department of Transportation) PeMS (Performance Measurement System, http://pems.dot.ca.gov/). Sensors are placed at major highways in the state. We use sensor geo-locations and other attributes to approximately match them to a compressed version of the road network extracted from Openstreetmap (https://www.openstreetmap.org). The compression merges any sequence of segments without a branch, as these extra edges would not affect the flow estimation results. We emphasize that this dataset is not of as high quality as Power, due to possible sensor malfunction and matchings of sensors to the wrong road segments. This explains why flow estimation is more challenging in Traffic. Figure 4 is a visualization of our traffic dataset with geographically (lat-long) located vertices and colors indicating light versus heavy traffic (compared to the average). The road segments in the graph (approximately) cover the LA County area. We show the map (from Openstreetmap) of the area covered by our road network in Figure 5.
Power: We provide more details on how we build the power dataset. PyPSA (Python for Power System Analysis) is a toolbox for the simulation of power systems (Brown et al., 2017). We applied the European transmission system (PyPSA-Eur), which covers the ENTSO-E area (Hörsch et al., 2018), to generate a single network snapshot. Besides the PyPSA-Eur original set of edges, which we will refer to as line edges, we have added a set of bus edges. These extra edges allow us to represent power generation and consumption as edge flows. For the line edges, we cover the following PyPSA attributes (with their respective PyPSA identifiers, see https://pypsa.readthedocs.io/en/latest/components.html): reactance (x), resistance (r), capacity (s_nom), whether the capacity s_nom can be extended (s_nom_extendable), the capital cost of extending s_nom (capital_cost), the length of the line (length), the number of parallel lines (num_parallel), and the optimized capacity (s_nom_opt). For bus edges, the only attribute is the control strategy (PQ, PV, or Slack). Notice that we create a single vector representation for both line and bus edges by adding an extra indicator position (line or bus). Moreover, categorical attributes (e.g., the control strategy) were represented using one-hot encoding. Figure 6 is a visualization of our power dataset with geographically (lat-long) located vertices and colors indicating high versus low power (compared to the average).
C.2 EVALUATION METRICS
We apply the following evaluation metrics for flow estimation. Let f_true and f_pred be (m − m′)-dimensional vectors with true and predicted values for the missing flows associated to edges in E \ E′.
Correlation (Corr):
cov(f_pred, f_true) / (σ(f_pred) · σ(f_true))
where cov is the covariance and σ is the standard deviation.
Mean Absolute Percentage Error (MAPE):
(1/(m − m′)) ∑_{e∈E\E′} |((f_true)_e − (f_pred)_e) / (f_true)_e|
Mean Absolute Error (MAE):
(1/(m − m′)) ∑_{e∈E\E′} |(f_true)_e − (f_pred)_e|
Root Mean Squared Error (RMSE):
√[(1/(m − m′)) ∑_{e∈E\E′} ((f_true)_e − (f_pred)_e)²]
Divergence (Div):
∑_{v} (∑_{u} f_{(u,v)} − ∑_{u} f_{(v,u)})²
C.3 MORE EXPERIMENTAL SETTINGS
Train/test splits: We report results of a 10-fold cross-validation based on the set of labeled flows. Moreover, we use 10% of the training flows for validation.
Implementation (code: https://github.com/arleilps/flow-estimation): We have implemented Algorithm 1 using PyTorch, CUDA, and Higher (Grefenstette et al., 2019), a meta-learning framework that greatly facilitates the implementation of bilevel optimization algorithms by implicitly performing the reverse iterations for a list of optimization algorithms, including SGD. Moreover, our GCN implementation is based on the Deep Graph Library (DGL) (Wang et al., 2019).
Hardware: We ran our experiments on a single machine with 4 NVIDIA GeForce RTX 2080 GPUs (each with 8 GB of RAM) and 32 Intel Xeon CPUs (2.10 GHz and 128 GB of RAM).
Hyperparameter settings: We selected the parameters based on RMSE for each method using grid search, with the learning rate over [10⁰, 10⁻¹, 10⁻², 10⁻³] and the number of nodes in the hidden layer over [4, 8, 16]. The total number of iterations was set to 3000 for Min-Div and 5000 for MLP and GCN, all with early stopping on convergence after 10 iterations. For our methods (both based on Algorithm 1), we set T = 10, J = 300, α = 10⁻², β = 10⁻², and K = 10 in all experiments.
C.4 DIVERGENCE RESULTS
Although the main goal of flow estimation is to minimize the flow prediction loss, we also evaluate how our methods and the baselines perform in terms of divergence (or flow conservation) in Table 3. As expected, MLP and GCN do not conserve the flows. However, interestingly, our methods (Bil-MLP and Bil-GCN) achieve higher flow conservation than Min-Div. This is due to the regularization parameter λ, which is tuned based on a set of validation flows." }, { "heading": "D TRUE VS. PREDICTED FLOWS", "text": "Figure 7 shows scatter plots with the true vs. predicted flows that are missing from Figure 2.
D.1 VISUALIZATION OF REGULARIZER FOR POWER
Figure 8 shows the regularizers over the Power network topology. As discussed in Section 4.4, the regularizer affects mostly a few top-resistance edges. For the remaining ones, the regularizers have a small value. Notice that these high-resistance edges are associated with lines transmitting small amounts of power, as shown in Figure 6, and have a large impact on the overall flow estimation accuracy.
D.2 RUNNING TIME
Table 4 shows the average running times (over the 10-fold cross-validation) of our methods and the baselines for the Traffic and Power datasets. We show both training and test times. The results show that our reverse-mode differentiation algorithm adds significant overhead to training time for Traffic, taking up to 4 times longer than Min-Div to finish. As described in Section 3.4, this is due mainly to the cost of computing and storing the inner problem iterations. On the other hand, all the methods are efficient at testing. GCN converged quickly (due to early stopping) for both datasets.
However, it achieved poor results for Power, as shown in Table 1, which is a sign of overfitting or underfitting. Notice that the results reported are the best in terms of RMSE." } ]
2021
NETWORK FLOW ESTIMATION
SP:0370e68af5e82fcbde2ca16e57721e455620a1fe
[ "This paper proposes a method for inverse reinforcement learning that incorporates a differential planning module. Explicit transition dynamics modeling with inverse value iteration is added to promote meaningful reward learning. Empirical evaluations on several high-dimensional Atari environments and 2 continuous control environments are provided which show improvements over existing inverse reinforcement learning baselines when given only one-life demonstrations. Some visuals are also presented to show that the proposed method is able to learn more meaningful reward maps than previous methods.", "This paper assumes no access to the reward values and attempts to learn a policy by starting just with one demonstration to define the reward. For obtaining the reward, the authors rely on the ideas from Value Iteration Networks (VIN) method and they add the modules that help to deal with cases with complex transition dynamics. The resulting method is tested on atari domain and on continuous control tasks." ]
Imitation learning from limited demonstrations is challenging. Most inverse reinforcement learning (IRL) methods are unable to perform as well as the demonstrator, especially in a high-dimensional environment, e.g., the Atari domain. To address this challenge, we propose a novel reward learning method, which streamlines a differential planning module with dynamics modeling. Our method learns useful planning computations with a meaningful reward function that focuses on the region that results from an agent executing an action. Such a planning-based reward function leads to policies with better generalization ability. Empirical results with multiple network architectures and reward instances show that our method can outperform state-of-the-art IRL methods on multiple Atari games and continuous control tasks. Our method achieves performance that is, on average, 1,139.1% of the demonstration.
[]
[ { "authors": [ "Pieter Abbeel", "Andrew Y Ng" ], "title": "Apprenticeship learning via inverse reinforcement learning", "venue": "In International Conference on Machine Learning, pp", "year": 2004 }, { "authors": [ "Richard Bellman" ], "title": "A markovian decision process", "venue": "Journal of mathematics and mechanics,", "year": 1957 }, { "authors": [ "Abdeslam Boularias", "Jens Kober", "Jan Peters" ], "title": "Relative entropy inverse reinforcement learning", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2011 }, { "authors": [ "Daniel S Brown", "Wonjoon Goo", "Prabhat Nagarajan", "Scott Niekum" ], "title": "Extrapolating beyond suboptimal demonstrations via inverse reinforcement learning from observations", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Daniel S Brown", "Wonjoon Goo", "Scott Niekum" ], "title": "Ranking-based reward extrapolation without rankings", "venue": "In Conference on Robot Learning,", "year": 2019 }, { "authors": [ "Chelsea Finn", "Sergey Levine", "Pieter Abbeel" ], "title": "Guided cost learning: Deep inverse optimal control via policy optimization", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Jonathan Ho", "Stefano Ermon" ], "title": "Generative adversarial imitation learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Borja Ibarz", "Jan Leike", "Tobias Pohlen", "Geoffrey Irving", "Shane Legg", "Dario Amodei" ], "title": "Reward learning from human preferences and demonstrations in atari", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Animesh Karnewar" ], "title": "Pytorch implementations of variational discriminator bottleneck, 2018", "venue": "URL https://github.com/akanimax/Variational_Discriminator_Bottleneck", "year": 2018 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Ilya Kostrikov" ], "title": "Pytorch implementations of reinforcement learning algorithms, 2018", "venue": "URL https: //github.com/ikostrikov/pytorch-a2c-ppo-acktr-gail", "year": 2018 }, { "authors": [ "Nantas Nardelli", "Gabriel Synnaeve", "Zeming Lin", "Pushmeet Kohli", "Philip HS Torr", "Nicolas Usunier" ], "title": "Value propagation networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Andrew Y Ng", "Stuart J Russell" ], "title": "Algorithms for inverse reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2000 }, { "authors": [ "Sufeng Niu", "Siheng Chen", "Hanyu Guo", "Colin Targonski", "Melissa C Smith", "Jelena Kovačević" ], "title": "Generalized value iteration networks: Life beyond lattices", "venue": "In AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Xue Bin Peng", "Angjoo Kanazawa", "Sam Toyer", "Pieter Abbeel", 
"Sergey Levine" ], "title": "Variational discriminator bottleneck: Improving imitation learning, inverse rl, and gans by constraining information flow", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Dean A Pomerleau" ], "title": "Efficient training of artificial neural networks for autonomous navigation", "venue": "Neural Computation,", "year": 1991 }, { "authors": [ "Martin L Puterman" ], "title": "Markov Decision Processes: Discrete Stochastic Dynamic Programming", "venue": null, "year": 2014 }, { "authors": [ "John Schulman", "Philipp Moritz", "Sergey Levine", "Michael Jordan", "Pieter Abbeel" ], "title": "High-dimensional continuous control using generalized advantage estimation", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "Kihyuk Sohn", "Honglak Lee", "Xinchen Yan" ], "title": "Learning structured output representation using deep conditional generative models", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Aviv Tamar", "Yi Wu", "Garrett Thomas", "Sergey Levine", "Pieter Abbeel" ], "title": "Value iteration networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Xingrui Yu", "Yueming Lyu", "Ivor Tsang" ], "title": "Intrinsic reward driven imitation learning via generative model", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Li Zhang", "Xin Li", "Sen Chen", "Hongyu Zang", "Jie Huang", "Mingzhong Wang" ], "title": "Universal value iteration networks: When spatially-invariant is not universal", "venue": "In AAAI Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Brian D Ziebart", "Andrew Maas", "J Andrew Bagnell", "Anind K Dey" ], "title": "Maximum entropy inverse reinforcement learning", "venue": "In AAAI Conference on Artificial Intelligence,", "year": 2008 }, { "authors": [ "Brian D Ziebart", "J Andrew Bagnell", "Anind K Dey" ], "title": "Modeling interaction via the principle of maximum causal entropy", "venue": "In International Conference on Machine Learning,", "year": 2010 }, { "authors": [ "Atari games", "Krull", "Time Pilot" ], "title": "The results show that our method outperforms the expert and other baselines by a large margin on both additional Atari games. Table 13: Average return of vPERL-Small with RMean, GIRIL (Yu et al., 2020), VIN and stateof-the-art IRL algorithms GAIL (Ho & Ermon, 2016) and VAIL (Peng et al., 2019) with one-life demonstration data", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Imitation learning (IL) offers an alternative to reinforcement learning (RL) for training an agent, which mimics the demonstrations of an expert and avoids manually designed reward functions. Behavioral cloning (BC) (Pomerleau, 1991) is the simplest form of imitation learning, which learns a policy using supervised learning. More advanced methods, inverse reinforcement learning (IRL) (Ng & Russell, 2000; Abbeel & Ng, 2004) seeks to recover a reward function from the demonstrations and train an RL agent on the recovered reward function. In the maximum entropy variant of IRL, the aim is to find a reward function that makes the demonstrations appear near-optimal on the principle of maximum entropy (Ziebart et al., 2008; 2010; Boularias et al., 2011; Finn et al., 2016).\nHowever, most state-of-the-art IRL methods fail to meet the performance of demonstrations in highdimensional environments with limited demonstration data, e.g., a one-life demonstration in Atari domain (Yu et al., 2020). This is due to the main goal of these IRL approaches is to recover a reward function that justifies the demonstrations only. The rewards recovered from limited demonstration data would be vulnerable to the overfitting problem. Optimizing these rewards from an arbitrary initial policy results in inferior performance. Recently, Yu et al. (2020) proposed generative intrinsic reward learning for imitation learning with limited demonstration data. This method outperforms expert and IRL methods in several Atari games. Although GIRIL uses the prediction error as curiosity to design the surrogate reward that encourages (pushes) states away from the demonstration and avoids overfitting, the curiosity also results in ambiguous quality of the rewards in the environment.\nIn this paper, we focus on addressing the two key issues of previous methods when learning with limited demonstration data, i.e., 1) overfitting problem, and 2) ambiguous quality of the reward function. To address these issues, we propose to learn a straightforward surrogate reward function by learning to plan from the demonstration data, which is more reasonable than the previous intrinsic reward function (i.e., the prediction error between states). Differential planning modules (DPM) is potentially useful to achieve this goal, since it learns to map observation to a planning computation for a task, and generates action predictions based on the resulting plan (Tamar et al., 2016; Nardelli et al., 2019; Zhang et al., 2020). Value iteration networks (VIN) (Tamar et al., 2016) is the representative one, which represents value iteration as a convolutional neural network (CNN). Meaningful reward and value maps have been learned along with the useful planning computation, which leads to policies that generalize well to new tasks. However, due to the inefficiency of summarizing complicated transition dynamics, VIN fails to scale up to the Atari domain.\nTo address this challenge, we propose a novel method called variational planning-embedded reward learning (vPERL), which is composed of two submodules: a planning-embedded action back-tracing module and the transition dynamics module. We leverage a variational objective based on the conditional variational autoencoder (VAE) (Sohn et al., 2015) to jointly optimize the two submodules, which greatly improves the generalization ability. 
This is critical for the success of achieving a straightforward and smooth reward function and value function with limited demonstration data.
As shown in Figure 1, vPERL learns meaningful reward and value maps that attend to the region that results from the agent executing an action, which indicates a meaningful planning computation. In contrast, directly applying VIN to the Atari domain by supervised learning (Tamar et al., 2016) only learns reward and value maps that attend to no specific region, which is usually of no avail.
Empirical results show that our method outperforms state-of-the-art IRL methods on multiple Atari games and continuous control tasks. Remarkably, our method achieves performance that is up to 58 times that of the demonstration. Moreover, the average performance of our method is 1,139.1% of the demonstration over eight Atari games." }, { "heading": "2 BACKGROUND AND RELATED LITERATURE", "text": "A Markov Decision Process (MDP) (Bellman, 1966) is a standard model for sequential decision making and planning. An MDP M is defined by a tuple (S, A, T, R, γ), where S is the set of states, A is the set of actions, T : S × A × S → R_+ is the environment transition distribution, R : S → R is the reward function, and γ ∈ (0, 1) is the discount factor (Puterman, 2014). The expected discounted return or value of a policy π is given by V^π(s) = E_τ[∑_{t=0}^{∞} γ^t R(s_t, a_t) | s_0 = s], where τ = (s_0, a_0, s_1, a_1, · · ·) denotes the trajectory, in which the actions are selected according to π: s_0 ∼ T_0(s_0), a_t ∼ π(a_t|s_t), and s_{t+1} ∼ T(s_{t+1}|s_t, a_t). The goal in an MDP is to find the optimal policy π* that enables the agent to obtain high long-term rewards.
Generative Adversarial Imitation Learning (GAIL) (Ho & Ermon, 2016) extends IRL by integrating the adversarial training technique for distribution matching (Goodfellow et al., 2014). GAIL performs well in low-dimensional applications, e.g., MuJoCo. However, it does not scale well to high-dimensional scenarios, such as Atari games (Brown et al., 2019a). Variational adversarial imitation learning (VAIL) (Peng et al., 2019) improves on GAIL by compressing the information via a variational information bottleneck. GAIL and VAIL inherit the problems of adversarial training, such as instability in the training process, and are vulnerable to overfitting when learning with limited demonstration data. We have included both methods as comparisons to vPERL in our experiments.
Generative Intrinsic Reward driven Imitation Learning (GIRIL) (Yu et al., 2020) leverages a generative model to learn generative intrinsic rewards for better exploration. Though GIRIL outperforms previous IRL methods on several Atari games, the reward map of GIRIL is ambiguous and less informative, which results in inconsistent performance improvements across environments. In contrast, our vPERL learns an efficient planning-based reward that is more straightforward and informative. We have included GIRIL as a competitive baseline in our experiments.
Differentiable planning modules perform end-to-end learning of a planning computation, which leads to policies that generalize to new tasks. Value iteration (VI) (Bellman, 1957) is a well-known method for calculating the optimal value V* and optimal policy π*: V_{n+1}(s) = max_a Q_n(s, a), where Q_n(s, a) = R(s, a) + γ ∑_{s′} T(s′|s, a) V_n(s′) denotes the Q value in the nth iteration.
The value function V_n in VI converges as n → ∞ to V*, from which the optimal policy may be derived as π*(s) = argmax_a Q_∞(s, a).
Value iteration networks (VIN) (Tamar et al., 2016) propose to embed the value iteration (VI) (Bellman, 1957) process in a recurrent convolutional network, and generalize well in conventional navigation domains. VIN assumes there is some unknown embedded MDP M̄ where the optimal plan in M̄ contains useful information about the optimal plan in the original MDP M. VIN connects the two MDPs with a parametric reward function R̄ = f_R(s). Nardelli et al. (2019) propose value propagation networks, which generalize VIN for better sample complexity by employing value propagation (VProp). Recently, universal value iteration networks (UVIN) extend VIN to spatially variant MDPs (Zhang et al., 2020). Although VIN can be extended to irregular spatial graphs by applying a graph convolutional operator (Niu et al., 2018), most of the VIN variants still focus on solving conventional navigation problems (Zhang et al., 2020).
In this paper, we extend differentiable planning modules to learn an efficient reward function for imitation learning on limited demonstration data. We focus more on leveraging the learned reward function for imitation learning, while previous work on VIN focuses more on the value function. Therefore, our work is complementary to the research on VIN and its variants. Note that any differentiable planning module can be embedded in our method. As a simple example, we utilize the basic VIN as a backbone to build our reward learning module." }, { "heading": "3 VARIATIONAL PLANNING-EMBEDDED REWARD LEARNING", "text": "In this section, we introduce our solution, variational planning-embedded reward learning (vPERL). As illustrated in Figure 2, our reward learning module is composed of two submodules that accomplish planning-embedded action back-tracing and explicit forward transition dynamics modeling." }, { "heading": "3.1 ACTION BACK-TRACING AND FORWARD DYNAMICS MODELLING IN VPERL", "text": "Planning-embedded action back-tracing. Instead of directly applying VIN for policy learning (Tamar et al., 2016), we build our first submodule q_φ(a_t|s_t, s_{t+1}) for action back-tracing. As illustrated in the top section of Figure 2, we first obtain the reward map R̄ = f_R(s_t, s_{t+1}) on an embedded MDP M̄, where f_R is a convolutional layer. A VI module f_VI takes in the reward map R̄ and effectively performs K iterations of VI by recurrently applying a convolutional layer Q̄ K times (Tamar et al., 2016). The Q̄ layer is then max-pooled to obtain the next-iteration value V̄. The right-directed circular arrow in a light-blue color denotes the direction of the convolutions. Then, we simply obtain the action from the intermediate optimal value V̄* by an action mapping function: a_t = f_a(V̄*). On these terms, we build our planning-embedded action back-tracing submodule, which is formally represented as q_φ(a_t|s_t, s_{t+1}) = f_a(f_VI(f_R(s_t, s_{t+1}))). Since the convolutional kernel is incapable of summarizing the transition dynamics in a complex environment, directly training this submodule is still insufficient for learning an efficient reward function and planning computation in an environment like the Atari domain.
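A minimal PyTorch sketch of the VI module described above is shown below; the number of abstract action channels and the kernel size are assumptions on our part, not values stated in the text:

```python
import torch
import torch.nn as nn

class VIModule(nn.Module):
    """K recurrent steps of Q_bar = conv([R_bar; V_bar]) followed by a
    max over the action channels, mirroring f_VI in the text."""
    def __init__(self, num_actions=10, k=10):
        super().__init__()
        self.k = k
        self.q_conv = nn.Conv2d(2, num_actions, kernel_size=3, padding=1)

    def forward(self, r_bar):
        # r_bar: (batch, 1, H, W) reward map produced by f_R(s_t, s_{t+1})
        v_bar = torch.zeros_like(r_bar)
        for _ in range(self.k):
            q_bar = self.q_conv(torch.cat([r_bar, v_bar], dim=1))
            v_bar, _ = q_bar.max(dim=1, keepdim=True)   # max-pool over actions
        return v_bar   # the intermediate optimal value fed to f_a
```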
We build the submodule based on the inverse VI module, which is a NN architecture that mimics the process of the inverse version of VI. The implementation of the inverse VI module is straightforward. We first map the action for the intermediate optima value in another embedded MDP M ′ by a value mapping function: V ′ ∗ = fV ′(st, at). Then, we apply the inverse VI module to obtain the reward map R′. The inverse VI module f ′V I takes in the intermediate value V ′ and recurrently apply a deconvolutional layer Q′ for K times on the value to obtain the reward map R′. The left-directed circular arrow in a purple color denotes the direction of deconvolutions. To accomplish the transition, we map the obtained R′ to the future state by: st+1 = fs′(R′). The transition modeling is therefore presented as pθ(st+1|at, st) = fs′(f ′V I(fV ′(st, at))), which is a differentiable submodule, and can be trained simultaneously with the action back-tracing submodule.\nVariational solution to vPERL. A variational autoencoder (VAE) (Kingma & Welling, 2013) can be defined as being an autoencoder whose training is regularised to avoid overfitting and ensure that the latent space has good properties that enable generative process. To avoid the learned planningbased reward overfitting to the demonstration, we optimize both submodules in a unified variational solution, which follows the formulation of conditional VAE (Sohn et al., 2015). Conditional VAE is a conditional generative model for structured output prediction using Gaussian latent variables, which is composed of a conditional encoder, decoder and prior. Accordingly, we regard the action back-tracing module qφ(z|st, st+1) as the encoder, pθ(st+1|z, st) as the decoder, and pθ(z|st) as the prior. Our vPERL module is maximized with the following objective:\nL(st, st+1; θ, φ) = Eqφ(z|st,st+1)[log pθ(st+1|z, st)]−KL(qφ(z|st, st+1)‖pθ(z|st)) − αKL(qφ(ât|st, st+1)‖πE(at|st))]\n(1)\nwhere z is the latent variable, πE(at|st) is the expert policy distribution, ât = Softmax(z) is the transformed latent variable, α is a positive scaling weight. The first two terms on the RHS of Eq. (1) in the first line denote the evidence lower bound (ELBO) of the conditional VAE (Sohn et al., 2015). These two terms are critical for our reward learning module to perform planning-based action backtracing and transition modeling. Additionally, we integrate the third term on the RHS of Eq. (1) in the second line to further boost the action back-tracing. The third term minimizes the KL divergence between the expert policy distribution πE(at|st) and the action distribution qφ(ât|st, st+1), where ât = Softmax(z) is transformed from the latent variable z. In this way, we train the forward state transition and action back-tracing simultaneously.\nAlgorithm 1 Imitation learning via variational planning-embedded reward learning (vPERL). 1: Input: Expert demonstration data D = {(si, ai)}Ni=1. 2: Initialize policy π, and the dual planning networks. 3: for e = 1, · · · , E do 4: Sample a batch of demonstration D̃ ∼ D. 5: Train vPERL module on D̃ to converge. 6: end for 7: for i = 1, · · · ,MAXITER do 8: Update policy via any policy gradient method, e.g., PPO on the learned surrogate reward rt. 9: end for 10: Output: Policy π.\nNote that the full objective in Eq. (1) is still a variational lower bound of the marginal likelihood log(pθ(st+1|st)). Accordingly, it is reasonable to maximize this as an objective of our reward learning module. 
By optimizing this objective, we improve both the forward state transition and the action back-tracing. As a result, our reward learning module efficiently models the transition dynamics of the environment. During training, we use the latent variable z as the intermediate action. After training, we calculate the surrogate rewards from the learned reward map. As shown in Figure 1, our method learns a meaningful reward map, which highlights the region affected by the agent executing an action.\nTo leverage this information, we calculate two types of rewards that both correspond to the highlighted informative region, i.e., rt = RMax = max R and rt = RMean = mean R, which use the maximum and the mean value of the reward map R, respectively.\nAlgorithm 1 summarizes the full training procedure of imitation learning via vPERL. The process begins by training a vPERL module for E epochs (steps 3-6). In each training epoch, we sample a mini-batch of demonstration data D̃ with a mini-batch size of B and maximize the objective in Eq. (1). Then, in steps 7-9, we update the policy π via any policy gradient method, e.g., PPO (Schulman et al., 2017), so as to optimize the policy π with the learned surrogate reward function rt." }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 ATARI GAMES", "text": "We first evaluate our proposed vPERL on one-life demonstration data for eight Atari games within OpenAI Gym (Brockman et al., 2016). To enable a fair comparison, we evaluate our method and all other baselines under the same standard setup, where we train an agent to play Atari games without access to the true reward function (Ibarz et al., 2018; Brown et al., 2019a). The games and demonstration details are provided in Table 1.\nTable 1: Statistics of Atari environments.\nGame | One-life demo length | Full-episode demo length | # Lives available\nKung-Fu Master | 1,167 | 3,421 | 4\nBattle Zone | 260 | 1,738 | 5\nCentipede | 166 | 663 | 3\nSeaquest | 562 | 2,252 | 4\nQ*bert | 787 | 1,881 | 4\nBreakout | 1,577 | 2,301 | 5\nBeam Rider | 1,875 | 4,587 | 3\nSpace Invaders | 697 | 750 | 3\nA one-life demonstration only contains the states and actions performed by an expert player until they lose a life in the game for the first time (Yu et al., 2020). In contrast, a full-episode demonstration contains states and actions until the expert player loses all available lives in the game. Therefore, one-life demonstration data is much more limited than a full-episode demonstration. We define three levels of performance: 1) basic one-life demonstration-level - gameplay up to one life lost (“one-life\"), 2) expert-level - gameplay up to all lives lost (“one full-episode\"), and 3) beyond expert - “better-than-expert\" performance.\nOur ultimate goal is to train an imitation agent that achieves better-than-expert performance from demonstration data recorded up to the moment the expert loses their first life in the game.\nDemonstrations To generate one-life demonstrations, we trained a PPO (Schulman et al., 2017) agent with the ground-truth reward for 10 million simulation steps. We used the PPO implementation with the default hyper-parameters from the repository of Kostrikov (2018). As Table 1 shows, the one-life demonstrations are all much shorter than the full-episode demonstrations, which makes for extremely limited training data.\nExperimental Setup Our first step was to train a reward learning module for each game on the one-life demonstration. We set K = 10 in vPERL for all of the Atari games.
By default, we use a neural network architecture that keeps the size of the reward map and value maps the same as that of the state, which is 84 × 84. We achieve this by using a convolutional kernel of size 3 for each convolutional layer and applying padding. The corresponding method is called ‘vPERL-Large’. Additionally, to enable faster learning, we implement our method with another neural network architecture that reduces the size of the reward map and value maps to 18 × 18. The corresponding method is called ‘vPERL-Small’. Both vPERL-Large and vPERL-Small can learn meaningful reward maps as well as useful planning computations. Training was conducted with the Adam optimizer (Kingma & Ba, 2015) at a learning rate of 3e-5 and a mini-batch size of 32 for 50,000 epochs. In each training epoch, we sampled a mini-batch of data every four states.\nTo evaluate the quality of the learned reward, we trained a policy to maximize the inferred reward function via PPO. We set α = 100 for training our reward learning module. We trained the PPO on the learned reward function for 50 million simulation steps to obtain our final policy. The PPO is trained with a learning rate of 2.5e-4, a discount factor of 0.99, a clipping threshold of 0.1, an entropy coefficient of 0.01, a value function coefficient of 0.5, and a GAE parameter of 0.95 (Schulman et al., 2016). We compared the imitation performance of our vPERL agent against VIN, two state-of-the-art inverse reinforcement learning methods, GAIL (Ho & Ermon, 2016) and VAIL (Peng et al., 2019), and GIRIL (Yu et al., 2020). More details of the setup are outlined in Appendix F.2.\nResults In Figure 3, we report performance by normalizing the demonstration performance to 1. Figure 3 shows that vPERL achieves performance that is usually close to or better than that of the expert demonstrator. The most impressive result is on the Centipede game, where our vPERL achieves performance around 60 times higher than the demonstration. GIRIL achieves the second-best performance in Centipede, beating the demonstration by around 30 times. On the Q*bert game, vPERL beats all other baselines by a large margin, achieving performance that is more than 15 times that of the demonstration.\nA detailed quantitative comparison of IL algorithms is listed in Table 2. We evaluated four variants of vPERL with two types of network architectures (Large and Small) and surrogate rewards (RMax and RMean). Both vPERL-Large and vPERL-Small can learn meaningful reward and value maps as well as useful planning computations in the Atari domain. In Appendices D and E, we visualize the learned reward and value maps of vPERL-Large and vPERL-Small, respectively. With such meaningful rewards, the four variants of vPERL outperform the expert demonstrator in six out of eight Atari games. Remarkably, vPERL-Small with RMean achieves an average performance that is 1,139.1% of the demonstration and 278.5% of the expert over the eight Atari games. Figure 3 shows the bar plot of the normalized performance of the four vPERL variants against the other baselines.\nTable 2 shows that VIN is far from achieving demonstration-level performance, since it is unable to learn useful planning computations or meaningful reward and value maps in the Atari domain. GAIL fails to achieve good imitation learning performance. VAIL manages to exceed the expert performance in one game, i.e., Breakout. GIRIL performs better than the previous methods, outperforming the expert demonstrator in three games.
The results show that our vPERL agent outperforms the expert demonstrator by a large margin in six out of eight games. Figure 4 shows the qualitative comparison of our method (vPERL-Small-RMean), GIRIL, VAIL, GAIL, VIN and the average performance of the expert and the demonstration. Additionally, as shown in Appendix C.1, our method consistently outperforms the expert and the other baselines on two more Atari games, Krull and Time Pilot." }, { "heading": "4.1.1 HOW DOES VPERL OUTPERFORM PREVIOUS METHODS?", "text": "The contributions of each component of vPERL. In this subsection, we study the contribution of each component of our method, i.e., the Action Back-tracing submodule, the Transition Modeling submodule, and the variational objective in Eq. (1). Specifically, we directly train the Action Back-tracing and Transition Modeling submodules with supervised learning. We used the mean of the reward map and the prediction error of the next state as the reward for the former and the latter submodule, respectively. To study the contribution of the variational objective, we introduced another baseline, PERL, which trains both submodules as an autoencoder. Table 3 shows a quantitative comparison between the performance of vPERL and its components.\nThe results show that individual training of each component is of no avail. PERL successfully outperforms the demonstration in one game, i.e., Centipede, which indicates the potential advantage of using both submodules. However, PERL fails in the other seven games, while vPERL outperforms the demonstration in all eight games and outperforms the expert in six. The large performance gap between PERL and vPERL indicates that the variational objective in Eq. (1) is important for learning efficient rewards. To further investigate the key reason why our method works well, we added another baseline, supervised PERL, which forces the encoding of PERL to be close to the true action. Supervised PERL fails in all of the games. Comparing it with vPERL, we attribute the critical contribution to the use of the ELBO of the conditional VAE, or more specifically, the term KL(qφ(z|st, st+1) ‖ pθ(z|st)) in Eq. (1). It helps vPERL to work well and outperform previous methods for two reasons: 1) the generative training of the VAE serves as a good regularizer that alleviates the overfitting problem; 2) the regularization enables vPERL to learn smooth value and reward functions, which consistently provide straightforward and informative rewards for the moving states in the environment." }, { "heading": "Empirical evidence:", "text": "1) Better generalization ability. The empirical results in Table 2 and Figure 4 show that VIN, GAIL and VAIL are vulnerable to overfitting: they usually learn nothing useful and rarely reach demonstration-level performance. In contrast, our vPERL has better generalization ability and consistently achieves performance that is either close to or better than the expert.\n2) Straightforward and informative reward. Figure 5 shows the state and the reward maps of vPERL and GIRIL in three Atari games. The reward map of GIRIL can be close to zero (in Battle Zone), close to the raw state (in Q*bert), or only occasionally informative (in Centipede), which makes it ambiguous and less informative. In contrast, the reward map of our vPERL is more straightforward and consistently attends to informative regions in the state for all of the games.\nOur method successfully addresses these two key issues and can therefore outperform previous methods."
}, { "heading": "4.2 CONTINUOUS CONTROL TASKS", "text": "We also evaluated our method on continuous control tasks where the state space is low-dimensional and the action space is continuous. The continuous control tasks were from Pybullet 1 environment.\nDemonstrations To generate demonstrations, we trained a Proximal Policy Optimization (PPO) agent with the ground-truth reward for 1 million simulation steps. We used the PPO implementation in the repository (Kostrikov, 2018) with the default hyper-parameters for continuous control tasks. In each task, we used one demonstration with a fixed length of 1,000 for evaluation. The details of experimental setup can be found in Appendix F.1.\nResults Figure 6 shows that our method vPERL achieves the best imitation performance in both continuous control tasks, i.e. Inverted Pendulum and Inverted Double Pendulum. Although GIRIL achieves performance that is close to the demonstration, the efficient planning-based reward function enables vPERL to perform even better. Other baselines are unable to reach the demonstration-level performance by learning from only one demonstration. Quantitative results are shown in Appendix A." }, { "heading": "4.3 ABLATION STUDIES", "text": "Ablation study on the growing number of demonstrations. Figure 7 shows the average return versus number of full-episode demonstrations on both Atari games and continuous control tasks. The results shows that our method vPERL achieves the highest performance across different numbers of full-episode demonstrations. GIRIL usually comes the second best, and GAIL can achieve good performance with more demonstrations in continuous control tasks. Quantitative results have been shown in Appendix B.1.\n1https://pybullet.org/\nAblation study on the optimality of the demonstrations. Figure 8 shows the average return versus optimality of demonstrations on Atari games and continuous control tasks. In sections 4.1 and 4.2, we trained PPO agents with ground-truth reward for 10 million (10M) steps as the expert for Atari games, and for 1 million (1M) steps as the expert for continuous control tasks. In this ablation, we train PPO agents with 10% and 50% simulation steps of the expert to generate demonstrations with diverse optimality. The results show that vPERL consistently outperforms the expert and demonstrations on the demonstrations of different optimality. Quantitative results are shown in Appendix B.2.\nAblation study on the hyper-parameters K. Figure 8 shows the average return versus different choices of K on Atari games and continuous control tasks. The results show that our method is not very sensitive to different choices of K. Quantitative results are shown in Appendix B.3." }, { "heading": "5 CONCLUSION", "text": "This paper presents a simple but efficient reward learning method, called variational planningembedded reward learning (vPERL). By simultaneously training a planning-embedded action backtracing module and a transition dynamics module in a unified generative solution, we obtain a reward function that is straightforward and informative, and has better generalization ability than previous methods. Informative analysis and empirical evidence support the critical contribution of ELBO regularization term for learning efficient planning-based reward with extremely limited demonstrations. Empirical results show our method outperforms state-of-the-art imitation learning methods on multiple Atari games and continuous control tasks by a large margin. 
Extensive ablation studies show that our method is not very sensitive to the number of demonstrations, the optimality of the demonstrations, or the choice of the hyperparameter K. We leave the extension of our method to more complex continuous control tasks as future work. Another interesting topic for future investigation would be applying vPERL to hard exploration tasks with extremely sparse rewards." }, { "heading": "A QUANTITATIVE RESULTS OF CONTINUOUS CONTROL TASKS.", "text": "Table 4 shows the detailed quantitative comparison of the demonstration and the imitation methods. The results shown in the table are the mean performance over three random seeds." }, { "heading": "B ABLATION STUDIES", "text": "" }, { "heading": "B.1 THE EFFECT OF THE NUMBER OF FULL-EPISODE DEMONSTRATIONS.", "text": "We also evaluated our method with different numbers of full-episode demonstrations on both Atari games and continuous control tasks. Table 5 and Table 6 show the detailed quantitative comparison of imitation learning methods across different numbers of full-episode demonstrations in the Centipede game and the Q*bert game. The comparisons on two continuous control tasks, Inverted Pendulum and Inverted Double Pendulum, are shown in Table 7 and Table 8.\nThe results show that our method vPERL achieves the highest performance across different numbers of full-episode demonstrations, and GIRIL usually comes second best. GAIL achieves better performance as the number of demonstrations increases in both continuous control tasks." }, { "heading": "B.2 THE EFFECT OF EXPERT OPTIMALITY.", "text": "Table 9 and Table 10 show the average return of vPERL-Small-RMean with demonstrations of different optimality on Atari games and continuous control tasks, respectively. In the experiments, we trained a PPO agent with ground-truth reward for 10 million (10M) simulation steps as the expert for Atari games, and for 1 million (1M) steps for continuous control tasks. To study the effects of the optimality of the demonstrations, we additionally trained PPO agents with fewer simulation steps: 1M and 5M steps for Atari games, and 0.1M and 0.5M steps for continuous control tasks. With these additional PPO agents, we generated demonstrations with 10% and 50% of the optimality of the 10M-step ‘Expert’ for both Atari games and continuous control tasks.\nThe results show that our method outperforms the expert by a large margin in Atari games and reaches demonstration-level performance in continuous control tasks with demonstrations of different optimality." }, { "heading": "B.3 THE EFFECT OF CHOICES OF K.", "text": "For the sake of consistency, we set K = 10 for all experiments on Atari games and continuous control tasks in Section 4. To study the effects of the hyperparameter K, we evaluate our method on two Atari games and two continuous control tasks with two additional values of K (K = 5 and K = 15).\nTable 11 and Table 12 show the average return of vPERL-Small-RMean versus different choices of K on Atari games and continuous control tasks. With the three choices of K, our method consistently outperforms the expert in the Atari games and reaches the best (demonstration-level) performance in the continuous control tasks. This indicates that our method is not very sensitive to the choice of the hyperparameter K.
}, { "heading": "C ADDITIONAL EVALUATION RESULTS", "text": "" }, { "heading": "C.1 ADDITIONAL ATARI GAMES.", "text": "Table 13 shows the average return of vPERL-Small with RMean and other baselines on two additional Atari games, Krull and Time Pilot. The results show that our method outperforms the expert and other baselines by a large margin on both additional Atari games." }, { "heading": "C.2 ONE-LIFE DEMONSTRATIONS WITHOUT SCORES AND LIVES ON THE STATES.", "text": "To avoid the effects of the scores and lives in the states of Atari games, we also evaluate our method on the “No-score Demo.”, which is obtained by masking the game score and number of lives left in the demonstrations (Brown et al., 2019a). Table 14 compares the average return of vPERL-Small-RMean with the “Standard Demo.” and the “No-score Demo.” on Q*bert game and Krull game.\nThe results show that our method achieves better performance on the “No-score Demo.” than the “Standard Demo.”. This indicates the negative effects of the game scores and numbers of left lives on the states of demonstrations. From Figure 1 and more reward visualization in Section D and E, we observe that our method learns to attend on the meaningful region in a state and ignore the game score and numbers of left lives automatically. Masking the game score and numbers of left\nlives in the demonstration further alleviates burdens on learning efficient planning computations and planning-based rewards for Atari games.\nIn summary, our method can learn to outperform the expert without explicitly access to the true rewards, and does not relied on the game scores and numbers of left lives in the states of demonstrations. Furthermore, the results show that the performance of our method can be improved by masking out the game scores and numbers of left lives in the demonstrations.\nD VISUALIZATION OF REWARD AND VALUE IMAGES OF VPERL-LARGE AND VIN.\nIn this section, we visualize the reward maps and value maps learned by vPERL-Large and VIN on Atari games. Here, both vPERL and VIN are based on large-size VIN architecture. The size of reward map is 84× 84. The figures show that the reward and value maps learned by vPERL are much meaningful than that by VIN.\nE VISUALIZATION OF REWARD AND VALUE IMAGES OF VPERL-SMALL AND VIN.\nIn this section, we visualize the reward maps and value maps learned by vPERL and VIN on several Atari games. To enable faster training, here both vPERL and VIN are based on small-size VIN architecture. The size of reward map is 18× 18. The figures show that the reward and value maps learned by vPERL are much meaningful than that by VIN." }, { "heading": "F ADDITIONAL DETAILS OF EXPERIMENTAL SETUPS", "text": "" }, { "heading": "F.1 EXPERIMENTAL SETUP OF CONTINUOUS CONTROL TASKS", "text": "Our first step was also to train a reward learning module for each continuous control task on one demonstration. To build our reward learning module for continuous tasks, we used a simple VIN and inverse VIN as the model bases of action back-tracing and transition modeling submodules, respectively. In the simple VIN model, we used 1D convolutional layer with a kernel size of 2 and\nstride of 1 to implement the function fR, reward map R and Q value Q. To accomplish the action back-tracing, the final value map of VIN was fully connected with a hidden layer with a size of 32. Reversely, we used 1D deconvolutional layer to implement the inverse VIN model. We kept the size of feature maps in both VIN and inverse VIN unchanged across all the layers. 
We set K = 10 for both the VIN and the inverse VIN in all tasks. The dimension of the latent variable z is set to the action dimension of each task. Additionally, we used a two-layer feed-forward neural network with a tanh activation function as the policy architecture. The number of hidden units is set to 100 for all tasks. To extend our method to continuous control tasks, we made minor modifications to the training objective. In Atari games, we used the KL divergence to measure the distance between the expert policy distribution and the action distribution in Eq. (1). In continuous control tasks, we instead directly treated the latent variable z as the back-traced action and used the mean squared error to measure the distance between the back-traced action and the true action in the demonstration. We set the scaling weight α in Eq. (1) to 1.0 for all tasks. Training was conducted with the Adam optimizer (Kingma & Ba, 2015) at a learning rate of 3e-5 and a mini-batch size of 32 for 50,000 epochs. In each training epoch, we sampled a mini-batch of data every 20 states.\nTo evaluate the quality of the learned reward, we used the trained reward learning module to produce rewards, and we trained a policy to maximize the inferred reward function via PPO. We trained the PPO on the learned reward function for 5 million simulation steps to obtain our final policy. The PPO is trained with a learning rate of 3e-4, a clipping threshold of 0.1, an entropy coefficient of 0.0, a value function coefficient of 0.5, and a GAE parameter of 0.95 (Schulman et al., 2016).\nFor a fair comparison, we used the same VIN as the model base for all the baselines. The reward function of GAIL and VAIL was chosen according to the original papers (Ho & Ermon, 2016; Peng et al., 2019). The information constraint Ic in VAIL was set to 0.5 for all tasks. To enable fast training, we trained all the imitation methods with 16 parallel processes." }, { "heading": "F.2 ADDITIONAL DETAILS OF GAIL AND VAIL", "text": "The discriminator for both GAIL and VAIL takes in a state (a stack of four frames) and an action (represented as a 2D one-hot tensor with a shape of (|A| × 84 × 84), where |A| is the number of valid discrete actions in each environment) (Brown et al., 2019b). The discriminator outputs a binary classification value, and −log(D(s, a)) is the reward. VAIL was implemented according to the repository of Karnewar (2018). The discriminator network architecture has an additional convolutional layer (with a kernel size of 4) as the final convolutional layer to encode the latent variable in VAIL. We used the default setting of 0.2 for the information constraint (Karnewar, 2018). PPO with the same hyper-parameters was used to optimize the policy network for all the methods. For both GAIL and VAIL, we trained the discriminator using the Adam optimizer with a learning rate of 0.001. The discriminator was updated at each policy step." } ]
2020
LEARNING EFFICIENT PLANNING-BASED REWARDS
SP:40701460d7ed2175ff193b228f93af7d50911267
[ "The paper presents an approach to keypoint localization (to retrieve people/animals pose) combining labeled and unlabeled data. Features are extracted and concatenated into a single descriptor per keypoints, by multiplying feature maps and heatmaps and max-pooling over the spatial domain, and used for semantic classification. Images are transformed with simple perspective augmentations. The non-supervised part comes in enforcing that keypoint representations for unlabeled images remain close.", "This paper presents semi-supervised keypoint localization networks and loss functions to overcome the need for the labeled keypoint data for that task. It simultaneously generates keypoint heatmaps and pose invariant keypoint representations, where these representations were separately used to enforce translation equivariance, and translation invariance, and semantic consistency, respectively. The proposed method attains the improvement on several benchmarks for human and animal body landmark localization." ]
Knowledge about the locations of keypoints of an object in an image can assist in fine-grained classification and identification tasks, particularly for the case of objects that exhibit large variations in poses that greatly influence their visual appearance, such as wild animals. However, supervised training of a keypoint detection network requires annotating a large image dataset for each animal species, which is a labor-intensive task. To reduce the need for labeled data, we propose to simultaneously learn keypoint heatmaps and pose-invariant keypoint representations in a semi-supervised manner, using a small set of labeled images along with a larger set of unlabeled images. Keypoint representations are learnt with a semantic keypoint consistency constraint that forces the keypoint detection network to learn similar features for the same keypoint across the dataset. Pose invariance is achieved by making the keypoint representations of an image and its augmented copies closer together in feature space. Our semi-supervised approach significantly outperforms previous methods on several benchmarks for human and animal body landmark localization.
[ { "affiliations": [], "name": "Olga Moskvyak" }, { "affiliations": [], "name": "Frederic Maire" }, { "affiliations": [], "name": "Feras Dayoub" }, { "affiliations": [], "name": "Mahsa Baktashmotlagh" } ]
[ { "authors": [ "Mykhaylo Andriluka", "Leonid Pishchulin", "Peter Gehler", "Bernt Schiele" ], "title": "2d human pose estimation: New benchmark and state of the art analysis", "venue": "In Proc. CVPR,", "year": 2014 }, { "authors": [ "David Berthelot", "Nicholas Carlini", "Ekin D. Cubuk", "Alex Kurakin", "Kihyuk Sohn", "Han Zhang", "Colin Raffel" ], "title": "Remixmatch: Semi-supervised learning with distribution matching and augmentation anchoring", "venue": "In Proc. ICLR,", "year": 2020 }, { "authors": [ "Xuanyi Dong", "Yezhou Yang" ], "title": "Teacher supervises students how to learn from partially labeled images for facial landmark detection", "venue": "In Proc. ICCV,", "year": 2019 }, { "authors": [ "Xuanyi Dong", "Shoou-I Yu", "Xinshuo Weng", "Shih-En Wei", "Yi Yang", "Yaser Sheikh" ], "title": "Supervisionby-registration: An unsupervised approach to improve the precision of facial landmark detectors", "venue": null, "year": 2018 }, { "authors": [ "Pei Guo", "Ryan Farrell" ], "title": "Aligned to the object, not to the image: A unified pose-aligned representation for fine-grained recognition", "venue": "In Proc. WACV,", "year": 2019 }, { "authors": [ "Sina Honari", "Pavlo Molchanov", "Stephen Tyree", "Pascal Vincent", "Christopher Joseph Pal", "Jan Kautz" ], "title": "Improving landmark localization with semi-supervised learning", "venue": "In Proc. CVPR,", "year": 2018 }, { "authors": [ "Tomas Jakab", "Ankush Gupta", "Hakan Bilen", "Andrea Vedaldi" ], "title": "Unsupervised learning of object landmarks through conditional image generation", "venue": "In Proc. NeurIPS,", "year": 2018 }, { "authors": [ "Sam Johnson", "Mark Everingham" ], "title": "Clustered pose and nonlinear appearance models for human pose estimation", "venue": "In Proc. BMVC,", "year": 2010 }, { "authors": [ "Sam Johnson", "Mark Everingham" ], "title": "Learning effective human pose estimation from inaccurate annotation", "venue": "In Proc. CVPR,", "year": 2011 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In Proc. ICLR,", "year": 2015 }, { "authors": [ "S. Laine", "Timo Aila" ], "title": "Temporal ensembling for semi-supervised learning", "venue": "In Proc. ICLR,", "year": 2017 }, { "authors": [ "D. Lee" ], "title": "Pseudo-label : The simple and efficient semi-supervised learning method for deep neural networks", "venue": "In Proc. ICML Workshops,", "year": 2013 }, { "authors": [ "Shuyuan Li", "Jianguo Li", "Weiyao Lin", "Hanlin Tang" ], "title": "Amur tiger re-identification in the wild", "venue": "In Proc. ICCV Workshops,", "year": 2019 }, { "authors": [ "Cen Liu", "R. Zhang", "Lijun Guo" ], "title": "Part-pose guided amur tiger re-identification", "venue": "In Proc. ICCV Workshops,", "year": 2019 }, { "authors": [ "Nannan Liu", "Q. Zhao", "Nan Zhang", "Xinhua Cheng", "Jianing Zhu" ], "title": "Pose-guided complementary features learning for amur tiger re-identification", "venue": "In Proc. ICCV Workshops,", "year": 2019 }, { "authors": [ "Alexander Mathis", "Pranav Mamidanna", "Kevin M. Cury", "Taiga Abe", "V. Murthy", "M. Mathis", "M. Bethge" ], "title": "Deeplabcut: markerless pose estimation of user-defined body parts with deep learning", "venue": "Nature Neuroscience,", "year": 2018 }, { "authors": [ "O. Moskvyak", "F. Maire" ], "title": "Learning geometric equivalence between patterns using embedding neural networks", "venue": "In Proc. DICTA,", "year": 2017 }, { "authors": [ "Olga Moskvyak", "F. 
Maire", "Feras Dayoub", "Mahsa Baktashmotlagh" ], "title": "Keypoint-aligned embeddings for image retrieval and re-identification", "venue": "arXiv preprint,", "year": 2020 }, { "authors": [ "Ilija Radosavovic", "P. Dollár", "Ross B. Girshick", "Georgia Gkioxari", "Kaiming He" ], "title": "Data distillation: Towards omni-supervised learning", "venue": "In Proc. CVPR,", "year": 2018 }, { "authors": [ "Christos Sagonas", "Epameinondas Antonakos", "Georgios Tzimiropoulos", "S. Zafeiriou", "M. Pantic" ], "title": "300 faces in-the-wild challenge: database and results", "venue": "Image and Vision Computing,", "year": 2016 }, { "authors": [ "M.S. Sarfraz", "A. Schumann", "A. Eberle", "R. Stiefelhagen" ], "title": "A pose-sensitive embedding for person re-identification with expanded cross neighborhood re-ranking", "venue": "In Proc. CVPR,", "year": 2018 }, { "authors": [ "Kihyuk Sohn", "David Berthelot", "C. Li", "Zizhao Zhang", "N. Carlini", "E.D. Cubuk", "Alex Kurakin", "Han Zhang", "Colin Raffel" ], "title": "Fixmatch: Simplifying semi-supervised learning with consistency and confidence", "venue": "arXiv preprint,", "year": 2020 }, { "authors": [ "Ke Sun", "Bin Xiao", "Dong Liu", "Jingdong Wang" ], "title": "Deep high-resolution representation learning for human pose estimation", "venue": "In Proc. CVPR,", "year": 2019 }, { "authors": [ "Antti Tarvainen", "H. Valpola" ], "title": "Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results", "venue": "In Proc. NIPS,", "year": 2017 }, { "authors": [ "James Thewlis", "Hakan Bilen", "A. Vedaldi" ], "title": "Unsupervised learning of object landmarks by factorized spatial embeddings", "venue": "In Proc. ICCV,", "year": 2017 }, { "authors": [ "James Thewlis", "Samuel Albanie", "Hakan Bilen", "A. Vedaldi" ], "title": "Unsupervised learning of landmarks by descriptor vector exchange", "venue": "In Proc. ICCV,", "year": 2019 }, { "authors": [ "N. Ukita", "Yusuke Uematsu" ], "title": "Semi- and weakly-supervised human pose estimation", "venue": "Computer Vision Image Understanding,", "year": 2018 }, { "authors": [ "Laurens van der Maaten", "Geoffrey Hinton" ], "title": "Visualizing data using t-SNE", "venue": "Journal of Machine Learning Research,", "year": 2008 }, { "authors": [ "Jesper E. van Engelen", "H. Hoos" ], "title": "A survey on semi-supervised learning", "venue": "Machine Learning,", "year": 2019 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N. Gomez", "L. Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Proc. NIPS,", "year": 2017 }, { "authors": [ "P. Welinder", "S. Branson", "T. Mita", "C. Wah", "F. Schroff", "S. Belongie", "P. Perona" ], "title": "Caltech-UCSD Birds 200", "venue": "Technical Report CNS-TR-2010-001, California Institute of Technology,", "year": 2010 }, { "authors": [ "Qizhe Xie", "Zihang Dai", "E. Hovy", "Minh-Thang Luong", "Quoc V. Le" ], "title": "Unsupervised data augmentation for consistency training", "venue": "arXiv preprint,", "year": 2019 }, { "authors": [ "Y. Yang", "D. Ramanan" ], "title": "Articulated human detection with flexible mixtures of parts", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2013 }, { "authors": [ "Y. Zhang", "Yijie Guo", "Y. Jin", "Yijun Luo", "Zhiyuan He", "H. Lee" ], "title": "Unsupervised discovery of object landmarks as structural representations", "venue": "In Proc. 
CVPR,", "year": 2018 }, { "authors": [ "Zhihui Zhu", "X. Jiang", "Feng Zheng", "Xiao wei Guo", "Feiyue Huang", "W. Zheng", "Xing Sun" ], "title": "Viewpoint-aware loss with angular regularization for person re-identification", "venue": "In Proc. AAAI,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Detecting keypoints helps with fine-grained classification (Guo & Farrell, 2019) and re-identification (Zhu et al., 2020; Sarfraz et al., 2018). In the domain of wild animals (Mathis et al., 2018; Moskvyak et al., 2020; Liu et al., 2019a;b), annotating data is especially challenging due to large pose variations and the need for domain experts to annotate. Moreover, there is less commercial interest in keypoint estimation for animals compared to humans, and little effort is invested in collecting and annotating public datasets.\nUnsupervised detection of landmarks1 (Jakab et al., 2018; Thewlis et al., 2017; 2019) can extract useful features, but are not able to detect perceptible landmarks without supervision. On the other hand, supervised learning has the risk of overfitting if trained only on a limited number of labeled examples. Semi-supervised learning combines a small amount of labeled data with a large amount of unlabeled data during training. It is mostly studied for classification task (van Engelen & Hoos, 2019) but it is also important for keypoint localization problem because annotating multiple keypoints per image is a time-consuming manual work, for which precision is the most important factor. Pseudo-labeling (Lee, 2013) is a common semi-supervised approach where unlabeled examples are assigned labels (called pseudo-labels) predicted by a model trained on a labeled subset. A heuristic unsupervised criterion is adopted to select the pseudo-labeled data for a retraining procedure. More recently, the works of (Dong & Yang, 2019; Radosavovic et al., 2018) apply variations to selection criteria in pseudo-labeling for semi-supervised facial landmark detection. However, there are less variations in facial landmark positions than in human or animal body joints, where there is a high\n1We use terms keypoints or landmarks interchangeably in our work. These terms are more generic than body joints (used in human pose estimation) because our method is applicable to a variety of categories.\nrisk of transferring inaccurate pseudo-labeled examples to the retraining stage that is harmful for the model.\nPrevious work of (Honari et al., 2018) in semi-supervised landmark detection utilizes additional class attributes and test only on datasets that provide these attribute annotations. Our work focuses on keypoint localization task in a common real-world scenario where annotations are provided for a small subset of data from a large unlabeled dataset. More specifically, we propose a method for semi-supervised keypoint localization that learns a list of heatmaps and a list of semantic keypoint representations for each image (Figure 1). A semantic keypoint representation is a vector of real numbers in a low-dimensional space relative to the image size, and the same keypoints in different images have similar representations. We leverage properties that are specific to the landmark localization problem to design constraints for jointly optimizing both representations.\nWe extend a transformation consistency constraint of (Honari et al., 2018) to be able to apply it on each representation differently (i.e. transformation equivariant constraint for heatmaps and transformation invariant constraint for semantic representations). Moreover, we formulate a semantic consistency constraint that encourages detecting similar features across images for the same landmark independent of the pose of the object (e.g. an eye in all images should look similar). 
Learning both representations simultaneously allows us to use the power of both supervised and unsupervised learning.\nOur work is motivated by data scarcity in the domain of wild animals, but it is not limited to animals and is also applicable to human body landmark detection. The contribution of our work is three-fold:\n• We propose a technique for semi-supervised keypoint localization that jointly learns keypoint heatmaps and semantic representations optimised with supervised and unsupervised constraints;\n• Our method can be easily added to any existing keypoint localization network with no structural changes and minimal computational overhead;\n• We evaluate the proposed method on annotated image datasets for both humans and animals. As demonstrated by our results, our method significantly outperforms previously proposed supervised and unsupervised methods on several benchmarks, using only limited labeled data.\nThe paper is organised as follows. Related work on semi-supervised learning and keypoint localization is reviewed in Section 2. Our proposed method is described in Section 3. Experimental settings, datasets and results are discussed in Section 4." }, { "heading": "2 RELATED WORK", "text": "Keypoint localization. Supervised keypoint localization research is driven by a few large datasets with labeled keypoints that span several common research domains, including human pose estimation (Andriluka et al., 2014) and facial keypoints (Sagonas et al., 2016). Challenges in obtaining keypoint annotations have led to the rise of unsupervised landmark localization research. Several unsupervised methods leverage the concept of equivariance, which means that landmark coordinates stay consistent after synthetic transformations or in subsequent video frames. Thewlis et al. (2017) propose to learn viewpoint-independent representations that are equivariant to different transformations, and Dong et al. (2018) exploit the coherence of optical flow as a source of supervision. Zhang et al. (2018) learn landmark encodings by enforcing constraints that reflect the necessary properties of landmarks, such as separability and concentration. Jakab et al. (2018) propose a generative approach where the predicted heatmaps are used to reconstruct the input image from a transformed copy. Recent work (Thewlis et al., 2019) enforces consistency between instances of the same object by exchanging descriptor vectors. These methods are mostly evaluated on faces, which have fewer degrees of freedom in their movements and transformations than human or animal body joints. We compare our method to combinations of supervised and the aforementioned unsupervised methods in Section 4.\nSemi-supervised learning is most studied for the classification task. Pseudo-labeling (Lee, 2013) is a method that uses the model’s class predictions as artificial labels for unlabeled examples and then trains the model to predict these labels. Another technique is consistency regularization, which states that realistic perturbations of input examples from the unlabeled dataset should not significantly change the output of a neural network. Consistency regularization is used in the Π-model (Laine & Aila, 2017) and further improved by Temporal Ensembling (Laine & Aila, 2017), which maintains an exponential moving average prediction for each training example, and Mean Teacher (Tarvainen & Valpola, 2017), which averages model weights instead of model predictions.
Recent methods UDA (Xie et al., 2019), ReMixMatch (Berthelot et al., 2020) and FixMatch (Sohn et al., 2020) use a combination of consistency loss, pseudo-labeling and advanced augmentation techniques in addition to color perturbations and spatial transformations. In this work, we investigate the adjustments required to apply a consistency loss to keypoint localization, which we discuss in Section 3.2.\nSemi-supervised learning for keypoint localization. To the best of our knowledge, there are only a few works on semi-supervised keypoint localization. Dong & Yang (2019) build on the pseudo-labeling technique and propose one teacher model and two students to generate more reliable pseudo-labels for unlabeled images. However, the method is evaluated on face landmarks; in cases with high pose variation, there is a high possibility of inaccurate pseudo-labels that cannot be filtered out and are harmful during the retraining stage. Honari et al. (2018) and Ukita & Uematsu (2018) learn keypoints in a semi-supervised manner but utilise extra annotations to guide landmark learning, such as action labels (running, jumping) for human joints or emotion labels (smiling, yawning) for facial keypoint localization. Different from previous work, our approach does not use any class labels and learns directly from unlabeled data with high pose variations." }, { "heading": "3 SEMI-SUPERVISED LEARNING FOR KEYPOINT LOCALIZATION", "text": "In this work, we propose a semi-supervised technique for keypoint localization that learns from an image set where ground truth annotations are provided for only a small subset of the dataset. The overall architecture consists of two components: a keypoint localization network (KLN) that outputs keypoint heatmaps for the image, and a keypoint classification network (KCN) that classifies keypoints given a semantic keypoint representation as input. Our method does not pose any constraints on the architecture of the KLN, and it can be added to any existing keypoint localization network with minimal modifications.\nWe optimize heatmaps with the supervised loss and the transformation equivariance constraint. Simultaneously, keypoint representations are optimized with the transformation invariance and semantic consistency constraints (Figure 1). We discuss each constraint and the related components of the architecture in the next sections." }, { "heading": "3.1 SEMANTIC KEYPOINT REPRESENTATIONS", "text": "Keypoint heatmaps are optimized to estimate the locations of keypoints in the image. However, heatmaps do not carry any information about the semantic type of a keypoint (e.g., a beak or an eye for a bird). In the semi-supervised regime, the feedback provided by unlabeled examples is not as effective as that coming from labeled examples. To extract useful information from unlabeled images, we propose learning a semantic keypoint representation. In particular, the keypoint localization network is encouraged to detect similar features for the same semantic keypoint across the dataset by incorporating the feedback from a keypoint representation classifier in the objective function.\nThe motivation for our approach is that the same keypoints should activate the same feature maps. Let us consider the KLN as a function f(x; θ) with an input image x and trainable parameters θ that outputs heatmaps h = f(x; θ). We collect intermediate feature maps from the KLN, upscale them to the spatial dimension of the output heatmaps, concatenate them by channels, and pass them through a convolutional layer with C filters of size one (Figure 2).
The resulting feature map F has the shape (C, H, W). Then, the feature map F is element-wise multiplied with each keypoint heatmap hi, i ∈ {1, ..., K}, separately to mask out the activations corresponding to the detected keypoint. The output of this operation is K feature maps of size (C, H, W). Global Max Pooling (GMP) is applied over the feature maps to keep the highest value for each channel. We call the produced vector zi = GMP(F ⊙ hi) for each keypoint i ∈ {1, ..., K} a semantic keypoint representation. Finally, we pass the keypoint representations to a simple KCN (φ), which is a fully connected network with an input and an output layer for classification with a cross-entropy loss. The feedback from the cross-entropy loss makes up the semantic consistency (SC) loss:\nLsc(x) = −(1/K) Σ_{i=1}^{K} ŷi log(φ(zi)) (1)\nwhere ŷ is the vector of ground truth semantic labels for the keypoints, which is known because the order of keypoints in a heatmap is fixed.\nOne advantage of our method is its efficiency, as it only adds a small number of parameters to the network to address the task of keypoint representation classification. Specifically, the KCN is a small fully connected network shared between keypoints, and it has less than a thousand parameters, depending on the number of keypoints. Our approach is related to attention modules (Vaswani et al., 2017; Hu et al., 2020), as our network has the ability to focus on a subset of features using element-wise multiplication with heatmaps. However, our model uses this attention-based mechanism to learn additional keypoint representations from unlabeled data by optimizing a set of unsupervised losses." }, { "heading": "3.2 TRANSFORMATION CONSISTENCY CONSTRAINT", "text": "The difference between keypoint heatmaps and semantic keypoint representations is that the former are transformation equivariant while the latter are transformation invariant. In other words, the output heatmaps should be consistent with viewpoint variations of the image, while keypoint representations should be preserved under all transformations of the image. We call this property the transformation consistency constraint.\nTransformation equivariance (TE) enforces a commutative property on the landmark localization and augmentation operations, which include spatial transformations (e.g., rotations and translations), meaning that the order of applying these two operations does not matter. Let g(·, s) be an augmentation function with augmentation parameters s, which are not trainable and are sampled randomly each time. The transformation equivariance constraint is formulated as f ◦ g(x) = g ◦ f(x). We measure the transformation equivariance loss Lte over the predicted heatmaps by the squared Euclidean distance:\nLte(x; θ) = Ex[ ||f(g(x, s); θ) − g(f(x; θ), s)||² ] (2)\nNote that, after applying a transformation, some landmarks may go outside the image boundary, which causes a visibility issue. This problem is alleviated in our formulation by applying the same transformation to the image. This is different from the equivariant landmark transformation (ELT) loss proposed by Honari et al. (2018), which computes an inverse transformation instead. In essence, the inverse transformation cannot bring these landmarks back, meaning that the inverse transformation does not recover the original image. Our approach avoids this issue.
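Before introducing the invariance loss, here is a minimal sketch of the representation extraction and the SC loss of Eq. (1) described in Section 3.1. The masking, global max pooling, and shared classifier follow the text; the tensor layout and the single-layer KCN are assumptions of the sketch.

```python
# Illustrative computation of semantic keypoint representations and L_sc.
import torch
import torch.nn as nn
import torch.nn.functional as F

def keypoint_representations(feats, heatmaps):
    """feats: fused map F of shape (B, C, H, W); heatmaps: (B, K, H, W)."""
    masked = feats.unsqueeze(1) * heatmaps.unsqueeze(2)  # (B, K, C, H, W)
    return masked.amax(dim=(-2, -1))                     # GMP -> z: (B, K, C)

class KCN(nn.Module):
    """Tiny keypoint classifier shared across all keypoints."""
    def __init__(self, c=64, n_keypoints=16):
        super().__init__()
        self.fc = nn.Linear(c, n_keypoints)

    def forward(self, z):                                # z: (B, K, C)
        return self.fc(z)                                # logits: (B, K, K)

def semantic_consistency_loss(kcn, z):
    b, k, _ = z.shape
    logits = kcn(z).reshape(b * k, k)
    # The label of keypoint i is simply i, since the heatmap order is fixed.
    labels = torch.arange(k, device=z.device).repeat(b)
    return F.cross_entropy(logits, labels)
```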
Transformation invariance (TI) of the keypoint representations is enforced by pulling the corresponding vectors of the image and its augmented copy closer together. First, we concatenate the keypoint representations into one vector to obtain a holistic representation z of the image x:\nz = [z1, z2, ..., zK]. (3)\nWe apply a random spatial transformation to the input image to obtain an image x′, compute the keypoint representations z′1, z′2, ..., z′K, and concatenate them to obtain a vector z′. Finally, we enforce pose invariance by penalizing the distance between the representations of the original and transformed images, and formulate the transformation invariance loss Lti:\nLti(x, x′) = Ex,x′[ ||z − z′||² ] (4)\nThe overall objective is the weighted sum of the losses:\nL = λ1Lsup + λ2Lsc + λ3Lte + λ4Lti (5)\nwhere Lsup is the supervised mean squared error between the predicted and ground truth heatmaps for the labeled subset. The parameters λi are determined experimentally."
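Putting Eqs. (2)-(5) together, a hedged sketch of the consistency losses and the total objective might read as follows. Here model returns the heatmaps and the concatenated representation z, and augment applies a random perspective warp with shared parameters s to images and heatmaps alike; both interfaces are assumptions of this illustration, and the weights mirror the values reported in Section 4.2.

```python
# Illustrative TE/TI losses and combined objective; interfaces are assumed.
import torch.nn.functional as F

def consistency_losses(model, x, augment):
    h, z = model(x)                        # heatmaps and holistic representation
    s = augment.sample_params(x)           # random perspective parameters
    h_aug, z_aug = model(augment(x, s))    # forward pass on the warped image

    l_te = F.mse_loss(h_aug, augment(h, s))  # Eq. (2): f(g(x, s)) ~ g(f(x), s)
    l_ti = F.mse_loss(z_aug, z)              # Eq. (4): z' ~ z
    return l_te, l_ti

def total_loss(l_sup, l_sc, l_te, l_ti, w=(1e3, 0.5, 1e2, 1e2)):
    # Eq. (5) with (lambda_1, ..., lambda_4) from Section 4.2.
    return w[0] * l_sup + w[1] * l_sc + w[2] * l_te + w[3] * l_ti
```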
The order of the keypoints is explicitly defined in annotations and is fixed for the training and inference.\nThe evaluation metric is PCK (probability of correct keypoint) from (Yang & Ramanan, 2013) where a keypoint is considered correctly localized if it falls within αl pixels of the ground truth position (α is a constant and l is the maximum side of the bounding box). The PCK@0.1 (α = 0.1) score is reported for LSP, CUB-200-2011 and ATRW datasets. For MPII we use an adaptation (Andriluka et al., 2014) which is PCKh (head-normalized probability of correct keypoint) where l is the head size that corresponds to 60% of the diagonal length of the ground truth head bounding box (provided in the MPII annotations)." }, { "heading": "4.2 IMPLEMENTATION DETAILS", "text": "Images are resized to the input size 256 × 256 and heatmaps are predicted at size 64 × 64. We adapt HRNet-32 (Sun et al., 2019) architecture as KLN because it is originally designed for keypoint localization and retains features at high spatial dimension (e.g. 64 × 64 for the input of size 256×256). We collect intermediate features at the output of each multi-scale subnetwork, after concatenation we get 352 channels and then apply 64 convolutional filters of size one. GMP results in representations of length 64 for each keypoint. We also experimented with collecting more features from different layers but it did not improve the performance. KCN is a fully connected network that accepts keypoint representation of size 64 and classifies keypoints based on their semantic labels (from 10 to 17 depending on the dataset).\nWe use perspective transformations as an augmentation function g where parameters s of the transformation are sampled randomly using a method from (Moskvyak & Maire, 2017) to avoid extreme warping. We also experimented with simple affine transformations but perspective gave better results most likely due to higher variability of transformations.\nUnsupervised losses may hurt the learning at the beginning because output heatmaps and intermediate feature maps are random during first epochs. A possible solution is to vary the contribution of unsupervised losses according to a predefined strategy. To avoid tuning many hyperparameters, our semi-supervised approach uses ground truth heatmaps in unsupervised losses for the labeled samples in a batch. This approach has only one hyperparameter - percentage of the labeled samples in a batch. We found that there is enough feedback from labeled examples when the batch has 50% of labeled and 50% of unlabeled examples.\nWe adopt Adam (Kingma & Ba, 2015) optimizer with learning rate 10−4 for all experiments. Models are trained until the accuracy on the validation set has stopped improving. The weights of loss components were determined experimentally (λ1, λ2, λ3, λ4) = (103, 0.5, 102, 102). We provide the sensitivity analysis in Section 4." }, { "heading": "4.3 RESULTS", "text": "Comparison with the supervised baseline. We train HRNet-32 (Sun et al., 2019) with the supervised loss as a baseline from the official implementation on the labeled subsets with 5%, 10%, 20%, 50% and 100% of the dataset. The baseline performance decreases significantly when the amount of training data is reduced on human poses and tigers datasets (Table 1). On birds dataset, we observe\nonly a small decrease in the baseline score (Table 1). We explain it by the fact that there are more variations in poses of four-legged animals and human body joints than of birds. 
Supervised results on MPII are lower than the official ones because the training set is smaller and we do not include additional tricks during training (e.g. half body transforms) and testing (post-processing and averaging over flipped images).\nOur method significantly improves the baseline on all datasets (Table 1). Our proposed unsupervised constraints are the most beneficial for low data regimes with 5%, 10% and 20% labeled images. For example, our method increases the score from 40% to 66% on LSP dataset with 5% of labeled data. On the challenging tigers dataset, our approach reaches the score of 92% trained with only 5% labeled examples when the supervised model shows the score 69% while trained on the same labeled data. Experiments show that the influence of additional unsupervised losses decreases when more labeled examples are added to the training. Experiments show that our method on 100% labeled data outperforms the supervised baseline by a small margin because by learning supplementary semantic keypoint representations with unsupervised losses the model learns to generalize better.\nComparison with the pseudo-labeled baseline. We apply pseudo-labeled (PL) method from Radosavovic et al. (2018) on our datasets (Table 1). We use the same model HRNet-32 as in all our experiments for a fair comparison. Overall, the pseudo-labeled baseline is inferior to our method on all datasets used in our study. We explain it by the fact that Radosavovic et al. (2018) trained on datasets that are by order of magnitude larger than our data so models pretrained on the labeled subset are already good enough to generate reliable pseudo-labels." }, { "heading": "TE + TI + SC 66.32 69.09 71.62 72.19 74.44", "text": "Comparison with related methods. We compare our approach with previously proposed semisupervised and unsupervised methods for landmark detection (Table 1). The equivariant landmark transformation (ELT) loss from (Honari et al., 2018) forces a model to predict equivariant landmarks with respect to transformations applied to an image. ELT loss gives a small improvement over the baseline model and is inferior to our method on all datasets. Jakab et al. (2018) learn keypoints without supervision by encouraging the keypoints to capture the geometry of the object by learning to generate the input image given its predicted keypoints and an augmented copy. For a fair comparison we inject the models from Jakab et al. (2018) into our training pipeline and add the supervised loss for the labeled examples in each batch. All other parameters are kept the same including augmentation, subsets of data and training schedule. We observe that the generation approach improves over ELT loss and the baseline however it is inferior to our method. The generation approach also introduces more parameters (in the reconstruction part of the network) than our approach that adds only a small keypoint classifier network.\nAblation study. We investigate the influence of different loss components of our methods on LSP dataset (Table 2). At first, we remove semantic consistency loss component (Eq. 1) and observe the significant drop in the score especially in low labeled data regime. For example, with 5% of labeled data the score drops from 66% when trained with the combination TE + TI + SC to 46% for the combination TE + TI. When we return semantic consistency and remove transformation consistency losses (Eq. 2, 4), the results are reduced slightly. 
The results of the ablation study show that the semantic consistency loss component is more influential than the transformation consistency losses. Both TE and TI losses contribute to the performance gain, and their combination achieves better results than each loss separately. We argue that our TE loss is an improvement over the ELT loss (Honari et al., 2018). We replaced our TE loss with the inverse transformation loss of Honari et al. (2018) in our framework and applied it on the ATRW and CUB-200-2011 datasets with 20% of labeled data. We observed that the score decreased by 1% on both datasets.\nWe also analyse the influence of the amount of unlabeled data in our method (Table 3). We conduct experiments where the number of labeled examples is fixed at 5% and the number of unlabeled examples is reduced to 50%, 20% and 10% of the original number of unlabeled samples. We observe that the score goes down as the amount of unlabeled data is reduced. Our method outperforms the supervised score only by a small margin with 10% of unlabeled data. We conclude that the number of unlabeled examples plays an important role in training with our unsupervised losses.\nWe conduct an ablation study to get insight into the use of ground truth heatmaps in the unsupervised losses. Experiments on the LSP dataset show a decrease of 1-2% in the score for all cases when ground truth heatmaps are not used (Table 4). The results confirm the benefit of using the signal from the available ground truth heatmaps.\nSensitivity analysis of loss component weights. We fixed the weight λ1 = 10³ and tested weights λ2 = (0.1, 0.5, 1.0), λ3 = (10¹, 10², 10³) and λ4 = (10¹, 10², 10³). The ranges of weight values differ due to the difference in scale between the mean squared error and cross-entropy losses. Experiments on the LSP dataset show that our method is not sensitive to variations of the TE (λ3) and TI (λ4) losses (Figure 3). The most notable drop in accuracy is observed when the weight of the SC loss (λ2) is reduced to 0.1, and the accuracy is at the same level when λ2 equals 0.5 and 1.0. We select the combination (λ2, λ3, λ4) = (0.5, 10², 10²), which achieves the highest score.\nAnalysis of keypoint representations. We analyze the learned keypoint representations with t-SNE (van der Maaten & Hinton, 2008). The t-SNE algorithm maps a high-dimensional space (64 dimensions in our case) into a two-dimensional one while preserving the similarity between points. The t-SNE plot for the keypoint representations of the LSP test set (Figure 4) shows that representations of the same keypoints are clustered together." }, { "heading": "5 CONCLUSION", "text": "We presented a new method for semi-supervised keypoint localization. We show that reliable keypoints can be obtained with a limited number of labeled examples. This is achieved by learning semantic keypoint representations simultaneously with keypoint heatmaps, using a set of unsupervised constraints tailored for the keypoint localization task. We applied our method to predict human body joints and animal body keypoints and demonstrated that it outperforms current supervised and unsupervised methods. Moreover, it reaches the same performance as the model trained on the whole labeled dataset with only 10% of labeled images on the ATRW tigers dataset and with 50% of labeled images on the challenging LSP human pose dataset. We plan to investigate the applicability of our method to domain adaptation for keypoint localization in future work." } ]
2021
SEMI-SUPERVISED KEYPOINT LOCALIZATION
SP:4815005f4ab4a69abde3b5456b811e4e98ba86c7
[ "This work proposes a new form of adversarial training, supported by two proposed adversarial attacks based off a perceptual distance. The choice of perceptual distance (LPIPS), is computed by comparing the activations of (possibly different) two neural networks with respect to a pair of inputs. The authors propose two new attacks based off this perceptual distance: PPGD and LPA, as it is distinct from the common choice of L_2 or L_inf. This work claims that performing adversarial training against adversarial examples crafted by the proposed attacks, induces robustness to a wide range of \"narrow\" threat models e.g. L_2, JPEG, L_inf.", "This paper studies the adversarial robustness of deep neural networks against multiple and unforeseen threat models. Since there lacks a precise formalization of human perception, this paper adopts LPIPS, a metric that correlates well with human perception based on neural network activations. Then, two adversarial attack methods are proposed to generate adversarial examples under the metric. And an adversarial training method is also proposed. The experiments on various threat models validate the effectiveness of the proposed method." ]
A key challenge in adversarial robustness is the lack of a precise mathematical characterization of human perception, used in the definition of adversarial attacks that are imperceptible to human eyes. Most current attacks and defenses try to avoid this issue by considering restrictive adversarial threat models such as those bounded by L2 or L∞ distance, spatial perturbations, etc. However, models that are robust against any of these restrictive threat models are still fragile against other threat models, i.e. they have poor generalization to unforeseen attacks. Moreover, even if a model is robust against the union of several restrictive threat models, it is still susceptible to other imperceptible adversarial examples that are not contained in any of the constituent threat models. To resolve these issues, we propose adversarial training against the set of all imperceptible adversarial examples. Since this set is intractable to compute without a human in the loop, we approximate it using deep neural networks. We call this threat model the neural perceptual threat model (NPTM); it includes adversarial examples with a bounded neural perceptual distance (a neural network-based approximation of the true perceptual distance) to natural images. Through an extensive perceptual study, we show that the neural perceptual distance correlates well with human judgements of perceptibility of adversarial examples, validating our threat model. Under the NPTM, we develop novel perceptual adversarial attacks and defenses. Because the NPTM is very broad, we find that Perceptual Adversarial Training (PAT) against a perceptual attack gives robustness against many other types of adversarial attacks. We test PAT on CIFAR-10 and ImageNet-100 against five diverse adversarial attacks: L2, L∞, spatial, recoloring, and JPEG. We find that PAT achieves state-of-the-art robustness against the union of these five attacks—more than doubling the accuracy over the next best model—without training against any of them. That is, PAT generalizes well to unforeseen perturbation types. This is vital in sensitive applications where a particular threat model cannot be assumed, and to the best of our knowledge, PAT is the first adversarial training defense with this property.
[ { "affiliations": [], "name": "Cassidy Laidlaw" }, { "affiliations": [], "name": "Sahil Singla" }, { "affiliations": [], "name": "Soheil Feizi" } ]
[ { "authors": [ "Battista Biggio", "Igino Corona", "Davide Maiorca", "Blaine Nelson", "Nedim Šrndić", "Pavel Laskov", "Giorgio Giacinto", "Fabio Roli" ], "title": "Evasion Attacks against Machine Learning at Test Time", "venue": "Machine Learning and Knowledge Discovery in Databases, Lecture Notes in Computer Science,", "year": 2013 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "In International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Alexey Kurakin", "Ian Goodfellow", "Samy Bengio" ], "title": "Adversarial examples in the physical world", "venue": "arXiv preprint arXiv:1607.02533,", "year": 2016 }, { "authors": [ "Cihang Xie", "Jianyu Wang", "Zhishuai Zhang", "Yuyin Zhou", "Lingxi Xie", "Alan Yuille" ], "title": "Adversarial examples for semantic segmentation and object detection", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Ian Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and Harnessing Adversarial Examples", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards Deep Learning Models Resistant to Adversarial Attacks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Hongyang Zhang", "Yaodong Yu", "Jiantao Jiao", "Eric P. Xing", "Laurent El Ghaoui", "Michael I. Jordan" ], "title": "Theoretically Principled Trade-off Between Robustness and Accuracy", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Logan Engstrom", "Brandon Tran", "Dimitris Tsipras", "Ludwig Schmidt", "Aleksander Madry" ], "title": "A Rotation and a Translation Suffice: Fooling CNNs with Simple Transformations", "venue": "arXiv preprint arXiv:1712.02779,", "year": 2017 }, { "authors": [ "Eric Wong", "Frank R. Schmidt", "J. Zico Kolter" ], "title": "Wasserstein Adversarial Examples via Projected Sinkhorn Iterations", "venue": "arXiv preprint arXiv:1902.07906,", "year": 2019 }, { "authors": [ "Chaowei Xiao", "Jun-Yan Zhu", "Bo Li", "Warren He", "Mingyan Liu", "Dawn Song" ], "title": "Spatially Transformed Adversarial Examples", "venue": "arXiv preprint arXiv:1801.02612,", "year": 2018 }, { "authors": [ "Hossein Hosseini", "Radha Poovendran" ], "title": "Semantic Adversarial Examples", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops,", "year": 2018 }, { "authors": [ "Cassidy Laidlaw", "Soheil Feizi" ], "title": "Functional Adversarial Attacks", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Anand Bhattad", "Min Jin Chong", "Kaizhao Liang", "Bo Li", "David A. Forsyth" ], "title": "Big but Imperceptible Adversarial Perturbations via Semantic Manipulation", "venue": null, "year": 1904 }, { "authors": [ "Yang Song", "Rui Shu", "Nate Kushman", "Stefano Ermon" ], "title": "Constructing Unrestricted Adversarial Examples with Generative Models", "venue": "In Proceedings of the 32nd International Conference on Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Xiaohui Zeng", "Chenxi Liu", "Yu-Siang Wang", "Weichao Qiu", "Lingxi Xie", "Yu-Wing Tai", "Chi Keung Tang", "Alan L. 
Yuille" ], "title": "Adversarial Attacks Beyond the Image Space", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Matt Jordan", "Naren Manoj", "Surbhi Goel", "Alexandros G. Dimakis" ], "title": "Quantifying perceptual distortion of adversarial examples, 2019", "venue": null, "year": 2019 }, { "authors": [ "Pratyush Maini", "Eric Wong", "J. Zico Kolter" ], "title": "Adversarial Robustness Against the Union of Multiple Perturbation Models. arXiv:1909.04068 [cs, stat], September 2019", "venue": "URL http://arxiv.org/abs/ 1909.04068", "year": 1909 }, { "authors": [ "Florian Tramer", "Dan Boneh" ], "title": "Adversarial Training and Robustness for Multiple Perturbations", "venue": "Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Daniel Kang", "Yi Sun", "Dan Hendrycks", "Tom Brown", "Jacob Steinhardt" ], "title": "Testing Robustness Against Unforeseen Adversaries", "venue": "arXiv preprint arXiv:1908.08016,", "year": 2019 }, { "authors": [ "T.B. Brown", "N. Carlini", "C. Zhang", "C. Olsson", "P. Christiano", "I. Goodfellow" ], "title": "Unrestricted adversarial examples", "venue": "arXiv preprint arXiv:1809.08352,", "year": 2018 }, { "authors": [ "Tom B. Brown", "Dandelion Mané", "Aurko Roy", "Martín Abadi", "Justin Gilmer" ], "title": "Adversarial Patch. arXiv:1712.09665 [cs], May 2018", "venue": "URL http://arxiv.org/abs/1712.09665", "year": 2018 }, { "authors": [ "Zhou Wang", "A.C. Bovik", "H.R. Sheikh", "E.P. Simoncelli" ], "title": "Image quality assessment: from error visibility to structural similarity", "venue": "IEEE Transactions on Image Processing,", "year": 2004 }, { "authors": [ "Richard Zhang", "Phillip Isola", "Alexei A. Efros", "Eli Shechtman", "Oliver Wang" ], "title": "The Unreasonable Effectiveness of Deep Features as a Perceptual Metric", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Towards Evaluating the Robustness of Neural Networks", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2017 }, { "authors": [ "Alexey Kurakin", "Ian J. Goodfellow", "Samy Bengio" ], "title": "Adversarial machine learning at scale", "venue": "ArXiv, abs/1611.01236,", "year": 2016 }, { "authors": [ "Mahmood Sharif", "Lujo Bauer", "Michael K. Reiter" ], "title": "On the Suitability of Lp-norms for Creating and Preventing Adversarial Examples", "venue": "[cs],", "year": 2018 }, { "authors": [ "Isaac Dunn", "Tom Melham", "Daniel Kroening" ], "title": "Semantic Adversarial Perturbations using Learnt Representations", "venue": "[cs],", "year": 2020 }, { "authors": [ "Qiuling Xu", "Guanhong Tao", "Siyuan Cheng", "Lin Tan", "Xiangyu Zhang" ], "title": "Towards Feature Space Adversarial Attack. arXiv:2004.12385 [cs, eess], April 2020", "venue": "URL http://arxiv.org/abs/2004.12385", "year": 2004 }, { "authors": [ "Charles Jin", "Martin Rinard" ], "title": "Manifold Regularization for Locally Stable Deep Neural Networks. March 2020", "venue": "URL https://arxiv.org/abs/2003.04286v2", "year": 2003 }, { "authors": [ "David Stutz", "Matthias Hein", "Bernt Schiele" ], "title": "Confidence-Calibrated Adversarial Training: Generalizing to Unseen Attacks. arXiv:1910.06259 [cs, stat], February 2020", "venue": "URL http://arxiv.org/abs/1910", "year": 1910 }, { "authors": [ "Z. Wang", "E.P. Simoncelli", "A.C. 
Bovik" ], "title": "Multiscale structural similarity for image quality assessment", "venue": "In The Thrity-Seventh Asilomar Conference on Signals, Systems Computers,", "year": 2003 }, { "authors": [ "Mehul P. Sampat", "Zhou Wang", "Shalini Gupta", "Alan Conrad Bovik", "Mia K. Markey" ], "title": "Complex Wavelet Structural Similarity: A New Image Similarity Index", "venue": "IEEE Transactions on Image Processing,", "year": 2009 }, { "authors": [ "Rafał Mantiuk", "Kil Joong Kim", "Allan G. Rempel", "Wolfgang Heidrich" ], "title": "HDR-VDP-2: a calibrated visual metric for visibility and quality predictions in all luminance conditions", "venue": "ACM Transactions on Graphics,", "year": 2011 }, { "authors": [ "Zhou Wang", "Eero P. Simoncelli" ], "title": "Maximum differentiation (MAD) competition: A methodology for comparing computational models of perceptual quantities", "venue": "Journal of Vision,", "year": 2008 }, { "authors": [ "Xun Huang", "Ming-Yu Liu", "Serge Belongie", "Jan Kautz" ], "title": "Multimodal Unsupervised Image-to-image Translation", "venue": null, "year": 2018 }, { "authors": [ "Tero Karras", "Samuli Laine", "Timo Aila" ], "title": "A Style-Based Generator Architecture for Generative Adversarial Networks", "venue": null, "year": 2019 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "ImageNet Classification with Deep Convolutional Neural Networks", "venue": "Advances in Neural Information Processing Systems", "year": 2012 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein", "Alexander C. Berg", "Li Fei-Fei" ], "title": "ImageNet Large Scale Visual Recognition Challenge", "venue": "International Journal of Computer Vision (IJCV),", "year": 2015 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning Multiple Layers of Features from Tiny Images", "venue": "Technical report, Citeseer,", "year": 2009 }, { "authors": [ "Francesco Croce", "Matthias Hein" ], "title": "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Dan Hendrycks", "Thomas Dietterich" ], "title": "Benchmarking neural network robustness to common corruptions and perturbations", "venue": "Proceedings of the International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Hossein Hosseini", "Baicen Xiao", "Mayoore Jaiswal", "Radha Poovendran" ], "title": "On the Limitation of Convolutional Neural Networks in Recognizing Negative Images", "venue": "In 16th IEEE International Conference on Machine Learning and Applications (ICMLA),", "year": 2017 }, { "authors": [ "Huan Zhang", "Hongge Chen", "Zhao Song", "Duane S. Boning", "Inderjit S. Dhillon", "Cho-Jui Hsieh" ], "title": "The Limitations of Adversarial Training and the Blind-Spot Attack", "venue": "International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Yogesh Balaji", "Tom Goldstein", "Judy Hoffman" ], "title": "Instance adaptive adversarial training: Improved accuracy tradeoffs in neural nets. arXiv:1910.08051 [cs, stat], October 2019", "venue": "URL http://arxiv.org/abs/ 1910.08051", "year": 1910 }, { "authors": [ "Dimitris Tsipras", "Shibani Santurkar", "Logan Engstrom", "Alexander Turner", "Aleksander Madry" ], "title": "Robustness May Be at Odds with Accuracy. 
arXiv:1805.12152 [cs, stat], September 2019", "venue": "URL http://arxiv. org/abs/1805.12152", "year": 2019 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep Residual Learning for Image Recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Adam Paszke", "Sam Gross", "Soumith Chintala", "Gregory Chanan", "Edward Yang", "Zachary DeVito", "Zeming Lin", "Alban Desmaison", "Luca Antiga", "Adam Lerer" ], "title": "Automatic Differentiation in PyTorch", "venue": "NIPS-W,", "year": 2017 }, { "authors": [ "Logan Engstrom", "Andrew Ilyas", "Shibani Santurkar", "Dimitris Tsipras" ], "title": "Robustness (python library), 2019", "venue": "URL https://github.com/MadryLab/robustness", "year": 2019 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very Deep Convolutional Networks for Large-Scale Image Recognition", "venue": "URL https://arxiv.org/abs/1409.1556v6", "year": 2014 }, { "authors": [], "title": "2019) have noted that there is often a tradeoff the between adversarial robustness of a classifier and its accuracy. That is, models which have higher accuracy under adversarial attack", "venue": null, "year": 2019 }, { "authors": [ "relative mCE" ], "title": "The relative corruption error is defined by Hendrycks and Dietterich (2019) for a classifier f and corruption type c as Relative", "venue": null, "year": 2019 }, { "authors": [ "Kang" ], "title": "Hyperparameters for the adversarial training experiments on CIFAR-10 and ImageNet-100. For CIFAR-10, hyperparameters are similar to those used by Zhang et al. (2019a)", "venue": "For ImageNet-100,", "year": 2019 }, { "authors": [ "Zhang" ], "title": "LAYERS FOR LPIPS CALCULATION Calculating the LPIPS distance using a neural network classifier g(·) requires choosing layers whose normalized, flattened activations φ(·) should be compared between images. For AlexNet and VGG-16, we use the same layers to calculate LPIPS distance", "venue": null, "year": 2018 }, { "authors": [ "He" ], "title": "AlexNet (Krizhevsky et al., 2012), we use the activations after each of the first five ReLU functions. For VGG-16 (Simonyan and Zisserman, 2014), we use the activations directly before the five max pooling layers. In ResNet-50, we use the outputs of the conv2_x, conv3_x, conv4_x, and conv5_x layers", "venue": null, "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Many modern machine learning algorithms are susceptible to adversarial examples: carefully crafted inputs designed to fool models into giving incorrect outputs (Biggio et al., 2013; Szegedy et al., 2014; Kurakin et al., 2016a; Xie et al., 2017). Much research has focused on increasing classifiers’ robustness against adversarial attacks (Goodfellow et al., 2015; Madry et al., 2018; Zhang et al., 2019a). However, existing adversarial defenses for image classifiers generally consider simple threat models. An adversarial threat model defines a set of perturbations that may be made to an image in order to produce an adversarial example. Common threat models include L2 and L∞ threat models, which constrain adversarial examples to be close to the original image in L2 or L∞ distances. Some work has proposed additional threat models which allow spatial perturbations (Engstrom et al., 2017; Wong et al., 2019; Xiao et al., 2018), recoloring (Hosseini and Poovendran, 2018; Laidlaw and Feizi, 2019; Bhattad et al., 2019), and other modifications (Song et al., 2018; Zeng et al., 2019) of an image.\nThere are multiple issues with these unrealistically constrained adversarial threat models. First, hardening against one threat model assumes that an adversary will only attempt attacks within that threat model. Although a classifier may be trained to be robust against L∞ attacks, for instance,\nan attacker could easily generate a spatial attack to fool the classifier. One possible solution is to train against multiple threat models simultaneously (Jordan et al., 2019; Laidlaw and Feizi, 2019; Maini et al., 2019; Tramer and Boneh, 2019). However, this generally results in a lower robustness against any one of the threat models when compared to hardening against that threat model alone. Furthermore, not all possible threat models may be known at training time, and adversarial defenses do not usually generalize well to unforeseen threat models (Kang et al., 2019).\nThe ideal solution to these drawbacks would be a defense that is robust against a wide, unconstrained threat model. We differentiate between two such threat models. The unrestricted adversarial threat model (Brown et al., 2018) encompasses any adversarial example that is labeled as one class by a classifier but a different class by humans. On the other hand, we define the perceptual adversarial threat model as including all perturbations of natural images that are imperceptible to a human. Most existing narrow threat models such as L2, L∞, etc. are near subsets of the perceptual threat model (Figure 1). Some other threat models, such as adversarial patch attacks (Brown et al., 2018), may perceptibly alter an image without changing its true class and as such are contained in the unrestricted adversarial threat model. In this work, we focus on the perceptual threat model.\nThe perceptual threat model can be formalized given the true perceptual distance d∗(x1,x2) between images x1 and x2, defined as how different two images appear to humans. For some threshold ∗, which we call the perceptibility threshold, images x and x′ are indistinguishable from one another as long as d∗(x,x′) ≤ ∗. Note that in general ∗ may depend on the specific input. Then, the perceptual threat model for a natural input x includes all adversarial examples x̃ which cause misclassification but are imperceptibly different from x, i.e. d∗(x, x̃) ≤ ∗. 
The true perceptual distance d∗(·, ·), however, cannot be easily computed or optimized against. To solve this issue, we propose to use a neural perceptual distance, an approximation of the true perceptual distance between images using neural networks. Fortunately, there have been many surrogate perceptual distances proposed in the computer vision literature such as SSIM (Wang et al., 2004). Recently, Zhang et al. (2018) discovered that comparing the internal activations of a convolutional neural network when two different images are passed through provides a measure, Learned Perceptual Image Patch Similarity (LPIPS), that correlates well with human perception. We propose to use the LPIPS distance d(·, ·) in place of the true perceptual distance d∗(·, ·) to formalize the neural perceptual threat model (NPTM).\nWe present adversarial attacks and defenses for the proposed NPTM. Generating adversarial examples bounded by the neural perceptual distance is difficult compared to generating Lp adversarial examples because of the complexity and non-convexity of the constraint. However, we develop two attacks for the NPTM, Perceptual Projected Gradient Descent (PPGD) and Lagrangian Perceptual Attack (LPA) (see Section 4 for details). We find that LPA is by far the strongest adversarial attack at a given level of perceptibility (see Figure 4), reducing the most robust classifier studied to only 2.4% accuracy on ImageNet-100 (a subset of ImageNet) while remaining imperceptible. LPA also finds adversarial examples outside of any of the other threat models studied (see Figure 2). Thus, even if a model is robust to many narrow threat models (Lp, spatial, etc.), LPA can still cause serious errors.\nIn addition to these attacks, which are suitable for evaluation of a classifier against the NPTM, we also develop Fast-LPA, a more efficient version of LPA that we use in Perceptual Adversarial Training (PAT). Remarkably, using PAT to train a neural network classifier produces a single model with high robustness against a variety of imperceptible perturbations, including L∞, L2, spatial, recoloring, and JPEG attacks, on CIFAR-10 and ImageNet-100 (Tables 2 and 3). For example, PAT on ImageNet-100 gives 32.5% accuracy against the union of these five attacks, whereas L∞ and L2 adversarial training give 0.5% and 12.3% accuracy, respectively (Table 1). PAT achieves more than double the accuracy against this union of five threat models despite not explicitly training against any of them. Thus, it generalizes well to unseen threat models.\nDoes the LPIPS distance accurately reflect human perception when it is used to evaluate adversarial examples? We performed a study on Amazon Mechanical Turk (AMT) to determine how perceptible 7 different types of adversarial perturbations such as L∞, L2, spatial, and recoloring attacks are at multiple threat-specific bounds. We find that LPIPS correlates well with human judgements across all the different adversarial perturbation types we examine. This indicates that the NPTM closely matches the true perceptual threat model and reinforces the utility of our perceptual attacks to measure adversarial robustness against an expansive threat model. Furthermore, this study allows calibration of a variety of attack bounds to a single perceptibility metric. We have released our dataset of adversarial examples along with the annotations made by participants for further study1."
}, { "heading": "2 RELATED WORK", "text": "Adversarial robustness Adversarial robustness has been studied extensively for L2 or L∞ threat models (Goodfellow et al., 2015; Carlini and Wagner, 2017; Madry et al., 2018) and non-Lp threat models such as spatial perturbations (Engstrom et al., 2017; Xiao et al., 2018; Wong et al., 2019), recoloring of an image (Hosseini and Poovendran, 2018; Laidlaw and Feizi, 2019; Bhattad et al., 2019), and perturbations in the frequency domain (Kang et al., 2019). The most popular known adversarial defense for these threat models is adversarial training Kurakin et al. (2016b); Madry et al. (2018); Zhang et al. (2019a) where a neural network is trained to minimize the worst-case loss in a region around the input. Recent evaluation methodologies such as Unforeseen Attack Robustness (UAR) (Kang et al., 2019) and the Unrestricted Adversarial Examples challenge (Brown et al., 2018) have raised the problem of finding an adversarial defense which gives good robustness under more general threat models. Sharif et al. (2018) conduct a perceptual study showing that Lp threat models are a poor approximation of the perceptual threat model. Dunn et al. (2020) and Xu et al. (2020) have developed adversarial attacks that manipulate higher-level, semantic features. Jin and Rinard (2020) train with a manifold regularization term, which gives some robustness to unseen perturbation types. Stutz et al. (2020) also propose a method which gives robustness against unseen perturbation types, but requires rejecting (abstaining on) some inputs.\nPerceptual similarity Two basic similarity measures for images are the L2 distance and the Peak Signal-to-Noise Ratio (PSNR). However, these similarity measures disagree with human vision on perturbations such as blurring and spatial transformations, which has motivated others including SSIM (Wang et al., 2004), MS-SSIM (Wang et al., 2003), CW-SSIM (Sampat et al., 2009), HDR-VDP-2 (Mantiuk et al., 2011) and LPIPS (Zhang et al., 2018). MAD competition (Wang and Simoncelli, 2008) uses a constrained optimization technique related to our attacks to evaluate perceptual measures.\nPerceptual adversarial robustness Although LPIPS was previously proposed, it has mostly been used for development and evaluation of generative models (Huang et al., 2018; Karras et al., 2019). Jordan et al. (2019) first explored quantifying adversarial distortions with LPIPS distance. However, to the best of our knowledge, we are the first to apply a more accurate perceptual distance to the\n1Code and data can be downloaded at https://github.com/cassidylaidlaw/perceptual-advex.\nproblem of improving adversarial robustness. As we show, adversarial defenses based on L2 or L∞ attacks are unable to generalize to a more diverse threat model. Our method, PAT, is the first adversarial training method we know of that can generalize to unforeseen threat models without rejecting inputs." }, { "heading": "3 NEURAL PERCEPTUAL THREAT MODEL (NPTM)", "text": "Since the true perceptual distance between images cannot be efficiently computed, we use approximations of it based on neural networks, i.e. neural perceptual distances. In this paper, we focus on the LPIPS distance (Zhang et al., 2018) while we note that other neural perceptual distances can also be used in our attacks and defenses.\nLet g : X → Y be a convolutional image classifier network defined on images x ∈ X . 
Let g have L layers, and let the internal activations (outputs) of the l-th layer of g(x) for an input x be denoted as gl(x). Zhang et al. (2018) have found that normalizing and then comparing the internal activations of convolutional neural networks correlates well with human similarity judgements. Thus, the first step in calculating the LPIPS distance using the network g(·) is to normalize the internal activations across the channel dimension such that the L2 norm over channels at each pixel is one. Let ĝl(x) denote these channel-normalized activations at the l-th layer of the network. Next, the activations are normalized again by layer size and flattened into a single vector φ(x) ≜ ( ĝ1(x)/√(w1h1), . . . , ĝL(x)/√(wLhL) ), where wl and hl are the width and height of the activations in layer l, respectively. The function φ : X → A thus maps the inputs x ∈ X of the classifier g(·) to the resulting normalized, flattened internal activations φ(x) ∈ A, where A ⊆ Rm refers to the space of all possible resulting activations. The LPIPS distance d(x1,x2) between images x1 and x2 is then defined as:\nd(x1,x2) ≜ ‖φ(x1) − φ(x2)‖2 . (1)\nIn the original LPIPS implementation, Zhang et al. (2018) learn weights to apply to the normalized activations based on a dataset of human perceptual judgements. However, they find that LPIPS is a good surrogate for human vision even without the additional learned weights; this is the version we use since it avoids the need to collect such a dataset.\nNow let f : X → Y be a classifier which maps inputs x ∈ X to labels f(x) ∈ Y . f(·) could be the same as g(·), or it could be a different network; we experiment with both. For a given natural input x with the true label y, a neural perceptual adversarial example with a perceptibility bound ε is an input x̃ ∈ X such that x̃ must be perceptually similar to x but cause f to misclassify:\nf(x̃) ≠ y and d(x, x̃) = ‖φ(x) − φ(x̃)‖2 ≤ ε. (2)" }, { "heading": "4 PERCEPTUAL ADVERSARIAL ATTACKS", "text": "We propose attack methods which attempt to find an adversarial example with small perceptual distortion. Developing adversarial attacks under the proposed neural perceptual threat model is more difficult than under standard Lp threat models, because the LPIPS distance constraint is more complex than Lp constraints. In general, we find an adversarial example that satisfies (2) by maximizing a loss function L within the LPIPS bound. The loss function we use is similar to the margin loss from Carlini and Wagner (2017), defined as\nL(f(x), y) ≜ max_{i≠y} ( zi(x) − zy(x) ),\nwhere zi(x) is the i-th logit output of the classifier f(·). This gives the constrained optimization\nmax_{x̃} L(f(x̃), y) subject to d(x, x̃) = ‖φ(x) − φ(x̃)‖2 ≤ ε. (3)\nNote that in this attack problem, the classifier network f(·) and the LPIPS network g(·) are fixed. These two networks could be identical, in which case the same network that is being attacked is used to calculate the LPIPS distance that bounds the attack; we call this a self-bounded attack. If a different network is used to calculate the LPIPS bound, we call it an externally-bounded attack.\nBased on this formulation, we propose two perceptual attack methods, Perceptual Projected Gradient Descent (PPGD) and Lagrangian Perceptual Attack (LPA) (see Figures 3 and 7 for sample results).
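Before detailing the attacks, the definitions above can be made concrete with a minimal PyTorch sketch of φ(·), the LPIPS distance (1), and the margin loss. This is an illustration under our own naming; feat_fn, assumed to return the list of internal activations gl(x), and all other names are ours, not from the released code:

import torch

def normalize_flatten_features(feats):
    # feats: list of activations g_l(x), each of shape (N, C_l, H_l, W_l)
    out = []
    for g in feats:
        # channel-normalize: unit L2 norm over channels at each pixel
        g_hat = g / (g.norm(dim=1, keepdim=True) + 1e-10)
        n, c, h, w = g_hat.shape
        # scale by layer size, then flatten
        out.append((g_hat / (h * w) ** 0.5).reshape(n, -1))
    return torch.cat(out, dim=1)                      # phi(x)

def lpips_distance(feat_fn, x1, x2):
    # d(x1, x2) = || phi(x1) - phi(x2) ||_2, computed per example
    return (normalize_flatten_features(feat_fn(x1))
            - normalize_flatten_features(feat_fn(x2))).norm(dim=1)

def margin_loss(logits, labels):
    # max_{i != y} z_i(x) - z_y(x)
    z_y = logits.gather(1, labels[:, None])
    z_other = logits.scatter(1, labels[:, None], float('-inf'))
    return (z_other.max(dim=1, keepdim=True)[0] - z_y).squeeze(1)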
Perceptual Projected Gradient Descent (PPGD) The first of our two attacks is analogous to the PGD (Madry et al., 2018) attacks used for Lp threat models. In general, these attacks consist of iteratively performing two steps on the current adversarial example candidate: (a) taking a step of a certain size under the given distance that maximizes a first-order approximation of the misclassification loss, and (b) projecting back onto the feasible set of the threat model.\nIdentifying the ideal first-order step is easy in L2 and L∞ threat models; it is the gradient of the loss function and the sign of the gradient, respectively. However, computing this step is not straightforward with the LPIPS distance, because the distance metric itself is defined by a neural network. Following (3), we desire to find a step δ to maximize L(f(x + δ), y) such that d(x + δ, x) = ‖φ(x + δ) − φ(x)‖2 ≤ η, where η is the step size. Let f̂(x) := L(f(x), y) for an input x ∈ X . Let J be the Jacobian of φ(·) at x and ∇f̂ be the gradient of f̂(·) at x. Then, we can approximate (3) using first-order Taylor approximations of φ and f̂ as follows:\nmax_δ f̂(x) + (∇f̂)⊤δ subject to ‖Jδ‖2 ≤ η. (4)\nWe show that this constrained optimization can be solved in a closed form: Lemma 1. Let J⁺ denote the pseudoinverse of J. Then the solution to (4) is given by\nδ∗ = η (J⊤J)⁻¹(∇f̂) / ‖(J⁺)⊤(∇f̂)‖2 .\nSee Appendix A.1 for the proof. This solution is still difficult to efficiently compute, since calculating J⁺ and inverting J⊤J are computationally expensive. Thus, we approximately solve for δ∗ using the conjugate gradient method; see Appendix A.1 for details.\nPerceptual PGD consists of repeatedly finding the first-order optimal δ∗ to add to the current adversarial example x̃ for a number of steps. Following each step, if the current adversarial example x̃ is outside the LPIPS bound, we project x̃ back onto the threat model such that d(x̃, x) ≤ ε. The exact projection is again difficult due to the non-convexity of the feasible set. Thus, we solve it approximately with a technique based on Newton’s method; see Algorithm 4 in the appendix.\nLagrangian Perceptual Attack (LPA) The second of our two attacks uses a Lagrangian relaxation of the attack problem (3) similar to that used by Carlini and Wagner (2017) for constructing L2 and L∞ adversarial examples. We call this attack the Lagrangian Perceptual Attack (LPA). To derive the attack, we use the following Lagrangian relaxation of (3):\nmax_{x̃} L(f(x̃), y) − λ max( 0, ‖φ(x̃) − φ(x)‖2 − ε ). (5)\nThe perceptual constraint cost, multiplied by λ in (5), is designed to be 0 as long as the adversarial example is within the allowed perceptual distance, i.e. d(x̃, x) ≤ ε; once d(x̃, x) > ε, however, it increases linearly with the LPIPS distance from the original input x. Similar to the Lp attacks of Carlini and Wagner (2017), we adaptively change λ to find an adversarial example within the allowed perceptual distance; see Appendix A.2 for details." }, { "heading": "5 PERCEPTUAL ADVERSARIAL TRAINING (PAT)", "text": "The developed perceptual attacks can be used to harden a classifier against a variety of adversarial attacks. The intuition, which we verify in Section 7, is that if a model is robust against neural perceptual attacks, it can demonstrate an enhanced robustness against other types of unforeseen adversarial attacks.
Let Lce denote the cross entropy (negative log likelihood) loss and suppose the classifier f(·) is parameterized by θf . Then, PAT consists of optimizing f(·) in a manner analogous to Lp adversarial training (Madry et al., 2018):\nmin θf E (x,y)∼D\n[ max\nd(x̃,x)≤ Lce(f(x̃), y)\n] . (6)\nThe training formulation attempts to minimize the worst-case loss within a neighborhood of each training point x. In PAT, the neighborhood is bounded by the LPIPS distance. Recall that the LPIPS distance is itself defined based on a particular neural network classifier. We refer to the normalized, flattened activations of the network used to define LPIPS as φ(·) and θφ to refer to its parameters. We explore two variants of PAT differentiated by the choice of the network used to define φ(·). In externally-bounded PAT, a separate, pretrained network is used to calculate φ(·), the LPIPS distance d(·, ·). In self-bounded PAT, the same network which is being trained for classification is used to calculate the LPIPS distance, i.e. θφ ⊆ θf . Note that in self-bounded PAT the definition of the LPIPS distance changes during the training as the classifier is optimized.\nThe inner maximization in (6) is intractable to compute exactly. However, we can use the perceptual attacks developed in Section 4 to approximately solve it. Since the inner maximization must be solved repeatedly during the training process, we use an inexpensive variant of the LPA attack called Fast-LPA. In contrast to LPA, Fast-LPA does not search over values of λ. It also does not include a projection step at the end of the attack, which means it may sometimes produce adversarial examples outside the training bound. While this makes it unusable for evaluation, it is fine for training. Using Fast-LPA, PAT is nearly as fast as adversarial training; see Appendix A.3 and Algorithm 3 for details." }, { "heading": "6 PERCEPTUAL EVALUATION", "text": "We conduct a thorough perceptual evaluation of our NPTM and attacks to ensure that the resulting adversarial examples are imperceptible. We also compare the perceptibility of perceptual adversarial attacks to five narrow threat models: L∞ and L2 attacks, JPEG attacks (Kang et al., 2019), spatially transformed adversarial examples (StAdv) (Xiao et al., 2018), and functional adversarial attacks (ReColorAdv) (Laidlaw and Feizi, 2019). The comparison allows us to determine if the LPIPS distance is a good surrogate for human comparisons of similarity. It also allows us to set bounds across threat models with approximately the same level of perceptibility.\nTo determine how perceptible a particular threat model is at a particular bound (e.g. L∞ attacks at = 8/255), we perform an experiment based on just noticeable differences (JND). We show pairs of images to participants on Amazon Mechanical Turk (AMT), an online crowdsourcing platform. In each pair, one image is a natural image from ImageNet-100 and one image is an adversarial perturbation of the natural image, generated using the particular attack against a classifier hardened to that attack. One of the images, chosen randomly, is shown for one second, followed by a blank screen for 250ms, followed by the second image for one second. Then, participants must choose whether they believe the images are the same or different. This procedure is identical to that used by Zhang et al. (2018) to originally validate the LPIPS distance. We report the proportion of pairs for which participants report the images are “different” as the perceptibility of the attack. 
" }, { "heading": "6 PERCEPTUAL EVALUATION", "text": "We conduct a thorough perceptual evaluation of our NPTM and attacks to ensure that the resulting adversarial examples are imperceptible. We also compare the perceptibility of perceptual adversarial attacks to five narrow threat models: L∞ and L2 attacks, JPEG attacks (Kang et al., 2019), spatially transformed adversarial examples (StAdv) (Xiao et al., 2018), and functional adversarial attacks (ReColorAdv) (Laidlaw and Feizi, 2019). The comparison allows us to determine if the LPIPS distance is a good surrogate for human comparisons of similarity. It also allows us to set bounds across threat models with approximately the same level of perceptibility.\nTo determine how perceptible a particular threat model is at a particular bound (e.g. L∞ attacks at ε = 8/255), we perform an experiment based on just noticeable differences (JND). We show pairs of images to participants on Amazon Mechanical Turk (AMT), an online crowdsourcing platform. In each pair, one image is a natural image from ImageNet-100 and one image is an adversarial perturbation of the natural image, generated using the particular attack against a classifier hardened to that attack. One of the images, chosen randomly, is shown for one second, followed by a blank screen for 250ms, followed by the second image for one second. Then, participants must choose whether they believe the images are the same or different. This procedure is identical to that used by Zhang et al. (2018) to originally validate the LPIPS distance. We report the proportion of pairs for which participants report the images are “different” as the perceptibility of the attack. In addition to adversarial example pairs, we also include sentinel image pairs which are exactly the same; only 4.1% of these were annotated as “different.”\nWe collect about 1,000 annotations of image pairs for each of 3 bounds for all five threat models, plus our PPGD and LPA attacks (14k annotations total for 2.8k image pairs). The three bounds for each attack are labeled as small, medium, and large; bounds with the same label have similar perceptibility across threat models (see Appendix D Table 4). The dataset of image pairs and associated annotations is available for use by the community.\nTo determine if the LPIPS threat model is a good surrogate for the perceptual threat model, we use various classifiers to calculate the LPIPS distance d(·, ·) between the pairs of images used in the perceptual study. For each classifier, we determine the correlation between the mean LPIPS distance it assigns to image pairs from each attack and the perceptibility of that attack (Figure 4c). We find that AlexNet (Krizhevsky et al., 2012), trained normally on ImageNet (Russakovsky et al., 2015), correlates best with human perception of these adversarial examples (r = 0.94); this agrees with Zhang et al. (2018), who also find that AlexNet-based LPIPS correlates best with human perception (Figure 4). A normally trained ResNet-50 correlates similarly, but not quite as well. Because AlexNet is the best proxy for human judgements of perceptual distance, we use it for all externally-bounded evaluation attacks. Note that even with an untrained network at initialization, the LPIPS distance correlates with human perception better than the L2 distance. This means that even during the first few epochs of self-bounded PAT, the training adversarial examples are perceptually-aligned.\nWe use the results of the perceptual study to investigate which attacks are strongest at a particular level of perceptibility. We evaluate each attack on a classifier hardened against that attack via adversarial training, and plot the resulting success rate against the proportion of correct annotations from the perceptual study. Out of the narrow threat models, we find that L2 attacks are the strongest for their perceptibility. However, our proposed PPGD and LPA attacks reduce a PAT-trained classifier to even lower accuracies (8.2% for PPGD and 0% for LPA), making LPA the strongest attack studied." }, { "heading": "7 EXPERIMENTS", "text": "We compare Perceptual Adversarial Training (PAT) to adversarial training against narrow threat models (Lp, spatial, etc.) on CIFAR-10 (Krizhevsky and Hinton, 2009) and ImageNet-100 (the subset of ImageNet (Russakovsky et al., 2015) containing every tenth class by WordNet ID order). We find that PAT results in classifiers with robustness against a broad range of narrow threat models. We also show that our perceptual attacks, PPGD and LPA, are strong against adversarial training with narrow threat models. We evaluate with externally-bounded PPGD and LPA (Section 4), using AlexNet to determine the LPIPS bound because it correlates best with human judgements (Figure 4c). For L2 and L∞ robustness evaluation we use AutoAttack (Croce and Hein, 2020), which combines four strong attacks, including two PGD variants and a black box attack, to give reliable evaluation.\nEvaluation metrics For both datasets, we evaluate classifiers’ robustness to a range of threat models using two summary metrics. First, we compute the union accuracy against all narrow threat models (L∞, L2, StAdv, ReColorAdv, and JPEG for ImageNet-100); this is the proportion of inputs for which a classifier is robust against all these attacks. Second, we compute the unseen mean accuracy, which is the mean of the accuracies against all the threat models not trained against; this measures how well robustness generalizes to other threat models.
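The two summary metrics can be sketched as follows, under the assumption that per-attack robustness has already been recorded as boolean masks; this is our minimal illustration with names of our own choosing:

import torch

def summary_metrics(robust_masks, trained_attacks=()):
    # robust_masks: dict mapping attack name -> BoolTensor of shape (N,),
    # True where the classifier remains correct under that attack
    masks = torch.stack(list(robust_masks.values()))      # (num_attacks, N)
    union_acc = masks.all(dim=0).float().mean().item()    # robust to *all* attacks
    unseen = [m.float().mean() for name, m in robust_masks.items()
              if name not in trained_attacks]
    unseen_mean_acc = torch.stack(unseen).mean().item()   # mean over unseen attacks
    return union_acc, unseen_mean_acc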
CIFAR-10 We test ResNet-50s trained on the CIFAR-10 dataset with PAT and adversarial training (AT) against six attacks (see Table 2): L∞ and L2 AutoAttack, StAdv (Xiao et al., 2018), ReColorAdv (Laidlaw and Feizi, 2019), and PPGD and LPA. This allows us to determine if PAT gives robustness against a range of adversarial attacks. We experiment with using various models to calculate the LPIPS distance during PAT. We try using the same model both for classification and to calculate the LPIPS distance (self-bounded PAT). We also use AlexNet trained on CIFAR-10 prior to PAT (externally-bounded PAT). We find that PAT outperforms Lp adversarial training and TRADES (Zhang et al., 2019a), improving the union accuracy from <5% to >20%, and nearly doubling mean accuracy against unseen threat models from 26% to 49%. Surprisingly, we find that PAT even outperforms threat-specific AT against StAdv and ReColorAdv; see Appendix F.5 for more details. Ablation studies of PAT are presented in Appendix F.1.\nImageNet-100 We compare ResNet-50s trained on the ImageNet-100 dataset with PAT and adversarial training (Table 3). Classifiers are tested against seven attacks at the medium bound from the perceptual study (see Section 6 and Appendix Table 4). Self- and externally-bounded PAT give similar results. Both produce more than double the next highest union accuracy and also significantly increase the mean robustness against unseen threat models by around 15%.\nPerceptual attacks On both CIFAR-10 and ImageNet-100, we find that Perceptual PGD (PPGD) and Lagrangian Perceptual Attack (LPA) are the strongest attacks studied. LPA is the strongest, reducing the most robust classifier to 9.8% accuracy on CIFAR-10 and 2.4% accuracy on ImageNet-100. Also, the models most robust to LPA in both cases are those that have the best union and unseen mean accuracies. This demonstrates the utility of evaluating against LPA as a proxy for adversarial robustness against a range of threat models. See Appendix E for further attack experiments.\nComparison to other defenses against multiple attacks Besides the baseline of adversarially training against a single attack, we also compare PAT to adversarially training against multiple attacks (Tramer and Boneh, 2019; Maini et al., 2019). We compare three methods for multiple-attack training: choosing a random attack at each training iteration, optimizing the average loss across all attacks, and optimizing the maximum loss across all attacks. The latter two methods are very expensive, increasing training time by a factor equal to the number of attacks trained against, so we only evaluate these methods on CIFAR-10. As in Tramer and Boneh (2019), we find that the maximum loss strategy leads to the greatest union accuracy among the multiple-attack training methods. However, PAT performs even better on CIFAR-10, despite training against none of the attacks and taking one fourth of the time to train. The random strategy, which is the only feasible one on ImageNet-100, performs much worse than PAT. 
Even the best multiple-attack training strategies still fail to generalize to the unseen neural perceptual attacks, PPGD and LPA, achieving much lower accuracy than PAT.\nOn CIFAR-10, we also compare PAT to manifold regularization (MR) (Jin and Rinard, 2020), a non-adversarial training defense. MR gives union accuracy close to PAT-self, but much lower clean accuracy; for PAT-AlexNet, which gives similar clean accuracy to MR, the union accuracy is much higher.\nThreat model overlap In Figure 2, we investigate how the sets of images vulnerable to L2, spatial, and perceptual attacks overlap. Nearly all adversarial examples vulnerable to L2 or spatial attacks are also vulnerable to LPA. However, there is only partial overlap between the examples vulnerable to L2 and spatial attacks. This helps explain why PAT results in improved robustness against spatial attacks (and other diverse threat models) compared to L2 adversarial training.\nWhy does PAT work better than Lp adversarial training? In Figure 5, we give further explanation of why PAT results in improved robustness against diverse threat models. We generate many adversarial examples for the L∞, L2, JPEG, StAdv, and ReColorAdv threat models and measure their distance from the corresponding natural inputs using Lp distances and the neural perceptual distance, LPIPS. While Lp distances vary widely, LPIPS gives remarkably comparable distances to different types of adversarial examples. Covering all threat models during L∞ or L2 adversarial training would require using a huge training bound, resulting in poor performance. In contrast, PAT can obtain robustness against all the narrow threat models at a reasonable training bound.\nRobustness against common corruptions In addition to evaluating PAT against adversarial examples, we also evaluate its robustness to random perturbations in the CIFAR-10-C and ImageNet-C datasets (Hendrycks and Dietterich, 2019). We find that PAT gives increased robustness (lower relative mCE) against these corruptions compared to adversarial training; see Appendix G for details." }, { "heading": "8 CONCLUSION", "text": "We have presented attacks and defenses for the neural perceptual threat model (realized by the LPIPS distance) and shown that it closely approximates the true perceptual threat model, the set of all perturbations to natural inputs which fool a model but are imperceptible to humans. Our work provides a novel method for developing defenses against adversarial attacks that generalize to unforeseen threat models. Our proposed perceptual adversarial attacks and PAT could be extended to other vision algorithms, or even other domains such as audio and text." }, { "heading": "ACKNOWLEDGMENTS", "text": "This project was supported in part by NSF CAREER AWARD 1942230, HR 00111990077, HR00112090132, HR001119S0026, NIST 60NANB20D134, AWS Machine Learning Research Award and Simons Fellowship on “Foundations of Deep Learning.”" }, { "heading": "APPENDIX", "text": "" }, { "heading": "A PERCEPTUAL ATTACK ALGORITHMS", "text": "" }, { "heading": "A.1 PERCEPTUAL PGD", "text": "Recall from Section 4 that Perceptual PGD (PPGD) consists of repeatedly applying two steps: a first-order step in LPIPS distance to maximize the loss, followed by a projection into the allowed set of inputs. 
Here, we focus on the first-order step; see Appendix A.4 for how we perform projection onto the LPIPS ball.\nWe wish to solve the following constrained optimization for the step δ given the step size η and current input x:\nmax_δ L(f(x + δ), y) subject to ‖Jδ‖2 ≤ η (7)\nLet f̂(x) := L(f(x), y) for an input x ∈ X . Let J be the Jacobian of φ(·) at x and ∇f̂ be the gradient of f̂(·) at x.\nLemma 1. The first-order approximation of (7) is\nmax_δ f̂(x) + (∇f̂)⊤δ subject to ‖Jδ‖2 ≤ η, (8)\nand can be solved in closed form by\nδ∗ = η (J⊤J)⁻¹(∇f̂) / ‖(J⁺)⊤(∇f̂)‖2 ,\nwhere J⁺ denotes the pseudoinverse of J.\nProof. We solve (8) using Lagrange multipliers. First, we take the gradient of the objective:\n∇δ [ f̂(x) + (∇f̂)⊤δ ] = ∇f̂\nWe can rewrite the constraint by squaring both sides to obtain\nδ⊤J⊤Jδ − η² ≤ 0\nTaking the gradient of the constraint gives\n∇δ [ δ⊤J⊤Jδ − η² ] = 2J⊤Jδ\nNow, we set one gradient as a multiple of the other and solve for δ:\nJ⊤Jδ = λ(∇f̂) (9)\nδ = λ(J⊤J)⁻¹(∇f̂) (10)\nSubstituting into the constraint from (8) gives\n‖Jδ‖2 = η\n‖Jλ(J⊤J)⁻¹(∇f̂)‖2 = η\nλ‖J(J⊤J)⁻¹(∇f̂)‖2 = η\nλ‖((J⊤J)⁻¹J⊤)⊤(∇f̂)‖2 = η\nλ‖(J⁺)⊤(∇f̂)‖2 = η\nλ = η / ‖(J⁺)⊤(∇f̂)‖2\nWe substitute this value of λ into (10) to obtain\nδ∗ = η (J⊤J)⁻¹(∇f̂) / ‖(J⁺)⊤(∇f̂)‖2 . (11)\nSolution with conjugate gradient method Calculating (11) directly is computationally intractable for most neural networks, since inverting J⊤J and calculating the pseudoinverse of J are computationally expensive. Instead, we approximate δ∗ by using the conjugate gradient method to solve the following linear system, based on (9):\nJ⊤Jδ = ∇f̂ (12)\n∇f̂ is easy to calculate using backpropagation. The conjugate gradient method does not require calculating J⊤J fully; instead, it only requires the ability to perform matrix-vector products J⊤Jv for various vectors v.\nWe can approximate Jv using finite differences given a small, positive value h:\nJv ≈ ( φ(x + hv) − φ(x) ) / h\nThen, we can calculate J⊤Jv by introducing an additional variable u and using autograd:\n∇u [ φ(x + u)⊤Jv ]|u=0 = [ (dφ/du (x + u))⊤Jv + φ(x + u)⊤ (d/du Jv) ]|u=0 = [ (dφ/du (x + u))⊤Jv + φ(x + u)⊤ · 0 ]|u=0 = ( dφ/du (x) )⊤Jv = J⊤Jv,\nsince Jv is constant with respect to u. This allows us to efficiently approximate the solution of (12) to obtain (J⊤J)⁻¹∇f̂. We use 5 iterations of the conjugate gradient algorithm in practice.
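In PyTorch, the matrix-vector product trick above might look like the following minimal sketch, mirroring the MULTIPLYJACOBIAN routine of Algorithm 1 below; phi is assumed to return normalized, flattened features as in Section 3, and the names are ours:

import torch

def multiply_jacobian(phi, x, v, h=1e-3):
    # approximates J^T J v, where J is the Jacobian of phi at x
    with torch.no_grad():
        Jv = (phi(x + h * v) - phi(x)) / h        # finite-difference J v
    # grad_u [ phi(x + u)^T (Jv) ] at u = 0 equals J^T (Jv), because Jv is
    # treated as a constant with respect to u
    u = torch.zeros_like(x, requires_grad=True)
    inner = (phi(x + u) * Jv).sum()
    (JtJv,) = torch.autograd.grad(inner, u)
    return JtJv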
From there, it is easy to solve for λ, given that (J⁺)⊤∇f̂ = J(J⊤J)⁻¹∇f̂. Then, δ∗ can be calculated via (10). See Algorithm 1 for the full attack.\nComputational complexity PPGD’s running time scales with the number of steps T and the number of conjugate gradient iterations K. It also depends on whether the attack is self-bounded (the same network is used for classification and the LPIPS distance) or externally-bounded (different networks are used).\nFor each of the T steps, φ(x̃), ∇x̃L(f(x̃), y), and φ(x̃ + hδK) must be calculated once (lines 4 and 15 in Algorithm 1). This takes 2 forward passes and 1 backward pass for the self-bounded case, and 3 forward passes and 1 backward pass for the externally-bounded case.\nIn addition, J⊤Jv needs to be calculated (in the MULTIPLYJACOBIAN routine) K + 1 times. Each calculation of J⊤Jv requires 1 forward and 1 backward pass, assuming φ(x̃) is already calculated.\nFinally, the projection step takes n + 1 forward passes for n iterations of the bisection method (see Section A.4).\nIn all, the algorithm requires T(K + n + 4) forward passes and T(K + n + 3) backward passes in the self-bounded case. In the externally-bounded case, it requires T(K + n + 5) forward passes and the same number of backward passes.\nAlgorithm 1 Perceptual PGD (PPGD)\n1: procedure PPGD(classifier f(·), LPIPS network φ(·), input x, label y, bound ε, step η)\n2: x̃ ← x + 0.01 ∗ N(0, 1) ▷ initialize perturbations with random Gaussian noise\n3: for t in 1, . . . , T do ▷ T is the number of steps\n4: ∇f̂ ← ∇x̃L(f(x̃), y)\n5: δ0 ← 0\n6: r0 ← ∇f̂ − MULTIPLYJACOBIAN(φ, x̃, δ0)\n7: p0 ← r0\n8: for k in 0, . . . , K − 1 do ▷ conjugate gradient algorithm; we use K = 5 iterations\n9: αk ← (rk⊤rk) / (pk⊤ MULTIPLYJACOBIAN(φ, x̃, pk))\n10: δk+1 ← δk + αkpk\n11: rk+1 ← rk − αk MULTIPLYJACOBIAN(φ, x̃, pk)\n12: βk ← (rk+1⊤rk+1) / (rk⊤rk)\n13: pk+1 ← rk+1 + βkpk\n14: end for\n15: m ← ‖φ(x̃ + hδK) − φ(x̃)‖/h ▷ m ≈ ‖JδK‖ for small h; we use h = 10⁻³\n16: x̃ ← x̃ + (η/m)δK\n17: x̃ ← PROJECT(d, x̃, x, ε)\n18: end for\n19: return x̃\n20: end procedure\n21:\n22: procedure MULTIPLYJACOBIAN(φ(·), x̃, v) ▷ calculates J⊤Jv; J is the Jacobian of φ at x̃\n23: Jv ← (φ(x̃ + hv) − φ(x̃))/h ▷ h is a small positive value; we use h = 10⁻³\n24: J⊤Jv ← ∇u [ φ(x̃ + u)⊤Jv ]|u=0\n25: return J⊤Jv\n26: end procedure" }, { "heading": "A.2 LAGRANGIAN PERCEPTUAL ATTACK (LPA)", "text": "Our second attack, Lagrangian Perceptual Attack (LPA), optimizes a Lagrangian relaxation of the perceptual attack problem (3):\nmax_{x̃} L(f(x̃), y) − λ max( 0, ‖φ(x̃) − φ(x)‖2 − ε ). (13)\nTo optimize (13), we use a variation of gradient descent over x̃, starting at x with a small amount of noise added. We perform our modified version of gradient descent for T steps. We use a step size η, which begins at ε and decays exponentially to ε/10.\nAt each step, we begin by taking the gradient of (13) with respect to x̃; let ∆ refer to this gradient. Then, we normalize ∆ to have L2 norm 1, i.e. ∆̂ = ∆/‖∆‖2. We wish to take a step in the direction of ∆̂ of size η in LPIPS distance. If we wanted to take a step of size η in L2 distance, we could just take the step η∆̂. However, taking a step of a particular size in LPIPS distance is harder. We assume that the LPIPS distance is approximately linear in the direction ∆̂. We can approximate the directional derivative of the LPIPS distance in the direction ∆̂ using finite differences:\nd/dα d(x̃, x̃ + α∆̂) ≈ d(x̃, x̃ + h∆̂)/h = m.\nHere, h is a small positive value, and we assign the approximation of the directional derivative to m.
Algorithm 2 Lagrangian Perceptual Attack (LPA)
1: procedure LPA(classifier network f(·), LPIPS distance d(·, ·), input x, label y, bound ε)
2: λ ← 0.01
3: x̃ ← x + 0.01 · N(0, 1) ▷ initialize perturbations with random Gaussian noise
4: for i in 1, ..., S do ▷ we use S = 5 iterations to search for the best value of λ
5: for t in 1, ..., T do ▷ T is the number of steps
6: ∆ ← ∇_x̃ [ L(f(x̃), y) − λ max(0, d(x̃, x) − ε) ] ▷ take the gradient of (13)
7: ∆̂ ← ∆/‖∆‖_2 ▷ normalize the gradient
8: η ← ε · (0.1)^{t/T} ▷ the step size η decays exponentially
9: m ← d(x̃, x̃ + h∆̂)/h ▷ m ≈ derivative of d(x̃, ·) in the direction of ∆̂; h = 0.1
10: x̃ ← x̃ + (η/m)∆̂ ▷ take a step of size η in LPIPS distance
11: end for
12: if d(x̃, x) > ε then
13: λ ← 10λ ▷ increase λ if the attack goes outside the bound
14: end if
15: end for
16: x̃ ← PROJECT(d, x̃, x, ε)
17: return x̃
18: end procedure" }, { "heading": "A.3 FAST LAGRANGIAN PERCEPTUAL ATTACK", "text": "We use the Fast Lagrangian Perceptual Attack (Fast-LPA) for Perceptual Adversarial Training (PAT, see Section 5). Fast-LPA is similar to LPA (Appendix A.2), with two major differences. First, Fast-LPA does not search over λ values; instead, during the T gradient descent steps, λ is increased exponentially from 1 to 10. Second, we remove the projection step at the end of the attack. This means that Fast-LPA may produce adversarial examples outside the threat model, so it cannot be used for evaluation, but it is fine for training.
Computational complexity Fast-LPA's running time can be calculated similarly to LPA's (see Section A.2), except that S = 1 and there is no projection step. Let T be the number of steps taken during the attack. Then Fast-LPA requires 2T + 1 forward passes and T + 1 backward passes in the self-bounded case, and 3T + 1 forward passes and 2T + 1 backward passes in the externally-bounded case.
In comparison, PGD with T iterations requires T forward passes and T backward passes. Thus, Fast-LPA is slightly slower, requiring T + 1 more forward passes and no more backward passes.
Algorithm 3 Fast Lagrangian Perceptual Attack (Fast-LPA)
1: procedure FASTLPA(classifier network f(·), LPIPS distance d(·, ·), input x, label y, bound ε)
2: x̃ ← x + 0.01 · N(0, 1) ▷ initialize perturbations with random Gaussian noise
3: for t in 1, ..., T do ▷ T is the number of steps
4: λ ← 10^{t/T} ▷ λ increases exponentially
5: ∆ ← ∇_x̃ [ L(f(x̃), y) − λ max(0, d(x̃, x) − ε) ] ▷ take the gradient of (13)
6: ∆̂ ← ∆/‖∆‖_2 ▷ normalize the gradient
7: η ← ε · (0.1)^{t/T} ▷ the step size η decays exponentially
8: m ← d(x̃, x̃ + h∆̂)/h ▷ m ≈ derivative of d(x̃, ·) in the direction of ∆̂; h = 0.1
9: x̃ ← x̃ + (η/m)∆̂ ▷ take a step of size η in LPIPS distance
10: end for
11: return x̃
12: end procedure
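A hypothetical sketch of the full Fast-LPA loop, reusing the lpa_step fragment from Appendix A.2 above (again, the names and structure are illustrative rather than the released code):

import torch

def fast_lpa(x, y, loss_fn, lpips_dist, eps, T=10):
    # Fast-LPA sketch: no lambda search (S = 1) and no final projection step
    x_adv = x + 0.01 * torch.randn_like(x)
    for t in range(1, T + 1):
        lam = 10 ** (t / T)          # lambda increases exponentially toward 10
        eta = eps * 0.1 ** (t / T)   # step size decays exponentially toward eps/10
        x_adv = lpa_step(x_adv, x, y, loss_fn, lpips_dist, lam, eps, eta)
    return x_adv  # may lie outside the LPIPS ball; acceptable for training only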
" }, { "heading": "A.4 PERCEPTUAL PROJECTION", "text": "We explored two methods of projecting adversarial examples into the LPIPS threat model. The method we use throughout the paper is the bisection root-finding method, shown in Algorithm 5. We also experimented with a method based on Newton's method, shown in Algorithm 4 (also see Appendix E).
In general, given an adversarial example x̃, original input x, and LPIPS bound ε, we wish to find a projection x̃′ of x̃ such that d(x̃′, x) ≤ ε. Assume for this section that d(x̃, x) > ε, i.e. the current adversarial example x̃ is outside the bound. If d(x̃, x) ≤ ε, then we can just let x̃′ = x̃ and be done.
Newton's method The second projection method we explored uses the generalized Newton–Raphson method to attempt to find the closest projection x̃′ to the current adversarial example x̃ such that the projection is within the threat model, i.e. d(x̃′, x) ≤ ε. To find such a projection, we define a function r(·) and look for its roots:
r(x̃′) = d(x̃′, x) − ε.
If we can find a projection x̃′ close to x̃ such that r(x̃′) ≤ 0, then this projection will be contained within the threat model, since r(x̃′) ≤ 0 ⟹ d(x̃′, x) ≤ ε. To find such a root, we use the generalized Newton–Raphson method, an iterative algorithm. Beginning with x̃′_0 = x̃, we update x̃′ iteratively using the step
x̃′_{i+1} = x̃′_i − [∇r(x̃′_i)]^+ (r(x̃′_i) + s),
where A^+ denotes the pseudoinverse of A, and s is a small positive constant (the “overshoot”), which helps the algorithm converge. We continue this process until r(x̃′_t) ≤ 0, at which point the projection is complete.
This algorithm usually takes 2–3 steps to converge with s = 10^{−2}. Each step requires 1 forward and 1 backward pass to calculate r(x̃′_t) and its gradient. The method also requires 1 forward pass at the beginning to calculate φ(x).
Algorithm 4 Perceptual Projection (Newton's Method)
procedure PROJECT(LPIPS distance d(·, ·), adversarial example x̃, original input x, bound ε)
x̃′_0 ← x̃
for i in 0, ... do
r(x̃′_i) ← d(x̃′_i, x) − ε
if r(x̃′_i) ≤ 0 then
return x̃′_i
end if
x̃′_{i+1} ← x̃′_i − [∇r(x̃′_i)]^+ (r(x̃′_i) + s) ▷ s is the “overshoot”; we use s = 10^{−2}
end for
end procedure
Bisection method The first projection method we explored (and the one we use throughout the paper) attempts to find a projection x̃′ along the line connecting the current adversarial example x̃ and the original input x. Let δ = x̃ − x. Then we can represent our final projection x̃′ as a point between x and x̃ as x̃′ = x + αδ, for some α ∈ [0, 1]. If α = 0, x̃′ = x; if α = 1, x̃′ = x̃. Now, define a function r : [0, 1] → R as
r(α) = d(x + αδ, x) − ε.
This function has the following properties:
1. r(0) < 0, since r(0) = d(x, x) − ε = −ε.
2. r(1) > 0, since r(1) = d(x̃, x) − ε > 0 because d(x̃, x) > ε.
3. r(α) = 0 iff d(x̃′, x) = ε.
We use the bisection root-finding method to find a root α* of r(·) on the interval [0, 1], which exists since r(·) is continuous and because of items 1 and 2 above. By item 3, at this root, the projected adversarial example is within the threat model:
d(x̃′, x) = d(x + α*δ, x) = r(α*) + ε = ε
We use n = 10 iterations of the bisection method to calculate α*. This requires n + 1 forward passes through the LPIPS network, since φ(x) must be calculated once, and φ(x + αδ) must be calculated n times. See Algorithm 5 for the full projection algorithm.
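A minimal sketch of the bisection projection is given below. It is illustrative only — lpips_dist and the function name are our own — and it returns the last known-feasible endpoint, which by the invariant r(alpha_min) ≤ 0 is always inside the ball:

import torch

def project_bisection(x_adv, x, lpips_dist, eps, n=10):
    # Bisection search for alpha in [0, 1] such that d(x + alpha*(x_adv - x), x) <= eps
    delta = x_adv - x
    alpha_min, alpha_max = 0.0, 1.0
    with torch.no_grad():
        if lpips_dist(x_adv, x) <= eps:
            return x_adv  # already inside the threat model
        for _ in range(n):
            alpha = (alpha_min + alpha_max) / 2
            if lpips_dist(x + alpha * delta, x) > eps:
                alpha_max = alpha
            else:
                alpha_min = alpha
        return x + alpha_min * delta  # alpha_min is always feasible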
Algorithm 5 Perceptual Projection (Bisection Method)
procedure PROJECT(LPIPS distance d(·, ·), adversarial example x̃, original input x, bound ε)
α_min, α_max ← 0, 1
δ ← x̃ − x
for i in 1, ..., n do
α ← (α_min + α_max)/2
x̃′ ← x + αδ
if d(x, x̃′) > ε then
α_max ← α
else
α_min ← α
end if
end for
return x̃′
end procedure" }, { "heading": "B ADDITIONAL RELATED WORK", "text": "Here, we expand on the related work discussed in Section 2 and discuss some additional existing work on adversarial robustness.
Adversarial attacks Much of the initial work on adversarial robustness focused on perturbations to natural images which were bounded by the L2 or L∞ distance (Carlini and Wagner, 2017; Goodfellow et al., 2015; Madry et al., 2018). However, the community has recently discovered many other types of perturbations that are imperceptible and can be optimized to fool a classifier, but are outside Lp threat models. These include spatial perturbations using flow fields (Xiao et al., 2018), translation and rotation (Engstrom et al., 2017), and Wasserstein distance bounds (Wong et al., 2019). Attacks that manipulate the colors in images uniformly have also been proposed (Hosseini and Poovendran, 2018; Hosseini et al., 2017; Zhang et al., 2019b) and have been generalized into “functional adversarial attacks” by Laidlaw and Feizi (2019).
A couple of papers have proposed adversarial threat models that do not focus on a simple, manually defined perturbation type. Dunn et al. (2020) use a generative model of images; they perturb the features at various layers in the generator to create adversarial examples. Xu et al. (2020) train an autoencoder and then perturb images in representation space rather than pixel space." }, { "heading": "C ADDITIONAL ADVERSARIAL EXAMPLES", "text": "" }, { "heading": "D ADDITIONAL PERCEPTUAL STUDY RESULTS", "text": "" }, { "heading": "E PERCEPTUAL ATTACK EXPERIMENTS", "text": "We experiment with variations of the two validation attacks, PPGD and LPA, described in Section 4. As described in Appendix A.4, we developed two methods for projecting candidate adversarial examples into the LPIPS ball surrounding a natural input. We attack a single model using PPGD and LPA with both projection methods. We also compare self-bounded to externally-bounded attacks.
We find that LPA tends to be more powerful than PPGD. Finally, we note that externally-bounded LPA is extremely powerful, reducing the accuracy of a PAT-trained classifier on ImageNet-100 to just 2.4%.
Besides these experiments, we always use externally-bounded attacks with AlexNet for evaluation. AlexNet correlates with human perception of adversarial examples (Figure 6) and provides a standard measure of LPIPS distance; in contrast, self-bounded attacks by definition have varying bounds across evaluated models." }, { "heading": "F PAT EXPERIMENTS", "text": "" }, { "heading": "F.1 ABLATION STUDY", "text": "We perform an ablation study of Perceptual Adversarial Training (PAT). First, we examine Fast-LPA, the training attack. We attempt training without step size (η) decay and/or without increasing λ during Fast-LPA, and find that PAT performs best with both η decay and λ increase.
Training a classifier with PAT gives robustness against a wide range of adversarial threat models (see Section 7). However, it tends to give low accuracy against natural, unperturbed inputs. Thus, we use a technique from Balaji et al. (2019) to improve natural accuracy in PAT-trained models: at each training step, only inputs which are classified correctly without any perturbation are attacked. In addition to increasing natural accuracy, this also improves the speed of PAT since only some inputs from each batch must be attacked. In this ablation study, we compare attacking every input with Fast-LPA during training to only attacking the natural inputs which are already classified correctly. We find that the latter method achieves higher natural accuracy at the cost of some robust accuracy.
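The input-selection step can be sketched as follows (a hypothetical helper, not the authors' released code):

import torch

def select_inputs_to_attack(model, x, y):
    # Only attack inputs the model already classifies correctly
    # (following the technique from Balaji et al., 2019)
    with torch.no_grad():
        correct = model(x).argmax(dim=1) == y
    return x[correct], y[correct], correct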
" }, { "heading": "F.2 PROJECTION DURING TRAINING", "text": "We choose not to add a projection step to the end of Fast-LPA during training because it slows down the attack, requiring many more passes through the network per training step. However, we tested self-bounded PAT with a projection step and found that it increased clean accuracy slightly but decreased robust accuracy significantly. We believe this is because not projecting increases the effective bound on the training attacks, leading to better robustness. To test this, we tried training without projection using a smaller bound (ε = 0.4 instead of ε = 0.5) and found the results closely matched the results when using projection at the larger bound. That is, PAT with projection at ε = 0.5 is similar to PAT without projection at ε = 0.4. These results are shown in Table 7." }, { "heading": "F.3 SELF-BOUNDED VS. ALEXNET-BOUNDED PAT", "text": "Performing externally-bounded PAT with AlexNet produces more robust models on CIFAR-10 than self-bounded PAT. This is not the case on ImageNet-100, where self- and AlexNet-bounded PAT perform similarly.
There is a simple explanation for this: the effective training bound on CIFAR-10 is greater for AlexNet-bounded PAT than for self-bounded PAT. To measure this, we generate adversarial examples for all the test threat models on CIFAR-10 (L2, L∞, StAdv, and ReColorAdv). We find that the average LPIPS distance for all adversarial examples using AlexNet is 1.13; for a PAT-trained ResNet-50, it is 0.88. Because of this disparity, we use a lower training bound for self-bounded PAT (ε = 0.5) than for AlexNet-bounded PAT (ε = 1). However, this means that the average test attack has 76% greater LPIPS distance than the training attacks for self-bounded PAT, whereas the average test attack has only 13% greater LPIPS distance for AlexNet-bounded PAT. This explains why AlexNet-bounded PAT gives better robustness; it only has to generalize to slightly larger attacks on average.
We tried performing AlexNet-bounded PAT with a bound (ε = 0.7) more comparable to self-bounded PAT. This gives the average test attack about 80% greater LPIPS distance than the training attacks, similar to self-bounded PAT. Table 8 shows that the results are more similar for self-bounded and AlexNet-bounded PAT with ε = 0.7." }, { "heading": "F.4 ACCURACY-ROBUSTNESS TRADEOFF", "text": "Tsipras et al. (2019) have noted that there is often a tradeoff between the adversarial robustness of a classifier and its accuracy. That is, models which have higher accuracy under adversarial attack may have lower accuracy against clean images. We observe this phenomenon with adversarial training and PAT. Since PAT gives greater robustness against several narrow threat models, models trained with it tend to have lower accuracy on clean images than models trained with narrow adversarial training. In Figure 8, we show the robust and clean accuracies of several models trained on CIFAR-10 and ImageNet-100 with PAT and adversarial training. While some PAT models have lower clean accuracy than adversarially trained models, at least one PAT model on each dataset surpasses the Pareto frontier of the accuracy-robustness tradeoff for adversarial training. That is, there are PAT-trained models on both datasets with both higher robust accuracy and higher clean accuracy than adversarial training.
" }, { "heading": "F.5 PERFORMANCE AGAINST STADV AND RECOLORADV", "text": "It was surprising to find that PAT outperformed threat-specific adversarial training (AT) against the StAdv and ReColorAdv attacks on CIFAR-10 (it does not do so on ImageNet-100). In Table 2 (partially reproduced in Table 9 below), PAT-AlexNet improves robustness over AT against StAdv from 54% to 65%; PAT-self improves robustness over AT against ReColorAdv from 65% to 71%.
We conjecture that, for these threat models, this is because training against a wider set of perturbations at training time helps generalize robustness to new inputs at test time, even within the same threat model. To test this, we additionally train classifiers using adversarial training against the StAdv and ReColorAdv attacks with double the default bound. The results are shown in Table 9 below. We find that, because these classifiers are exposed to a wider range of spatial and recoloring perturbations during training, they perform better than PAT against those attacks at test time (76% vs. 65% for StAdv and 81% vs. 71% for ReColorAdv).
This suggests that PAT not only improves robustness against a wide range of adversarial threat models; it can actually improve robustness over threat-specific adversarial training by incorporating a wider range of attacks during training." }, { "heading": "G COMMON CORRUPTIONS EVALUATION", "text": "The metric we use to evaluate PAT against common corruptions is mean relative corruption error (relative mCE). The relative corruption error is defined by Hendrycks and Dietterich (2019) for a classifier f and corruption type c as
Relative CE_c^f = Σ_{s=1}^{5} (E_{s,c}^f − E_clean^f) / Σ_{s=1}^{5} (E_{s,c}^AlexNet − E_clean^AlexNet)
where E_{s,c}^f is the error of classifier f against corruption type c at severity level s, and E_clean^f is the error of classifier f on unperturbed inputs. The relative mCE is defined as the mean relative CE over all perturbation types.
The relative mCE for classifiers trained with normal training, adversarial training, and PAT is shown in Tables 10 and 11. PAT gives better robustness (lower relative mCE) against common corruptions on both CIFAR-10-C and ImageNet-100-C. The only category of perturbations where L2 adversarial training outperforms PAT is “noise” on CIFAR-10-C, which makes sense because Gaussian and other types of noise are symmetrically distributed in an L2 ball. For the other perturbation types and on ImageNet-100-C, PAT outperforms L2 and L∞ adversarial training, indicating that robustness against a wider range of worst-case perturbations also gives robustness against a wider range of random perturbations.
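As an illustrative sketch (not our evaluation code), the metric can be computed as follows, with err[c][s] denoting a classifier's error on corruption c at severity s and the AlexNet errors acting as the normalizer:

def relative_mce(err, err_clean, err_alexnet, err_alexnet_clean):
    # relative CE per corruption type, then the mean over all corruption types
    rel_ce = []
    for c in err:
        num = sum(err[c][s] - err_clean for s in range(1, 6))
        den = sum(err_alexnet[c][s] - err_alexnet_clean for s in range(1, 6))
        rel_ce.append(num / den)
    return sum(rel_ce) / len(rel_ce)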
" }, { "heading": "H EXPERIMENT DETAILS", "text": "For all experiments, we train ResNet-50 (He et al., 2016) with SGD for 100 epochs. We use 10 attack iterations for training and 200 for testing, except for PPGD and LPA, where we use 40 for testing since they are more expensive. Self-bounded PAT takes about 12 hours to train for CIFAR-10 on an Nvidia RTX 2080 Ti GPU, and about 5 days to train for ImageNet-100 on 4 GPUs. We implement PPGD, LPA, and PAT using PyTorch (Paszke et al., 2017).
We preprocess images after adversarial perturbation, but before classification, by standardizing them based on the mean and standard deviation of each channel for all images in the dataset. We use the default data augmentation techniques from the robustness library (Engstrom et al., 2019). The CIFAR-10 dataset can be obtained from https://www.cs.toronto.edu/~kriz/cifar.html. The ImageNet-100 dataset is a subset of the ImageNet Large Scale Visual Recognition Challenge (2012) (Russakovsky et al., 2015), including only every tenth class by WordNet ID order. It can be obtained from http://www.image-net.org/download-images." }, { "heading": "Parameter CIFAR-10 ImageNet-100", "text": "" }, { "heading": "H.1 LAYERS FOR LPIPS CALCULATION", "text": "Calculating the LPIPS distance using a neural network classifier g(·) requires choosing layers whose normalized, flattened activations φ(·) should be compared between images. For AlexNet and VGG-16, we use the same layers to calculate LPIPS distance as do Zhang et al. (2018). For AlexNet (Krizhevsky et al., 2012), we use the activations after each of the first five ReLU functions. For VGG-16 (Simonyan and Zisserman, 2014), we use the activations directly before the five max pooling layers. In ResNet-50, we use the outputs of the conv2_x, conv3_x, conv4_x, and conv5_x layers, as listed in Table 1 of He et al. (2016)." } ]
2,021
PERCEPTUAL ADVERSARIAL ROBUSTNESS: DEFENSE AGAINST UNSEEN THREAT MODELS
SP:71d2c08c45a1f4635bb51699e5833c74699731f2
[ "The paper contains two curriculum learning algorithms, of which one assumes knowledge of the parameters found by the baseline, uniform-sampling model to push updates in that direction, and the second orders images according to an increasing stddev/entropy of pixels. While the first approach is impractical because of the strong assumption, the second approach demonstrates small gains that lie within random variance (Fig. 5, Fig. 6) and would not be straightforward to apply to non-image data, e.g. text. These reasons make the paper hard to accept.", "This work studies a number of curricula for faster training of neural networks. They first propose a curriculum named DCL+ that is designed to order data points based on the alignment of their gradients with the direction of optimization. This curriculum depends on the evaluation of individual gradients of data points as well as an approximation to a local optimum. Next, they study a number of easy-to-compute statistical measures for ordering data points." ]
In practice, a sequence of mini-batches generated by uniformly sampling examples from the entire dataset is used for training neural networks. Curriculum learning is a training strategy that sorts the training examples by their difficulty and gradually exposes them to the learner. In this work, we propose two novel curriculum learning algorithms and empirically show the improvements in performance they yield with convolutional and fully-connected neural networks on multiple real image datasets. Our dynamic curriculum learning algorithm tries to reduce the distance between the network weight and an optimal weight at any training step by greedily sampling examples with gradients that are directed towards the optimal weight. The curriculum ordering determined by our dynamic algorithm achieves a training speedup of ∼ 45% in our experiments. We also introduce a new task-specific curriculum learning strategy that uses statistical measures such as standard deviation and entropy values to score the difficulty of data points in natural image datasets. We show that this new approach yields a mean training speedup of ∼ 43% in the experiments we perform. Further, we also use our algorithms to study why curriculum learning works. Based on our study, we argue that curriculum learning removes noisy examples from the initial phases of training and gradually exposes them to the learner, acting like a regularizer that helps in improving the generalization ability of the learner.
[]
[ { "authors": [ "PN Arora" ], "title": "On the shannon measure of entropy", "venue": "Information Sciences,", "year": 1981 }, { "authors": [ "Yoshua Bengio", "Jérôme Louradour", "Ronan Collobert", "Jason Weston" ], "title": "Curriculum learning", "venue": "In Proceedings of the 26th Annual International Conference on Machine Learning,", "year": 2009 }, { "authors": [ "Haw-Shiuan Chang", "Erik Learned-Miller", "Andrew McCallum" ], "title": "Active bias: Training more accurate neural networks by emphasizing high variance samples", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Dami Choi", "Christopher J Shallue", "Zachary Nado", "Jaehoon Lee", "Chris J Maddison", "George E Dahl" ], "title": "On empirical comparisons of optimizers for deep learning", "venue": null, "year": 1910 }, { "authors": [ "John Duchi", "Elad Hazan", "Yoram Singer" ], "title": "Adaptive subgradient methods for online learning and stochastic optimization", "venue": "J. Mach. Learn. Res.,", "year": 2011 }, { "authors": [ "Yang Fan", "Fei Tian", "Tao Qin", "Xiang-Yang Li", "Tie-Yan Liu" ], "title": "Learning to teach", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Alex Graves", "Marc G. Bellemare", "Jacob Menick", "Rémi Munos", "Koray Kavukcuoglu" ], "title": "Automated curriculum learning for neural networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Sheng Guo", "Weilin Huang", "Haozhi Zhang", "Chenfan Zhuang", "Dengke Dong", "Matthew R Scott", "Dinglong Huang" ], "title": "Curriculumnet: Weakly supervised learning from large-scale web images", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Guy Hacohen", "Daphna Weinshall" ], "title": "On the power of curriculum learning in training deep networks", "venue": null, "year": 1904 }, { "authors": [ "Di Hu", "Zheng Wang", "Haoyi Xiong", "Dong Wang", "Feiping Nie", "Dejing Dou" ], "title": "Curriculum audiovisual learning", "venue": "arXiv preprint arXiv:2001.09414,", "year": 2020 }, { "authors": [ "Jiwoon Jeon", "R Manmatha" ], "title": "Using maximum entropy for automatic image annotation", "venue": "In International Conference on Image and Video Retrieval,", "year": 2004 }, { "authors": [ "Angelos Katharopoulos", "François Fleuret" ], "title": "Not all samples are created equal: Deep learning with importance sampling", "venue": "arXiv preprint arXiv:1803.00942,", "year": 2018 }, { "authors": [ "Nitish Shirish Keskar", "Richard Socher" ], "title": "Improving generalization performance by switching from adam to SGD", "venue": null, "year": 2017 }, { "authors": [ "Diederik Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "M.P. Kumar", "Benjamin Packer", "Daphne Koller" ], "title": "Self-paced learning for latent variable models", "venue": "In Advances in Neural Information Processing Systems", "year": 2010 }, { "authors": [ "V. 
Kumar", "Priyanka Gupta" ], "title": "Importance of statistical measures in digital image processing", "venue": null, "year": 2012 }, { "authors": [ "Xuebo Liu", "Houtim Lai", "Derek F Wong", "Lidia S Chao" ], "title": "Norm-based curriculum learning for neural machine translation", "venue": "arXiv preprint arXiv:2006.02014,", "year": 2020 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "Online batch selection for faster training of neural networks", "venue": "arXiv preprint arXiv:1511.06343,", "year": 2015 }, { "authors": [ "Mario Mastriani", "Alberto E. Giraldez" ], "title": "Enhanced directional smoothing algorithm for edge-preserving smoothing of synthetic-aperture radar images", "venue": "CoRR,", "year": 2016 }, { "authors": [ "T. Matiisen", "A. Oliver", "T. Cohen", "J. Schulman" ], "title": "Teacher–student curriculum learning", "venue": "IEEE Transactions on Neural Networks and Learning Systems,", "year": 2020 }, { "authors": [ "Rémy Portelas", "Cédric Colas", "Katja Hofmann", "Pierre-Yves Oudeyer" ], "title": "Teacher algorithms for curriculum learning of deep RL in continuously parameterized environments", "venue": "In Conference on Robot Learning,", "year": 2020 }, { "authors": [ "Sashank J. Reddi", "Satyen Kale", "Sanjiv Kumar" ], "title": "On the convergence of Adam and beyond", "venue": null, "year": 2018 }, { "authors": [ "Herbert Robbins", "Sutton Monro" ], "title": "A stochastic approximation method", "venue": "Ann. Math. Statist.,", "year": 1951 }, { "authors": [ "Claude E Shannon" ], "title": "Prediction and entropy of printed English", "venue": "Bell System Technical Journal,", "year": 1951 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "arXiv preprint arXiv:1409.1556,", "year": 2014 }, { "authors": [ "Christian Szegedy", "Vincent Vanhoucke", "Sergey Ioffe", "Jon Shlens", "Zbigniew Wojna" ], "title": "Rethinking the inception architecture for computer vision", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Tijmen Tieleman", "Geoffrey Hinton" ], "title": "Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude", "venue": "COURSERA: Neural Networks for Machine Learning,", "year": 2012 }, { "authors": [ "Daphna Weinshall", "Gad Cohen" ], "title": "Curriculum learning by transfer learning: Theory and experiments with deep networks", "venue": "CoRR,", "year": 2018 }, { "authors": [ "Ashia C Wilson", "Rebecca Roelofs", "Mitchell Stern", "Nati Srebro", "Benjamin Recht" ], "title": "The marginal value of adaptive gradient methods in machine learning", "venue": "In Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Yunzhi Zhang", "Pieter Abbeel", "Lerrel Pinto" ], "title": "Automatic curriculum learning through value disagreement", "venue": "arXiv preprint arXiv:2006.09641,", "year": 2020 }, { "authors": [ "Peilin Zhao", "Tong Zhang" ], "title": "Stochastic optimization with importance sampling for regularized loss minimization", "venue": "In International Conference on Machine Learning,", "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "Stochastic Gradient Descent (SGD) (Robbins & Monro, 1951) is a simple yet widely used algorithm for machine learning optimization. There have been many efforts to improve its performance. A number of such directions, such as AdaGrad (Duchi et al., 2011), RMSProp (Tieleman & Hinton, 2012), and Adam (Kingma & Ba, 2014), improve upon SGD by fine-tuning its learning rate, often adaptively. However, Wilson et al. (2017) has shown that the solutions found by adaptive methods generalize worse even for simple overparameterized problems. Reddi et al. (2019) introduced AMSGrad hoping to solve this issue. Yet there is a performance gap between AMSGrad and SGD in terms of the ability to generalize (Keskar & Socher, 2017). Further, Choi et al. (2019) shows that more general optimizers such as Adam and RMSProp can never underperform SGD when all their hyperparameters are carefully tuned. Hence, SGD still remains one of the main workhorses of the ML optimization toolkit.
SGD proceeds by stochastically making unbiased estimates of the gradient on the full data (Zhao & Zhang, 2015). However, this approach does not match the way humans typically learn various tasks. We learn a concept faster if we are presented the easy examples first and then gradually exposed to examples with more complexity, based on a curriculum. An orthogonal extension to SGD (Weinshall & Cohen, 2018), that has some promise in improving its performance, is to choose examples according to a specific strategy, driven by cognitive science – this is curriculum learning (CL) (Bengio et al., 2009), wherein the examples are shown to the learner based on a curriculum." }, { "heading": "1.1 RELATED WORKS", "text": "Bengio et al. (2009) formalizes the idea of CL in a machine learning framework where the examples are fed to the learner in an order based on their difficulty. The notion of difficulty of examples has not really been formalized, and various heuristics have been tried out: Bengio et al. (2009) uses manually crafted scores, self-paced learning (SPL) (Kumar et al., 2010) uses the loss values with respect to the learner's current parameters, and CL by transfer learning uses the loss values with respect to a pre-trained learner to rate the difficulty of examples in data. Among these works, what makes SPL distinctive is that it uses a dynamic CL strategy, i.e. the preferred ordering is determined dynamically over the course of the optimization. However, SPL does not really improve the performance of deep learning models, as noted in (Fan et al., 2018). Similarly, Loshchilov & Hutter (2015) uses a function of rank based on the latest loss values for online batch selection for faster training of neural networks. Katharopoulos & Fleuret (2018) and Chang et al. (2017) perform importance sampling to reduce the variance of stochastic gradients during training. Graves et al. (2017) and Matiisen et al. (2020) propose teacher-guided automatic CL algorithms that employ various supervised measures to define dynamic curricula. The most recent works in CL show its advantages in reinforcement learning (Portelas et al., 2020; Zhang et al., 2020).
The recent work by Weinshall & Cohen (2018) introduces the notion of an ideal difficulty score to rate the difficulty of examples based on the loss values with respect to the set of optimal hypotheses. They theoretically show that for linear regression, the expected rate of convergence at a training step t for an example monotonically decreases with its ideal difficulty score.
This is practically validated by Hacohen & Weinshall (2019) by sorting the training examples based on the performance of a network trained through transfer learning. However, there is a lack of theory to show that CL improves the performance of a completely trained network. Thus, while CL indicates that it is possible to improve the performance of SGD by a judicious ordering, both the theoretical insights as well as concrete empirical guidelines to create this ordering remain unclear.
While the previous CL works employ tedious methods to score the difficulty level of the examples, Hu et al. (2020) uses the number of audio sources to determine the difficulty for audiovisual learning. Liu et al. (2020) uses the norm of word embeddings as a difficulty measure for CL for neural machine translation. In light of these recent works, we discuss the idea of using task-specific statistical (unsupervised) measures to score examples, making it easy to perform CL on real image datasets without the aid of any pre-trained network." }, { "heading": "1.2 OUR CONTRIBUTIONS", "text": "Our work proposes two novel algorithms for CL. We do a thorough empirical study of our algorithms and provide some more insights into why CL works. Our contributions are as follows:
• We propose a novel dynamic curriculum learning (DCL) algorithm to study the behaviour of CL. DCL is not a practical CL algorithm since it requires the knowledge of a reasonable local optimum and needs to compute the gradients of the full data after every training epoch. DCL uses the gradient information to define a curriculum that minimizes the distance between the current weight and a desired local minimum. However, this simplicity in the definition of DCL makes it easier to analyze its performance formally.
• Our DCL algorithm generates a natural ordering for training the examples. Previous CL works have demonstrated that exposing a part of the data initially and then gradually exposing the rest is a standard way to set up a curriculum. We use two variants of our DCL framework to show that it is not just the subset of data which is exposed to the model that matters, but also the ordering within the data partition that is exposed. We also analyze how DCL is able to serve as a regularizer and improve the generalization of networks.
• We contribute a simple, novel and practical CL approach for image classification tasks that does the ordering of examples in a completely unsupervised manner using statistical measures. Our insight is that statistical measures could have an association with the difficulty of examples in real data. We empirically analyze our argument of using statistical scoring measures (especially standard deviation) over permutations of multiple datasets and networks. Additionally, we study why CL based on standard deviation scoring works using our DCL framework.
Algorithm 1 Approximate greedy dynamic curriculum learning (DCL+).
Input: Data X, local minimum w̃, weight w_t, batch size b, and pacing function pace.
Output: Sequence of mini-batches B_t for the next training epoch.
1: ã_t ← w̃ − w_t
2: ρ_t ← [ ]
3: B_t ← [ ]
4: for (i = 0; N; 1) do
5: append −(ã_t^T · ∇f_i(w_t)) / ‖ã_t‖_2 to ρ_t
6: end for
7: X̃ ← X sorted according to ρ_t, in ascending order
8: size ← pace(t)
9: for (i = 0; size; b) do
10: append X̃[i, ..., i + b − 1] to B_t
11: end for
12: return B_t
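A rough NumPy sketch of this greedy ordering is given below (the helper name is ours, and the per-example gradients are assumed to be precomputed as the rows of grads):

import numpy as np

def dcl_plus_minibatches(grads, w_t, w_tilde, pace_t, b):
    # grads: (N, D) array of per-example gradients at the current weight w_t;
    # w_tilde: local minimum from a fully trained vanilla SGD run
    a_t = w_tilde - w_t
    rho = -(grads @ a_t) / np.linalg.norm(a_t)   # scores from equation (2)
    order = np.argsort(rho)                       # ascending: most aligned examples first
    exposed = order[:pace_t]                      # pace(t) examples exposed this epoch
    return [exposed[i:i + b] for i in range(0, pace_t, b)]  # ordered mini-batches of indices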
" }, { "heading": "2 PRELIMINARIES", "text": "At any training step t, SGD updates the weight w_t using ∇f_i(w_t), the gradient of the loss of example x_i with respect to the current weight. The learning rate and the data are denoted by η and X = {(x_i, y_i)}_{i=0}^{N−1} respectively, where x_i ∈ R^d denotes a data point and y_i ∈ [K] its corresponding label for a dataset with K classes. We denote the learner as h_ϑ : R^d → [K]. Generally, SGD is used to train h_ϑ by giving the model a sequence of mini-batches {B_0, B_1, ..., B_{T−1}}, where B_i ⊆ X ∀i ∈ [T]. Each B_i is generated by uniformly sampling examples from the data. We denote this approach as vanilla.
In CL, the curriculum is defined by two functions, namely the scoring function and the pacing function. The scoring function, score_ϑ(x_i, y_i) : R^d × [K] → R, scores each example in the dataset. The scoring function is used to sort X in an ascending order of difficulty. A data point (x_i, y_i) is said to be easier than (x_j, y_j) if score_ϑ(x_i, y_i) < score_ϑ(x_j, y_j), where both the examples belong to X. Unsupervised scoring measures do not use the data labels to determine the difficulty of data points. The pacing function, pace_ϑ(t) : [T] → [N], determines how much of the data is to be exposed at a training step t ∈ [T]. We define the speedup of a CL model as its improvement over the vanilla model (in terms of the number of training steps) to achieve a given test accuracy. For example, CL has a 2× speedup if the vanilla model achieves 90% test accuracy in 100 training steps while CL achieves the same 90% test accuracy in 50 training steps." }, { "heading": "3 DYNAMIC CURRICULUM LEARNING", "text": "For DCL algorithms (Kumar et al., 2010; Graves et al., 2017; Matiisen et al., 2020), examples are scored and sorted after every few training steps since the parameters of the scoring function change dynamically with the learner as training proceeds. Hacohen & Weinshall (2019) and Bengio et al. (2009) use a fixed scoring function and pace function for the entire training process. They empirically show that a curriculum helps to learn fast in the initial phase of the training process. In this section, we propose and analyze our novel DCL algorithm that updates the difficulty scores of all the examples in the training data at every epoch using their gradient information. We hypothesize the following: given a weight initialization and a local minimum obtained by full training of vanilla SGD, the curriculum ordering determined by our DCL variant leads to a speedup in training. We first describe the algorithm, then the underlying intuition, and finally validate the hypothesis using experiments.
Our DCL algorithm iteratively works on reducing the L2 distance, R_t, between the weight parameter w_t and a given optimal weight w̄ at any training step t. Suppose, for any t̃ < t, S_{t̃,t} is the ordered set containing the (t − t̃ + 1) indices of training examples that are to be shown to the learner from the training steps t̃ through t. Let us define a_t = (w̄ − w_t), R_t = ‖a_t‖_2, and θ_{t̃i} as the angle between ∇f_i(w_t) and a_t̃. Then, using a geometrical argument (see Figure 1),
R_t^2 = (R_t̃ − η Σ_{j=t̃, i∈S_{t̃,t−1}}^{t−1} ‖∇f_i(w_j)‖_2 cos θ_{t̃i})^2 + η^2 (Σ_{j=t̃, i∈S_{t̃,t−1}}^{t−1} ‖∇f_i(w_j)‖_2 sin θ_{t̃i})^2
= R_t̃^2 − 2ηR_t̃ Σ_{j=t̃, i∈S_{t̃,t−1}}^{t−1} ‖∇f_i(w_j)‖_2 cos θ_{t̃i} + η^2 (Σ_{j=t̃, i∈S_{t̃,t−1}}^{t−1} ‖∇f_i(w_j)‖_2 cos θ_{t̃i})^2 + η^2 (Σ_{j=t̃, i∈S_{t̃,t−1}}^{t−1} ‖∇f_i(w_j)‖_2 sin θ_{t̃i})^2 (1)
For a vanilla model, S_{0,T} is generated by uniformly sampling indices from [N] with replacement. Since finding a set S_{0,T} that minimizes R_T^2 and an optimal w̄ are intractable for nonconvex optimization problems, we approximate the DCL algorithm (DCL+, see Algorithm 1).
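Equation 1 is the planar decomposition of the accumulated updates into components along and orthogonal to a_t̃; the single-step case can be checked numerically with a short, illustrative NumPy snippet (variable names are ours):

import numpy as np

rng = np.random.default_rng(0)
a, g, eta = rng.normal(size=8), rng.normal(size=8), 0.1
R, g_norm = np.linalg.norm(a), np.linalg.norm(g)
cos_theta = a @ g / (R * g_norm)
sin_theta = np.sqrt(1 - cos_theta ** 2)
lhs = np.linalg.norm(a - eta * g) ** 2  # squared distance to the optimum after one update along g
rhs = (R - eta * g_norm * cos_theta) ** 2 + (eta * g_norm * sin_theta) ** 2
assert np.isclose(lhs, rhs)             # the single-step case of equation (1)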
We approximate w̄ with w̃, which is a local minimum obtained from training the vanilla SGD model. Also, to reduce the computational expense of sampling examples, we neglect the terms with coefficient η^2 in equation 1 while designing our algorithm. Algorithm 1 uses a greedy approach to minimize R_t^2 by sampling examples at every epoch using the scoring function
score_t(x_i) = −‖∇f_i(w_t)‖_2 cos θ_{ti} = −(a_t^T · ∇f_i(w_t)) / ‖a_t‖_2 = ρ_{t,i}. (2)
Let us denote the models that use the natural ordering of mini-batches greedily generated by Algorithm 1 for training networks as DCL+. DCL- uses the same sequence of mini-batches that DCL+ exposes to the network at any given epoch, but the order is reversed. We empirically show that DCL+ achieves a faster and better convergence with various initializations of w_0. We use learning rates with an exponential step-decay rate for the optimizers in all our experiments, as traditionally done (Simonyan & Zisserman, 2014; Szegedy et al., 2016). For a fair comparison, we tune the learning rates and decay rates of the models.
Figure 3: Learning curves for experiment 2 with varying pace(t) = ⌊kN⌋ for DCL+. The parameter k needs to be finely tuned for improving the generalization of the network. A low k value exposes only examples with less noise to the network at every epoch, whereas a high k value exposes most of the dataset, including highly noisy examples, to the network. A moderate k value shows less noisy examples along with some examples with a moderate level of noise to the learner. Here, a moderate k = 0.6 generalizes the best.
Experimental setup: In our experiments, we set pace(t) = ⌊kN⌋ ∀t, where k ∈ [b/N, 1] is a tunable hyper-parameter. We use a 2-layer fully-connected network (FCN) with 10 hidden neurons and Exponential Linear Unit (ELU) nonlinearities to empirically validate our algorithms (k = 0.9) on a subset of the MNIST dataset with class labels 0 and 1 (Experiment 1). Since this is a very easy task (the vanilla model accuracy is as high as ∼ 99.9%), we compare the test loss values across training steps in Figure 2a to see the behaviour of DCL on an easy task. DCL+ shows the fastest convergence, although all the networks achieve the same test accuracy. DCL+ achieves vanilla's final test loss score at training step 682 (∼ 30% speedup). In Experiment 2, we use a 2-layered FCN with 128 hidden neurons and ELU nonlinearities to evaluate our DCL algorithms (k = 0.6) on the relatively difficult small mammals dataset (Krizhevsky et al., 2009), a super-class of CIFAR-100. Figure 2b shows that DCL+ achieves a faster and better convergence than vanilla with respect to the test set accuracy in experiment 2. DCL+ achieves vanilla's convergence test accuracy score at training step 1896 (∼ 60% speedup). Further experimental details are deferred to Appendix B.1. Since DCL is computationally expensive, we perform DCL experiments only on small datasets. Fine-tuning of k is crucial for improving the performance of DCL+ on the test set (see Figure 3). We fine-tune k by trial-and-error over the test accuracy score." }, { "heading": "4 WHY IS A CURRICULUM USEFUL?", "text": "At an intuitive level, we can say that DCL+ converges faster than vanilla SGD because we greedily sample those examples whose gradient steps are the most aligned towards an approximate optimal weight vector.
In previous CL works, mini-batches are generated by uniformly sampling examples from a partition of the dataset, which is obtained by putting a threshold on the difficulty scores of the examples. Notice that our DCL algorithms generate mini-batches with a natural ordering at every epoch. We design DCL+ and DCL- to investigate an important question: can CL benefit from having a set of mini-batches with a specific order, or is it just the subset of data that is exposed to the learner that matters? Figure 2 shows that the ordering of mini-batches matters when comparing DCL+ and DCL-, which expose the same set of examples to the learner in any training epoch. Once the mini-batch sequence for an epoch is computed, DCL- provides mini-batches to the learner in decreasing order of noise. This is why DCL- has high discontinuities in the test loss curve after every epoch in Figure 2a. With our empirical results, we argue that the ordering of mini-batches within an epoch does matter.
Bengio et al. (2009) illustrates that removing examples that are misclassified by a Bayes classifier (“noisy” examples) provides a good curriculum for training networks. SPL tries to remove examples that might be misclassified during a training step by avoiding examples with high loss. CL by transfer learning avoids examples that are noisy to an approximate optimal hypothesis in the initial phases of training. DCL+ and DCL- try to avoid examples with noisy gradients that might slow down the convergence towards the desired optimal minimum. Guo et al. (2018) empirically shows that avoiding noisy examples improves the initial learning of convolutional neural networks (CNNs). According to their work, adding noisy examples to later phases of training serves as a regularizer and improves the generalization capability of CNNs. DCL+ uses its pace function to avoid highly noisy examples (in terms of gradients). In our DCL experiments, the parameter k is chosen such that a few moderately noisy examples (examples present in the last few mini-batches within an epoch) are included in training along with less noisy examples to improve the network's generalization. We show the importance of tuning the pace function for DCL+ in Figure 3. Hence, the parameter k serves as a regularizer and helps in improving the generalization of networks.
Algorithm 2 Curriculum learning method.
Input: Data X, batch size b, scoring function score, and pacing function pace.
Output: Sequence of mini-batches [B_0, B_1, ..., B_{T−1}].
1: sort X according to score, in ascending order
2: B ← [ ]
3: for (i = 1; T; 1) do
4: size ← pace(i)
5: X̃_i ← X[0, 1, ..., size − 1]
6: uniformly sample B_i of size b from X̃_i
7: end for
8: return B
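A minimal Python sketch of Algorithm 2 follows; it is illustrative only and assumes pace(i) ≥ b so a batch can always be drawn:

import random

def curriculum_minibatches(data, scores, pace, b, T):
    # sort the data once by difficulty; at step i, sample a batch from the pace(i) easiest examples
    ordered = [x for _, x in sorted(zip(scores, data), key=lambda p: p[0])]
    batches = []
    for i in range(1, T + 1):
        exposed = ordered[:pace(i)]
        batches.append(random.sample(exposed, b))
    return batches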
" }, { "heading": "5 STATISTICAL MEASURES FOR DEFINING CURRICULA", "text": "In this section, we discuss our simple approach of using task-specific statistical measures to define curricula for real image classification tasks. We perform experiments and validate our proposal over various image classification datasets with different network architectures.
Based on the classification task, one could find a statistical measure that could serve the purpose of a scoring function for defining a curriculum. For instance, standard deviation and entropy are informative statistical measures for images and are used widely in digital image processing (DIP) tasks (Kumar & Gupta, 2012; Arora, 1981). Mastriani & Giraldez (2016) uses standard deviation filters for effective edge-preserving smoothing of radar images. Natural images might have a higher standard deviation if they have a lot of edges and/or a vibrant range of colors. Edges and colours are among the most important features that help in image classification at a higher level. Figure 4 shows 10 images that have the lowest and highest standard deviations in the CIFAR-100 dataset. The entropy measure quantifies the information content of an image and is used for various DIP tasks such as automatic image annotation (Jeon & Manmatha, 2004). We experiment using the standard deviation measure (stddev), Shannon's entropy measure (entropy) (Shannon, 1951), and different norm measures as scoring functions for CL (see Algorithm 2). The performance improvement with norm measures is not consistent and significant over the experiments we perform (see Appendix A for details). For a flattened image example represented as x = [x^(0), x^(1), ..., x^(d−1)]^T ∈ R^d, we define
µ(x) = (Σ_{i=0}^{d−1} x^(i)) / d and stddev(x) = sqrt((Σ_{i=0}^{d−1} (x^(i) − µ(x))^2) / d). (3)
We use a fixed exponential pace function that exponentially increases the amount of data exposed to the network after every fixed step_length number of training steps. For a training step i, it is formally given as: pace(i) = ⌊min(1, starting_fraction · inc^{⌊i/step_length⌋}) · N⌋, where starting_fraction is the fraction of the data that is exposed to the model initially, inc is the exponential factor by which the pace function value increases after a step, and N is the total number of examples in the data.
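An illustrative NumPy sketch of the stddev scoring function of equation 3 and the exponential pace function (the hyper-parameter values shown here are placeholders, not our tuned settings):

import numpy as np

def stddev_score(x):
    # equation (3) on a flattened image x
    return np.sqrt(np.mean((x - np.mean(x)) ** 2))

def pace(i, N, starting_fraction=0.04, inc=1.9, step_length=100):
    # exponential pace function: fraction of data exposed grows by inc every step_length steps
    return int(min(1.0, starting_fraction * inc ** (i // step_length)) * N)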
It shows the dynamics of ρt,i over initial, intermediate and final stages of training. Relation between ρt,i and stddev is evident from these plots. In the initial stage of training, examples with high standard deviations tend to have high ρ values. In the final stage of training (the trend changes to the exact opposite after the intermediate stage), examples with high ρ values tend to have low standard deviation values. This shows that stddev can also be useful in removing noisy examples from the initial phases of training and hence help in defining a good curriculum." }, { "heading": "6 CONCLUSION", "text": "In this paper, we propose two novel CL algorithms that show improvement in performance over multiple image classification tasks with CNNs and FCNs. Our DCL algorithm greedily samples data to move towards an optimal weight in a faster manner. It tries to avoid noisy gradients from slowing down the convergence. Its two variants, DCL+ and DCL-, provide insights on how important ordering of mini-batches is for CL. The requirement to finely tune the pace function of DCL+ shows that adding a moderate amount of noisy examples to training helps in improving the network’s generalization capability. In this work, a fresh approach to define curricula for image classification tasks based on statistical measures is introduced. This technique makes it easy to score examples in a completely unsupervised manner without the aid of any teacher network. We thoroughly evaluate our new CL method and find it benefits from a faster (mean speedup of ∼ 43%) and better convergence (test accuracy improvement of ∼ 0.2% − 2.2%). We use our DCL framework to understand stddev. With our results, we argue that CL algorithms help in faster initial learning by removing noisy examples that slow down the convergence towards a minima. Gradually, they add noisy examples to training in order to improve the performance of the network on unseen data." }, { "heading": "A ADDITIONAL EMPIRICAL RESULTS", "text": "In Section 5, we study the performance of CL using stddev and entropy as scoring measures. Other important statistical measures are mode, median, and norm (Kumar & Gupta, 2012). A high standard deviation for a real image could mean that the image is having a lot of edges and a wide range of colors. A low entropy could mean that an image is less noisy. Norm of an image gives information about its brightness. Intuitively, norm is not a good measure for determining difficulty of images as low norm valued images are really dark and high norm valued images are really bright. We experiment with different norm measures and find that they do not serve as a good CL scoring measure since they have lesser improvement with higher accuracy variance over multiple trials when compared to stddev- on the CIFAR datasets. We use two norm measures\nnorm(x) = ‖x‖2, and class norm(x) = ‖x− µx‖2\n(4)\nwhere x is an image in the dataset represented as a vector, and µx is the mean of all the images belonging to the class of x. All the orderings are performed based on the scoring function and the examples are then arranged to avoid class imbalance within a mini-batch in our experiments. Let us denote the models that use the scoring functions norm as norm+, −norm as norm-, class norm as class norm+, and −class norm as -class norm. Figure 8 shows the results of our experiments on CIFAR-100 and CIFAR-10 datasets with CNN-8 using norm and class norm scoring functions. 
We find that the improvements reported for norm-, the best model among the models that use norm measures, have a lower improvement than stddev-. Also, norm- has a higher STE when compared to both vanilla and stddev-. Hence, based on our results, we suggest that standard deviation is a more useful statistical measure than norm measures for defining curricula for image classification tasks." }, { "heading": "B EXPERIMENTAL DETAILS", "text": "B.1 NETWORK ARCHITECTURES\nAll FCNs (denoted as FCN-M) we use are 2-layered with a hidden layer consisting of M neurons with ELU nonlinearities. Experiment 1 employs FCN-10 while experiment 2 employs FCN-128 with no bias parameters. The outputs from the last layer is fed into a softmax layer. Experiments 6 and 7 employ FCN-512 with bias terms. The batch-size is 50.\nFor experiments 3 − 5, we use the CNN architecture that is used in Hacohen & Weinshall (2019). The codes are available in their GitHub repository. The network (CNN-8) contains 8 convolution layers with 32, 32, 64, 64, 128, 128, 256, and 256 filters respectively and ELU nonlinearities. Except for the last two layers with filter size 2 × 2, all other layers have a fliter size of 3 × 3. Batch normalization is performed after every convolution layer, and 2 × 2 max-pooling and 0.25 dropout layers after every two convolution layers. The output from the CNN is flattened and fed into a fully-connected layer with 512 neurons followed by a 0.5 dropout layer. A softmax layer follows the fully-connected output layer that has a number of neurons same as the number of classes in the dataset. The batch-size is 100. All the CNNs and FCNs are trained using SGD with cross-entropy loss. SGD uses an exponential step-decay learning rate. Our codes will be published on acceptance.\nB.2 HYPER-PARAMETER TUNING\nFor fair comparison of models, the hyper-parameters should be finely tuned as rightly mentioned in Hacohen & Weinshall (2019). We exploit hyper-parameter grid-search to tune the hyper-parameters of the models in our experiments. For vanilla models, grid-search is easier since they do not have a pace function. For CL models, we follow a coarse two-step tuning process as they have a lot of hyper-parameters. First we tune the optimizer hyper-parameters fixing other hyper-parameters. Then we fix the obtained optimizer parameters and tune other hyper-parameters.\nB.3 DATASET DETAILS\nWe use CIFAR-100, CIFAR-10, small mammals, MNIST, and Fashion-MNIST datasets. CIFAR100 and CIFAR-10 contain 50, 000 training and 10, 000 test images of shape 32× 32× 3 belonging to 100 and 10 classes, respectively. small mammals is a super-class of CIFAR-100 containing 5 classes. It has 2, 500 training and 500 test images. MNIST and Fashion-MNIST contain 60, 000 training and 10, 000 test gray-scale images of shape 28 × 28 belonging to 10 different classes. All the datasets are pre-processed before training to have a zero mean and unit standard deviation across each channel." } ]
2,020
A SIMPLE APPROACH TO DEFINE CURRICULA FOR TRAINING NEURAL NETWORKS
SP:3f2384e43d16f4b06bf238e4ce097d4e34f25ee7
[ "The following work presents a CLEVR-based compositionality benchmark. The task of the model is to verify logical statements about an image, and in order to do so, it must learn how to map individual statements to a composition of functions over the image checking for color, placement, shape, etc. Specific to this dataset is that it is explicitly few-shot, which forces the models to generalize very quickly and to infer under uncertainty.", "This work proposes the CURI dataset to measure productive concept learning under uncertainty. The dataset is designed using a concept space defined by a language and formulated as a few-shot meta-learning problem to tell apart in-concept samples from out-of-concept samples. The authors also design several out-of-distribution generalization data splits that test models' OOD generalization performance. Together with an oracle model, the authors show using the prototypical network that the compositional concept learning and reasoning problem in CURI is challenging." ]
Humans can learn and reason under substantial uncertainty in a space of infinitely many concepts, including structured relational concepts (“a scene with objects that have the same color”) and ad-hoc categories defined through goals (“objects that could fall on one’s head”). In contrast, standard classification benchmarks: 1) consider only a fixed set of category labels, 2) do not evaluate compositional concept learning and 3) do not explicitly capture a notion of reasoning under uncertainty. We introduce a new few-shot, meta-learning benchmark, Compositional Reasoning Under Uncertainty (CURI) to bridge this gap. CURI evaluates different aspects of productive and systematic generalization, including abstract understandings of disentangling, productive generalization, learning boolean operations, variable binding, etc. Importantly, it also defines a model-independent “compositionality gap” to evaluate difficulty of generalizing out-of-distribution along each of these axes. Extensive evaluations across a range of modeling choices spanning different modalities (image, schemas, and sounds), splits, privileged auxiliary concept information, and choices of negatives reveal substantial scope for modeling advances on the proposed task. All code and datasets will be available online.
[]
[ { "authors": [ "Aishwarya Agrawal", "Dhruv Batra", "Devi Parikh", "Aniruddha Kembhavi" ], "title": "Don’t just assume; look and answer: Overcoming priors for visual question answering", "venue": null, "year": 2017 }, { "authors": [ "Dzmitry Bahdanau", "Philippe Beaudoin", "Aaron Courville" ], "title": "CLOSURE: Assessing Systematic Generalization of CLEVR Models", "venue": "arXiv preprint,", "year": 2019 }, { "authors": [ "Anton Bakhtin", "Laurens van der Maaten", "Justin Johnson", "Laura Gustafson", "Ross Girshick" ], "title": "PHYRE: A new benchmark for physical reasoning", "venue": null, "year": 2019 }, { "authors": [ "David G T Barrett", "Felix Hill", "Adam Santoro", "Ari S Morcos", "Timothy Lillicrap" ], "title": "Measuring abstract reasoning in neural networks", "venue": null, "year": 2018 }, { "authors": [ "L W Barsalou" ], "title": "Ad hoc categories", "venue": "Mem. Cognit.,", "year": 1983 }, { "authors": [ "Prithvijit Chattopadhyay", "Ramakrishna Vedantam", "Ramprasaath R Selvaraju", "Dhruv Batra", "Devi Parikh" ], "title": "Counting everyday objects in everyday scenes", "venue": null, "year": 2016 }, { "authors": [ "Mark Everingham", "Luc Van Gool", "Christopher K I Williams", "John Winn", "Andrew Zisserman" ], "title": "The pascal visual object classes (VOC) challenge", "venue": "Int. J. Comput. Vis.,", "year": 2010 }, { "authors": [ "L Fei-Fei", "R Fergus", "P Perona" ], "title": "One-shot learning of object categories", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2006 }, { "authors": [ "J Feldman" ], "title": "Minimization of boolean complexity in human concept", "venue": "learning. Nature,", "year": 2000 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-Agnostic Meta-Learning for fast adaptation of deep networks", "venue": null, "year": 2017 }, { "authors": [ "Jerry Fodor" ], "title": "The Language of Thought", "venue": null, "year": 1975 }, { "authors": [ "Jerry A Fodor", "Zenon W" ], "title": "Pylyshyn. Connectionism and cognitive architecture: A critical analysis", "venue": "Cognition, 28:3–71,", "year": 1988 }, { "authors": [ "Xavier Glorot", "Yoshua Bengio" ], "title": "Understanding the difficulty of training deep feedforward neural networks", "venue": "Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics,", "year": 2010 }, { "authors": [ "Noah D Goodman", "Joshua B Tenenbaum", "Jacob Feldman", "Thomas L Griffiths" ], "title": "A rational analysis of rule-based concept learning", "venue": "Cognitive Science,", "year": 2008 }, { "authors": [ "Noah D Goodman", "Joshua B Tenenbaum", "Tobias Gerstenberg" ], "title": "Concepts in a probabilistic language of thought", "venue": null, "year": 2015 }, { "authors": [ "Erin Grant", "Joshua C. 
Peterson", "Tom Griffiths" ], "title": "Learning deep taxonomic priors for concept learning from few positive examples", "venue": "The Annual Meeting of the Cognitive Science Society,", "year": 2019 }, { "authors": [ "Irina Higgins", "Nicolas Sonnerat", "Loic Matthey", "Arka Pal", "Christopher P Burgess", "Matthew Botvinick", "Demis Hassabis", "Alexander Lerchner" ], "title": "SCAN: Learning abstract hierarchical compositional visual concepts", "venue": null, "year": 2017 }, { "authors": [ "Felix Hill", "Adam Santoro", "David G T Barrett", "Ari S Morcos", "Timothy Lillicrap" ], "title": "Learning to make analogies by contrasting abstract relational structure", "venue": null, "year": 2019 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural Comput.,", "year": 1997 }, { "authors": [ "Justin Johnson", "Bharath Hariharan", "Laurens van der Maaten", "Li Fei-Fei", "C Lawrence Zitnick", "Ross Girshick" ], "title": "CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning", "venue": null, "year": 2016 }, { "authors": [ "Rie Johnson", "Tong Zhang" ], "title": "Supervised and Semi-Supervised Text Categorization LSTM for Region Embeddings", "venue": "In International Conference on Machine Learning (ICML),", "year": 2016 }, { "authors": [ "Charles Kemp", "Alan Jern" ], "title": "Abstraction and relational learning", "venue": null, "year": 2009 }, { "authors": [ "Charles Kemp", "Aaron Bernstein", "Joshua B Tenenbaum" ], "title": "A Generative Theory of Similarity", "venue": "In Proceedings of the 27th Annual Conference of the Cognitive Science Society,", "year": 2005 }, { "authors": [ "Daniel Keysers", "Nathanael Schärli", "Nathan Scales", "Hylke Buisman", "Daniel Furrer", "Sergii Kashubin", "Nikola Momchev", "Danila Sinopalnikov", "Lukasz Stafiniak", "Tibor Tihon", "Dmitry Tsarkov", "Xiao Wang", "Marc van Zee", "Olivier Bousquet" ], "title": "Measuring compositional generalization: A comprehensive method on realistic data", "venue": null, "year": 2019 }, { "authors": [ "Brenden M Lake", "Marco Baroni" ], "title": "Generalization without Systematicity: On the Compositional Skills of Sequence-to-Sequence Recurrent Networks", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Brenden M. Lake", "Steven T. Piantadosi" ], "title": "People Infer Recursive Visual Concepts from Just a Few Examples", "venue": "Computational Brain & Behavior,", "year": 2019 }, { "authors": [ "Brenden M Lake", "Ruslan Salakhutdinov", "Joshua B. Tenenbaum" ], "title": "The Omniglot Challenge: A 3-Year Progress Report", "venue": "Current Opinion in Behavioral Sciences,", "year": 2019 }, { "authors": [ "Christoph H Lampert", "Hannes Nickisch", "Stefan Harmeling" ], "title": "Attribute-based classification for zero-shot visual object categorization", "venue": "IEEE Trans. Pattern Anal. Mach. Intell.,", "year": 2014 }, { "authors": [ "G L Murphy" ], "title": "The Big Book of Concepts", "venue": null, "year": 2002 }, { "authors": [ "Kevin P. Murphy" ], "title": "Machine learning : a probabilistic perspective", "venue": "URL https://www.amazon. com/Machine-Learning-Probabilistic-Perspective-Computation/dp/ 0262018020/ref=sr_1_2?ie=UTF8&qid=1336857747&sr=8-2", "year": 2013 }, { "authors": [ "Matthew C. Overlan", "Robert A. Jacobs", "Steven T. 
Piantadosi" ], "title": "Learning abstract visual concepts via probabilistic program induction in a Language of Thought", "venue": null, "year": 2017 }, { "authors": [ "Steven T Piantadosi", "Joshua B Tenenbaum", "Noah D Goodman" ], "title": "Bootstrapping in a language of thought: A formal model of numerical concept", "venue": "learning. Cognition,", "year": 2012 }, { "authors": [ "Steven T Piantadosi", "Joshua B Tenenbaum", "Noah D Goodman" ], "title": "The logical primitives of thought: Empirical foundations for compositional cognitive models", "venue": "Psychol. Rev.,", "year": 2016 }, { "authors": [ "Steven Thomas Piantadosi" ], "title": "Learning and the language of thought", "venue": "PhD thesis, Massachusetts Institute of Technology,", "year": 2011 }, { "authors": [ "Sachin Ravi", "Hugo Larochelle" ], "title": "Optimization as a model for few-shot learning", "venue": null, "year": 2016 }, { "authors": [ "Laura Ruis", "Jacob Andreas", "Marco Baroni", "Diane Bouchacourt", "Brenden M. Lake" ], "title": "A Benchmark for Systematic Generalization in Grounded Language Understanding", "venue": "In Advances in Neural Information Processing Systems", "year": 2020 }, { "authors": [ "Adam Santoro", "David Raposo", "David G Barrett", "Mateusz Malinowski", "Razvan Pascanu", "Peter Battaglia", "Timothy Lillicrap" ], "title": "A simple neural network module for relational reasoning", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Lukasz" ], "title": "Dataset: A dataset of datasets for learning to learn from few examples", "venue": null, "year": 2019 }, { "authors": [ "Kaiser", "Illia Polosukhin. Attention is all you need. June" ], "title": "Ramakrishna Vedantam, Ian Fischer, Jonathan Huang, and Kevin Murphy", "venue": "Generative models of", "year": 2017 }, { "authors": [ "Duo Wang", "Mateja Jamnik", "Pietro Lio" ], "title": "Abstract diagrammatic reasoning with multiplex graph", "venue": null, "year": 2016 }, { "authors": [ "Piantadosi" ], "title": "2016) in choosing the sampling probabilities for various completions based on how well humans seem to be able to learn the corresponding primitive. For example, we sample utterances with disjunctions (or) less frequently since they are known to be difficult for humans to learn", "venue": "Based on Kemp & Jern", "year": 2009 } ]
[ { "heading": "1 INTRODUCTION", "text": "Human concept learning is more flexible than today’s AI systems. Human conceptual knowledge is productive: people can understand and generate novel concepts via compositions of existing concepts (“an apartment dog”) (Murphy, 2002), unlike standard machine classifiers that are limited to a fixed set of classes (“dog”, “cat”, etc.). Further, humans can induce goal-based, “ad hoc” categories such as “things to take from one’s apartment in a fire” (children, dogs, keepsakes, etc.) (Barsalou, 1983). Thus, unlike AI systems, humans reason seamlessly in large, essentially “unbounded” concept spaces.\nBeyond unboundedness, a natural challenge in such concept spaces is uncertainty – the right concept to be inferred is uncertain, as a plethora of candidate concepts could explain observations. For e.g. in Figure 1 (top, image panel), the “right” concept could be that “All objects are blue and have the same size”, but it could also be “There are less than four objects in the scene”, or “All objects have the same color”. Humans gracefully handle such uncertainty and underdetermination (Tenenbaum & Griffiths, 2001; Xu & Tenenbaum, 2007; Goodman et al., 2008; Piantadosi et al., 2016). Popular compositional reasoning benchmarks such as CLEVR (Johnson & Zhang, 2016) for visual question answering and Ravens Progressive Matrices (Santoro et al., 2017) for deductive, analogical reasoning are compositionally rich and challenging in nature, but do not tackle ambiguity and underdetermination.\nWe address this gap in the literature, and propose the Compositional Reasoning Under Uncertainty (CURI) benchmark to study how modern machine learning systems can learn concepts spanning a large, productively defined space (Figure 1). In pursuit of this goal, we instantiate a meta learning task where a model must acquire a compositional concept from finite samples. A signature of productivity in human thought is our ability to handle novel combinations of known, atomic components. Thus, in CURI we instantiate different systematic train-test splits to analyze different forms of generalization in concept learning, involving novel combinations of intrinsic properties (e.g. color, shape) with boolean operators, counting, extrinsic object properties (e.g. object location), and a novel test of variable binding in context of compositional learning.\nWhile related systematic splits have been proposed in prior work in context of other tasks such as question answering and analogical reasoning (Barrett et al., 2018; Hill et al., 2019; Agrawal et al.,\n2017; Johnson et al., 2016; Vedantam et al., 2017; Higgins et al., 2017; Bakhtin et al., 2019; Lake & Baroni, 2018; Ruis et al., 2020), ours is the first benchmark which tests different qualitative aspects of reasoning about productive concepts under uncertainty.\nCompositional Reasoning Under Uncertainty (CURI) Task. Concretely, the CURI task tests few-shot learning of relational concepts in a large compositional conceptual space, with design inspiration from studies in cognitive modeling using a language of thought (LOT) approach (Fodor, 1975; Piantadosi, 2011; Kemp et al., 2005). CURI includes scene-based concepts such as “All objects have the same color” and “There exists a blue object while the rest are triangles” (Figure 1) but unlike CLEVR (Johnson et al., 2016) there are too few examples to deduce answers with certainty. 
Our benchmark is defined through a series of meta-learning episodes (see example in Figure 2): given positive and negative examples of a new concept Dsupp (known as the “support set”), the goal of an episode is to classify new examples Dquery (the “query set”). As in few-shot classification (Fei-Fei et al., 2006), meta-learning (Vinyals et al., 2016), and other open-set tasks (Lampert et al., 2014), models are evaluated on novel classes outside the (meta-)training set. Unlike previous work (Triantafillou et al., 2019; Lake et al., 2019) that focuses on atomic concepts, our benchmark concerns more structured, relational concepts built compositionally from a set of atomic concepts, and involves reasoning under uncertainty – an ideal learner must marginalize over many hypotheses when making predictions (Gelman et al., 2004; Xu & Tenenbaum, 2007; Piantadosi et al., 2016).
We also vary the modality in which scenes are presented – rendering them as images, symbolic schemas, and sounds – enabling future research on modality-specific representational choices for compositional reasoning under uncertainty. Finally, we vary the concepts learned by the model during meta-training and meta-testing to test different aspects of systematic generalization.
Compositionality Gap. In addition to defining systematic splits, we also characterize (for the first time, to our knowledge) the difficulty of generalization entailed by each split by introducing the notion of a model-independent “compositionality gap”. Concretely, the compositionality gap is the difference in test performance between an ideal Bayesian learner with access to the full hypothesis space, and a Bayesian learner with access to only a (potentially large) list of the hypotheses examined during meta-training. A large gap indicates that any learner must extrapolate compositionally from the training hypotheses to solve the task; additionally, models can be compared to ideal learners that either do or do not engage in such extrapolation. We anticipate that this tool will be more broadly useful for analyzing other benchmarks with compositional splits.
Models. We evaluate models along various dimensions which concern the difficulty of learning productive concepts under uncertainty, including: 1) the modality in which the input is rendered (images, schemas, sounds), 2) the method used for reasoning across objects in a scene (transformer, relation-network, global average pooling, concatenation), 3) whether or not training provides ground-truth symbolic descriptions of concepts, and 4) how negative examples are sampled. Overall, our evaluations suggest that there is substantial room for improvement in compositional reasoning under uncertainty, w.r.t. the compositionality gap, representing a novel challenge for compositional learning.
Summary of contributions: 1) We introduce the Compositional Reasoning Under Uncertainty (CURI) benchmark for evaluating compositional, relational learning under uncertainty from observational data; 2) We introduce a ‘compositionality gap’ metric for measuring the difficulty of systematic generalization from train to test; 3) We provide various baseline models for benchmarking progress." }, { "heading": "2 RELATED WORK", "text": "Compositional Learning. 
Related work has examined systematic generalization in pattern completion using Raven’s matrices (PGM) (Santoro et al., 2017; Hill et al., 2019) and visual question answering with CLEVR (Johnson et al., 2016; Bahdanau et al., 2019). CURI’s use of the CLEVR renderer further invites particular comparison with that benchmark. Compared to these more deductive reasoning tests, CURI examines few-shot concept learning under substantial inherent uncertainty. Unlike puzzle solving or question answering, an ideal inductive learner on CURI cannot know the right rule with certainty. In essence, unlike in CLEVR, the “question” to be answered is not given to the model as input, but must be inferred – making the task more challenging. While PGMs do involve such an inference, once the constraints of a puzzle are identified, they do not: 1) have any uncertainty in the reasoning (which is crucial) and 2) involve any “concept” learning – where a concept applies to multiple images – as much as they involve “instance” matching to complete a sequence. In contrast, a successful CURI model behaves as if marginalizing over many hypotheses consistent with the observations, e.g., (Tenenbaum & Griffiths, 2001; Xu & Tenenbaum, 2007; Piantadosi et al., 2016), an ability which is rarely studied directly in deep learning models (although see (Grant et al., 2019)).
Recently, Keysers et al. (2019) proposed a method to create “difficult” systematic splits based on the principle that they should share atoms but have maximally different compositions. This is complementary to our splits, which provide interpretable notions of what each split tests, such as disentangling, complexity, variable binding, etc. Moreover, our variable binding split is predicated on having different atoms between train and test, and thus cannot be recovered by their methodology.
Language of Thought (LOT). Our choice of compositional concepts was most closely inspired by (Piantadosi et al., 2016) along with other studies of human concept learning in the Language of Thought (LOT) framework (Fodor, 1975; Goodman et al., 2008; Kemp & Jern, 2009; Piantadosi et al., 2012; Goodman et al., 2015; Overlan et al., 2017; Lake & Piantadosi, 2019). In typical LOT studies of human learning, the conceptual space H is defined through a probabilistic context-free grammar G, which specifies a set of conceptual primitives and their rules of combination. Here, we use a LOT-inspired grammar G to generate an unbounded set of concepts H, while evaluating machine learning models trained without access to the underlying LOT." }, { "heading": "3 COMPOSITIONAL REASONING UNDER UNCERTAINTY (CURI) DATASET", "text": "Concept space. The compositional concepts in CURI were inspired by the empirical and cognitive modeling work of Piantadosi et al. (2016). The space of concepts (LOT) is defined by a context-free grammar (G). Figure 3 shows the LOT and specifies how primitives and functions compose to produce a large, unbounded concept space. The LOT has three variables: x, representing an object in a scene, S = {x}Ni=1, representing the set of all objects in the scene, and S−x = S/{x}, representing the set of all objects in the scene except x. 
Each concept describes a rule composed of object and scene properties, logical operators, and/or comparison operators, and can be evaluated on a given scene S to determine whether the scene satisfies the rule.
Object and scene properties are defined by functions which can be applied to objects or scenes: for example, size?(x) yields the size of an object x, while size?(S) returns a set with the sizes of all the objects ({size?(x) : x ∈ S}). Comparison and logical operators can be used to compare and relate various properties of objects in scenes. In contrast to Piantadosi et al. (2016), we include a count operator, which determines how many times a condition is satisfied by a set, and which allows us to check how well deep learning models are able to count (Chattopadhyay et al., 2016; Johnson et al., 2016; Agrawal et al., 2017). Finally, quantifiers such as exists and for-all enrich the LOT by specifying the number of objects which must satisfy a given condition.
Consider the following example concept (Figure 1 bottom): “There exists a blue object in the scene and the rest of the objects are squares.” To access the color of a given object, we use color?(x), and to access the shape of a given object, we use shape?(x). To determine whether an object matches a specific property, we can combine this with equality: shape?(x) = “square”. Finally, we can use exists to specify that at least one object must be blue, S−x to specify all the objects except for that blue object, and all to specify that all the objects in S−x must be squares. Putting it all together: exists x ∈ S (color?(x) = “blue”) and all (shape?(S−x) = “square”).
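To make these semantics concrete, the following is a minimal Python sketch of how such a concept can be evaluated on a scene; it is our illustration, not the released CURI executor, and the Obj fields are assumed stand-ins for the schema attributes.

from dataclasses import dataclass
from typing import List

@dataclass
class Obj:
    color: str
    shape: str

def concept(scene: List[Obj]) -> bool:
    # exists x in S: color?(x) = "blue" and all(shape?(S−x) = "square")
    return any(
        x.color == "blue"
        and all(y.shape == "square" for y in scene if y is not x)
        for x in scene
    )

print(concept([Obj("blue", "circle"), Obj("red", "square")]))   # True
print(concept([Obj("blue", "circle"), Obj("red", "circle")]))   # False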
Structured Generalization Splits. A signature of productivity is the ability to handle novel combinations of known components (Fodor, 1975; Fodor & Pylyshyn, 1988). Thus, in CURI, we consider splits that require generalizing to novel combinations of known elements from our LOT (Figure 3), including combinations of constants, variables, and functions. We achieve this by creating disjoint splits of concepts Htrain and Htest for training and evaluating models. By varying the held-out elements and their combinations, we obtain splits that evaluate different axes of generalization. In practice, we use our grammar G to sample and filter a large set of concepts (see Appendix B.2 for more details), which yields a set of 14,929 concepts H for training and evaluation. We next describe how each split divides H into Htrain and Htest, to test productive, out-of-distribution generalization:
• Instance IID: Evaluates generalization to novel episodes from the same concept set. This is the standard setup in machine learning (Murphy, 2013), in which Htrain = Htest. This is the only split where train and test concepts overlap.
• Concept IID: Evaluates generalization to novel concepts based on an arbitrary random split of the concepts into Htrain and Htest.1
• Counting: Evaluates the ability to learn a new concept h with novel property-count combinations, e.g., the training concepts never filter for exactly ‘3 squares’.
• Extrinsic properties: Evaluates the ability to learn a new concept h with novel combinations of extrinsic (e.g. location) and intrinsic (e.g. color) object properties.
• Intrinsic properties: Evaluates the ability to learn a new concept h with novel combinations of intrinsic properties, e.g., the training concepts never reference both ‘red’ and ‘rubber’.
• Boolean operations: Evaluates the ability to learn concepts which require application of a familiar boolean operation to a property to which the operation has never been applied previously.
• Complexity split: Evaluates generalization from simple concepts (those which have less than or equal to 10 symbols) to more complex concepts (longer than 10 symbols). This is indicative of the productivity (Fodor, 1975) exhibited by models, in generalizing from simpler concepts to more complex concepts.
• Variable binding: Evaluates learning of entirely novel intrinsic properties, e.g. the training concepts involve only “red”, “blue”, and “green” but test concepts involve “yellow” (although ‘yellow’ objects can still appear in training scenes). This is indicative of inferential coherence (Fodor, 1975) in models, in generalizing rules of inference to novel atoms.
A model that infers the underlying LOT during meta-training would be expected to perform well on any such systematic split. By comparing the performance of current models to such ideal learners, this benchmark will allow us to evaluate progress on the systematic out-of-distribution generalization capabilities of our current models. Appendix C provides more details on the structured splits.
1While some strings h might be different in surface form, they may yield the same results when applied to images. In this split we account for such synonymy, and ensure that no two concepts which are synonyms are in different splits. See Appendix B.6 for more details.
From Concepts to Meta-learning Episodes. A single episode comprises a support set (Dsupp) and a query set (Dquery), each of which is generated from a given concept h. Formally, a support or query set D has input data u and corresponding labels y, i.e. D = {{yi}Ni=1, {ui}Ni=1}. Each support and query set contains 5 positive and 20 negative examples – negative examples are oversampled since the space of negatives is generally much larger than that of positives. The set of positive examples is sampled uniformly from a categorical distribution over all positives. However, we consider two types of negatives: 1) easy negatives, in which the negatives are also sampled at random, and 2) hard negatives, in which negatives are generated from a closely related concept which also evaluates to true on the positive examples in Dsupp, such that these negatives are maximally confusing. Altogether, for each split, our train, validation, and test sets contain 500,000, 5,000, and 20,000 episodes, respectively.
Compositionality Gap. A key aspect of our benchmark is to define the difficulty in learning that arises from the compositional structure of the concept space. Most of the splits above are structured in a way such that Htest ∩ Htrain = ∅ – forcing a learner to use the compositional structure of the concept space to generalize to Htest. We conceptualize the difficulty of this task through the notion of its compositionality gap. Intuitively, the compositionality gap captures the difference between the generalization performance of an ideal compositional learner (strong oracle) and an ideal non-compositional learner that is unable to extrapolate outside the training concepts (weak oracle).
Formally, let Ω ∈ {strong, weak} denote an oracle over a concept space HΩ. 
The posterior predictive distribution of an oracle for a query scene u and query label y ∈ {0, 1} is then given as: pΩ(y | u, Dsupp) = Σh∈HΩ pΩ(y | h, u) pΩ(h | Dsupp), where pΩ(h | Dsupp) ∝ pΩ(h) p({yi}Ni=1 | h; {ui}Ni=1) and pΩ(h) denote the posterior and prior, respectively. Given a metric of interest M (e.g., mean average precision or accuracy), the compositionality gap of a learning task is then simply defined as the difference in performance of the strong and weak oracle when evaluating on concepts from Htest, i.e., M(pstrong) − M(pweak). Using this notion of compositionality gap, we can then define the ideal learners, i.e., the strong and weak oracle, simply via their priors. In particular, let w(h) denote a weight on the importance of each hypothesis2 and let I denote the indicator function. We then define the prior of an oracle as pΩ(h) = Σh′∈HΩ w(h′) I[h′ = h]. The difference between the strong and weak oracle lies in which concepts can be accessed in these priors.
In this formalism, the strong oracle has access to the union of train and test concepts; that is, Hstrong = Htrain ∪ Htest. The weak oracle, on the other hand, only assumes access to Hweak = Htrain, which means it is unable to consider any hypothesis outside what has been seen in training, assigning such hypotheses zero probability mass. Given a support set Dsupp, this difference in priors then leads to different posterior inferences and allows us to quantify how compositionally novel a learning task is relative to these ideal learners.
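As an illustration, the following Python sketch (our own, assuming each hypothesis h is an executable predicate over scenes, a noiseless likelihood, and the log-linear length prior of Appendix B.3) computes an oracle's posterior predictive on one episode; the weak oracle would call it with hyps = Htrain, the strong oracle with Htrain ∪ Htest, and the compositionality gap is the difference of a metric computed from the two predictives.

import math

def oracle_predictive(hyps, support, query_u, length):
    # hyps: list of callables h(u) -> {0, 1}; support: list of (u, y) pairs.
    # Prior weight w(h) is log-linear in concept length (Appendix B.3).
    prior = [math.exp(-0.2 * length(h)) for h in hyps]
    # Noiseless likelihood: 1 if h labels every support example correctly.
    post = [p if all(h(u) == y for u, y in support) else 0.0
            for h, p in zip(hyps, prior)]
    z = sum(post) or 1.0
    # Posterior predictive probability that the query label is 1.
    return sum(p / z for h, p in zip(hyps, post) if h(query_u) == 1)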
" }, { "heading": "4 METRICS AND BASELINES", "text": "During meta-test, given Dsupp, models are evaluated on their ability to learn novel concepts. We use two metrics for quantifying this: 1) Accuracy: evaluates the accuracy of model predictions across the query set Dquery, as is standard practice in meta-learning (Lake et al., 2019; Snell et al., 2017). Since there are more negative than positive labels, we report class-balanced accuracy (CBA) for better interpretability, averaging accuracies for the positive and negative query examples; and 2) mean Average Precision (mAP): evaluates models on a much larger number of test scenes T for each episode (comprising 44,787 scenes, 3 per concept in H). This addresses the issue that, with a small query set, a model could achieve perfect accuracy without grasping the concept. Since episodes typically have many more negative than positive examples, Average Precision sweeps over different thresholds of a model’s score and reports the average of the precision values at different recall rates, e.g., (Everingham et al., 2010). mAP is then the mean across all of the meta-test episodes.
2Set to be log-linear in the prefix serialization length of the hypothesis, inspired by the observation that longer hypotheses are more difficult for humans (Feldman, 2000). See Appendix B.3 for more details." }, { "heading": "4.1 TRAINING LOSS", "text": "Denote by u ∈ RM the input to the model, which can be an image, a sound, or a schema. We work in a binary classification setting with labels y ∈ Y = {0, 1}. Then, given a support set Dsupp = {(ui, yi)}Ti=1 and a query set Dquery = {(ui, yi)}Ti=1, sampled in accordance with a productive concept h, our training objective for a single training instance can be written as Lquery + αLconcept. Here Lquery = Σ(u,y)∈Dquery log p(Y = y | u, Dsupp) is a standard maximum likelihood meta-learning loss (Ravi & Larochelle, 2016; Snell et al., 2017; Finn et al., 2017), and Lconcept = log p(H = h | Dsupp) is an optional regularizer designed to encourage retaining information about the hypothesis of interest from the support set." }, { "heading": "4.2 BASELINE MODEL ARCHITECTURES", "text": "Our baseline models (shown in Figure 4) parameterize the probability in the Lquery term above using prototypical networks (Snell et al., 2017). The prototypical network consists of an embedding function f = fθ and uses it to compute prototypes cp and cn for positive and negative examples by averaging f(u) over the positive and negative examples in the support set, respectively. In equations, given a query datapoint u′, we compute
p(Y = 1 | u′; Dsupp) = exp(−||f(u′) − cp||²) / (exp(−||f(u′) − cp||²) + exp(−||f(u′) − cn||²))   (1)
In this formalism, the models we study in this paper span different choices for f. Roughly, in each modality, we start with an encoder that converts the raw input into a set of vectors, and then a pooling operation that converts that set of vectors into a single vector. In the case of images and sound (input as spectrograms), the encoder is a ResNet-18, and the set of vectors is a subsampling of spatial locations; for schemas, we vectorize components with a lookup table and combine them into a set via feed-forward networks. In the case of images and sounds, the output of the encoder is enriched with position vectors. For the pooling operation, we study global averaging, concatenation, relation networks (Santoro et al., 2017), and transformers (Vaswani et al., 2017) equipped with different pooling operations (max, mean, sum, min) for reasoning, inspired by Wang et al. (2019) (Figure 4 middle panel; also see Appendix F for more details).
For the probability in Lconcept, we represent the concept as a sequence by prefix serialization and then use an LSTM (Hochreiter & Schmidhuber, 1997) to parameterize p(h | Dsupp) = Π_{s=1}^{S} p(hs | h1:s−1; Dsupp). At each step of the LSTM we concatenate [cp, cn] to the input.
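A minimal PyTorch sketch of this episode scoring (our illustration; f stands for any of the encoders above, and the softmax over negative squared distances follows Eq. 1):

import torch

def episode_log_probs(f, support_u, support_y, query_u):
    # f maps a batch of inputs to embeddings; support_y holds 0/1 labels.
    z = f(support_u)                                # [N, D]
    c_pos = z[support_y == 1].mean(dim=0)           # positive prototype c_p
    c_neg = z[support_y == 0].mean(dim=0)           # negative prototype c_n
    q = f(query_u)                                  # [Q, D]
    d_pos = ((q - c_pos) ** 2).sum(dim=-1)          # squared distances
    d_neg = ((q - c_neg) ** 2).sum(dim=-1)
    logits = torch.stack([-d_neg, -d_pos], dim=-1)  # index 1 = positive class
    return torch.log_softmax(logits, dim=-1)        # [Q, 2], Eq. 1 in log space

Lquery is then the log-probability of the true query labels under these outputs, optionally combined with the LSTM concept log-likelihood weighted by α.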
" }, { "heading": "5 EXPERIMENTAL RESULTS", "text": "We first discuss the compositionality gap induced by the different generalization splits and then delve into the impact of modeling choices on performance on the generalization splits. All models are trained for 1 million steps, and are run with 3 independent training runs to report standard deviations. We sweep over 3 modalities (image, schema, sound), 4 pooling schemes (avg-pool, concat, relation-net, transformer), 2 choices of negatives (hard negatives, random negatives), and 2 choices of language supervision (α ∈ {0.0, 1.0}). Unless mentioned otherwise in the main paper, we focus on results with hard negatives and α = 0.0. When instantiated for a given modality, we note that the encoders f(u) (Figure 4) all have a similar number of parameters. The appendix contains details of the exact hyperparameters (Appendix E), and more comprehensive results for each split (Appendix G.7)." }, { "heading": "5.1 DATASET DESIGN AND COMPOSITIONALITY", "text": "How compositional are the structured splits? Our main results are shown in Figure 5. Using our model-independent measure of the compositionality gap (Section 3), different splits present varying challenges for generalizing from train to test. The most difficult splits, with the largest compositionality gaps, are Binding (color) and Binding (shape), which is reasonable since they require learning concepts with entirely new property values. In contrast, the easiest split, with the smallest compositionality gap, is the Instance IID split, since it does not require compositionality. Finally, while the mAP metric exposes a larger value of the comp gap, the ordering of splits in terms of comp gap is the same for both metrics – suggesting similar coarse-grained notions of compositionality.
Results for the best overall architecture, a relation network (relation-net), are shown in Figure 5. Network performance on the easiest data format (schema; yellow bars) is generally better than the weak oracle, but substantially worse than the strong oracle. Counting is a particularly challenging split where the models underperform even the weak oracle. Broadly, this suggests that the models capture some notion of compositionality – especially for images and schemas – relative to a weak oracle that rigidly considers only training hypotheses, but there is substantial room to improve (especially with respect to the more stringent mAP metric). These results demonstrate that CURI provides a challenging yet tractable setting for evaluating the compositional capabilities of models.
Finally, we found that the performance on the Instance IID split is not equal to that of the weak (and strong) oracle – which are equal in this case – indicating that the best model does not make ideal posterior predictions even when compositionality is not an issue. Ideal predictions in this case would require the network to behave as if marginalizing over the training hypotheses, as the strong oracle does. A plot similar to Figure 5 can be found in Appendix G.4 for random negatives.
Influence of Negatives. Previous work (Hill et al., 2019) has shown that the choice of random vs. hard negatives for training and evaluation impacts compositional generalization substantially in the case of a particular set of analogical reasoning models. However, we argue that such decisions on dataset design can be made more objectively if one can evaluate the model-independent comp gap. In our context, we find that the comp gap with mAP when using random negatives decreases on average by 5.5 ± 1.4% compared to when we use hard negatives. This indicates that it is not only the choice of Htrain and Htest, which are identical for a given compositional split (say Counting), but also the choice of the negatives which “makes” the task compositionally novel. More generally, this indicates that the comp gap has utility as a more general diagnostic tool for making principled design decisions in compositional learning settings without the confound of specific model decisions." }, { "heading": "5.2 DIFFERENCES BETWEEN MODELS", "text": "Best Models. In general, the best performing model is the relation-net applied to schema inputs, outperforming other combinations of models and input modalities on the Boolean, Concept IID, Complexity, and Instance IID splits on both the mAP as well as accuracy metrics (Figure 5); although, as mentioned above, none of the models are close to the strong oracle. It is closely followed by the transformer model on schema inputs, which performs the best on the Binding (color), Binding (shape), and Intrinsic splits (Appendix G.7). 
Utilizing schema inputs proves easier for abstraction except in the Extrinsic setting, where the task requires generalization to novel locations for objects in images, which is well supported by the inductive bias of the CNN encoder (Figure 4). In this case, the image-transformer gets an mAP of 62.1 ± 0.7%, compared to the next best schema-transformer model at 60.9 ± 0.7%. Further, relational learning proves more crucial in the schema case than for images, with all image models (regardless of pooling) performing at or above 59.4 ± 1.3% mAP (achieved by image-avg-pool) while schema-avg-pool models only get to 53.4 ± 1.5%. When to use a transformer? Transformer models appear to outperform relation networks on splits concerning disentangling. For instance, on the Intrinsic split, schema-relation-net is at 55.1 ± 0.8% vs. 57.9 ± 0.6% for schema-transformer. Similarly, on the Extrinsic split the image-transformer is at 62.1 ± 0.7% compared to the image-relation-net at 60.8 ± 1.1%. We hypothesize that this is because the iterative message passing via attention in transformers improves object representations for disentangling, compared to relation networks that lack such a mechanism.
What is the relative difficulty of abstraction from different modalities? One of the key contributions of our work is in providing multiple modalities (image, schema, sound) for productive concept learning. We next characterize the difficulty of abstraction based on modality for the various generalization settings. In the Intrinsic setting, we find that the schema models, which have access to a “perfect” disentangled representation, significantly outperform image models – a schema-avg-pool model gets an mAP of 52.7 ± 3.1% while an image-avg-pool model gets to 34.4 ± 0.0% mAP. Similarly, for the Counting split, where the total number of objects is exactly specified in the schema (Figure 4), schemas are substantially better than images. For example, schema-relation-nets get to 56.25 ± 5.32% mAP while image-avg-pool is at 48.4 ± 1.2% mAP. Interestingly, the next best model – image-relation-net – is substantially worse, at 39.45 ± 1.6%. Curiously, while transformer models perform well at disentangling, they seem to be quite poor at Counting, with image-transformer models getting to only 32.4 ± 1.4% mAP, suggesting a potential weakness of transformers. Overall, there appears to be an intimate link between the generalization setting and the input modality, suggesting avenues where representation learning could be improved for a given modality (e.g. images), relative to the kind of reasoning one is interested in (e.g. counting).
When does language help? On average, training models with explicit concept supervision using the concept loss (Section 4.1) improves performance by 2.8 ± 0.6% mAP (SEM error). This is a small boost relative to the gap between the original model and the strong oracle, suggesting that this simple auxiliary loss is not sufficient to internalize the LOT in a neural network. Overall, image models benefit more from language than schema models, which natively utilize symbols (Appendix G.3)." }, { "heading": "6 CONCLUSION", "text": "We introduced the compositional reasoning under uncertainty (CURI) benchmark for evaluating few-shot concept learning in a large compositional space, capturing the kinds of productivity, unboundedness, and underdetermination that characterize human conceptual reasoning. 
We instantiate a series of meta-learning tasks, and evaluate numerous baseline models on various aspects of compositional reasoning under uncertainty, including inferential coherence, boolean operation learning, counting, disentangling, etc. Further, we introduce the notion of a compositionality gap to quantify the difficulty of each generalization type, and to estimate the degree of compositionality in current deep learning models. We hope our contributions of the dataset, compositionality gaps, evaluation metrics, and baseline models help spur progress in the important research direction of productive concept learning." }, { "heading": "A EXAMPLE EPISODES FROM THE DATASET", "text": "We show examples from the Concept IID split test set comprising the ground-truth productive concept (top), along with the support and query sets for meta-learning (rendered as images), the alternate hypotheses which are consistent with the support set – that is, other hypotheses which could also have generated the positive and negative examples in the support set – and the concepts based on which we pick the hard negatives (Figures 6 to 11)." }, { "heading": "B ADDITIONAL DATASET DETAILS", "text": "We first provide more details of the concept space G, then explain how we obtain H, the space of concepts for training and evaluation, provide more details of the structured splits, and finally explain the weight w(h) based on which we sample concepts." }, { "heading": "B.1 MORE DETAILS OF THE GRAMMAR", "text": "We provide below the full grammar used to sample concepts, where A -> B | C means that A can expand to either B or C under the rules defined by the grammar. We always start expanding at the START token and then follow the rules of the grammar until we hit a terminal node (which does not have any expansions defined). As and where possible, we followed the insights from Piantadosi et al. (2016) in choosing the sampling probabilities for various completions, based on how well humans seem to be able to learn the corresponding primitive. For example, we sample utterances with disjunctions (or) less frequently since they are known to be difficult for humans to learn. Based on Kemp & Jern (2009), we chose to represent location as a discrete entity, such that relative and categorical notions of left or right simply become comparisons in the location space (location? x > location? S−x), unlike the CLEVR dataset (Johnson et al., 2016), which defines categorical relational objects.
Here is the full grammar G used for sampling the concepts (as explained in the main paper, S−x = S/{x}). Note that the grammar always generates strings in postfix notation and thus the operands in each expansion occur before the operation:
START -> λS. BOOL exists= | λS. 
BOOL for-all=
BOOL -> BOOL BOOL and | BOOL BOOL or | BOOL not | C C = | SH SH = | M M = | SI SI = | L L = | NUM NUM = | SI SI > | L L > | NUM NUM > | SETFC C all | SETFSH SH all | SETFM M all | SETFSI SI all | SETFL L all | SETFC C any | SETFSH SH any | SETFM M any | SETFSI SI any | SETFL L any
NUM -> SETFC C count= | SETFSH SH count= | SETFM M count= | SETFSI SI count= | SETFL L count=
NUM -> 1 | 2 | 3
SETFC -> SET FC
SETFSH -> SET FSH
SETFM -> SET FM
SETFSI -> SET FSI
SETFL -> SET FL
C -> gray | red | blue | green | brown | purple | cyan | yellow
C -> OBJECT FC
SH -> cube | sphere | cylinder
SH -> OBJECT FSH
M -> rubber | metal
M -> OBJECT FM
SI -> large | small
SI -> OBJECT FSI
L -> 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8
L -> OBJECT FL
FC -> color?
FSH -> shape?
FM -> material?
FSI -> size?
FL -> locationX? | locationY?
OBJECT -> x
SET -> S | S−x" }, { "heading": "B.2 SAMPLING", "text": "We sample 2,000,000 initial hypotheses from the CFG G, and impose a maximum depth of 6 in the recursion tree when sampling. That is, no node has a depth larger than 6 in the recursion through which we generate concepts from the grammar G. We then reject and filter the hypotheses to obtain the set of “interesting” hypotheses H used in the main paper, explained in more detail below. Rejection Sampling: We reject the following string combinations after sampling from the grammar G:
• All programs which contain \"λS. for-all x\" and \"S−x\" in the same program. This asks that, for all objects in a scene, a certain property is satisfied by everything other than the object, which is the same as saying it holds for all objects in the scene.
• All programs where we compare a property of an object to itself, e.g. color?(x) == color?(x), where color? can be any function applied to the object.
• All programs which contain the following strings: exists(color?(S) == color?(x)) or for-all(color?(S) == color?(x)), where color? can be any function applied to the object.
• All programs which evaluate to true on schemas more than 10% of the time or fewer than 10 times. The former condition ensures that we work with concepts which are in some sense interesting and surprising (as opposed to concepts which are trivially true), and the second condition ensures that we have enough unique schemas or datapoints to place in the support and query sets, which both have 5 positive images each.
We provide examples of concepts which get rejected for being true too often below:
exists=x ∈ S or( =(locationX?( x ), locationY?( x ) ), any(color?( S ), brown ) )
exists=x ∈ S and( exists=(locationY?( S ), locationX?( x ) ), any(color?( S ), brown ) )
exists=x ∈ S or( all(color?( S ), gray ), all(color?( S ), brown ) )
See Appendix C for more details on the structured generalization splits which yield train concepts Htrain and test concepts Htest.
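The last filtering rule can be sketched as follows (a hypothetical illustration; hypotheses are assumed to be executable predicates over schemas):

def filter_interesting(hypotheses, schemas, max_rate=0.10, min_count=10):
    # Keep hypotheses true on at most 10% of schemas (interesting, not
    # trivially true) but on at least 10 schemas (enough unique positives).
    kept = []
    for h in hypotheses:
        n_true = sum(h(s) for s in schemas)
        if min_count <= n_true <= max_rate * len(schemas):
            kept.append(h)
    return kept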
B.3 CONCEPT PRIOR WEIGHT w(h)
We next explain the form of the prior weight w(h) that we use for defining the prior over the concepts provided to the models (both the oracles as well as the deep learning models). Given l(h), the number of tokens in the postfix serialization of the concept h, the unnormalized weight w̃(h) is log-linear in the length, and is defined as follows:
w̃(h) ∝ exp(−0.2 · l(h))   (2)
Given a split Ω ∈ {train, test}, the final, normalized weight is given as:
w(h) = w̃(h) / Σh′∈HΩ w̃(h′)   (3)
As explained in the main paper, the final prior for a hypothesis given a split Ω is p(h) = Σh′∈HΩ w(h′) I[h = h′].
Our choice of the log-linear weight is inspired by the observation in cognitive science that longer boolean concepts are harder for people to learn (Feldman, 2000).
Here are some examples of hypotheses with a high weight (computed on Ω = train ∪ test): exists x in S =(2, count=(color?(S−x), cyan)), exists x in S >(locationY?(x), 6), =(count=(color?(S), brown), 3), >(count=(locationX?(S), 3), 2), any(locationY?(S), 6), =(1, count=(locationY?(S), 7)), =(3, count=(locationY?(S), 3)), all(locationX?(S), 2), exists x in S all(locationY?(S−x), 5), =(2, count=(color?(S), blue)), for-all x in S not(>(6, locationX?(x))), =(count=(color?(S), gray), 2), =(2, count=(color?(S), gray))" }, { "heading": "B.4 EXECUTION ON IMAGES.", "text": "In order to create the perceptual inputs in the dataset U, we sample images using the renderer for CLEVR from Johnson et al. (2016), changing the range of objects to [2, 5] to reduce clutter and enable easier learning of predicates like any and all for models.3 The CLEVR renderer produces scenes u with pixels as well as an associated schema file us detailing the properties of all the objects sampled in the scene, including their location, shape, size, material, and rotation. Based on this, we convert our sampled concepts into postfix notation and execute them on the schemas using an operator stack. Concretely, execution of the concept h ∈ H on us yields a boolean value in {0, 1}. We execute each such hypothesis on a set of 990K images, yielding scores of how often a hypothesis is true for an image. We threshold this score to retain the subset of hypotheses which are true for no more than 10% of the images and are true for at least 10 images, to pick a subset of “interesting” hypotheses H′ for training models. Bias. The sampled image dataset itself has a bias in terms of the location coordinates (in pixel space). The CLEVR dataset generation process samples objects uniformly in the 3D (top-down x, y) coordinate space (from a grid of −3 to +3). However, since the camera is always looking into the scene from outside, the image formation geometry implies that in the camera/image coordinates most of the objects appear far from the camera and very few are close to it. Thus, in terms of the y-coordinates observed in image coordinates, the distribution is not uniform. This also makes sense in general, as even in the real world, objects are generally found neither very close to nor very far from the camera. See Figure 12 for all the biases in the observation space u computed over 990K sampled images." }, { "heading": "B.5 AUDIO.", "text": "To build the audio data, we use clips of orchestral instruments playing various pitches downloaded from https://philharmonia.co.uk/resources/sound-samples/. 
We make the following mappings of object properties:
• x location → temporal location. A larger x bin means the note is played later.
• y location → pitch. All pitches between the instruments are the same (up to octaves).
• color → instrument
– gray → trumpet
– red → clarinet
– blue → violin
– green → flute
– brown → oboe
– purple → saxophone
– cyan → french-horn
– yellow → guitar
• shape → amplitude profile; either getting louder, getting softer, or constant volume
• size → total volume
• material → low-pass filtering or no filtering.
All binned quantities use the same number of bins as in the image domain.
3Since the chances of a constraint being true for all objects decrease exponentially as the number of objects increases." }, { "heading": "B.6 ANALYSIS OF SYNONYMY OF CONCEPTS.", "text": "We next show an analysis of concepts which have the same evaluation signatures on a large set of 990K images, and are thus synonymous (in the context of the dataset at hand). Note that while some of these concepts might be truly synonymous with each other (for example, A > B is the same as B < A), others might be synonymous only in the context of the image distribution we work with. For example, size can never be greater than 0.7 in our dataset and location can never be greater than 8, and thus asking if location is greater than 8 or if size is greater than 0.7 has the same semantics on our dataset. In Figure 13 we consider each such “concept” or “meaning”, which is a cluster of hypotheses which all evaluate to the same truth values, and plot a histogram of how many hypotheses each cluster tends to have. We notice that most of the concepts have 1 synonym (i.e., there is only one concept with that particular evaluation signature), with a long tail going up to 80 synonyms in a concept. In the Concept IID split we ensure that no two concepts which have the same signature are found in different splits across train/val/test." }, { "heading": "C DETAILED DISCUSSION OF THE STRUCTURED SPLITS", "text": "We provide more details on how each of the structured splits described in Sec. 3 of the main paper is created. Assuming access to H, the space of concepts sampled and filtered from the grammar G, we use various heuristics to produce the generalization splits in the paper:
• Instance IID: This split is trivial since Htrain = Htest = H.
• Concept IID: This split divides concepts into train and test by picking concepts at random from H and assigning them to Htrain or Htest, while ensuring that no two concepts which are synonyms (Appendix B.6) are found in different splits.
• Boolean: This split forms the cross product of all possible colors and {and, or} boolean operators, and holds out a subset of such combinations to occur only in test. We use the following tokens for test:
We then create Htest to contain all concepts which have any of the tokens above. For example, if a concept has purple, we would place it in Htest. After every feasible candidate is placed in Htest based on this heuristic, the remaining concepts in H are assigned to Htrain.
• Binding (shape): This split takes all possible shapes in the dataset, and holds out a subset of shapes to occur only in test. We use the following tokens for test:
‘cylinder’
We then create Htest to contain all concepts which have any of the tokens above. For example, if a concept has cylinder, we would place it in Htest. After every feasible candidate is placed in Htest based on this heuristic, the remaining concepts in H are assigned to Htrain.
• Complexity: This split partitions concepts into train and test based on the length of the postfix serialization of the concept. Specifically, concepts shorter than 10 tokens are placed in Htrain and longer concepts are placed in Htest." }, { "heading": "D CREATING SUPPORT AND QUERY SETS", "text": "We next explain how we go from the initial dataset U – which contains a large number of images, schemas, and sounds – and a concept space Htrain and Htest, to a dataset for meta-learning. To create the training/validation/test sets for models, we sample a series of episodes, each containing a support set and a query set. We illustrate the sampling procedure for a training episode below:" }, { "heading": "Support Set Sampling with Hard Negatives", "text": "1. Pick a concept h ∼ ptrain(h), with shorter hypotheses being more frequent based on the weights used to define the prior (Appendix B.3).
2. Pick 5 images (P) uniformly at random from U such that h(us) = 1, where the concept is evaluated on the schema to determine the label (Appendix B.4).
3. Identify other concepts h′ ∈ H such that h′(us) = 1 for all positives in P and h′ ≠ h.
4. Pick images such that h′(us) = 1 and h(us) = 0 as negatives (N). If no such images exist, pick random images from U as negatives until we have 20 negatives.
5. Return Dsupp = P ∪ N.
The sampling procedure for the query set repeats all the steps above, except step 1 (the same concept h is reused). Steps 3 and 4 outline a procedure for identifying hard negatives for training the model, by looking at other hypotheses which also explain a chosen set of positives P and using them to clarify what the concept of interest is.
We give below an analogous procedure for easy negatives:" }, { "heading": "Support Set Sampling with Easy Negatives", "text": "1. Pick a concept h ∼ ptrain(h), with shorter hypotheses being more frequent based on the weights used to define the prior (Appendix B.3).
2. Pick 5 images (P) uniformly at random from U such that h(us) = 1, where the concept is evaluated on the schema to determine the label (Appendix B.4).
3. Pick 20 random images from U as negatives, N.
4. Return Dsupp = P ∪ N.
As with hard negatives, the sampling procedure for the query set repeats all the steps above, except step 1 (the same concept h is reused).
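The hard-negative procedure can be sketched as follows (our illustration; evaluate(h, u) is an assumed helper that executes a concept on a schema, and we assume enough positives exist for the chosen concept):

import random

def sample_support(h, dataset, hypotheses, evaluate, n_pos=5, n_neg=20):
    # dataset: list of schemas; hypotheses: the concept space H.
    positives = random.sample([u for u in dataset if evaluate(h, u)], n_pos)
    # Step 3: competing concepts that also explain the chosen positives.
    rivals = [g for g in hypotheses
              if g is not h and all(evaluate(g, u) for u in positives)]
    # Step 4: hard negatives satisfy a rival concept but not the true one.
    hard = [u for u in dataset
            if not evaluate(h, u) and any(evaluate(g, u) for g in rivals)]
    negatives = random.sample(hard, min(n_neg, len(hard)))
    while len(negatives) < n_neg:        # fall back to random negatives
        u = random.choice(dataset)
        if not evaluate(h, u):
            negatives.append(u)
    return [(u, 1) for u in positives] + [(u, 0) for u in negatives]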
In the schema case the number of objects is the ground truth number of objects which is provided as input to the model.\nWe trained all the models on the training set, and picked the best performing checkpoints on training – measured in terms of mAP– to report all the results in the main paper. Our image and schema models are trained for 1 million steps (16 epochs for images, 128 epochs for schemas) while the sound models are trained for 500K steps, and checkpoints are stored after every 30K steps.\nAll our models fit on a GPU with 16GB capacity except the relation network trained with image inputs, which needs at 32GB GPU. We use the pytorch framework to implement all our models." }, { "heading": "F MODEL ARCHITECTURES FOR POOLING", "text": "In this section we detail the exact architectures used for the different pooling operations we consider in this paper as shown in Figure 4 center panel. We first establish some notation. Let oi ∈ RK be the output object feature from the modality specific encoder (Figure 4, left panel), and let us denote by O = {oi}|O|i=1 the set of features for each of the objects in the scene, which includes optional position information indicating where the object is present in the scene (Figure 4). Let N be the requested dimensionality of the feature space from the pooling operation. Given this, we can describe the pooling operations used as follows:\n• avg-pool: We first average the representations across all the objects {oi}|O|i=1 and then pass the averaged representation through an MLP with 256 x 512 x 384 x N units with batch normalization and rectified linear unit nonlinearity in the hidden layers.\n• concat: We first concatenate all the object representations in O, followed by an MLP with 256 x 512 x 256 x N units with batch normalization and rectified linear units nonlinearity in the hidden layers.\n• relation-net: For relation networks, following (Santoro et al., 2017) we use relative position encoding that captures the relative positioning of the objects in a scene for image and sound modalities, and use the location information already present in the schema modality. Based on this, in the terminology of Santoro et al. (2017) our g() MLP has 256 x 256 x 256 x 256 hidden units with rectified linear unit non linearity and batch normalization whereas our f() MLP has 256 x 256 x N units with recitifed linear unit non linearlity and batch normalization in the middle (non-output) layers. Different from the original paper, we do not use dropout as we did not observe any overfitting in our experiments.\n• transformer: We use a 2-head multi-head attention layer stacked 4 times, with the feedforward network dimenstions set to 512. After forwarding through this module, we take the output vectors o′i for each object processed through these initial layers and pool across objects by doing max(), mean(), sum(), min() operations and concatenating their outputs, similar to previous work by Wang et al. (2019). The final representation then does a linear projection of this concatenated vector to N , the dimensionality expected from the pooling module." }, { "heading": "G ADDITIONAL RESULTS", "text": "" }, { "heading": "G.1 HYPERPARAMETER SWEEPS – OBJECT FEATURE DIMENSIONS", "text": "We next show the hyperparameter sweeps for image models in deterimining the choice of the dimensionality to represent each object oi for our image models (Figure 14). The same choice of\ndimensionality was replicated for sound models. 
{ "heading": "G ADDITIONAL RESULTS", "text": "" }, { "heading": "G.1 HYPERPARAMETER SWEEPS – OBJECT FEATURE DIMENSIONS", "text": "We next show the hyperparameter sweeps used for determining the dimensionality of the representation of each object oi in our image models (Figure 14). The same choice of dimensionality was replicated for sound models. In our initial sweeps on the Concept IID split, across different choices of the dimensionality of objects, we found relation networks to outperform concat and global average pooling models substantially, and we thus picked the object dimensions based on what performed best for relation networks, since we are ultimately interested in the best possible choice of models for a given split and modality. Based on the results in Figure 14 we picked oi ∈ R^64.\nG.2 IMAGE RELATION NETWORKS LEARNING RATE SWEEPS\nWe picked the learning rate for image models based on the best performing image relation network model, which an initial sweep found to yield the best class of models. Figure 15 shows the performance of the models across learning rates of {1e-4, 5e-4, 2.5e-4}." }, { "heading": "G.3 SWEEP ON USE OF LANGUAGE", "text": "As explained in the main paper (Figure 4), the parameter α controls the tradeoff between the query accuracy and the likelihood of the concept expressed as a prefix string. We found that, across a broad range of values in {0.0, 0.01, 0.10, 1.0}, the models generally performed best at α = 1.0. Our initial experiments with α = 10.0 suggested substantially worse performance, so we discarded it from the sweep. See Figures 16 to 18 for the corresponding results." }, { "heading": "G.4 RESULTS ON EASY NEGATIVES", "text": "In Figure 19 (relation networks on easy negatives) we show results for the relation-net model on various splits, where easy negatives are used to populate the support and query sets during training and evaluation, unlike the case of hard negatives discussed in the main paper (Figure 5). Notice that the compositionality gap (comp gap) is lower in general for easy negatives compared to the hard negatives reported in the main paper. Further, we find that the best models are substantially closer to the strong oracle compared to Figure 5 of the main paper, showing that on the easier, less compositional task it is easier for machine learning models to approach the strong oracle (especially in terms of accuracy). Finally, it is interesting to note that with easy negatives the best models appear to outperform the weak oracle on the Counting split, while with hard negatives the models are worse than the weak oracle, suggesting poor generalization for counting.\nG.5 FINER α SWEEP FOR COUNTING\nFinally, we ran a finer α sweep for the Counting split, since our initial sweep suggested that this split was not performing better with language. Concretely, we ran a new set of experiments sweeping over α values of {0.01, 0.10, 1.0, 5.0, 10.0, 100.0}. Across this broader range of values, we found that models still did not show any statistically significant gains from using language versus not for the Counting split.\nG.6 CHOICE OF METRIC: MAP VS. ACCURACY\nIn general, the mAP metric opens up a larger comp gap for the various splits than indicated by CBA. For example, with hard negatives, while CBA indicates a gap of 14.2% for Counting compared to 0% for Instance IID, mAP suggests a gap of 34.4% for Counting relative to 0% for Instance IID. For the Binding (color) split, it is an 86.5% comp gap (mAP) vs. 34.0% (CBA). While more expensive to compute, mAP evaluates more thoroughly whether a concept h is truly learned by the model, by probing its performance on a large, representative set of negatives T, providing a more stringent test of compositional generalization (a sketch of this computation follows)." },
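As a concrete illustration of the metric contrast discussed in G.6, the following sketch computes per-concept AP and accuracy from model scores. sklearn's average_precision_score is used, and the 0.5 threshold for accuracy is an assumption.

```python
# Minimal sketch of per-concept average precision (AP) vs. accuracy.
# `scores` are model scores; `labels` are 1 for positives and 0 for examples
# drawn from a large negative set T. mAP then averages AP over concepts.
import numpy as np
from sklearn.metrics import average_precision_score

def concept_metrics(scores, labels, threshold=0.5):
    ap = average_precision_score(labels, scores)      # ranking-based
    acc = np.mean((scores >= threshold) == labels)    # threshold-based
    return ap, acc
```

A few strong negatives ranked above the positives hurt AP sharply, while threshold accuracy can remain high, which is why mAP opens a larger comp gap.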
{ "heading": "G.7 DETAILED RESULTS ON ALL THE SPLITS IN THE HARD NEGATIVES SETTING", "text": "In this section we provide the full results of all of the tested models on each of the splits considered in the paper, in the hard negatives setting. Tables 1-9 show the results of the different models, sorted in descending order based on mAP, for each of the splits considered in the paper, in the case where models do not have access to language." }, { "heading": "G.8 DETAILED RESULTS ON ALL THE SPLITS IN THE EASY NEGATIVES SETTING", "text": "In this section we provide the full results of all of the tested models on each of the splits considered in the paper, in the easy negatives setting. Tables 10-18 show the results of the different models, sorted in descending order based on mAP, for each of the splits considered in the paper, in the case where models do not have access to language. Note that we did not evaluate transformer models or sound models in this setting, as it is qualitatively less interesting than the hard negatives setting and is not the main focus of the paper." } ]
2,020
CURI: A BENCHMARK FOR PRODUCTIVE CONCEPT LEARNING UNDER UNCERTAINTY
SP:0a4cf8c20a5ac64540faf909d0e6d3af34e4036c
[ "This paper proposes a neurosymbolic module network that predicts a program structure following a dependency parse, populates that program's arguments, and executes it to answer numerical reasoning questions over text. They claim that, compared to Gupta et al. (2020), this approach doesn't require as many domain-specific heuristics to find gold programs or as much precomputation -- it is learned with weak supervision only (just the answers). The model has a number of pieces allowing it to reference entities, numbers, and dates in a cross-attentive fashion. Results show that on numerical questions from the DROP dataset, the model outperforms that of Gupta et al. and is competitive with other approaches when appropriate assumptions are made.", "The paper proposes a new model for numerical reasoning in machine comprehension. Given a passage and a query, the model outputs an arithmetic expression over numbers/dates in the passage (e.g. max(23, 26, 42)). The model is trained with weak supervision in the form of numerical answers only. This weak supervision is used to define the reward for reinforcement learning training. A key claimed advantage of the model compared to the prior art is that it trains end-to-end from the rewards as the only form of supervision. This is contrasted with neural module networks, which require program supervision for good performance, as well as GenBERT, which requires additional synthetic training data for pretraining. Two key quantitative results include: " ]
Neural Module Networks (NMNs) have been quite successful in incorporating explicit reasoning as learnable modules in various question answering tasks, including the most generic form of numerical reasoning over text in Machine Reading Comprehension (MRC). However, to achieve this, contemporary NMNs need strong supervision in executing the query as a specialized program over reasoning modules, and fail to generalize to more open-ended settings without such supervision. Hence we propose the Weakly-Supervised Neuro-Symbolic Module Network (WNSMN), trained with answers as the sole supervision for numerical reasoning based MRC. It learns to execute a noisy heuristic program, obtained from the dependency parsing of the query, as discrete actions over both neural and symbolic reasoning modules, and is trained end-to-end in a reinforcement learning framework with a discrete reward from answer matching. On the numerical-answer subset of DROP, WNSMN outperforms NMN by 32% and the reasoning-free language model GenBERT by 8% in exact match accuracy when trained under comparable weakly supervised settings. This showcases the effectiveness and generalizability of modular networks that can handle explicit discrete reasoning over noisy programs in an end-to-end manner.
[]
[ { "authors": [ "Jacob Andreas", "Marcus Rohrbach", "Trevor Darrell", "Dan Klein" ], "title": "Deep compositional question answering with neural module", "venue": "networks. CoRR,", "year": 2015 }, { "authors": [ "Jacob Andreas", "Marcus Rohrbach", "Trevor Darrell", "Dan Klein" ], "title": "Neural module networks", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Ghulam Ahmed Ansari", "Amrita Saha", "Vishwajeet Kumar", "Mohan Bhambhani", "Karthik Sankaranarayanan", "Soumen Chakrabarti" ], "title": "Neural program induction for kbqa without gold programs or query annotations", "venue": "In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Danqi Chen", "Christopher Manning" ], "title": "A fast and accurate dependency parser using neural networks", "venue": "In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP),", "year": 2014 }, { "authors": [ "Xinyun Chen", "Chen Liang", "Adams Wei Yu", "Denny Zhou", "Dawn Song", "Quoc V. Le" ], "title": "Neural symbolic reader: Scalable integration of distributed and symbolic representations for reading comprehension", "venue": "In 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "Bert: Pre-training of deep bidirectional transformers for language understanding, 2018", "venue": "URL http://arxiv.org/abs/ 1810.04805", "year": 2018 }, { "authors": [ "Dheeru Dua", "Yizhong Wang", "Pradeep Dasigi", "Gabriel Stanovsky", "Sameer Singh", "Matt Gardner" ], "title": "DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs", "venue": "In Proc. of NAACL,", "year": 2019 }, { "authors": [ "Avia Efrat", "Elad Segal", "M. Shoham" ], "title": "Tag-based multi-span extraction in reading", "venue": "comprehension. ArXiv,", "year": 2019 }, { "authors": [ "Mor Geva", "Ankit Gupta", "Jonathan Berant" ], "title": "Injecting numerical reasoning skills into language models", "venue": "In ACL,", "year": 2020 }, { "authors": [ "Xiaoxiao Guo", "Tim Klinger", "Clemens Rosenbaum", "Joseph P. 
Bigus", "Murray Campbell", "Ban Kawas", "Kartik Talamadupula", "Gerry Tesauro", "Satinder Singh" ], "title": "Learning to query, reason, and answer questions on ambiguous texts", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Nitish Gupta", "Kevin Lin", "Dan Roth", "Sameer Singh", "Matt Gardner" ], "title": "Neural module networks for reasoning over text", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Mohammad Javad Hosseini", "Hannaneh Hajishirzi", "Oren Etzioni", "Nate Kushman" ], "title": "Learning to solve arithmetic word problems with verb categorization", "venue": "In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP),", "year": 2014 }, { "authors": [ "Minghao Hu", "Yuxing Peng", "Zhen Huang", "Dongsheng Li" ], "title": "A multi-type multi-span network for reading comprehension that requires discrete reasoning", "venue": "In Proceedings of EMNLP,", "year": 2019 }, { "authors": [ "Ronghang Hu", "Jacob Andreas", "Marcus Rohrbach", "Trevor Darrell", "Kate Saenko" ], "title": "Learning to reason: End-to-end module networks for visual question answering", "venue": "In IEEE International Conference on Computer Vision, ICCV 2017,", "year": 2017 }, { "authors": [ "Ting Huang", "Zhi-Hong Deng", "Gehui Shen", "Xi Chen" ], "title": "A window-based self-attention approach for sentence", "venue": "encoding. Neurocomputing,", "year": 2020 }, { "authors": [ "Jambay Kinley", "Raymond Lin" ], "title": "Nabert+: Improving numerical reasoning in reading comprehension. 2019", "venue": "URL https://github.com/raylin1000/drop-bert", "year": 2019 }, { "authors": [ "Rik Koncel-Kedziorski", "Subhro Roy", "Aida Amini", "Nate Kushman", "Hannaneh Hajishirzi" ], "title": "MAWPS: A math word problem repository", "venue": "In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies,", "year": 2016 }, { "authors": [ "Chen Liang", "Jonathan Berant", "Quoc Le", "Kenneth D Forbus", "Ni Lao" ], "title": "Neural symbolic machines: Learning semantic parsers on freebase with weak supervision", "venue": "In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2017 }, { "authors": [ "Chen Liang", "Mohammad Norouzi", "Jonathan Berant", "Quoc V Le", "Ni Lao" ], "title": "Memory augmented policy optimization for program synthesis and semantic parsing", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Arvind Neelakantan", "Quoc V. 
Le", "Martı́n Abadi", "Andrew McCallum", "Dario Amodei" ], "title": "Learning a natural language interface with neural programmer", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Qiu Ran", "Yankai Lin", "Peng Li", "Jie Zhou", "Zhiyuan Liu" ], "title": "NumNet: Machine reading comprehension with numerical reasoning", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),", "year": 2019 }, { "authors": [ "Nils Reimers", "Iryna Gurevych" ], "title": "Sentence-bert: Sentence embeddings using siamese bert-networks", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Subhro Roy", "Dan Roth" ], "title": "Solving general arithmetic word problems", "venue": "In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing,", "year": 2015 }, { "authors": [ "Amrita Saha", "Ghulam Ahmed Ansari", "Abhishek Laddha", "Karthik Sankaranarayanan", "Soumen Chakrabarti" ], "title": "Complex program induction for querying knowledge bases in the absence of gold programs", "venue": "Transactions of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Adam Santoro", "David Raposo", "David G.T. Barrett", "Mateusz Malinowski", "Razvan Pascanu", "Peter W. Battaglia", "Tim Lillicrap" ], "title": "A simple neural network module for relational reasoning", "venue": "Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017,", "year": 2017 }, { "authors": [ "John Schulman", "Sergey Levine", "Pieter Abbeel", "Michael Jordan", "Philipp Moritz" ], "title": "Trust region policy optimization. volume", "venue": "Proceedings of Machine Learning Research, pp. 1889–1897,", "year": 2015 }, { "authors": [ "Sanjay Subramanian", "Ben Bogin", "Nitish Gupta", "Tomer Wolfson", "Sameer Singh", "Jonathan Berant", "Matt Gardner" ], "title": "Obtaining Faithful Interpretations from Compositional Neural Networks", "venue": "In Association for Computational Linguistics (ACL),", "year": 2020 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Ł ukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "R.J. Williams" ], "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "venue": "Machine Learning,", "year": 1992 }, { "authors": [ "Adams Wei Yu", "David Dohan", "Quoc Le", "Thang Luong", "Rui Zhao", "Kai Chen" ], "title": "Fast and accurate reading comprehension by combining self-attention and convolution", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Victor Zhong", "Caiming Xiong", "Richard Socher" ], "title": "Seq2sql: Generating structured queries from natural language using reinforcement learning, 2017", "venue": null, "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "End-to-end neural models have proven to be powerful tools for an expansive set of language and vision problems by effectively emulating the input-output behavior. However, many real problems like Question Answering (QA) or Dialog need more interpretable models that can incorporate explicit reasoning in the inference. In this work, we focus on the most generic form of numerical reasoning over text, encompassed by the reasoning-based MRC framework. A particularly challenging setting for this task is where the answers are numerical in nature, as in the popular MRC dataset DROP (Dua et al., 2019). Figure 1 shows the intricacies involved in the task: (i) passage and query language understanding, (ii) contextual understanding of the passage dates and numbers, and (iii) application of quantitative reasoning (e.g., max, not) over dates and numbers to reach the final numerical answer.\nThree broad genres of models have proven successful on the DROP numerical reasoning task. First, large-scale pretrained language models like GenBERT (Geva et al., 2020) use a monolithic Transformer architecture and decode numerical answers digit-by-digit. Though they deliver mediocre performance when trained only on the target data, their competency is derived from pretraining on massive synthetic data augmented with explicit supervision of the gold numerical reasoning. The second kind of models are the reasoning-free hybrid models like NumNet (Ran et al., 2019), NAQANet (Dua et al., 2019), NABERT+ (Kinley & Lin, 2019), MTMSN (Hu et al., 2019), and NeRd (Chen et al., 2020). They explicitly incorporate numerical computations in the standard extractive QA pipeline by learning a multi-type answer predictor over different reasoning types (e.g., max/min, diff/sum, count, negate) and directly predicting the corresponding numerical expression, instead of learning to reason. This is facilitated by exhaustively precomputing all possible outcomes of discrete operations and augmenting the training data with the reasoning-type supervision and numerical expressions that lead to the correct answer. Lastly, the most relevant class of models to consider for this work are the modular networks for reasoning. Neural Module Networks (NMN) (Gupta et al., 2020) is the first explicit reasoning based QA model which parses the query into a specialized program and executes it step-wise over learnable reasoning modules. However, to do so, apart from the exhaustive precomputation of all discrete operations, it also needs more fine-grained supervision of the gold program and the gold program execution, obtained heuristically by leveraging the abundance of templatized queries in DROP. While more pragmatic and more interpretable, both modular and hybrid networks are also tightly coupled with the additional supervision. For instance, the hybrid models cannot learn without it, and while NMN is the first to enable learning from the QA pair alone, it still needs finer-grained supervision for at least a part of the training data. With this, it manages to supersede the SoTA models NABERT and MTMSN on a carefully chosen subset of DROP using the supervision. However, NMN generalizes poorly to more open-ended settings where such supervision is not easy to handcraft. Need for symbolic reasoning. One striking characteristic of the modular methods is to avoid discrete reasoning by employing only learnable modules with an exhaustively precomputed space of outputs.
While they perform well on DROP, their modeling complexity grows arbitrarily with more complex non-linear numerical operations (e.g., exp, log, cos). In contrast, symbolic modular networks that execute the discrete operations are possibly more robust or pragmatic in this respect, as they remain unaffected by the operation complexity. Such discrete reasoning has indeed been incorporated for simpler, well-structured tasks like math word problems (Koncel-Kedziorski et al., 2016) or KB/TableQA (Zhong et al., 2017; Liang et al., 2018; Saha et al., 2019), with Deep Reinforcement Learning (RL) for end-to-end training. MRC, however, needs a more generalized framework of modular neural networks involving more fuzzy reasoning over noisy entities extracted from open-ended passages. In view of this, we propose a Weakly-Supervised Neuro-Symbolic Module Network (WNSMN): • A first attempt at numerical reasoning based MRC, trained with answers as the sole supervision; • Based on a generalized framework of dependency parsing of queries into noisy heuristic programs; • End-to-end training of neuro-symbolic reasoning modules in an RL framework with discrete rewards. To concretely compare WNSMN with contemporary NMN, consider the example in Figure 1. In comparison to our generalized query-parsing, NMN parses the query into a program form, MAX(FILTER(FIND(‘Carpenter’), ‘goal’)), which is executed step-wise by different learnable modules with an exhaustively precomputed output set. To train the network, it employs various forms of strong supervision, such as gold program operations and gold query-span attention at each step of the program, and gold execution, i.e., supervision of the passage numbers (23, 26, 42) to execute the MAX operation on. While NMN can only handle the 6 reasoning categories that the supervision was tailored to, WNSMN focuses on the full DROP with numerical answers (called DROP-num), which involves more diverse reasoning on more open-ended questions. We empirically compare WNSMN on DROP-num with the SoTA NMN and GenBERT that allow learning with partial or no strong supervision. Our results showcase that the proposed WNSMN achieves 32% better accuracy than NMN in the absence of one or more types of supervision, and performs 8% better than GenBERT when the latter is fine-tuned only on DROP in a comparable setup, without additional synthetic data having explicit supervision." }, { "heading": "2 MODEL: WEAKLY SUPERVISED NEURO-SYMBOLIC MODULE NETWORK", "text": "We now describe our proposed WNSMN, which learns to infer the answer based on weak supervision of the QA pair by generating the program form of the query and executing it through explicit reasoning. Parsing Query into Programs To keep the framework generic, we use a simplified representation of the Stanford dependency parse tree (Chen & Manning, 2014) of the query to get a generalized program (Appendix A.5). First, a node is constructed for the subtree rooted at each child of the root by merging its descendants in the original word order. Next, an edge is added from the left-most node (which we call the root clause) to every other node. Then, by traversing left to right, each node is organized into a step of a program having a linear flow. For example, the program obtained in Figure 1 is X1 = (‘which is the longest’); X2 = (‘goal by Carpenter’, X1); Answer = Discrete-Reasoning(‘which is the longest’, X2). Each program step consists of two types of arguments: (i) a Query Span Argument, obtained from the corresponding node, which indicates the query segment referred to in that program step, e.g., ‘goal by Carpenter’ in Step 2; (ii) Reference Argument(s), obtained from the incoming edges to that node, which refer to the previous steps of the program that the current one depends on, e.g., X1 in Step 2. Next, a final step of the program is added, which has the reference argument as the leaf node(s) obtained in the above manner and the query span argument as the root-clause. This step is specifically responsible for handling the discrete operation, enabled by the root-clause, which is often indicative of the kind of discrete reasoning involved (e.g., max). However, this being a noisy heuristic, the QA model needs to be robust to such noise and additionally rely on the full query representation in order to predict the discrete operation. For simplicity we limit the number of reference arguments to 2 (a sketch of this parsing heuristic follows)." },
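Below is a minimal sketch of this parsing heuristic. spaCy is used as an illustrative substitute for the Stanford dependency parser used in the paper, and the leaf handling in the final step is simplified to the last program step.

```python
# Minimal sketch of the heuristic query-to-program parsing. spaCy stands in
# for the Stanford dependency parser used in the paper; leaf handling in the
# final step is simplified.
import spacy

nlp = spacy.load("en_core_web_sm")

def query_to_program(query):
    doc = nlp(query)
    root = next(iter(doc.sents)).root
    # One node per child of the root: its subtree merged in word order.
    nodes = [" ".join(t.text for t in sorted(c.subtree, key=lambda tok: tok.i))
             for c in sorted(root.children, key=lambda tok: tok.i)]
    if not nodes:
        nodes = [doc.text]
    root_clause = nodes[0]   # left-most node; hints at the discrete operation
    program = [{"span": nodes[0], "refs": []}]        # X1
    for span in nodes[1:]:                            # edge X1 -> Xk
        program.append({"span": span, "refs": [1]})
    program.append({"span": root_clause,              # final discrete step
                    "refs": [len(program)]})
    return program
```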
{ "heading": "2.1 PROGRAM EXECUTION", "text": "Our proposed WNSMN learns to execute the program over the passage in three steps. In the preprocessing step, it identifies numbers and dates from the passage, and maintains them as separate canonicalized entity-lists along with their mention locations. Next, it learns an entity-specific cross-attention model to rank the entities w.r.t. their query-relevance (§2.1.1), and then samples relevant entities as discrete arguments (§2.1.2) and executes appropriate discrete operations on them to reach the answer. An RL framework (§2.1.3) trains it end-to-end with the answer as the sole supervision." }, { "heading": "2.1.1 ENTITY-SPECIFIC CROSS ATTENTION FOR INFORMATION EXTRACTION", "text": "To rank the query-relevant passage entities, we model the passage, program and entities jointly.\nModeling interaction between program and passage This module (Figure 2, left) learns to associate query span arguments of the program with the passage. For this, similar to NMN, we use a BERT-base pretrained encoder (Devlin et al., 2018) to get contextualized token embeddings of the passage and the query span argument of each program step, respectively denoted by $P_k$ and $Q_k$ for the $k$'th program step. Based on it, we learn a similarity matrix $S \in \mathbb{R}^{l \times n \times m}$ between the program and passage, where $l$, $n$, and $m$ respectively are the program length, query span argument length, and passage length (in tokens). Each $S_k \in \mathbb{R}^{n \times m}$ represents the affinity over the passage tokens for the $k$'th program argument and is defined as $S_k(i, j) = w^T [Q_{ki}; P_{kj}; Q_{ki} \odot P_{kj}]$, where $w$ is a learnable parameter and $\odot$ is element-wise multiplication. From this, an attention map $A_k$ is computed over the passage tokens for the $k$'th program argument as $A_k(i, j) = \mathrm{softmax}_j(S_k(i, j)) = \exp(S_k(i,j)) / \sum_j \exp(S_k(i,j))$. Similarly, for the $i$'th token of the $k$'th program argument, the cumulative attention $a_{ki}$ w.r.t. the passage is $a_{ki} = \mathrm{softmax}_i(\sum_j S_k(i, j))$. A linear combination of the attention map $A_k(i, \cdot)$ weighted by $a_{ki}$ gives the expected passage attention for the $k$'th step, $\bar{\alpha}_k = \sum_i a_{ki} A_k(i, \cdot) \in \mathbb{R}^m$. Span-level smoothed attention. To facilitate information spotting and extraction over contiguous spans of text, we regularize the passage attention so that the attention on a passage token is high if the attention over its neighbors is so. We achieve this by adopting a heuristic smoothing technique (Huang et al., 2020), taking a sliding window of different lengths $\omega \in \{1, 2, \ldots, 10\}$ over the passage, and replacing the token-level attention with the attention averaged over the window. This results in 10 different attention maps over the passage for the $k$'th step of the program: $\{\bar{\alpha}^{\omega}_k \mid \omega \in \{1, 2, \ldots, 10\}\}$. Soft span prediction. This network takes a multi-scaled (Gupta et al., 2020) version of $\bar{\alpha}^{\omega}_k$, by multiplying the attention map with $|s|$ different scaling factors ($s = \{1, 2, 5, 10\}$), yielding an $|s|$-dimensional representation for each passage token, i.e., $\bar{\alpha}^{\omega}_k \in \mathbb{R}^{m \times |s|}$. This is then passed through an $L$-layered stacked self-attention transformer block (Vaswani et al., 2017), which encodes it to dimension $m \times d$, followed by a linear layer of dimension $d \times 1$, to obtain the span prediction logits: $\alpha^{\omega}_k = \mathrm{Linear}(\mathrm{Transformer}(\mathrm{MultiScaling}(\bar{\alpha}^{\omega}_k))) \in \mathbb{R}^m$. Further, the span prediction logits at each program step (say $k$) are additively combined with those from the previous steps referenced in the current one, through the reference argument $\mathrm{ref}(k)$ at step $k$, i.e., $\alpha^{\omega}_k = \alpha^{\omega}_k + \sum_{k' \in \mathrm{ref}(k)} \alpha^{\omega}_{k'}$.\nModeling interaction between program and number/date entities This module (Figure 2, right) facilitates an entity-based information spotting capability, that is, given a passage mention of a number/date entity relevant to the query, the model should be able to attend to the neighborhood around it. To do this, for each program step, we first compute a passage-tokens-to-number-tokens attention map $A^{num} \in \mathbb{R}^{l \times m \times N}$, where $N$ is the number of unique number entities. Note that this attention map is different for each program step, as the contextual BERT encoding of the passage tokens ($P_k$) is coupled with the program's span argument of that step. At the $k$-th step, the row $A^{num}_k(i, \cdot)$ denotes the probability distribution over the $N$ unique number tokens w.r.t. the $i$-th passage token. The attention maps are obtained by a softmax normalization of each row of the corresponding passage-tokens-to-number-tokens similarity matrix, $S^{num}_k \in \mathbb{R}^{m \times N}$ for $k = 1, \ldots, l$, where the elements of $S^{num}_k$ are computed as $S^{num}_k(i, j) = P^T_{ki} W_n P_{k n_j}$, with $W_n \in \mathbb{R}^{d \times d}$ being a learnable projection matrix and $n_j$ being the passage location of the $j$-th number token. These similarity scores are additively aggregated over all mentions of the same number entity in the passage. The relation between program and entities is then modeled as $\tau^{\omega}_k = \mathrm{softmax}(\sum_i \alpha^{\omega}_{ki} A^{num}_k(i, \cdot)) \in \mathbb{R}^N$, which gives the expected distribution over the $N$ number tokens for the $k$-th program step, using $\omega$ as the smoothing window size. The final stacked attention map obtained for the different windows is $T^{num}_k = \{\tau^{\omega}_k \mid \omega \in \{1, 2, \ldots, 10\}\}$. Similarly, for each program step $k$, we also compute a separate stacked attention map $T^{date}_k$ over the unique date tokens, parameterized by a different $W_d$. A critical requirement for a meaningful attention over entities is to incorporate information extraction capability in the number and date attention maps $A^{num}$ and $A^{date}$, by enabling the model to attend over the neighborhood of the relevant entity mentions. This is achieved by minimizing the unsupervised auxiliary losses $L^{num}_{aux}$ and $L^{date}_{aux}$ in the training objective, which impose an inductive bias over the number and date entities, similar to Gupta et al. (2020). Its purpose is to ensure that the passage attention is densely distributed inside the neighborhood of $\pm\Omega$ (a hyperparameter, e.g., 10) of the passage location of the entity mention, without imposing any bias on the attention distribution outside the neighborhood. Consequently, it maximises the log-form of the cumulative likelihood of the attention distribution inside the window and the entropy of the attention distribution outside of it:\n(1) $L^{num}_{aux} = -\frac{1}{l}\sum_{k=1}^{l}\sum_{i=1}^{m}\Big[\log\Big(\sum_{j=1}^{N}\mathbb{1}_{n_j \in [i \pm \Omega]}\, a^{num}_{kij}\Big) - \sum_{j=1}^{N}\mathbb{1}_{n_j \notin [i \pm \Omega]}\, a^{num}_{kij}\log\big(a^{num}_{kij}\big)\Big]$\nwhere $\mathbb{1}$ is the indicator function and $a^{num}_{kij} = A^{num}_k(i, j)$. $L^{date}_{aux}$ for date entities is similarly defined (a code sketch of this loss follows)." },
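The following is a minimal PyTorch sketch of this auxiliary loss for a single program step; the averaging over steps and the exact normalization are simplifications of Eq. 1, whose signs are reconstructed from the surrounding description.

```python
# Minimal sketch of the number auxiliary loss (Eq. 1) for one program step.
# `attn` holds a^num_kij for a fixed k: rows are passage tokens, columns the
# N unique number entities; `locs` gives each entity's passage position n_j.
import torch

def aux_loss_step(attn, locs, omega=10, eps=1e-12):
    m, n = attn.shape
    i = torch.arange(m).unsqueeze(1)                  # passage positions
    inside = (locs.unsqueeze(0) - i).abs() <= omega   # (m, N): n_j in [i±Ω]
    # log cumulative attention inside the window (maximized) ...
    log_in = torch.log((attn * inside).sum(dim=1) + eps)
    # ... plus the entropy of the attention outside the window (maximized).
    out = attn * (~inside)
    ent_out = -(out * torch.log(out + eps)).sum(dim=1)
    return -(log_in + ent_out).mean()                 # minimized as a loss
```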
{ "heading": "2.1.2 MODELING DISCRETE REASONING", "text": "The model next learns to execute a single step1 of discrete reasoning (Figure 3) based on the final program step. The final step contains (i) the root-clause of the query, which often indicates the type of discrete operation (e.g., ‘what is the longest’ indicates max, ‘how many goals’ indicates count), and (ii) the reference argument indicating the previous program steps the final step depends on. Each previous step (say $k$) is represented as stacked attention maps $T^{num}_k$ and $T^{date}_k$, obtained from §2.1.1. Operator Sampling Network Owing to the noisy nature of the program, the operator network takes as input: (i) BERT's [CLS] representation for the passage-query pair and the LSTM (Hochreiter & Schmidhuber, 1997) encoding (randomly initialized) of the BERT contextual representation of (ii) the root-clause from the final program step and (iii) the full query (w.r.t. the passage), to make two predictions:\n1This is a reasonable assumption for DROP, with a recall of 90% on the training set. However, it does not limit the generalizability of WNSMN, as with standard beam search it is possible to scale to an l-step MDP.\n• Entity-Type Predictor Network, an Exponential Linear Unit (Elu) activated fully-connected layer followed by a softmax that outputs the probabilities of sampling either date or number types.\n• Operator Predictor Network, a similar Elu-activated fully connected layer followed by a softmax which learns a probability distribution over a fixed catalog of 6 numerical and logical operations (count, max, min, sum, diff, negate), each represented with learnable embeddings.\nApart from the diff operator, which acts only on two arguments, all other operations can take any arbitrary number of arguments. Also, some of these operations can be applied only on numbers (e.g., sum, negate) while others can be applied on both numbers and dates (e.g., max, count). Argument Sampling Network This network learns to sample date/number entities as arguments for the sampled discrete operation, given the entity-specific stacked attentions ($T^{num}_k$ and $T^{date}_k$) for each previous step (say, $k$) that appears in the reference argument of the final program step. In order to allow sampling of a fixed or arbitrary number of arguments, the argument sampler learns four types of networks, each modeled with an $L$-layered stacked self-attention based Transformer block (with output dimension $d$) followed by different non-linear layers embodying their functionality and a softmax normalization to get the corresponding probability of the argument sampling (Figure 3).\n• Sample $n \in \{1, 2\}$ Argument Module: $\mathrm{softmax}(\mathrm{Elu}(\mathrm{Linear}_{d \times n}(\mathrm{Transformer}(T))))$, outputs a distribution over the single entities ($n = 1$) or a joint distribution over the entity-pairs ($n = 2$).\n• Counter Module: $\mathrm{softmax}(\mathrm{Elu}(\mathrm{Linear}_{d \times 10}(\text{CNN-Encoder}(\mathrm{Transformer}(T)))))$, predicts a distribution over possible count values ($\in \{1, \ldots, 10\}$) of the number of entity arguments to sample.\n• Entity-Ranker Module: $\mathrm{softmax}(\mathrm{PRelu}(\mathrm{Linear}_{d \times 1}(\mathrm{Transformer}(T))))$, learns to rerank the entities and outputs a distribution over all the entities given the stacked attention maps as input.\n• Sample Arbitrary Argument: Multinomial(Entity-Ranked Distribution, Counter Prediction).\nDepending on the number of arguments needed by the discrete operation and the number of reference arguments in the final program step, the model invokes one of Sample {1, 2, Arbitrary} Argument. For instance, if the sampled operator is diff, which needs 2 arguments, and the final step has 1 or 2 reference arguments, then the model respectively invokes either Sample 2 Argument or Sample 1 Argument on the stacked attention $T$ corresponding to each reference argument. And, for operations needing an arbitrary number of arguments, the model invokes Sample Arbitrary Argument: it first predicts the number of entities $c \in \{1, \ldots, 10\}$ to sample using the Counter Network, and then samples from the multinomial distribution based on the joint of $c$-combinations of entities constructed from the output distribution of the Entity Ranker module (a sampling sketch follows)." },
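A minimal sketch of the arbitrary-argument case above: a count c is drawn from the Counter head and c entities are then sampled without replacement from the Entity-Ranker distribution. Treating the joint log-probability as the sum of the count and entity terms is a simplification of the c-combination joint described in the text.

```python
# Minimal sketch of Sample-Arbitrary-Argument: sample a count c from the
# Counter head, then draw c entities (without replacement) from the
# Entity-Ranker distribution. Both heads are simplified stand-ins for the
# transformer-based modules described above.
import torch

def sample_arbitrary(counter_logits, ranker_logits):
    # counter_logits: (10,) over counts 1..10; ranker_logits: (n_entities,)
    count_dist = torch.distributions.Categorical(logits=counter_logits)
    c = int(count_dist.sample()) + 1               # c in {1, ..., 10}
    probs = torch.softmax(ranker_logits, dim=-1)
    idx = torch.multinomial(probs, num_samples=min(c, probs.numel()),
                            replacement=False)
    # Simplified log-probability of the sampled action.
    logp = count_dist.log_prob(torch.tensor(c - 1)) + torch.log(probs[idx]).sum()
    return idx, logp                               # sampled entities, log-prob
```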
Therefore, with the learnable context representation s (x) of input x, the end-to-end objective is to jointly learn {✓, } that maximises the expected reward R(x, a) 2 { 1,+1} over the sampled actions (a), based on exact match with the gold answer. To mitigate the learning instability in such sparse confounding reward settings, we intialize with a simpler iterative hard-Expectation Maximization (EM) learning objective, called Iterative Maximal Likelihood (IML) (Liang et al., 2017). With the assumption that the sampled actions are extensive enough to contain the gold answer, IML greedily searches for the good actions by fixing the policy parameters, and then maximises the likelihood of the best action that led to the highest reward. We define good actions (Agood) as those that result in the gold answer itself and take a conservative approach of defining best among them as simply the most likely one according to the current policy.\n(2)JIML(✓, ) = X\nx\nmax a2Agood logP✓, (a|x)\nAfter the IML initialization, we switch to REINFORCE as the learning objective after a few epochs, where the goal is to maximise the expected reward (JRL(✓, ) = P x EP✓, (a|x)R(x, a)) as\n(3)r(✓, )JRL = X\nx\nX a2A P✓, (a|x)(R(x, a) B(x))r✓, (logP✓, (a|x))\nwhere B(x) is simply the average (baseline) reward obtained by the policy for that instance x. Further, in order to mitigate overfitting, in addition to L2-regularization and dropout, we also add entropy based regularization over the argument sampling distribution, in each of the sampling networks." }, { "heading": "3 EXPERIMENTS", "text": "We now empirically compare the exact-match performance of WNSMN with SoTA baselines on versions of DROP dataset and also examine how it fares in comparison to strong supervised skylines. The Primary Baselines for WNSMN are the explicit reasoning based NMN (Gupta et al., 2020) which uses additional strong supervision and the BERT based language model GenBERT (Geva et al., 2020) that does not embody any reasoning and autoregressively generates numeric answer tokens. As the Primary Dataset we use DROP-num, the subset of DROP with numerical answers. This subset contains 45K and 5.8K instances respectively from the standard DROP train and development sets. Originally, NMN was showcased on a very specific subset of DROP, restricted to the 6 reasoning-types it could handle, out of which three (count, date-difference, extract-number) have numeric answers. This subset comprises 20K training and 1.8K development instances, out of which only 10K and 800 instances respectively have numerical answers. We further evaluate on this numerical subset, referred to as DROP-Pruned-num. In both the cases, the training data was randomly split into 70%:30% for train and internal validation and the standard DROP development set was treated as the Test set.\nFigure 4 shows the t-SNE plot of pretrained Sentence-BERT (Reimers & Gurevych, 2019) encoding of all questions in DROPnum-Test and also the DROP-Pruned-num-Test subset with different colors (red, green, yellow) representing different types. Not only are the DROP-num questions more diverse than the carefully chosen DROP-Pruned-num subset, the latter also forms well-separated clusters corresponding to the three reasoning types. Additionally, the average perplexity (using nltk) of the DROP-Pruned-num and DROP-num questions was found to be 3.9 and 10.65 respectively, further indicating the comparatively open-ended nature of the former. 
For the primary baselines NMN and GenBERT, we report the performance on in-house trained models on the respective datasets, using\nthe code open-sourced by the authors. The remaining results, taken from Geva et al. (2020), Kinley & Lin (2019), and Ran et al. (2019); refer to models trained on the full DROP dataset. All models use the same pretrained BERT-base. Also note that a primary requirement of all models other than GenBERT and WNSMN i.e., for NMN, MTMSN, NABERT, NAQANET, NumNet, is the exhaustive enumeration of the output space of all possible discrete operations. This simplifies the QA task to a classification setting, thus alleviating the need for discrete reasoning in the inference processs.\nTable 1 presents our primary results on DROP-num, comparing the performance of WNSMN (accuracy of the top-1 sampled action by the RL agent) with various ablations of NMN (provided in the authors’ implementation) by removing atleast one of Program, Execution, and Query Attention supervision (Appendix A.4.1) and GenBERT models with pretrained BERT that are finetuned on DROP or DROP-num (denoted as GenBERT and GenBERT-num). For a fair comparison with our weakly supervised model, we do not treat NMN with all forms of supervision or GenBERT model pretrained with additional synthetic numerical and textual data as comparable baselines. Note that these GenBERT variants indeed enjoy strong reasoning supervision in terms of gold arithmetic expressions provided in these auxiliary datasets. NMN’s performance is abysmally poor, indeed a drastic degradation in comparison to its performance on the pruned DROP subset reported by Gupta et al. (2020) and in our subsequent experiments in Table 2. This can be attributed to their limitation in handling more diverse classes of reasoning and open-ended queries in DROP-num, further exacerbated by the lack of one or more types of strong supervision.2 Our earlier analysis on the complexity of the questions in the subset and full DROPnum further quantify the relative difficulty level of the latter. On the\nother hand, GenBERT delivers a mediocre performance, while GenBERT-num degrades additionally by 4%, as learning from numerical answers alone further curbs the language modeling ability. Our model performs significantly better than both these baselines, surpassing GenBERT by 8% and the NMN baseline by around 32%. This showcases the significance of incorporating explicit reasoning in neural models in comparison to the vanila large scale LMs like GenBERT. It also establishes the generalizability of such reasoning based models to more open-ended forms of QA, in comparison to contemporary modular networks like NMN, owing to its ability to handle both learnable and discrete modules in an end-to-end manner.\nNext, in Table 2, we compare the performance of the proposed WNSMN with the same NMN variants (as in Table 1) on DROP-Pruned-num. Some of the salient observations are: (i) WNSMN in fact reaches a performance quite close to the strongly supervised NMN variant (first row), and is able to attain at least an improvement margin of 4% over all other variants obtained by removing one or more types of supervision. 
This is despite all variants of NMN additionally enjoying the exhaustive precomputation of the output space of possible numerical answers; (ii) WNSMN suffers only in the case of extract-number type operations (e.g., max, min) that involve a more complex process of sampling an arbitrary number of arguments; (iii) the performance drop of NMN is not very large when all or none of the strong supervision is present, possibly because of the limited diversity over reasoning types and query language; and (iv) Query-Attention supervision in fact adversely affects NMN's performance in the absence of the program and execution supervision or both, possibly owing to an undesirable biasing effect. However, when both supervisions are available, query-attention is able to improve the model performance by 5%. Further, we believe the test set of 800 instances is too small to get an unbiased reflection of the models' performances.\n2Both the results and limitations of NMN in Tables 1 and 2 were confirmed by the authors of NMN as well.\nIn Table 3, we additionally inspect recall over the top-k actions sampled by WNSMN to estimate how it fares in comparison to the strongly supervised skylines: (i) NMN with all forms of strong supervision; (ii) GenBERT variants +ND, +TD, and +ND+TD, further pretrained on synthetic Numerical Data, Textual Data, and both; (iii) reasoning-free hybrid models like MTMSN (Hu et al., 2019), NumNet (Ran et al., 2019), NAQANet (Dua et al., 2019), and NABERT, NABERT+ (Kinley & Lin, 2019). Note that both NumNet and NAQANet do not use pretrained BERT. MTMSN achieves SoTA performance through a supervised framework of training specialized predictors for each reasoning type to predict the numerical expression directly instead of learning to reason. While the top-1 performance of WNSMN (in Table 1) is 4% worse than NABERT, Recall@top-2 is equivalent to the strongly supervised NMN, top-5 and top-10 are comparable to NABERT+, NumNet and the GenBERT models +ND, +TD, and top-20 nearly achieves SoTA. Such promising recall over the top-k actions suggests that more sophisticated RL algorithms with better exploration strategies can possibly bridge this performance gap.\n4 ANALYSIS & FUTURE WORK\nPerformance Analysis Despite the notorious instabilities of\nMore Stable RL Framework The training trend in Figure 5(a) shows early saturation, and the module-wise performance indicates overfitting despite the regularization tricks in §2.1.3 and Appendix A.6. While more stable RL algorithms like Actor-Critic, Trust Region Policy Optimization (Schulman et al., 2015) or Memory Augmented Policy Optimization (Liang et al., 2018) can mitigate these issues, we leave them for future exploration. Also, though this work's objective was to train module networks with weak supervision, the sparse confounding rewards in the exponential action space indeed render the RL training quite challenging. One practical future direction to bridge the performance gap would be to pretrain with strong supervision on at least a subset of reasoning categories or on more constrained forms of synthetic questions, similar to GenBERT. Such a setting would require inspection and evaluation of the generalizability of the RL model to unknown reasoning types or more open-ended questions."
}, { "heading": "5 RELATED WORK", "text": "In this section we briefly compare our proposed WNSMN to the two closest genres of models that have proven quite successful on DROP3: i) reasoning-free hybrid models NumNet, NAQANet, NABERT, NABERT+, MTMSN, and NeRd; ii) modular networks for reasoning, i.e., NMN. Their main distinction from WNSMN is that, in order to address the challenges of weak supervision, they obtain program annotation from the QA pairs through i) various heuristic parsing of the templatized queries in DROP to get supervision of the reasoning type (max/min, diff/sum, count, negate); ii) exhaustive search over all possible discrete operations to get supervision of the arguments in the reasoning.\nSuch heuristic supervision makes the learning problem significantly simpler in the following ways:\n• These models enjoy supervision of specialized programs that have explicit information of the type of reasoning to apply for a question, e.g., SUM(10,12) • A simplistic (contextual BERT-like) reader model to read query-related information from the passage, trained with direct supervision of the query span arguments at each step of the program • A programmer model that can be directly trained to decode the specialized programs • Executing numerical functions (e.g., difference, count, max, min) either by i) training purely neural modules in a strong supervised setting using the annotated programs or by ii) performing the actual discrete operation as a post-processing step on the model's predicted program. For each of these previous works, it is possible to directly apply the learning objective on the space of decoded programs, without having to deal with the discrete answer or any non-differentiability.\nHowever, such heuristic techniques of program annotation or exhaustive search are not practical as the language of questions or the space of discrete operations becomes more complex. Hence WNSMN learns in the challenging weakly supervised setting without any additional annotation, through\n• A noisy symbolic query decomposition that is oblivious to the reasoning type and simply based on generic text parsing techniques • An entity-specific cross attention model extracting passage information relevant to each step of the decomposed query and learning an attention distribution over the entities of each type • Learning to apply discrete reasoning by employing neural modules that learn to sample the operation and the entity arguments • Leveraging a combination of neural and discrete modules when executing the discrete operation, instead of using only neural modules, which need strong supervision of the programs for learning the functionality • A fundamentally different learning strategy, incorporating inductive bias through auxiliary losses and Iterative Maximal Likelihood for a more conservative initialization, followed by REINFORCE\nThese reasoning-free hybrid models are not comparable with WNSMN because of their inability to learn in the absence of any heuristic program annotation. Instead of learning to reason based on only the final answer supervision, they reduce the task to learning to decode the program, based on heuristic program annotation.
NMN is the only reasoning-based model that employs various auxiliary losses to learn even in the absence of any additional supervision, similar to us.\nTo our knowledge, WNSMN is the first work on modular networks for fuzzy reasoning over text in the RC framework to handle the challenging cold-start problem of the weakly supervised setting without needing any additional specialized supervision of heuristic programs." }, { "heading": "6 CONCLUSION", "text": "In this work, we presented the Weakly Supervised Neuro-Symbolic Module Network for numerical reasoning based MRC, built on a generalized framework of query parsing to noisy heuristic programs. It trains both neural and discrete reasoning modules end-to-end in a Deep RL framework with only a discrete reward based on exact answer match. Our empirical analysis on the numerical-answer-only subset of DROP showcases significant performance improvement of the proposed model over SoTA NMNs and the Transformer based language model GenBERT, when trained in comparable weakly supervised settings. While, to our knowledge, this is the first effort towards training modular networks for fuzzy reasoning over RC in a weakly-supervised setting, there is significant scope for improvement, such as employing a more sophisticated RL framework or leveraging the pretraining of reasoning.\n3A more detailed related work section is presented in the Appendix A.4" } ]
2,020
null
SP:28475d91bb10fb0a3a8add77cca7505a839e145d
[ "This paper proposes a novel lambda layer to capture long-range interactions by transforming available contexts into linear functions, termed lambdas, and applying these linear functions to each input separately. The proposed Lambda Network achieves good performance on ImageNet classification, COCO object detection and instance segmentation tasks. The proposed lambda convolution is much denser than the attention-based layer, thus reducing parameters and complexity. However, there are still several weaknesses in this paper. 1) Generalization of the proposed lambda convolution layer. For example, how about the performance of the lambda layer when combined with lighter convolutional networks, e.g., MobileNet? How about the performance with much deeper networks aimed at the highest performance? 2) The source code is suggested to be released for more details. 3) Check the typos in the paper. ", "This paper presents an efficient method to model long-range interaction. The proposed lambda layer removes the nonlinearity of the original attention operation and makes the matrix multiplication independent of the context, hence skipping expensive computation and storage of large attention maps. Two kinds of lambda functions in the lambda layer, i.e., the content lambda and the position lambda, allow the model to capture both dense content and long-range interaction. In addition, the lambda layer can be further extended to working with local context and to being more efficient by decomposing a query into multiple short ones. Its effectiveness has been demonstrated in extensive experiments on different backbone network architectures and tasks. Its speed-accuracy tradeoff performs very favorably against SOTA methods." ]
We present lambda layers – an alternative framework to self-attention – for capturing long-range interactions between an input and structured contextual information (e.g. a pixel surrounded by other pixels). Lambda layers capture such interactions by transforming available contexts into linear functions, termed lambdas, and applying these linear functions to each input separately. Similar to linear attention, lambda layers bypass expensive attention maps, but in contrast, they model both content and position-based interactions, which enables their application to large structured inputs such as images. The resulting neural network architectures, LambdaNetworks, significantly outperform their convolutional and attentional counterparts on ImageNet classification, COCO object detection and instance segmentation, while being more computationally efficient. Additionally, we design LambdaResNets, a family of hybrid architectures across different scales, that considerably improves the speed-accuracy tradeoff of image classification models. LambdaResNets reach excellent accuracies on ImageNet while being 3.2–4.4x faster than the popular EfficientNets on modern machine learning accelerators. In large-scale semi-supervised training with an additional 130M pseudo-labeled images, LambdaResNets achieve up to 86.7% ImageNet accuracy while being 9.5x faster than EfficientNet NoisyStudent and 9x faster than a Vision Transformer with comparable accuracies1.
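For illustration, a minimal sketch of the content-based half of a lambda layer as described in the abstract: the context is summarized into a linear function (a lambda) that is then applied to each query independently. Position lambdas and the multi-query variant from the paper are omitted.

```python
# Minimal sketch of a content-only lambda layer: keys softmax-normalized over
# the context positions summarize the context into a linear map (the lambda),
# which is applied to each query separately -- no attention map is formed.
import torch
import torch.nn as nn

class ContentLambda(nn.Module):
    def __init__(self, dim, dim_k=16, dim_v=None):
        super().__init__()
        dim_v = dim_v or dim
        self.to_q = nn.Linear(dim, dim_k, bias=False)
        self.to_k = nn.Linear(dim, dim_k, bias=False)
        self.to_v = nn.Linear(dim, dim_v, bias=False)

    def forward(self, x, context=None):
        # x: (batch, n, dim) queries; context: (batch, m, dim), defaults to x.
        context = x if context is None else context
        q = self.to_q(x)                              # (b, n, k)
        k = torch.softmax(self.to_k(context), dim=1)  # normalize over m
        v = self.to_v(context)                        # (b, m, v)
        lam = torch.einsum("bmk,bmv->bkv", k, v)      # content lambda
        return torch.einsum("bnk,bkv->bnv", q, lam)   # apply to each query
```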
[ { "affiliations": [], "name": "Irwan Bello" } ]
[ { "authors": [ "Jimmy Ba", "Geoffrey Hinton", "Volodymyr Mnih", "Joel Z. Leibo", "Catalin Ionescu" ], "title": "Using fast weights to attend to the recent past", "venue": null, "year": 2016 }, { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Irwan Bello", "Hieu Pham", "Quoc V. Le", "Mohammad Norouzi", "Samy Bengio" ], "title": "Neural combinatorial optimization with reinforcement learning. 2016", "venue": "URL http://arxiv.org/abs/1611", "year": 2016 }, { "authors": [ "Irwan Bello", "Barret Zoph", "Ashish Vaswani", "Jonathon Shlens", "Quoc V. Le" ], "title": "Attention augmented convolutional networks. CoRR, abs/1904.09925, 2019", "venue": null, "year": 1904 }, { "authors": [ "Irwan Bello", "William Fedus", "Xianzhi Du", "Ekin D. Cubuk", "Aravind Srinivas", "Tsung-Yi Lin", "Jonathon Shlens", "Barret Zoph" ], "title": "Revisiting resnets: Improved training methodologies and scaling", "venue": null, "year": 2021 }, { "authors": [ "Iz Beltagy", "Matthew E. Peters", "Arman Cohan" ], "title": "Longformer: The long-document transformer", "venue": null, "year": 2020 }, { "authors": [ "Denny Britz", "Melody Y. Guan", "Minh-Thang Luong" ], "title": "Efficient attention using a fixed-size memory representation", "venue": "CoRR, abs/1707.00110,", "year": 2017 }, { "authors": [ "Andrew Brock", "Jeff Donahue", "Karen Simonyan" ], "title": "Large scale gan training for high fidelity natural image synthesis", "venue": null, "year": 2019 }, { "authors": [ "Nicolas Carion", "Francisco Massa", "Gabriel Synnaeve", "Nicolas Usunier", "Alexander Kirillov", "Sergey Zagoruyko" ], "title": "End-to-end object detection with transformers", "venue": null, "year": 2020 }, { "authors": [ "Mark Chen", "Alec Radford", "Rewon Child", "Jeff Wu", "Heewoo Jun", "Prafulla Dhariwal", "David Luan", "Ilya Sutskever" ], "title": "Generative pretraining from pixels. 2020a. URL https://openai.com/ blog/image-gpt", "venue": null, "year": 2020 }, { "authors": [ "Yen-Chun Chen", "Linjie Li", "Licheng Yu", "Ahmed El Kholy", "Faisal Ahmed", "Zhe Gan", "Yu Cheng", "Jingjing Liu" ], "title": "Uniter: Universal image-text representation learning", "venue": "A2-nets: Double attention networks. CoRR,", "year": 2018 }, { "authors": [ "Rewon Child", "Scott Gray", "Alec Radford", "Sutskever Ilya" ], "title": "Generating long sequences with sparse transformers", "venue": "arXiv preprint arXiv:1904.10509", "year": 1904 }, { "authors": [ "Krzysztof Choromanski", "Valerii Likhosherstov", "David Dohan", "Xingyou Song", "Andreea Gane", "Tamas Sarlos", "Peter Hawkins", "Jared Davis", "Afroz Mohiuddin", "Lukasz Kaiser", "David Belanger", "Lucy Colwell", "Adrian Weller" ], "title": "Rethinking attention with performers", "venue": null, "year": 2020 }, { "authors": [ "Jean-Baptiste Cordonnier", "Andreas Loukas", "Martin Jaggi" ], "title": "On the relationship between selfattention and convolutional layers. 2019", "venue": "URL http://arxiv.org/abs/1911.03584", "year": 1911 }, { "authors": [ "Ekin D. Cubuk", "Barret Zoph", "Jonathon Shlens", "Quoc V. 
Le" ], "title": "Randaugment: Practical automated data augmentation with a reduced search", "venue": null, "year": 2019 }, { "authors": [ "Zihang Dai", "Zhilin Yang", "Yiming Yang", "Jaime Carbonell", "Quoc Le", "Ruslan Salakhutdinov" ], "title": "Transformer-xl: Attentive language models beyond a fixed-length context", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Alexandre de Brébisson", "Pascal Vincent" ], "title": "A cheap linear attention mechanism with fast lookups and fixed-size representations", "venue": null, "year": 2016 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition", "year": 2009 }, { "authors": [ "Alexey Dosovitskiy", "Lucas Beyer", "Alexander Kolesnikov", "Dirk Weissenborn", "Xiaohua Zhai", "Thomas Unterthiner", "Mostafa Dehghani", "Matthias Minderer", "Georg Heigold", "Sylvain Gelly", "Jakob Uszkoreit", "Neil Houlsby" ], "title": "An image is worth 16x16 words: Transformers for image recognition", "venue": null, "year": 2020 }, { "authors": [ "William Fedus", "Barret Zoph", "Noam Shazeer" ], "title": "Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity", "venue": null, "year": 2021 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Kaiming He", "Georgia Gkioxari", "Piotr Dollár", "Ross Girshick" ], "title": "Mask r-cnn", "venue": "IEEE International Conference on Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "Tong He", "Zhi Zhang", "Hang Zhang", "Zhongyue Zhang", "Junyuan Xie", "Mu Li" ], "title": "Bag of tricks for image classification with convolutional neural networks. 2018", "venue": null, "year": 2018 }, { "authors": [ "Jonathan Ho", "Nal Kalchbrenner", "Dirk Weissenborn", "Tim Salimans" ], "title": "Axial attention in multidimensional transformers", "venue": "arXiv preprint arXiv:1912.12180,", "year": 2019 }, { "authors": [ "Andrew Howard", "Mark Sandler", "Grace Chu", "Liang-Chieh Chen", "Bo Chen", "Mingxing Tan", "Weijun Wang", "Yukun Zhu", "Ruoming Pang", "Vijay Vasudevan", "Quoc V. 
Le", "Adam Hartwig" ], "title": "Searching for mobilenetv3", "venue": null, "year": 2019 }, { "authors": [ "Andrew G Howard", "Menglong Zhu", "Bo Chen", "Dmitry Kalenichenko", "Weijun Wang", "Tobias Weyand", "Marco Andreetto", "Hartwig Adam" ], "title": "Mobilenets: Efficient convolutional neural networks for mobile vision applications", "venue": "arXiv preprint arXiv:1704.04861,", "year": 2017 }, { "authors": [ "Han Hu", "Jiayuan Gu", "Zheng Zhang", "Jifeng Dai", "Yichen Wei" ], "title": "Relation networks for object detection", "venue": null, "year": 2018 }, { "authors": [ "Han Hu", "Zheng Zhang", "Zhenda Xie", "Stephen Lin" ], "title": "Local relation networks for image recognition", "venue": "arXiv preprint arXiv:1904.11491,", "year": 2019 }, { "authors": [ "Jie Hu", "Li Shen", "Samuel Albanie", "Gang Sun", "Andrea Vedaldi" ], "title": "Gather-excite: Exploiting feature context in convolutional neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Jie Hu", "Li Shen", "Gang Sun" ], "title": "Squeeze-and-excitation networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Gao Huang", "Yu Sun", "Zhuang Liu", "Daniel Sedra", "Kilian Weinberger" ], "title": "Deep networks with stochastic depth", "venue": null, "year": 2016 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Sizikov", "Matthew Snelham", "Jed Souter", "Dan Steinberg", "Andy Swing", "Mercedes Tan", "Gregory Thorson", "Bo Tian", "Horia Toma", "Erick Tuttle", "Vijay Vasudevan", "Richard Walter", "Walter Wang", "Eric Wilcox", "Doe Hyun Yoon" ], "title": "In-datacenter performance analysis of a tensor processing unit", "venue": "SIGARCH Comput. Archit. News,", "year": 2017 }, { "authors": [ "Angelos Katharopoulos", "Apoorv Vyas", "Nikolaos Pappas", "François Fleuret" ], "title": "Transformers are rnns: Fast autoregressive transformers with linear attention", "venue": null, "year": 2020 }, { "authors": [ "Nikita Kitaev", "Lukasz Kaiser", "Anselm Levskaya" ], "title": "Reformer: The efficient transformer", "venue": "arXiv preprint arXiv:2001.04451,", "year": 2020 }, { "authors": [ "Alexander Kolesnikov", "Lucas Beyer", "Xiaohua Zhai", "Joan Puigcerver", "Jessica Yung", "Sylvain Gelly", "Neil Houlsby" ], "title": "Big transfer (bit): General visual representation learning, 2020", "venue": null, "year": 2020 }, { "authors": [ "Jungkyu Lee", "Taeryun Won", "Tae Kwan Lee", "Hyemin Lee", "Geonmo Gu", "Kiho Hong" ], "title": "Compounding the performance improvements of assembled techniques in a convolutional neural network, 2020", "venue": null, "year": 2020 }, { "authors": [ "Liunian Harold Li", "Mark Yatskar", "Da Yin", "Cho-Jui Hsieh", "Kai-Wei Chang" ], "title": "Visualbert: A simple and performant baseline for vision and language. 
2019", "venue": null, "year": 2019 }, { "authors": [ "Xingyu Liao", "Lingxiao He", "Zhouwang Yang", "Chi Zhang" ], "title": "Video-based person re-identification via 3d convolutional networks and non-local attention", "venue": null, "year": 2019 }, { "authors": [ "Tsung-Yi Lin", "Michael Maire", "Serge Belongie", "James Hays", "Pietro Perona", "Deva Ramanan", "Piotr Dollár", "C Lawrence Zitnick" ], "title": "Microsoft coco: Common objects in context", "venue": "In European Conference on Computer Vision,", "year": 2014 }, { "authors": [ "Francesco Locatello", "Dirk Weissenborn", "Thomas Unterthiner", "Aravindh Mahendran", "Georg Heigold", "Jakob Uszkoreit", "Alexey Dosovitskiy", "Thomas Kipf" ], "title": "Object-centric learning with slot attention", "venue": null, "year": 2020 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "SGDR: Stochastic gradient descent with warm restarts", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Jiasen Lu", "Dhruv Batra", "Devi Parikh", "Stefan Lee" ], "title": "Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language", "venue": null, "year": 2019 }, { "authors": [ "John Nickolls", "William J Dally" ], "title": "The gpu computing era", "venue": "IEEE micro,", "year": 2010 }, { "authors": [ "Jongchan Park", "Sanghyun Woo", "Joon-Young Lee", "In So Kweon" ], "title": "Bam: bottleneck attention module", "venue": "In British Machine Vision Conference,", "year": 2018 }, { "authors": [ "Ethan Perez", "Florian Strub", "Harm de Vries", "Vincent Dumoulin", "Aaron C. Courville" ], "title": "Film: Visual reasoning with a general conditioning", "venue": "layer. CoRR,", "year": 2017 }, { "authors": [ "Alec Radford", "Jong Wook Kim", "Chris Hallacy", "Aditya Ramesh", "Gabriel Goh", "Sandhini Agarwal", "Girish Sastry", "Amanda Askell", "Pamela Mishkin", "Jack Clark", "Gretchen Krueger", "Ilya Sutskever" ], "title": "Learning transferable visual models from natural language supervision", "venue": null, "year": 2021 }, { "authors": [ "Prajit Ramachandran", "Niki Parmar", "Ashish Vaswani", "Irwan Bello", "Anselm Levskaya", "Jonathon Shlens" ], "title": "Stand-alone self-attention in vision models", "venue": "CoRR, abs/1906.05909,", "year": 2019 }, { "authors": [ "Mark Sandler", "Andrew Howard", "Menglong Zhu", "Andrey Zhmoginov", "Liang-Chieh Chen" ], "title": "Mobilenetv2: Inverted residuals and linear bottlenecks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Peter Shaw", "Jakob Uszkoreit", "Ashish Vaswani" ], "title": "Self-attention with relative position representations", "venue": "arXiv preprint arXiv:1803.02155,", "year": 2018 }, { "authors": [ "Noam Shazeer" ], "title": "Fast transformer decoding: One write-head is all you need", "venue": null, "year": 2019 }, { "authors": [ "Noam Shazeer", "Azalia Mirhoseini", "Krzysztof Maziarz", "Andy Davis", "Quoc V. Le", "Geoffrey E. Hinton", "Jeff Dean" ], "title": "Outrageously large neural networks: The sparsely-gated mixture-of-experts", "venue": "layer. 
CoRR,", "year": 2017 }, { "authors": [ "Zhuoran Shen", "Mingyuan Zhang", "Shuai Yi", "Junjie Yan", "Haiyu Zhao" ], "title": "Efficient attention: Selfattention with linear complexities", "venue": "CoRR, abs/1812.01243,", "year": 2018 }, { "authors": [ "Zhuoran Shen", "Irwan Bello", "Raviteja Vemulapalli", "Xuhui Jia", "Ching-Hui Chen" ], "title": "Global selfattention networks for image recognition, 2020", "venue": null, "year": 2020 }, { "authors": [ "Aravind Srinivas", "Tsung-Yi Lin", "Niki Parmar", "Jonathon Shlens", "Pieter Abbeel", "Ashish Vaswani" ], "title": "Bottleneck transformers for visual recognition", "venue": null, "year": 2021 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: A simple way to prevent neural networks from overfitting", "venue": "Journal of Machine Learning Research,", "year": 2014 }, { "authors": [ "Chen Sun", "Austin Myers", "Carl Vondrick", "Kevin Murphy", "Cordelia Schmid" ], "title": "Videobert: A joint model for video and language representation learning", "venue": null, "year": 2019 }, { "authors": [ "Mingxing Tan", "Quoc V. Le" ], "title": "Efficientnet: Rethinking model scaling for convolutional neural networks", "venue": "CoRR, abs/1905.11946,", "year": 2019 }, { "authors": [ "Yi Tay", "Mostafa Dehghani", "Dara Bahri", "Donald Metzler" ], "title": "Efficient transformers: A survey", "venue": null, "year": 2020 }, { "authors": [ "Hugo Touvron", "Matthieu Cord", "Matthijs Douze", "Francisco Massa", "Alexandre Sablayrolles", "Hervé Jégou" ], "title": "Training data-efficient image transformers & distillation through attention", "venue": null, "year": 2021 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Huiyu Wang", "Yukun Zhu", "Bradley Green", "Hartwig Adam", "Alan Yuille", "Liang-Chieh Chen" ], "title": "Axial-deeplab: Stand-alone axial-attention for panoptic segmentation", "venue": null, "year": 2020 }, { "authors": [ "Sinong Wang", "Belinda Z. Li", "Madian Khabsa", "Han Fang", "Hao Ma" ], "title": "Linformer: Self-attention with linear complexity. 2020b", "venue": null, "year": 2020 }, { "authors": [ "Xiaolong Wang", "Ross Girshick", "Abhinav Gupta", "Kaiming He" ], "title": "Non-local neural networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Sanghyun Woo", "Jongchan Park", "Joon-Young Lee", "In So Kweon" ], "title": "Cbam: Convolutional block attention module", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Bichen Wu", "Chenfeng Xu", "Xiaoliang Dai", "Alvin Wan", "Peizhao Zhang", "Zhicheng Yan", "Masayoshi Tomizuka", "Joseph Gonzalez", "Kurt Keutzer", "Peter Vajda" ], "title": "Visual transformers: Token-based image representation and processing for computer", "venue": null, "year": 2020 }, { "authors": [ "Qizhe Xie", "Minh-Thang Luong", "Eduard Hovy", "Quoc V. 
Le" ], "title": "Self-training with noisy student improves imagenet classification", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Yunyang Xiong", "Zhanpeng Zeng", "Rudrasis Chakraborty", "Mingxing Tan", "Glenn Fung", "Yin Li", "Vikas Singh" ], "title": "Nyströmformer: A nyström-based algorithm for approximating self-attention", "venue": null, "year": 2021 }, { "authors": [ "Kelvin Xu", "Jimmy Ba", "Ryan Kiros", "Kyunghyun Cho", "Aaron Courville", "Ruslan Salakhudinov", "Rich Zemel", "Yoshua Bengio" ], "title": "Show, attend and tell: Neural image caption generation with visual attention", "venue": "Proceedings of Machine Learning Research,", "year": 2015 }, { "authors": [ "Han Zhang", "Ian Goodfellow", "Dimitris Metaxas", "Augustus Odena" ], "title": "Self-attention generative adversarial networks. 2019", "venue": null, "year": 2019 }, { "authors": [ "Hengshuang Zhao", "Jiaya Jia", "Vladlen Koltun" ], "title": "Exploring self-attention for image recognition", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Barret Zoph", "Quoc V. Le" ], "title": "Neural architecture search with reinforcement learning", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Kitaev et al", "2020) See Tay" ], "title": "2020) for a review. Their implementations can be rather complex, sometimes require low-level kernel implementations to get computational benefits or may rely on specific assumptions on the shape of the inputs (e.g., axial attention). In contrast, lambda layers are simple to implement for both global and local contexts using simple einsum and convolution primitives and capture dense", "venue": null, "year": 2020 }, { "authors": [ "Britz" ], "title": "They are recently enjoying a resurgence of popularity with many works modifying the popular Transformer architecture for sequential processing applications (Katharopoulos et al., 2020", "venue": "Wang et al., 2020b; Choromanski et al.,", "year": 2018 }, { "authors": [ "Katharopoulos" ], "title": "Multiple choices for the feature function", "venue": null, "year": 2018 }, { "authors": [ "2019 Ramachandran et al", "2019 Cordonnier et al", "2020 Zhao et al", "2020 Wu et al", "Dosovitskiy" ], "title": "2020); object detection and object-centric tasks (Wang et al., 2018", "venue": "(Bello et al.,", "year": 2020 }, { "authors": [ "Lu" ], "title": "The first use of self-attention in vision dates back to the non-local block (Wang et al., 2018), which added a single-head global self-attention residual in the low resolution stages of a ConvNet for longrange dependency modeling. The non-local block has proven useful to complement convolutions but cannot be used as a stand-alone layer as it does not model position-based interactions", "venue": null, "year": 2021 }, { "authors": [ "Bello" ], "title": "Global relative attention replaces convolutions at low", "venue": null, "year": 2019 }, { "authors": [ "Srinivas" ], "title": "2021), rather than concatenating convolutional feature maps, propose to use a stride of 1 in the last stage of the ResNet architecture for improved performance. Local/axial relative attention replaces convolutions at high resolution. 
The large memory footprint of global attention was quickly solved by multiple works which proposed to limit the size of the attention contexts such as local attention (Ramachandran et", "venue": null, "year": 2019 }, { "authors": [ "Wang et al", "2020a", "Shen" ], "title": "2020) (See Section D.2). Such approaches enable using attention at higher resolution and facilitate fully-attentional models but can be slow due to the use of specialized attention patterns. Scaling trumps inductive bias Concurrently to this work, ViT (Dosovitskiy et al., 2020", "venue": "(Ho et al.,", "year": 2020 }, { "authors": [ "D (He" ], "title": "2018) and additionally replace the max pooling layer in the stem by a strided", "venue": null, "year": 2018 }, { "authors": [ "Bello" ], "title": "LambdaResNets use the block allocations from He et al", "venue": null, "year": 2021 }, { "authors": [ "Le" ], "title": "2021), the weiht decay is reduced to 4e-5", "venue": null, "year": 2019 }, { "authors": [ "Xie" ], "title": "We use the same dataset of 130M filtered and balanced JFT images", "venue": null, "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Modeling long-range dependencies in data is a central problem in machine learning. Selfattention (Bahdanau et al., 2015; Vaswani et al., 2017) has emerged as a popular approach to do so, but the costly memory requirement of self-attention hinders its application to long sequences and multidimensional data such as images2. Linear (or efficient) attention mechanisms (Katharopoulos et al., 2020; Choromanski et al., 2020) offer a scalable remedy for high memory usage but fail to model internal data structure, such as relative distances between pixels or edge relations between nodes in a graph.\nThis work addresses both issues. We propose lambda layers which model long-range interactions between a query and a structured set of context elements at a reduced memory cost. Lambda layers transform each available context into a linear function, termed a lambda, which is then directly applied to the corresponding query. Whereas self-attention defines a similarity kernel between the query and the context elements, a lambda layer instead summarizes contextual information into a fixed-size linear function (i.e. a matrix), thus bypassing the need for memory-intensive attention maps. This difference is illustrated in Figure 1.\nLambda layers are versatile and can be implemented to model both content-based and position-based interactions in global, local or masked contexts. The resulting neural networks, LambdaNetworks, are computationally efficient, model long-range dependencies at a small memory cost and can therefore be applied to large structured inputs such as high resolution images.\n1An updated version of this paper can be found on arXiv. 2For example, applying a single multi-head attention layer to a batch of 128 64x64 input images with 8\nheads requires 64GB of memory, which is prohibitive in practice.\nWe evaluate LambdaNetworks on computer vision tasks where works using self-attention are hindered by large memory costs (Wang et al., 2018; Bello et al., 2019), suffer impractical implementations (Ramachandran et al., 2019), or require vast amounts of data (Dosovitskiy et al., 2020). In our experiments spanning ImageNet classification, COCO object detection and instance segmentation, LambdaNetworks significantly outperform their convolutional and attentional counterparts, while being more computationally efficient and faster than the latter. We summarize our contributions:\n• Lambda layers: a class of layers, that model content-based and position-based interactions without materializing attention maps. Lambda layers offer a unifying view of channel, spatial and linear attention (Appendix D.4). Some of our observations, such as the computational benefits of a multi-query formulation, extend to linear attention. Lambda layers are easily implemented with einsum operations and convolution kernels, operations with efficient implementations on modern machine learning accelerators.\n• Lambda layers significantly outperform their convolution and attention counterparts on the ImageNet classification task while being more computationally efficient. 
For example, simply replacing the 3x3 convolutions in the bottleneck blocks of the ResNet-50 architecture (He et al., 2016) with lambda layers yields a +1.5% top-1 ImageNet accuracy improvement while reducing parameters by 40% (Section 5.1).\n• Lambda layers achieve considerable computational benefits, both in latency and memory requirements, over multiple self-attention alternatives, including local and axial attention (Ramachandran et al., 2019; Wang et al., 2020a). When used in a ResNet-50 architecture at image resolution 224, lambda layers reduce memory consumption by ∼200x compared to global attention (∼7x compared to axial attention) while being ∼3.7x faster than local attention (Section 5.2).\n• A study of hybrid convolution-lambda models as a means to maximize the speed-accuracy tradeoff (Section 5.3). Hybrid designs that first employ convolutions at the highest resolution and lambda layers in intermediate to low resolutions achieve the best speed-accuracy tradeoff.\n• LambdaResNets: a family of hybrids based on the training and scaling strategies recommended in Bello et al. (2021). LambdaResNets achieve up to a 4.4x speed-up over EfficientNets on ImageNet, while being more memory-efficient. LambdaResNets can also be designed for parameter or flops efficiency. For example, a LambdaResNet with 42M parameters achieves 84.3% top-1 ImageNet accuracy at image resolution 320 (Section E.4).\n• In large-scale semi-supervised training with an additional 130M pseudo-labeled images, LambdaResNets achieve up to 86.7% top-1 ImageNet accuracy while being 9.5x faster than EfficientNet NoisyStudent (Xie et al., 2020) and 9x faster than a Vision Transformer (Dosovitskiy et al., 2020) with comparable accuracies (Section 5.3).\n• An evaluation of LambdaResNets on COCO object detection and instance segmentation using Mask-RCNN (He et al., 2017). LambdaResNet backbones yield consistent gains across all metrics on both tasks (e.g. +1.8% mAP improvement for detecting small objects)." }, { "heading": "2 MODELING LONG-RANGE INTERACTIONS", "text": "In this section, we formally define queries, contexts and interactions. Starting from first principles, we motivate keys and relative position embeddings as a requirement for capturing structured interactions between queries and their contexts. We then show that lambda layers arise as an alternative to attention mechanisms for capturing long-range interactions.\nNotation. We denote scalars, vectors and tensors using lower-case, bold lower-case and bold upper-case letters, e.g., n, x and X. We denote |n| the cardinality of a set whose elements are indexed by n. We denote x_n the n-th row of X. We denote x_ij the |ij| elements of X. When possible, we adopt the terminology of self-attention to ease readability and highlight differences." }, { "heading": "2.1 MOTIVATING QUERIES, KEYS, POSITION EMBEDDINGS AND VALUES", "text": "Defining queries and contexts. Let Q = {(q_n, n)} and C = {(c_m, m)} denote structured collections of vectors, respectively referred to as the queries and the context. Each query (q_n, n) is characterized by its content q_n ∈ R^{|k|} and position n. Similarly, each context element (c_m, m) is characterized by its content c_m and its position m in the context. The (n,m) pair may refer to any pairwise relation between structured elements, e.g. relative distances between pixels or edges between nodes in a graph.\nDefining interactions. 
We consider the general problem of mapping a query (q_n, n) to an output vector y_n ∈ R^{|v|} given the context C with a function F : ((q_n, n), C) ↦ y_n. Such a function may act as a layer in a neural network when processing structured inputs.\nWe refer to (q_n, c_m) interactions as content-based and (q_n, (n,m)) interactions as position-based. We note that while absolute positional information is sometimes directly added to the query (or context element) content [3], we consider this type of interaction to be content-based as it ignores the relation (n,m) between the query and context element positions.\nIntroducing keys and relative position embeddings to capture long-range interactions. In the context of deep learning, we prioritize fast batched linear operations and use dot-product operations as our interactions. This motivates introducing vectors that can interact with the queries via a dot-product operation and therefore have the same dimension as the queries. In particular, content-based interactions (q_n, c_m) require a |k|-dimensional vector that depends on c_m, commonly referred to as the key k_m. Conversely, position-based interactions (q_n, (n,m)) require a relative position embedding e_nm ∈ R^{|k|} (Shaw et al., 2018). As the query/key depth |k| and context spatial dimension |m| are not in the output y_n ∈ R^{|v|}, these dimensions need to be contracted as part of the layer computations. Therefore:\nEvery layer capturing long-range interactions can be characterized based on whether it contracts (1) the query depth or (2) the context positions first." }, { "heading": "2.2 ATTENTION VS LAMBDA LAYERS.", "text": "(1) Attention layers. Contracting the query depth first creates a similarity kernel (the attention map) between the query and context elements and is known as the attention operation. As the number of context positions |m| grows larger and the input and output dimensions |k| and |v| remain fixed, one may hypothesize that computing attention maps becomes wasteful, given that the layer output is a vector of comparatively small dimension |v| ≪ |m|.\n(2) Lambda layers. Instead, it may be more efficient to simply map each query to its output as y_n = F((q_n, n), C) = λ(C, n)(q_n) for some linear function λ(C, n) : R^{|k|} → R^{|v|}. In this scenario, the context is aggregated into a fixed-size linear function λ_n = λ(C, n). Each λ_n acts as a small linear function [4] that exists independently of the context (once computed) and is discarded after being applied to its associated query q_n.\n[3] This approach is often used in natural language processing tasks (Vaswani et al., 2017) but has had limited success in the visual domain where relative position information between pixels is crucial (Bello et al., 2019)." }, { "heading": "3 LAMBDA LAYERS", "text": "" }, { "heading": "3.1 LAMBDA LAYER: TRANSFORMING CONTEXTS INTO LINEAR FUNCTIONS.", "text": "A lambda layer takes the inputs X ∈ R^{|n|×d_in} and the context C ∈ R^{|m|×d_c} as input and generates linear function lambdas that are then applied to the queries, yielding outputs Y ∈ R^{|n|×d_out}. Without loss of generality, we assume d_in = d_c = d_out = d. As is the case with self-attention, we may have C = X. In the rest of this paper, we focus on a specific instance of a lambda layer and show that it captures long-range content and position-based interactions without materializing attention maps. Figure 2 presents the computational graph of the lambda layer.\nWe first describe the lambda layer when applied to a single query (q_n, n).\nGenerating the contextual lambda function. 
We wish to generate a linear function R^{|k|} → R^{|v|}, i.e. a matrix λ_n ∈ R^{|k|×|v|}. The lambda layer first computes keys K and values V by linearly projecting the context, and keys are normalized across context positions via a softmax operation, yielding normalized keys K̄. The λ_n matrix is obtained by using the normalized keys K̄ and position embeddings E_n to aggregate the values V as\nλ_n = Σ_m (k̄_m + e_nm) v_m^T = K̄^T V (content lambda) + E_n^T V (position lambda) ∈ R^{|k|×|v|}   (1)\nwhere we also define the content lambda λ^c and position lambda λ^p_n.\n• The content lambda λ^c is shared across all query positions n and is invariant to permutation of the context elements. It encodes how to transform the query q_n solely based on the context content.\n• The position lambda λ^p_n depends on the query position n via the position embedding E_n. It encodes how to transform the query q_n based on the context elements c_m and their relative positions to the query (n,m).\nApplying lambda to its query. The query q_n ∈ R^{|k|} is obtained from the input x_n via a learned linear projection and the output of the lambda layer is obtained as\ny_n = λ_n^T q_n = (λ^c + λ^p_n)^T q_n ∈ R^{|v|}.   (2)\n[4] This mechanism is reminiscent of functional programming and λ-calculus, which motivates the lambda terminology.\nInterpretation of lambda layers. The columns of the λ_n ∈ R^{|k|×|v|} matrix can be viewed as a fixed-size set of |k| contextual features. These contextual features are aggregated based on the context’s content (content-based interactions) and structure (position-based interactions). Applying the lambda then dynamically distributes these contextual features based on the query to produce the output as y_n = Σ_k q_nk λ_nk. This process captures content and position-based interactions without producing attention maps and can be viewed as an efficient relative attention mechanism.\nNormalization. One may modify Equations 1 and 2 to include non-linearities or normalization operations. Our experiments indicate that applying batch normalization (Ioffe & Szegedy, 2015) after computing the queries and the values is helpful.
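To make Equations (1) and (2) concrete, the following is a minimal NumPy sketch of a single-head lambda layer for one batch element; the projection matrices and position embeddings are random stand-ins for learned parameters, and normalization and non-linearities are omitted. This is an illustrative reading of the equations, not the paper's reference implementation (which Figure 3 gives in einsum form).

```python
import numpy as np

def softmax(x, axis):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

n = m = 16; d, k, v = 32, 8, 32          # |n| = |m| (self case), d, |k|, |v|
rng = np.random.default_rng(0)
X = rng.normal(size=(n, d))              # inputs
C = X                                    # context = input
W_q, W_k, W_v = rng.normal(size=(d, k)), rng.normal(size=(d, k)), rng.normal(size=(d, v))
E = rng.normal(size=(n, m, k))           # position embeddings e_nm

Q = X @ W_q                              # queries, (n, k)
K_bar = softmax(C @ W_k, axis=0)         # keys normalized over context positions, (m, k)
V = C @ W_v                              # values, (m, v)

content_lambda = K_bar.T @ V                        # K_bar^T V, (k, v), shared over n
position_lambdas = np.einsum('nmk,mv->nkv', E, V)   # E_n^T V for every n, (n, k, v)
lambdas = content_lambda[None] + position_lambdas   # Equation (1)
Y = np.einsum('nkv,nk->nv', lambdas, Q)             # Equation (2): y_n = lambda_n^T q_n
assert Y.shape == (n, v)
```

Note that no n × m attention map is ever materialized: the context is contracted away when forming the (k, v)-shaped lambdas." }, { "heading": "3.2 A MULTI-QUERY FORMULATION TO REDUCE COMPLEXITY.", "text": "Complexity analysis. For a batch of |b| examples, each containing |n| inputs, the number of arithmetic operations and memory footprint required to apply our lambda layer are respectively Θ(bnmkv) and Θ(knm + bnkv). We still have a quadratic memory footprint with respect to the input length due to the e_nm relative position embeddings. However this quadratic term does not scale with the batch size as is the case with the attention operation which produces per-example attention maps. In practice, the hyperparameter |k| is set to a small value (such as |k|=16) and we can process large batches of large inputs in cases where attention cannot (see Table 4). Additionally, position embeddings can be shared across lambda layers to keep their Θ(knm) memory footprint constant - whereas the memory footprint of attention maps scales with the number of layers [5].\nMulti-query lambda layers reduce time and space complexities. Recall that the lambda layer maps inputs x_n ∈ R^d to outputs y_n ∈ R^d. As presented in Equation 2, this implies that |v|=d. 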
Small values of |v| may therefore act as a bottleneck on the feature vector y_n, but larger output dimensions |v| can incur an excessively large computational cost given our Θ(bnmkv) and Θ(knm + bnkv) time and space complexities.\nWe propose to decouple the time and space complexities of our lambda layer from the output dimension d. Rather than imposing |v|=d, we create |h| queries {q_n^h}, apply the same lambda λ_n to each query q_n^h, and concatenate the outputs as y_n = concat(λ_n q_n^1, ..., λ_n q_n^|h|). We now have |v|=d/|h|, which reduces complexity by a factor of |h|. The number of heads |h| controls the size of the lambdas λ_n ∈ R^{|k|×|d|/|h|} relative to the total size of the queries q_n ∈ R^{|hk|}. We refer to this operation as a multi-query lambda layer and present an implementation using einsum [6] in Figure 3. The lambda layer is robust to |k| and |h| hyperparameter choices (see Appendix E.1), which enables flexibility in controlling its complexity. We use |h|=4 in most experiments. We note that while this resembles the multi-head or multi-query (Shazeer, 2019) [7] attention formulation, the motivation is different. Using multiple queries in the attention operation increases representational power and complexity. In contrast, using multiple queries in the lambda layer decreases complexity and representational power (ignoring the additional queries).\n[5] Attention maps typically need to be stored for back-propagation (Kitaev et al., 2020). [6] The einsum operation denotes general contractions between tensors of arbitrary dimensions. It is numerically equivalent to broadcasting its inputs to share the union of their dimensions, multiplying element-wise and summing across all dimensions not specified in the output. [7] (Shazeer, 2019) proposes a multi-query formulation to speed up attention-based decoding.\nExtending the multi-query formulation to linear attention. Finally, we point out that our analysis extends to linear attention, which can be viewed as a content-only lambda layer (see Appendix D.3 for a detailed discussion). We anticipate that the multi-query formulation can also bring computational benefits to linear attention mechanisms.
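The Figure 3 listing referenced above is not reproduced in this extraction; the following batched NumPy sketch follows the multi-query formulation described in the text (|k|=16, |h|=4 as in the paper; random placeholder projections, batch normalization omitted) and should be read as illustrative rather than as the reference implementation.

```python
import numpy as np

def softmax(x, axis):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

b, n, m, d, k, h = 2, 64, 64, 128, 16, 4
v = d // h                                  # |v| = d / |h|
rng = np.random.default_rng(0)
X = rng.normal(size=(b, n, d))              # inputs
C = X                                       # context = input (self case)
E = rng.normal(size=(n, m, k))              # relative position embeddings, shared over the batch

Q = (X @ rng.normal(size=(d, k * h))).reshape(b, n, h, k)  # |h| queries per position
K = softmax(C @ rng.normal(size=(d, k)), axis=1)           # keys normalized over m, (b, m, k)
V = C @ rng.normal(size=(d, v))                            # values, (b, m, v)

content_lambda = np.einsum('bmk,bmv->bkv', K, V)           # one content lambda per example
position_lambdas = np.einsum('nmk,bmv->bnkv', E, V)        # one position lambda per query position
content_out = np.einsum('bkv,bnhk->bnhv', content_lambda, Q)
position_out = np.einsum('bnkv,bnhk->bnhv', position_lambdas, Q)
Y = (content_out + position_out).reshape(b, n, d)          # concatenate the |h| outputs
assert Y.shape == (b, n, d)
```

The same lambdas are applied to all |h| queries of a position, which is what divides the cost by |h| relative to a multi-head variant." }, { "heading": "3.3 MAKING LAMBDA LAYERS TRANSLATION EQUIVARIANT.", "text": "Using relative position embeddings e_nm enables making explicit assumptions about the structure of the context. In particular, translation equivariance (i.e. the property that shifting the inputs results in an equivalent shift of the outputs) is a strong inductive bias in many learning scenarios. We obtain translation equivariance in position interactions by ensuring that the position embeddings satisfy e_nm = e_t(n)t(m) for any translation t. In practice, we define a tensor of relative position embeddings R ∈ R^{|r|×|k|}, where r indexes the possible relative positions for all (n,m) pairs, and reindex [8] it into E ∈ R^{|n|×|m|×|k|} such that e_nm = r_{r(n,m)}.\n[8] We refer the reader to the code for more details.\nFor a 1d sequence, the reindexing can be sketched as follows; the offset convention r(n, m) = m − n + |n| − 1 is an assumption for illustration, as the paper defers the exact indexing to its code release:

```python
import numpy as np

n_pos, k = 8, 16
rng = np.random.default_rng(0)
R = rng.normal(size=(2 * n_pos - 1, k))       # |r| = 2|n|-1 possible relative offsets in 1d
idx = np.arange(n_pos)[None, :] - np.arange(n_pos)[:, None] + n_pos - 1  # r(n, m)
E = R[idx]                                    # (n, m, k); e_nm depends only on m - n
```
" }, { "heading": "3.4 LAMBDA CONVOLUTION: LOCAL CONTEXTS ON THE GRID.", "text": "Despite the benefits of long-range interactions, locality remains a strong inductive bias in many tasks. Using global contexts may prove noisy or computationally excessive. It may therefore be useful to restrict the scope of position interactions to a local neighborhood around the query position n, as is the case for local self-attention and convolutions. This can be done by zeroing out the relative embeddings for context positions m outside of the desired scope. However, this strategy remains costly for large values of |m| since the computations still occur - they are only being zeroed out. 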
Lambda convolution In the case where the context is arranged in a multidimensional grid, we can equivalently compute positional lambdas from local contexts by using a regular convolution. We term this operation the lambda convolution. An n-dimensional lambda convolution can be implemented using an n-d depthwise convolution with channel multiplier or an (n+1)-d convolution that treats the v dimension in V as an extra spatial dimension. We present both implementations in Appendix C.1.\nAs the computations are now restricted to a local scope, the lambda convolution obtains linear time and memory complexities with respect to the input length [9]. The lambda convolution is readily usable with additional functionalities such as dilation and striding and enjoys optimized implementations on specialized hardware accelerators (Nickolls & Dally, 2010; Jouppi et al., 2017). This is in stark contrast to implementations of local self-attention that require materializing feature patches of overlapping query and context blocks (Parmar et al., 2018; Ramachandran et al., 2019), increasing memory consumption and latency (see Table 4).\n[9] FLOPs (time complexity) is not necessarily a good proxy for latency on TPUs/GPUs. Even though the lambda convolution has linear time/space complexities, it can be slower than the global lambda layer in practice, especially for large convolution scope sizes. See Table 4 for an example." }, { "heading": "4 RELATED WORK", "text": "Table 2 reviews alternatives for capturing long-range interactions and contrasts them with the proposed multi-query lambda layer. We discuss related work in detail in Appendix D. Channel and linear attention The lambda abstraction, i.e. transforming available contexts into linear functions that are applied to queries, is quite general and therefore encompasses many previous works. Closest to our work are channel and linear attention mechanisms (Hu et al., 2018c; Katharopoulos et al., 2020; Choromanski et al., 2020). Such mechanisms also capture long-range interactions without materializing attention maps and can be viewed as specific instances of a content-only lambda layer. Lambda layers formalize and extend such approaches to consider both content-based and position-based interactions, enabling their use as a stand-alone layer on highly structured data such as images. Rather than attempting to closely approximate an attention kernel as is the case with linear attention, we focus on the efficient design of contextual lambda functions and repurpose a multi-query formulation (Shazeer, 2019) to further reduce computational costs. Self-attention in the visual domain In contrast to natural language processing tasks where it is now the de-facto standard, self-attention has enjoyed steady but slower adoption in the visual domain (Wang et al., 2018; Bello et al., 2019; Ramachandran et al., 2019; Carion et al., 2020). Concurrently to this work, Dosovitskiy et al. (2020) achieve a strong 88.6% accuracy on ImageNet by pre-training a Transformer on sequences of image patches on a large-scale dataset of 300M images." }, { "heading": "5 EXPERIMENTS", "text": "In subsequent experiments, we evaluate lambda layers on standard computer vision benchmarks: ImageNet classification (Deng et al., 2009), COCO object detection and instance segmentation (Lin et al., 2014). 
The visual domain is well-suited to showcase the flexibility of lambda layers since (1) the memory footprint of self-attention becomes problematic for high-resolution imagery and (2) images are highly structured, making position-based interactions crucial. LambdaResNets We construct LambdaResNets by replacing the 3x3 convolutions in the bottleneck blocks of the ResNet architecture (He et al., 2016). When replacing all such convolutions, we simply denote the name of the layer being tested (e.g. conv + channel attention or lambda layer). We denote LambdaResNets the family of hybrid architectures described in Table 19 (Appendix F.2). Unless specified otherwise, all lambda layers use |k|=16, |h|=4 with a scope size of |m|=23x23 and are implemented as in Figure 3. Additional experiments and details can be found in the Appendix." }, { "heading": "5.1 LAMBDA LAYERS OUTPERFORM CONVOLUTIONS AND ATTENTION LAYERS", "text": "We first consider the standard ResNet-50 architecture with input image size 224x224. In Table 3, we compare the lambda layer against (a) the standard convolution (i.e. the baseline ResNet-50), (b) channel attention (squeeze-and-excitation) and (c) multiple self-attention variants. The lambda layer strongly outperforms all baselines at a fraction of the parameter cost and notably obtains a +0.8% improvement over channel attention." }, { "heading": "5.2 COMPUTATIONAL BENEFITS OF LAMBDA LAYERS OVER SELF-ATTENTION", "text": "In Table 4, we compare lambda layers against self-attention and present throughputs, memory complexities and ImageNet accuracies. Our results highlight the weaknesses of self-attention: self-attention cannot model global interactions due to large memory costs, axial self-attention is still memory expensive and local self-attention is prohibitively slow. In contrast, the lambda layer can capture global interactions on high-resolution images and obtains a +1.0% improvement over local self-attention while being almost 3x faster [10]. Additionally, positional embeddings can be shared across lambda layers to further reduce memory requirements, at a minimal degradation cost. Finally, the lambda convolution has linear memory complexity, which becomes practical for very large images as seen in detection or segmentation. We also find that the lambda layer outperforms local self-attention when controlling for the scope size [11] (78.1% vs 77.4% for |m|=7x7), suggesting that the benefits of the lambda layer go beyond improved speed and scalability.\n[10] Latencies for local self-attention were provided privately by Ramachandran et al. (2019) based on an implementation that relies on query blocks and overlapping memory blocks (Parmar et al., 2018). Specialized attention kernels may greatly speed up local self-attention, making it a promising avenue for future research. [11] Note that the content-based lambda still captures global interactions." }, { "heading": "5.3 HYBRIDS IMPROVE THE SPEED-ACCURACY TRADEOFF OF IMAGE CLASSIFICATION", "text": "Studying hybrid architectures. In spite of the memory savings compared to self-attention, capturing global contexts with the lambda layer still incurs a quadratic time complexity (Table 2), which remains costly at high resolution. In Appendix E.2, we study hybrid designs that use standard convolutions to capture local contexts and lambda layers to capture global contexts. 
We find that such convolution-lambda hybrids have increased representational power at a negligible decrease in throughput compared to their purely convolutional counterparts.\nLambdaResNets significantly improve the speed-accuracy tradeoff of ImageNet classification. We design a family of hybrids based on our study of hybrid architectures and the scaling/training strategies from Bello et al. (2021) (Section F.2). Figure 4 presents the speed-accuracy Pareto curve of LambdaResNets compared to EfficientNets (Tan & Le, 2019) on TPUv3 hardware. In order to isolate the benefits of lambda layers, we additionally compare against the same architectures when replacing lambda layers by (1) standard 3x3 convolutions (denoted ResNet-RS wo/ SE) and (2) 3x3 convolutions with squeeze-and-excitation (denoted ResNet-RS w/ SE). All architectures are trained for 350 epochs using the same regularization methods and evaluated at the same resolution they are trained at. LambdaResNets outperform the baselines across all scales on the speed-accuracy trade-off.\nScaling to larger datasets with pseudo-labels We train LambdaResNets in a semi-supervised learning setting using 130M pseudo-labeled images from the JFT dataset, as done for training the EfficientNet-NoisyStudent checkpoints (Xie et al., 2020). Table 5 compares the throughputs and ImageNet accuracies of a representative set of models with similar accuracies when trained using the JFT dataset. LambdaResNet-152, trained and evaluated at image size 288, achieves a strong 86.7% top-1 ImageNet accuracy while being more parameter-efficient and 9.5x faster than the EfficientNet-NoisyStudent checkpoint with the same accuracy." }, { "heading": "6 CONCLUSION", "text": "We propose a new class of layers, termed lambda layers, which provide a scalable framework for capturing structured interactions between inputs and their contexts. Lambda layers summarize available contexts into fixed-size linear functions, termed lambdas, that are directly applied to their associated queries. The resulting neural networks, LambdaNetworks, are computationally efficient and capture long-range dependencies at a small memory cost, enabling their application to large structured inputs such as high-resolution images. Extensive experiments on computer vision tasks showcase their versatility and superiority over convolutional and attentional networks. We introduce LambdaResNets, a family of hybrid LambdaNetworks which reach excellent ImageNet accuracies and achieve up to 9.5x speed-ups over the popular EfficientNets and Vision Transformers, significantly improving the speed-accuracy tradeoff of image classification models." }, { "heading": "ACKNOWLEDGMENTS", "text": "The author would like to thank Barret Zoph and William Fedus for endless discussions, fruitful suggestions and careful revisions; Jonathon Shlens, Mike Mozer, Prajit Ramachandran, Ashish Vaswani, Irwan Bello, Anselm Levskaya, Quoc Le, Neil Houlsby, Jakob Uszkoreit, Margaret Li, Krzysztof Choromanski for many insightful comments; Hedvig Rausing for the antarctic infographics; Zolan Brinnes for the OST; Andrew Brock, Sheng Li for assistance with profiling EfficientNets; Adam Kraft, Thang Luong and Hieu Pham for assistance with the semi-supervised experiments and the Google Brain team for useful discussions on the paper." 
}, { "heading": "CONTENTS", "text": "" }, { "heading": "1 Introduction 1", "text": "" }, { "heading": "2 Modeling Long-Range Interactions 3", "text": "2.1 Motivating queries, keys, position embeddings and values . . . . . . . . . . . . . . 3\n2.2 Attention vs lambda layers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3" }, { "heading": "3 Lambda Layers 4", "text": "3.1 Lambda layer: transforming contexts into linear functions. . . . . . . . . . . . . . 4\n3.2 A multi-query formulation to reduce complexity. . . . . . . . . . . . . . . . . . . 5\n3.3 Making lambda layers translation equivariant. . . . . . . . . . . . . . . . . . . . . 6\n3.4 Lambda convolution: local contexts on the grid. . . . . . . . . . . . . . . . . . . . 6" }, { "heading": "4 Related Work 7", "text": "" }, { "heading": "5 Experiments 7", "text": "5.1 Lambda layers outperform convolutions and attention layers . . . . . . . . . . . . 7\n5.2 Computational benefits of lambda layers over self-attention . . . . . . . . . . . . 7\n5.3 Hybrids improve the speed-accuracy tradeoff of image classification . . . . . . . . 8" }, { "heading": "6 Conclusion 9", "text": "" }, { "heading": "A Discussion 16", "text": "A.1 General discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16\nA.2 Extending lambda layers to other modalities . . . . . . . . . . . . . . . . . . . . . 16" }, { "heading": "B Practical Modeling Recommendations 17", "text": "" }, { "heading": "C Additional Variants 18", "text": "C.1 Complete code with lambda convolution . . . . . . . . . . . . . . . . . . . . . . . 18\nC.2 Generating lambdas from masked contexts . . . . . . . . . . . . . . . . . . . . . . 18\nC.3 Multi-head vs multi-query lambda layers . . . . . . . . . . . . . . . . . . . . . . . 18\nC.4 Adding expressivity with an extra dimension . . . . . . . . . . . . . . . . . . . . . 19" }, { "heading": "D Additional Related Work 20", "text": "D.1 Softmax attention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20\nD.2 Sparse attention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21\nD.3 Linear attention: connections and differences . . . . . . . . . . . . . . . . . . . . 21\nD.4 Casting channel and spatial attention as lambda layers. . . . . . . . . . . . . . . . 22\nD.5 Self-Attention in the visual domain . . . . . . . . . . . . . . . . . . . . . . . . . 22\nD.6 HyperNetworks, expert models and context-dependent weights . . . . . . . . . . . 23" }, { "heading": "E Additional Experiments 24", "text": "E.1 Ablation study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24\nE.2 Hybrid models study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25\nE.3 Object detection and instance segmentation results . . . . . . . . . . . . . . . . . 26\nE.4 Parameter and FLOPs efficiency results . . . . . . . . . . . . . . . . . . . . . . . 27" }, { "heading": "F Experimental Details 29", "text": "F.1 Detailed LambdaResNets results . . . . . . . . . . . . . . . . . . . . . . . . . . . 29\nF.2 Architectural details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29\nF.3 Training details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30" }, { "heading": "A DISCUSSION", "text": "" }, { "heading": "A.1 GENERAL DISCUSSION", "text": "How do lambda layers compare to the attention operation? Lambda layers scale favorably compared to self-attention. 
Vanilla Transformers using self-attention have Θ(blhn²) memory footprint, whereas LambdaNetworks have Θ(lkn²) memory footprint (or Θ(kn²) when sharing positional embeddings across layers). This enables the use of lambda layers at higher resolution and on larger batch sizes. Additionally, the lambda convolution enjoys a simpler and faster implementation than its local self-attention counterpart. Finally, our ImageNet experiments show that lambda layers outperform self-attention, demonstrating that the benefits of lambda layers go beyond improved speed and scalability.\nHow are lambda layers different than linear attention mechanisms? Lambda layers generalize and extend linear attention formulations to capture position-based interactions, which is crucial for modeling highly structured inputs such as images (see Table 9 in Appendix E.1). As the aim is not to approximate an attention kernel, lambda layers allow for more flexible non-linearities and normalizations which we also find beneficial (see Table 11 in Appendix E.1). Finally, we propose multi-query lambda layers as a means to reduce complexity compared to the multi-head (or single-head) formulation typically used in linear attention works. Appendix D.3 presents a detailed discussion of linear attention.\nHow to best use lambda layers in the visual domain? The improved scalability, speed and ease of implementation of lambda layers compared to global or local attention makes them a strong candidate for use in the visual domain. Our ablations demonstrate that lambda layers are most beneficial in the intermediate and low-resolution stages of vision architectures when optimizing for the speed-accuracy tradeoff. It is also possible to design architectures that rely exclusively on lambda layers which can be more parameter and flops efficient. We discuss practical modeling recommendations in Appendix B.
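As a rough, back-of-the-envelope illustration of these footprints, the sketch below compares per-layer attention-map memory against lambda position-embedding memory (assuming float32 activations and a single layer; an estimate, not a profiler measurement):

```python
def attention_map_bytes(b, h, n, bytes_per=4):
    # per-example attention maps: Theta(b h n^2)
    return b * h * n * n * bytes_per

def lambda_embedding_bytes(k, n, m, bytes_per=4):
    # relative position embeddings: Theta(k n m), shared across the batch
    return k * n * m * bytes_per

b, h, n = 128, 8, 64 * 64          # batch of 128 images at 64x64 resolution
print(attention_map_bytes(b, h, n) / 1e9)        # ~68.7e9 bytes (~64 GiB), matching footnote [2]
print(lambda_embedding_bytes(16, n, n) / 1e9)    # ~1.07e9 bytes with |k|=16
```
" }, { "heading": "A.2 EXTENDING LAMBDA LAYERS TO OTHER MODALITIES", "text": "While this work focuses on static image recognition, we note that lambda layers may be instantiated to model structured interactions on structures as diverse as graphs, time series, spatial lattices, etc. We anticipate that lambda layers will be helpful in more modalities, including multimodal tasks. We discuss masked contexts and auto-regressive tasks in Appendix C.2.\nLambda layers can be instantiated on other tasks simply by adapting the choice of structural/position embeddings to the task of interest and following the pseudo-code presented in Figure 3. The choice of embeddings dictates the memory costs of the lambda layer. The assumption underlying the Θ(knm) space complexity of the lambda layer (Section 3.2) is that all examples in the batch share the same structure, i.e. relative position embeddings have shape k × n × m. This assumption does not hold when the data structure is different across examples (e.g. graphs with variable edge relations between nodes), in which case embeddings have shape b × k × n × m. In such cases, the lambda layer has Θ(bknm) space complexity, similar to self-attention." }, { "heading": "B PRACTICAL MODELING RECOMMENDATIONS", "text": "I want to make it faster on TPUs/GPUs... Hybrid models reach a better speed-accuracy tradeoff. Global contexts can be computationally wasteful, especially in the early high resolution layers where features lack semantic information, and can be replaced by lambda convolutions with smaller scopes (e.g. |m|=5x5 or 7x7) or the standard 3x3 convolution. 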
Additionally, using a hybrid can require less tuning when starting from a working model/training setup.\nI want to minimize FLOPs (e.g. embedded applications)... Consider a hybrid with inverted bottlenecks, as done in Section E.4.2. To further reduce FLOPs, prefer lambda convolutions with smaller scopes (e.g. |m|=5x5 or 7x7).\nI encounter memory issues... Memory footprint can be reduced by sharing position embeddings across layers (especially layers with the highest resolution). Using the lambda convolution is more memory efficient. Reducing the query depth |k| or increasing the number of heads |h| also decreases memory consumption.\nI'm experiencing instability... We found it important to initialize the γ parameter in the last batchnorm layer of the ResNet's bottleneck blocks to 0 (this is the default in most codebases). Normalizing the keys (i.e. with the softmax) along the context's length is important. Early experiments which employed 2 lambda layers sequentially in the same residual block were unstable, suggesting that using 2 lambda layers in sequence should be avoided.\nWhich implementation of the lambda convolution should I use? In our experiments using Tensorflow 1.x on TPUv3 hardware, we found both the n-d depthwise and (n+1)-d convolution implementations to have similar speed. We point out that this can vary across software/hardware stacks.\nWhat if my task doesn't require position-based interactions? Computational costs in the lambda layer are dominated by position-based interactions. If your task doesn't require them, you can try the content-only lambda layer or any other linear attention mechanism. We recommend using the multi-query formulation (as opposed to the usual multi-head) and scaling other dimensions of the model." }, { "heading": "C ADDITIONAL VARIANTS", "text": "" }, { "heading": "C.1 COMPLETE CODE WITH LAMBDA CONVOLUTION", "text": "
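The code listing this appendix originally contained is not preserved in this extraction. Below is a minimal 1d NumPy sketch of the positional-lambda computation over a local scope; the paper's actual implementations use framework n-d depthwise or (n+1)-d convolution primitives, so this should be read as illustrative only.

```python
import numpy as np

def lambda_conv_1d(V, R):
    """Position lambdas from a local 1d context, computed as a convolution.
    V: (m, v) values; R: (s, k) relative embeddings for an odd scope size s.
    Returns (m, k, v) position lambdas ('same' padding, so n = m)."""
    m, v = V.shape
    s, k = R.shape
    pad = s // 2
    Vp = np.pad(V, ((pad, pad), (0, 0)))
    out = np.zeros((m, k, v))
    for offset in range(s):                 # shifted sums == an s-tap convolution
        out += np.einsum('k,mv->mkv', R[offset], Vp[offset:offset + m])
    return out

V = np.random.default_rng(0).normal(size=(32, 8))
R = np.random.default_rng(1).normal(size=(7, 4))   # scope |m|=7, |k|=4
print(lambda_conv_1d(V, R).shape)                  # (32, 4, 8)
```
" }, { "heading": "C.2 GENERATING LAMBDAS FROM MASKED CONTEXTS", "text": "In some applications, such as denoising tasks or auto-regressive training, it is necessary to restrict interactions to a sub-context C_n ⊂ C when generating λ_n for query position n. For example, parallel auto-regressive training requires masking the future to ensure that the output y_n only depends on past context positions m < n. Self-attention achieves this by zeroing out the irrelevant attention weights a_nm′ = 0 ∀m′ ∉ C_n, thus guaranteeing that y_n = Σ_m a_nm v_m only depends on C_n.\nSimilarly, one can block interactions between queries and masked context positions when generating lambdas by applying a mask before summing the contributions of context positions. As long as the mask is shared across all elements in the batch, computing masked lambdas does not require materializing per-example attention maps and the complexities are the same as for the global context case. See Figure 6 for an implementation. Figure 6 itself is not included in this extraction; a plausible sketch of the masking step, reusing the single-query shapes from Section 3.1, follows:

```python
import numpy as np

def masked_lambdas(K_bar, V, E, mask):
    """mask: (n, m) 0/1 array, shared across the batch; zeroes out masked
    context positions before summing, so y_n only depends on C_n."""
    # (n, m, k): broadcast keys over query positions, add embeddings, apply mask
    contrib = (K_bar[None, :, :] + E) * mask[:, :, None]
    return np.einsum('nmk,mv->nkv', contrib, V)   # (n, k, v) lambdas

n = m = 6
rng = np.random.default_rng(0)
K_bar, V, E = rng.normal(size=(m, 4)), rng.normal(size=(m, 8)), rng.normal(size=(n, m, 4))
causal = np.tril(np.ones((n, m)))                 # mask out future positions
lam = masked_lambdas(K_bar, V, E, causal)
```
" }, { "heading": "C.3 MULTI-HEAD VS MULTI-QUERY LAMBDA LAYERS", "text": "In this section, we motivate using a multi-query formulation as opposed to the usual multi-head formulation used in self-attention. Figure 7 presents the implementation of a multi-head lambda layer. Table 6 compares complexities for multi-head and multi-query lambda layers. Using a multi-query formulation reduces computations by a factor of |h| (the number of queries per lambda) compared to the multi-head formulation. We also found in early experimentation that multi-query lambdas yield a better speed-accuracy trade-off. 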
Additionally, the multi-head lambda layer does not enjoy a simple local implementation the way the lambda convolution does." }, { "heading": "C.4 ADDING EXPRESSIVITY WITH AN EXTRA DIMENSION", "text": "We briefly experiment with a variant that enables increasing the cost of computing the lambdas while keeping the cost of applying them constant. This is achieved by introducing an additional dimension, termed the intra-depth, with corresponding hyperparameter |u|, in keys, position embeddings and values. Each key (or positional embedding) is now a |k| × |u| matrix instead of a |k|-dimensional vector. Similarly, each value is now a |v| × |u| matrix instead of a |v|-dimensional vector. The lambdas are obtained via summing over context positions and the intra-depth positions |u| and have |k| × |v| shape similar to the default case. See Figure 8 for an implementation and Table 7 for the complexities. Experiments (see Appendix E.1) demonstrate that this variant results in accuracy improvements, but we find that using |u|=1 (i.e. the default case) is optimal when controlling for speed on modern machine learning accelerators." }, { "heading": "D ADDITIONAL RELATED WORK", "text": "In this section, we review the attention operation and related works on improving its scalability. We discuss connections between lambda layers and channel, spatial or linear attention mechanisms and show how they can be cast as less flexible specific instances of lambda layers. We conclude with a brief review of self-attention in the visual domain and discuss connections with expert models." }, { "heading": "D.1 SOFTMAX ATTENTION", "text": "Softmax attention Softmax attention produces a distribution over the context for each query q_n as a_n = softmax(K q_n) ∈ R^{|m|}, where the keys K are obtained from the context C. The attention distribution a_n is then used to form a linear combination of values V obtained from the context as y_n = V^T a_n = Σ_m a_nm v_m ∈ R^{|v|}. As we take a weighted sum of the values [12], we transform the query q_n into the output y_n and discard its attention distribution a_n. This operation captures content-based interactions, but not position-based interactions.\nRelative attention In order to model position-based interactions, relative attention (Shaw et al., 2018) introduces a learned matrix of |m| positional embeddings E_n ∈ R^{|m|×|k|} and computes the attention distribution as a_n = softmax((K + E_n) q_n) ∈ R^{|m|}. The attention distribution now also depends on the query position n relative to the positions of context elements m. Relative attention therefore captures both content-based and position-based interactions.\n[12] Sometimes the attention operation is instead used to point to specific context elements (Vinyals et al., 2015; Bello et al., 2016), which is not supported by lambda layers.
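For contrast with the lambda layer, a single-query NumPy sketch of content-only and relative softmax attention as defined above (random projections; shapes as in Section 2):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

m, k, v = 32, 8, 16
rng = np.random.default_rng(0)
q_n = rng.normal(size=(k,))          # a single query
K = rng.normal(size=(m, k))          # keys from the context
V = rng.normal(size=(m, v))          # values from the context
E_n = rng.normal(size=(m, k))        # position embeddings e_nm for query position n

a_content = softmax(K @ q_n)         # content-only attention distribution, (m,)
y_content = V.T @ a_content          # y_n = V^T a_n, (v,)

a_rel = softmax((K + E_n) @ q_n)     # relative attention distribution
y_rel = V.T @ a_rel                  # captures content- and position-based interactions
```
" }, { "heading": "D.2 SPARSE ATTENTION", "text": "A significant challenge in applying (relative) attention to large inputs comes from the quadratic Θ(bnm) memory footprint required to store attention maps. Many recent works therefore propose to impose specific patterns on the attention maps as a means to reduce the context size |m| and consequently the memory footprint of the attention operation. These approaches include:\n• local attention patterns (Dai et al., 2019; Parmar et al., 2018; Ramachandran et al., 2019)\n• axial attention patterns (Ho et al., 2019; Wang et al., 2020a; Shen et al., 2020)\n• static sparse attention patterns (Child et al., 2019; Beltagy et al., 2020)\n• dynamic sparse attention patterns (Kitaev et al., 2020)\nSee Tay et al. 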
(2020) for a review. Their implementations can be rather complex, sometimes require low-level kernel implementations to get computational benefits or may rely on specific assumptions on the shape of the inputs (e.g., axial attention). In contrast, lambda layers are simple to implement for both global and local contexts using simple einsum and convolution primitives and capture dense content and position-based interactions with no assumptions on the input shape." }, { "heading": "D.3 LINEAR ATTENTION: CONNECTIONS AND DIFFERENCES", "text": "Another approach to reducing the computational requirements of attention mechanisms consists in approximating the attention operation in linear space and time complexity, which is referred to as linear (or efficient) attention. Linear attention mechanisms date back to de Brébisson & Vincent (2016); Britz et al. (2017) and were later introduced in the visual domain by Chen et al. (2018); Shen et al. (2018). They are recently enjoying a resurgence of popularity with many works modifying the popular Transformer architecture for sequential processing applications (Katharopoulos et al., 2020; Wang et al., 2020b; Choromanski et al., 2020; Xiong et al., 2021).

Linear attention via kernel factorization Linear attention is typically obtained by reinterpreting attention as a similarity kernel and leveraging a low-rank kernel factorization as

Attention(Q, K, V) = softmax(QKᵀ)V ≈ φ(Q)(φ(K)ᵀV) (3)

for some feature function φ. Computing φ(K)ᵀV ∈ R|k|×|v| first bypasses the need to materialize the attention maps φ(Q)φ(K)ᵀ and the operation therefore has linear complexity with respect to the input length |n|. Multiple choices for the feature function φ have been proposed. For example, Katharopoulos et al. (2020) use φ(x) = elu(x) + 1, while Choromanski et al. (2020) use positive orthogonal random features to approximate the original softmax attention kernel. In the visual domain, both Chen et al. (2018) and Shen et al. (2018) use φ(x) = softmax(x). This choice is made to guarantee that the rows of the (non-materialized) attention maps φ(Q)φ(K)ᵀ sum to 1 as is the case in the regular attention operation.

We discuss the main differences between lambda layers and linear attention mechanisms.

1) Lambda layers extend linear attention to also consider position-based interactions. The kernel approximation from Equation 3 can be rewritten for a single query qn as

yn = (φ(K)ᵀV)ᵀφ(qn) (4)

which resembles the output of the content lambda yᶜn = (λᶜ)ᵀqn = (K̄ᵀV)ᵀqn from Equation 1. Lambda layers extend linear attention mechanisms to also consider position-based interactions as

yn = λᵀnqn = (λᶜ + λᵖn)ᵀqn = ((K̄ + En)ᵀV)ᵀqn (5)

In the above equation, computing the position (or content) lambda has Θ(bmkv) time complexity. As the position lambdas are not shared across query positions n, this cost is repeated for all |n| queries, leading to a total time complexity Θ(bnmkv). Unlike linear attention mechanisms, lambda layers have quadratic time complexity with respect to the input length (in the global context case) because they consider position-based interactions.

2) Lambda layers do not necessarily attempt to approximate an attention kernel. While approximations of the attention kernel are theoretically motivated, we argue that they may be unnecessarily restrictive. For example, the kernel approximation in Equation 3 requires the same feature function φ on both Q and K and precludes the use of more flexible non-linearities and normalization schemes.
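To see concretely what such a kernel factorization looks like, here is a minimal sketch of Equations 3–4 (a generic illustration with an assumed feature map, not any specific paper's implementation; relu(x)+1 stands in for elu(x)+1):

```python
import numpy as np

def linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0.0) + 1.0):
    """Q: [n, k], K: [m, k], V: [m, v]. phi is the feature map (assumed here)."""
    Qf, Kf = phi(Q), phi(K)
    KV = Kf.T @ V                       # [k, v], computed once: O(mkv)
    norm = Qf @ Kf.sum(axis=0)          # [n], row-normalizer of the implicit map
    return (Qf @ KV) / norm[:, None]    # [n, v]; the [n, m] map is never formed
```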
In contrast, lambda layers do not attempt to approximate an attention kernel. This simplifies their design and allows for more flexible non-linearity and normalization schemes, which we find useful in our ablations (See Table 11 in Appendix E.1). Considering the position embeddings independently of the keys notably enables a simple and efficient local implementation with the lambda convolution. Approximating the relative attention kernel would require normalizing the position embeddings with the keys (i.e., φ(K + En) instead of φ(K) + En), which cannot be implemented in the local context case with a convolution.

3) The lambda abstraction reveals the computational benefits of the multi-query formulation. Finally, this work proposes to abstract the K̄ᵀV and EᵀnV matrices as linear functions (the content and position lambdas) that are directly applied to the queries. The lambda abstraction reveals the benefits of the multi-query formulation (as opposed to the traditional multi-head attention formulation) as a means to reduce computational costs." }, { "heading": "D.4 CASTING CHANNEL AND SPATIAL ATTENTION AS LAMBDA LAYERS.", "text": "We show that the lambda abstraction generalizes channel and spatial attention mechanisms, both of which can be viewed as specific instances of lambda layers. This observation is consistent with our experiments which demonstrate that lambda layers outperform both channel and spatial attention while being more computationally efficient.

Channel attention Channel attention mechanisms, such as Squeeze-and-Excitation (SE) (Hu et al., 2018c;b) and FiLM layers (Perez et al., 2017), recalibrate features via cross-channel interactions by aggregating signals from the entire feature map. In particular, the SE operation can be written as ynk = wk qnk, where wk is the excitation weight for channel k in the query qn. This can be viewed as using a diagonal lambda which is shared across query positions, λn = diag(w1, · · · , w|k|). Channel attention mechanisms have proven useful to complement convolutions but cannot be used as a stand-alone layer as they discard spatial information.

Spatial attention Conversely, spatial attention mechanisms reweight each position based on signals aggregated from all channels (Xu et al., 2015; Park et al., 2018; Woo et al., 2018). These mechanisms can be written as ynk = wn qnk, where wn is the attention weight for position n in the input query Q. This can be viewed as using (position-dependent) scalar lambdas λn = wnI, where I is the identity matrix. Spatial attention has also proven helpful to complement convolutions but cannot be used as a stand-alone layer as it discards channel information.
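As a toy illustration of this correspondence, the following sketch (with randomly generated weights standing in for the outputs of the squeeze-and-excitation or spatial gating networks) expresses both mechanisms as lambdas:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 64, 16
q = rng.standard_normal((n, k))      # queries [positions, channels]

# Channel attention (SE): one diagonal lambda shared by every position.
w_channel = rng.random(k)            # per-channel excitation weights
lambda_se = np.diag(w_channel)       # lambda_n = diag(w_1, ..., w_|k|)
y_channel = q @ lambda_se            # y_nk = w_k * q_nk

# Spatial attention: a position-dependent scalar lambda.
w_spatial = rng.random((n, 1))       # per-position weights
y_spatial = w_spatial * q            # lambda_n = w_n * I  =>  y_nk = w_n * q_nk
```
" }, { "heading": "D.5 SELF-ATTENTION IN THE VISUAL DOMAIN", "text": "Self-attention has been used in a myriad of tasks in the visual domain.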
These include image classification (Bello et al., 2019; Ramachandran et al., 2019; Cordonnier et al., 2019; Zhao et al., 2020; Wu et al., 2020; Dosovitskiy et al., 2020); object detection and object-centric tasks (Wang et al., 2018; Hu et al., 2018a; Carion et al., 2020; Locatello et al., 2020); video tasks (Sun et al., 2019; Liao et al., 2019); autoregressive/adversarial generative modeling (Parmar et al., 2018; Zhang et al., 2019; Brock et al., 2019; Chen et al., 2020a) and multi-modal text-vision tasks (Chen et al., 2020b; Lu et al., 2019; Li et al., 2019; Radford et al., 2021).

The first use of self-attention in vision dates back to the non-local block (Wang et al., 2018), which added a single-head global self-attention residual in the low resolution stages of a ConvNet for long-range dependency modeling. The non-local block has proven useful to complement convolutions but cannot be used as a stand-alone layer as it does not model position-based interactions.

Global relative attention replaces convolutions at low resolution. Bello et al. (2019) introduced a 2d relative attention mechanism that proved competitive as a replacement for convolutions but gives even stronger results when used to concatenate convolutional features with self-attention features. The spatial convolutions in the bottleneck block of the ResNet architecture were replaced with a global multi-head self-attention mechanism with 2d relative position embeddings. Due to the large memory constraints of global attention, this operation was restricted to low resolution feature maps and the proposed architecture was a conv-transformer hybrid.

A similar hybrid design has recently been revisited by Srinivas et al. (2021) using modern training and scaling techniques. Srinivas et al. (2021), rather than concatenating convolutional feature maps, propose to use a stride of 1 in the last stage of the ResNet architecture for improved performance.

Local/axial relative attention replaces convolutions at high resolution. The large memory footprint of global attention was quickly addressed by multiple works which proposed to limit the size of the attention contexts, such as local attention (Ramachandran et al., 2019; Hu et al., 2019) and axial attention (Ho et al., 2019; Wang et al., 2020a; Shen et al., 2020) (See Section D.2). Such approaches enable using attention at higher resolution and facilitate fully-attentional models but can be slow due to the use of specialized attention patterns.

Scaling trumps inductive bias Concurrently to this work, ViT (Dosovitskiy et al., 2020) proposes to simply apply attention on pixel patches (as opposed to individual pixels) as a remedy to large memory requirements. While patch-based attention does not maintain accurate positional information or translation equivariance, the loss of inductive bias is recovered by pre-training on large-scale datasets (e.g. 300M images). Most remarkably, ViT achieves close to state-of-the-art accuracy when fine-tuned on the ImageNet dataset, while requiring less training compute than convolutional alternatives (Kolesnikov et al., 2020; Xie et al., 2020). This result has reinvigorated interest in using self-attention in the visual domain with multiple follow-up works already building upon this approach (Touvron et al., 2021)13.
In spite of the impressive image classification results, concerns remain as to whether the patch-based approach can scale to larger images and transfer to tasks that require precise localization such as detection.

We stress that reducing memory by working with pixel patches is orthogonal to the specific operation used and that lambda layers (or linear attention) can also be used on pixel patches." }, { "heading": "D.6 HYPERNETWORKS, EXPERT MODELS AND CONTEXT-DEPENDENT WEIGHTS", "text": "LambdaNetworks generate their own computations, i.e. lambdas such that yn = λᵀnqn. As such, they can alternatively be viewed as an extension of HyperNetworks (Ha et al., 2016) that dynamically generate their computations based on structured contextual information. The concept of generating context-dependent weights is also related to fast weights (Ba et al., 2016).

Lastly, LambdaNetworks share some connections with sparsely-activated expert models (Shazeer et al., 2017; Fedus et al., 2021). Whereas sparsely-activated expert models select the computation (i.e. the lambda) from a bank of weights based on the input query, LambdaNetworks generate their computations based on contextual information.

13Most follow-up works advertise improvements over ViT on smaller datasets, which is not the intended purpose of ViT." }, { "heading": "E ADDITIONAL EXPERIMENTS", "text": "" }, { "heading": "E.1 ABLATION STUDY", "text": "We perform several ablations and validate the importance of positional interactions, long-range interactions and flexible normalization schemes. Unless specified otherwise, all experimental results in this section report ImageNet accuracies obtained by training a LambdaNetwork architecture that replaces the spatial convolutions in the ResNet-50 with lambda layers.

Varying query depth, number of heads and intra-depth. Table 8 presents the impact of the query depth |k|, number of heads |h| and intra-depth |u| on performance (See Appendix C.4 for a presentation of the intra-depth |u|). Our experiments indicate that the lambda layer outperforms convolutional and attentional baselines for a wide range of hyperparameters, demonstrating the robustness of the method.

Content vs position interactions Table 9 presents the relative importance of content-based and position-based interactions on the ImageNet classification task. We find that position-based interactions are crucial to reach high accuracies, while content-based interactions only bring marginal improvements over position-based interactions14.

14This observation is challenged by concurrent work (Dosovitskiy et al., 2020) which demonstrates that content-based interactions can be sufficient for image classification when pre-training on large scale datasets (e.g. 300M images).

Importance of scope size The small memory footprint of LambdaNetworks enables considering global contexts, even at relatively high resolution. Table 10 presents flops counts and top-1 ImageNet accuracies when varying scope sizes in a LambdaNetwork architecture. We find benefits from using larger scopes, with a plateau around |m|=15x15, which validates the importance of longer range interactions compared to the usual 3x3 spatial convolutions used in the ResNet architecture. In our main experiments, we choose |m|=23x23 as the default to account for experiments that use larger image sizes.

Normalization Table 11 ablates normalization operations in the design of the lambda layer.
We find that normalizing the keys is crucial for performance and that other normalization functions besides the softmax can be considered. Applying batch normalization to the queries and values is also helpful." }, { "heading": "E.2 HYBRID MODELS STUDY", "text": "In this section, we study hybrid designs that use standard convolutions to capture local contexts and lambda layers to capture global contexts.15

Where are lambda layers most useful? Table 12 presents the throughputs and accuracies of hybrid LambdaNetwork architectures as a function of the location of convolutions and lambda layers in a ResNet-50 architecture. We observe that lambda layers are most helpful in the last two stages (commonly referred to as c4 and c5) when considering their speed-accuracy tradeoff. We refer to architectures that replace 3x3 convolutions in the last 2 stages of the ResNet with lambda layers as LambdaResNet-C4.

Further pushing the speed-accuracy Pareto frontier. In Table 13, we further study how throughput and accuracy are impacted by the number of lambda layers in the c4 stage. Our results reveal that most benefits from lambda layers can be obtained by (a) replacing a few 3x3 convolutions with lambda layers in the c4 stage and (b) replacing all 3x3 convolutions in c5. The resulting hybrid LambdaResNets architectures have increased representational power at a virtually negligible decrease in throughput compared to their vanilla ResNet counterparts. Table 19 presents the detailed block configurations and placement of lambda layers for our family of LambdaResNets.

15We could alternatively use the lambda convolution to capture local contexts.

Comparing hybrid lambda vs attention models. The memory savings of lambda layers compared to attention are less significant in the aforementioned hybrid design, since the operations occur at lower resolution. Therefore, it is natural to ask whether lambda layers still have benefits over self-attention when considering hybrid designs. We consider our largest hybrid as an example (see Table 19). LambdaResNet-420 is trained on 320x320 inputs, employs 8 lambda layers in c4 and can fit 32 examples per TPU-v3 core. This adds up to a cost of 38.4MB for lambda layers (4.8MB if sharing positional embeddings), whereas using attention layers instead would incur 0.625GB. The increase might not be significant in practice and it will be interesting to carefully benchmark the hybrid attention variants16. We point out that experiments from Table 4 suggest that the benefits of lambda layers go beyond improved scalability, and we stress that the memory savings are more pronounced for tasks that require larger inputs such as object detection." }, { "heading": "E.3 OBJECT DETECTION AND INSTANCE SEGMENTATION RESULTS", "text": "In Table 14, we evaluate LambdaResNets as a backbone in Mask-RCNN (He et al., 2017) on the COCO object detection and instance segmentation tasks. Using lambda layers yields consistent gains across all object sizes, especially the small objects which are the hardest to locate. This indicates that lambda layers are also competitive for more complex visual tasks that require localization information.

16We will benchmark such architectures in a future version of this draft." }, { "heading": "E.4 PARAMETER AND FLOPS EFFICIENCY RESULTS", "text": "" }, { "heading": "E.4.1 COMPUTATIONAL EFFICIENCY COMPARISONS TO LARGE EFFICIENTNETS", "text": "In Table 15 and Table 16, we showcase the parameter and flops-efficiency of LambdaNetworks.
We find that LambdaResNet-C4, which replaces the 3x3 convolutions in the last 2 stages of the ResNet architecture, where they incur the highest parameter costs, improves upon the parameter and flops efficiency of large EfficientNets. These results are significant because EfficientNets were specifically designed by neural architecture search (Zoph & Le, 2017) to minimize computational costs using highly computationally efficient depthwise convolutions (Tan & Le, 2019)." }, { "heading": "E.4.2 LAMBDA LAYERS IN A RESOURCE CONSTRAINED SCENARIO", "text": "Lastly, we briefly study lambda layers in a resource-constrained scenario using the MobileNetv2 architecture (Sandler et al., 2018). MobileNets (Howard et al., 2017; Sandler et al., 2018; Howard et al., 2019) employ lightweight inverted bottleneck blocks which consist of the following sequence: 1) a pointwise convolution for expanding the number of channels, 2) a depthwise convolution for spatial mixing and 3) a final pointwise convolution for channel mixing. The use of a depthwise convolution (as opposed to a regular convolution) reduces parameters and flops, making inverted bottlenecks particularly well-suited for embedded applications.

Lightweight lambda block. We construct a lightweight lambda block as follows. We replace the depthwise convolution in the inverted bottleneck with a lambda convolution with small scope size |m|=5x5, query depth |k|=32 and number of heads |h|=4. We also change the first pointwise convolution to output the same number of channels (instead of increasing the number of channels) to further reduce computations.

Adding lambda layers in MobileNetv2. We wish to assess whether lambda layers can improve the flops-accuracy (or parameter-accuracy) tradeoff of mobilenet architectures. We experiment with a simple strategy of replacing a few inverted bottlenecks with our proposed lightweight lambda block, so that the resulting architectures have similar computational demands as their baselines. A simple procedure of replacing the 10-th and 16-th inverted bottleneck blocks with lightweight lambda blocks in the MobileNet-v2 architecture reduces parameters and flops by ∼10% while improving ImageNet accuracy by 0.6%. This suggests that lambda layers may be well suited for use in resource-constrained scenarios such as embedded vision applications (Howard et al., 2017; Sandler et al., 2018; Howard et al., 2019)." }, { "heading": "F EXPERIMENTAL DETAILS", "text": "" }, { "heading": "F.1 DETAILED LAMBDARESNETS RESULTS", "text": "" }, { "heading": "F.2 ARCHITECTURAL DETAILS", "text": "Lambda layer implementation details Unless specified otherwise, all lambda layers use query depth |k|=16, |h|=4 heads and intra-depth |u|=1. The position lambdas are generated with local contexts of size |m|=23x23 and the content lambdas with the global context using the einsum implementation as described in Figure 3. Local positional lambdas can be implemented interchangeably with the lambda convolution or by using the global einsum implementation and masking the position embeddings outside of the local contexts (Figure 5; a sketch of this masking is given below). The latter can be faster but has higher FLOPS and memory footprint due to the Θ(knm) term (see Table 2). In our experiments, we use the convolution implementation only for input length |n| > 852 or intra-depth |u| > 1. When the intra-depth is increased to |u| > 1, we switch to the convolution implementation and reduce the scope size to |m|=7x7 to reduce flops.
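The masking alternative mentioned above can be sketched as follows (a simplified 1d illustration with assumed shapes, not the paper's code; the same idea with a causal mask yields the masked lambdas of Section C.2):

```python
import numpy as np

def mask_position_embeddings(E, scope):
    """Zero out relative position embeddings outside a local window.
    E: [n, m, k] position embeddings, scope: odd local window size."""
    n, m, _ = E.shape
    query_pos = np.arange(n)[:, None]
    context_pos = np.arange(m)[None, :]
    keep = np.abs(query_pos - context_pos) <= scope // 2   # [n, m] local mask
    return E * keep[:, :, None]
```

Since the mask depends only on positions, it is shared across the batch and the masked position lambdas can still be computed with the same einsums as in the global case.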
Positional embeddings are initialized at random using the unit normal distribution N(0, 1). We use fan-in initialization for the linear projections in the lambda layer. The projections to compute K and V are initialized at random with the N(0, |d|^(−1/2)) distribution. The projection to compute Q is initialized at random with the N(0, |kd|^(−1/2)) distribution (this is similar to the scaled dot-product attention mechanism, except that the scaling is absorbed in the projection). We apply batch normalization on Q and V, and the keys K are normalized via a softmax operation.

ResNets. We use the ResNet-v1 implementation and initialize the γ parameter in the last batch normalization (Ioffe & Szegedy, 2015) layer of the bottleneck blocks to 0. Squeeze-and-Excitation layers employ a squeeze ratio of 4. Similarly to ResNet-RS (Bello et al., 2021), we use the ResNet-D modifications (He et al., 2018) and additionally replace the max pooling layer in the stem by a strided 3x3 convolution. Our block allocation and scaling strategy (i.e. selected resolution as a function of model depth) also closely follows the scaling recommendations from ResNet-RS (Bello et al., 2021).

LambdaResNets. We construct our LambdaResNets by replacing the spatial 3x3 convolutions in the bottleneck blocks of the ResNet-RS architectures by our proposed lambda layer, with the exception of the stem which is left unchanged. We apply 3x3 average-pooling with stride 2 after the lambda layers to downsample in place of the strided convolution. Lambda layers are uniformly spaced in the c4 stage and all bottlenecks in c5 use lambda layers. Table 19 presents the exact block configuration and the location of the lambda layers for our hybrid LambdaResNets. We do not use squeeze-and-excitation in the bottleneck blocks that employ a lambda layer instead of the standard 3x3 convolution." }, { "heading": "F.3 TRAINING DETAILS", "text": "ImageNet training setups. We consider two training setups for the ImageNet classification task. The 90 epochs training setup trains models for 90 epochs using standard preprocessing and allows for fair comparisons with classic works. The 350 epochs training setup trains models for 350 epochs using improved data augmentation and regularization and is closer to training methodologies used in modern works with state-of-the-art accuracies.

Supervised ImageNet 90 epochs training setup with vanilla ResNet. In the 90 epoch setup, we use the vanilla ResNet for fair comparison with prior works. We used the default hyperparameters as found in official implementations without doing additional tuning. All networks are trained end-to-end for 90 epochs via backpropagation using SGD with momentum 0.9. The batch size B is 4096 distributed across 32 TPUv3 cores (Jouppi et al., 2017) and the weight decay is set to 1e-4. The learning rate is scaled linearly from 0 to 0.1B/256 for 5 epochs and then decayed using the cosine schedule (Loshchilov & Hutter, 2017). We use batch normalization with decay 0.9999 and exponential moving average with weight 0.9999 over trainable parameters, and a label smoothing of 0.1. The input image size is set to 224x224. We use standard training data augmentation (random crops and horizontal flip with 50% probability).

Most works compared against in Table 3 use a similar training setup and also replace the 3x3 spatial convolutions in the ResNet architecture by their proposed methods. We note that Ramachandran et al.
(2019) train for longer (130 epochs instead of 90) but do not use label smoothing, which could confound our comparisons.

Supervised ImageNet 350 epochs training setup. Higher accuracies on ImageNet are commonly obtained by training longer with increased augmentation and regularization (Lee et al., 2020; Tan & Le, 2019). Similarly to Bello et al. (2021), the weight decay is reduced to 4e-5 and we employ RandAugment (Cubuk et al., 2019) with 2 layers, dropout (Srivastava et al., 2014) and stochastic depth (Huang et al., 2016). See Table 20 for exact hyperparameters. All architectures are trained for 350 epochs with a batch size B of 4096 or 2048 distributed across 32 or 64 TPUv3 cores, depending on memory constraints.

We tuned our models using a held-out validation set comprising ∼2% of the ImageNet training set (20 shards out of 1024). We perform early stopping on the held-out validation set for the largest models, starting with LambdaResNet-350 at resolution 288x288, and simply report the final accuracies for the smaller models.

Semi-supervised learning with pseudo-labels. Our training setup closely follows the experimental setup from Xie et al. (2020). We use the same dataset of 130M filtered and balanced JFT images with pseudo-labels generated by an EfficientNet-L2 model with 88.4% ImageNet accuracy. Hyperparameters are the same as for the supervised ImageNet 350 epochs experiments.

Latency measurements. Figure 4 reports training latencies (i.e. time per training step) to process a batch of 1024 images on 8 TPUv3 cores using mixed precision training (i.e. bfloat16 activations). Training latency is originally measured on 8 TPUv3 cores, starting with a total batch size of 1024 (i.e. 128 per core) and dividing the batch size by 2 until it fits in memory. We then report the normalized latencies in Figure 4. For example, if latency was measured with a batch size of 512 (instead of 1024), we normalize the reported latency by multiplying the measured latency by 2. Table 4, Table 12 and Table 13 report inference throughput on 8 TPUv3 cores using full precision (i.e. float32 activations). Latency for ViT (Dosovitskiy et al., 2020) was privately communicated by the authors.

FLOPS count. We do not count zeroed out flops when computing positional lambdas with the einsum implementation from Figure 3. The flops count is highly dependent on the scope size, which is rather large by default (|m|=23x23). In Table 10, we show that it is possible to significantly reduce the scope size and therefore FLOPS at a minimal degradation in performance.

COCO object detection. We employ the architecture from the improved ImageNet training setup as the backbone in the Mask-RCNN architecture. All models are trained on 1024x1024 images from scratch for 130k steps with a batch size of 256 distributed across 128 TPUv3 cores with synchronized batch normalization. We apply multi-scale jitter of [0.1, 2.0] during training. The learning rate is warmed up for 1000 steps from 0 to 0.32 and divided by 10 at 90%, 95% and 97.5% of training. The weight decay is set to 4e-5.

Mobilenet training setup. All mobilenet architectures are trained for 350 epochs on ImageNet with standard preprocessing at 224x224 resolution. We use the same hyperparameters as Howard et al. (2019). More specifically, we use RMSProp with 0.9 momentum and a batch size of 4096 split across 32 TPUv3 cores. The learning rate is warmed up linearly to 0.1 and then multiplied by 0.99 every 3 epochs. We use a weight decay of 1e-5 and dropout with drop probability of 0.2.
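For concreteness, here is a sketch of this MobileNet learning-rate schedule (the warmup length is not specified in the text; 5 epochs is an assumed value for illustration):

```python
def mobilenet_lr(epoch, peak_lr=0.1, warmup_epochs=5, decay=0.99, decay_every=3):
    """Linear warmup to peak_lr, then multiply by `decay` every `decay_every`
    epochs, per the setup described above. warmup_epochs is an assumption."""
    if epoch < warmup_epochs:
        return peak_lr * (epoch + 1) / warmup_epochs
    return peak_lr * decay ** ((epoch - warmup_epochs) // decay_every)
```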
" } ]
2021
LAMBDANETWORKS: MODELING LONG-RANGE INTERACTIONS WITHOUT ATTENTION
SP:dc61f3b946fd4ff24d64e8a34483dd2bd0b1b333
[ "The paper deals with the problem of simultaneously learning node embeddings and detecting communities on graphs. Although both tasks are particularly important while analyzing networks, most of the proposed approaches address them independently. The paper proposes a generative model, called VECODER, that aims to jointly learn overlapping communities and node representations. The proposed model follows a variational formulation which assumes that the node embeddings are generated from a prior distribution; this can be used to control how community embeddings are sampled. This leads to an encoder-decoder architecture, where the decoder ensures that similar (i.e., connected) nodes will obtain similar embeddings. The proposed model has been empirically evaluated on three tasks (overlapping and non-overlapping community detection, and node classification), and the performance has been compared against various baseline models.", "This paper aims to learn node representations of graph to jointly satisfy node embedding properties and community detection property. Node embedding must preserve proximities guaranteeing that adjacent nodes are closer than others. Community detection must promote more similar clustering assignments to adjacent nodes than others. These two problems have been tackled separately or simultaneously but with maintaining two different node representations. The authors claim that the proposed VECoDeR is capable of learning a single community-aware node representation per node, which is jointly effective in both scenarios." ]
In this paper, we study how to simultaneously learn two highly correlated tasks of graph analysis, i.e., community detection and node representation learning. We propose an efficient generative model called VECODER for jointly learning Variational Embeddings for Community Detection and node Representation. VECODER assumes that every node can be a member of one or more communities. The node embeddings are learned in such a way that connected nodes are not only “closer” to each other but also share similar community assignments. A joint learning framework leverages community-aware node embeddings for better community detection. We demonstrate on several graph datasets that VECODER effectively outperforms many competitive baselines on all three tasks, i.e., node classification, overlapping community detection and non-overlapping community detection. We also show that VECODER is computationally efficient and has quite robust performance with varying hyperparameters.
[]
[ { "authors": [ "Yong-Yeol Ahn", "James P Bagrow", "Sune Lehmann" ], "title": "Link communities reveal multiscale complexity", "venue": "in networks. nature,", "year": 2010 }, { "authors": [ "Aleksandar Bojchevski", "Stephan Günnemann" ], "title": "Deep gaussian embedding of graphs: Unsupervised inductive learning via ranking", "venue": "arXiv preprint arXiv:1707.03815,", "year": 2017 }, { "authors": [ "Shaosheng Cao", "Wei Lu", "Qiongkai Xu" ], "title": "Grarep: Learning graph representations with global structural information", "venue": "In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management,", "year": 2015 }, { "authors": [ "Sandro Cavallari", "Vincent W Zheng", "Hongyun Cai", "Kevin Chen-Chuan Chang", "Erik Cambria" ], "title": "Learning community embedding with community detection and node embedding on graphs", "venue": "In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management,", "year": 2017 }, { "authors": [ "Imre Derényi", "Gergely Palla", "Tamás Vicsek" ], "title": "Clique percolation in random networks", "venue": "Physical review letters,", "year": 2005 }, { "authors": [ "Carl Doersch" ], "title": "Tutorial on variational autoencoders", "venue": "arXiv preprint arXiv:1606.05908,", "year": 2016 }, { "authors": [ "Rong-En Fan", "Kai-Wei Chang", "Cho-Jui Hsieh", "Xiang-Rui Wang", "Chih-Jen Lin" ], "title": "Liblinear: A library for large linear classification", "venue": "Journal of machine learning research,", "year": 2008 }, { "authors": [ "Sheng Gao", "Ludovic Denoyer", "Patrick Gallinari" ], "title": "Temporal link prediction by integrating content and structure information", "venue": "In Proceedings of the 20th ACM international conference on Information and knowledge management,", "year": 2011 }, { "authors": [ "Justin Gilmer", "Samuel S Schoenholz", "Patrick F Riley", "Oriol Vinyals", "George E Dahl" ], "title": "Neural message passing for quantum chemistry", "venue": "arXiv preprint arXiv:1704.01212,", "year": 2017 }, { "authors": [ "Prem K Gopalan", "David M Blei" ], "title": "Efficient discovery of overlapping communities in massive networks", "venue": "Proceedings of the National Academy of Sciences,", "year": 2013 }, { "authors": [ "Aditya Grover", "Jure Leskovec" ], "title": "node2vec: Scalable feature learning for networks", "venue": "In Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining,", "year": 2016 }, { "authors": [ "Will Hamilton", "Zhitao Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Eric Jang", "Shixiang Gu", "Ben Poole" ], "title": "Categorical reparameterization with gumbel-softmax", "venue": "arXiv preprint arXiv:1611.01144,", "year": 2016 }, { "authors": [ "Yuting Jia", "Qinqin Zhang", "Weinan Zhang", "Xinbing Wang" ], "title": "Communitygan: Community detection with generative adversarial nets", "venue": "In The World Wide Web Conference,", "year": 2019 }, { "authors": [ "Rayyan Ahmad Khan", "Muhammad Umer Anwaar", "Martin Kleinsteuber" ], "title": "Epitomic variational graph autoencoder", "venue": null, "year": 2020 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Thomas N Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional 
networks", "venue": "arXiv preprint arXiv:1609.02907,", "year": 2016 }, { "authors": [ "Thomas N Kipf", "Max Welling" ], "title": "Variational graph auto-encoders", "venue": "arXiv preprint arXiv:1611.07308,", "year": 2016 }, { "authors": [ "Mark Kozdoba", "Shie Mannor" ], "title": "Community detection via measure space embedding", "venue": "In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 2,", "year": 2015 }, { "authors": [ "Jure Leskovec", "Andrej" ], "title": "Krevl. SNAP Datasets: Stanford large network dataset collection", "venue": "http: //snap.stanford.edu/data,", "year": 2014 }, { "authors": [ "Jure Leskovec", "Julian J Mcauley" ], "title": "Learning to discover social circles in ego networks", "venue": "In Advances in neural information processing systems,", "year": 2012 }, { "authors": [ "Tomas Mikolov", "Kai Chen", "Greg Corrado", "Jeffrey Dean" ], "title": "Efficient estimation of word representations in vector space", "venue": "arXiv preprint arXiv:1301.3781,", "year": 2013 }, { "authors": [ "Bryan Perozzi", "Rami Al-Rfou", "Steven Skiena" ], "title": "Deepwalk: Online learning of social representations", "venue": "In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining,", "year": 2014 }, { "authors": [ "Leonardo FR Ribeiro", "Pedro HP Saverese", "Daniel R Figueiredo" ], "title": "struc2vec: Learning node representations from structural identity", "venue": "In Proceedings of the 23rd ACM SIGKDD international conference on knowledge discovery and data mining,", "year": 2017 }, { "authors": [ "Benedek Rozemberczki", "Ryan Davies", "Rik Sarkar", "Charles Sutton" ], "title": "Gemsec: Graph embedding with self clustering", "venue": "In Proceedings of the 2019 IEEE/ACM international conference on advances in social networks analysis and mining,", "year": 2019 }, { "authors": [ "Fan-Yun Sun", "Meng Qu", "Jordan Hoffmann", "Chin-Wei Huang", "Jian Tang" ], "title": "vgraph: A generative model for joint community detection and node representation learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Jian Tang", "Meng Qu", "Mingzhe Wang", "Ming Zhang", "Jun Yan", "Qiaozhu Mei" ], "title": "Line: Large-scale information network embedding", "venue": "In Proceedings of the 24th international conference on world wide web,", "year": 2015 }, { "authors": [ "Jiliang Tang", "Charu Aggarwal", "Huan Liu" ], "title": "Node classification in signed social networks", "venue": "In Proceedings of the 2016 SIAM international conference on data mining,", "year": 2016 }, { "authors": [ "Lei Tang", "Huan Liu" ], "title": "Leveraging social media networks for classification", "venue": "Data Mining and Knowledge Discovery,", "year": 2011 }, { "authors": [ "Cunchao Tu", "Xiangkai Zeng", "Hao Wang", "Zhengyan Zhang", "Zhiyuan Liu", "Maosong Sun", "Bo Zhang", "Leyu Lin" ], "title": "A unified framework for community detection and network representation learning", "venue": "IEEE Transactions on Knowledge and Data Engineering,", "year": 2018 }, { "authors": [ "Petar Velickovic", "William Fedus", "William L Hamilton", "Pietro Liò", "Yoshua Bengio", "R Devon Hjelm" ], "title": "Deep graph infomax", "venue": null, "year": 2019 }, { "authors": [ "Daixin Wang", "Peng Cui", "Wenwu Zhu" ], "title": "Structural deep network embedding", "venue": "In Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining,", "year": 2016 }, { 
"authors": [ "Xiao Wang", "Peng Cui", "Jing Wang", "Jian Pei", "Wenwu Zhu", "Shiqiang Yang" ], "title": "Community preserving network embedding", "venue": "In AAAI,", "year": 2017 }, { "authors": [ "Jierui Xie", "Stephen Kelley", "Boleslaw K Szymanski" ], "title": "Overlapping community detection in networks: The state-of-the-art and comparative study", "venue": "Acm computing surveys (csur),", "year": 2013 }, { "authors": [ "Jaewon Yang", "Jure Leskovec" ], "title": "Overlapping community detection at scale: a nonnegative matrix factorization approach", "venue": "In Proceedings of the sixth ACM international conference on Web search and data mining,", "year": 2013 }, { "authors": [ "Jaewon Yang", "Jure Leskovec" ], "title": "Defining and evaluating network communities based on groundtruth", "venue": "Knowledge and Information Systems,", "year": 2015 }, { "authors": [ "Jaewon Yang", "Julian McAuley", "Jure Leskovec" ], "title": "Community detection in networks with node attributes", "venue": "IEEE 13th International Conference on Data Mining,", "year": 2013 }, { "authors": [ "Liang Yang", "Xiaochun Cao", "Dongxiao He", "Chuan Wang", "Xiao Wang", "Weixiong Zhang" ], "title": "Modularity based community detection with deep learning", "venue": "In IJCAI,", "year": 2016 }, { "authors": [ "Zhilin Yang", "William Cohen", "Ruslan Salakhudinov" ], "title": "Revisiting semi-supervised learning with graph embeddings", "venue": "In International conference on machine learning,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Graphs are flexible data structures that model complex relationships among entities, i.e. data points as nodes and the relations between nodes via edges. One important task in graph analysis is community detection, where the objective is to cluster nodes into multiple groups (communities). Each community is a set of densely connected nodes. The communities can be overlapping or non-overlapping, depending on whether they share some nodes or not. Several algorithmic (Ahn et al., 2010; Derényi et al., 2005) and probabilistic approaches (Gopalan & Blei, 2013; Leskovec & Mcauley, 2012; Wang et al., 2017; Yang et al., 2013) to community detection have been proposed. Another fundamental task in graph analysis is learning the node embeddings. These embeddings can then be used for downstream tasks like graph visualization (Tang et al., 2016; Wang et al., 2016; Gao et al., 2011; Wang et al., 2017) and classification (Cao et al., 2015; Tang et al., 2015).\nIn the literature, these tasks are usually treated separately. Although the standard graph embedding methods capture the basic connectivity, the learning of the node embeddings is independent of community detection. For instance, a simple approach can be to get the node embeddings via DeepWalk (Perozzi et al., 2014) and get community assignments for each node by using k-means or Gaussian mixture model. Looking from the other perspective, methods like Bigclam (Yang & Leskovec (2013)), that focus on finding the community structure in the dataset, perform poorly for node-representation tasks e.g. node classification. This motivates us to study the approaches that jointly learn community-aware node embeddings.\nRecently several approaches, like CNRL (Tu et al., 2018), ComE (Cavallari et al., 2017), vGraph (Sun et al. (2019)) etc, have been proposed to learn the node embeddings and detect communities simultaneously in a unified framework. Several studies have shown that community detection is improved by incorporating the node representation in the learning process (Cao et al., 2015; Kozdoba & Mannor, 2015). The intuition is that the global structure of graphs learned during community detection can provide useful context for node embeddings and vice versa.\nThe joint learning methods (CNRL, ComE and vGraph) learn two embeddings for each node. One node embedding is used for the node representation task. The second node embedding is the “context” embedding of the node which aids in community detection. As CNRL and ComE are based on Skip-Gram (Mikolov et al., 2013) and DeepWalk (Perozzi et al., 2014), they inherit “context” embedding from it for learning the neighbourhood information of the node. vGraph also requires two\nnode embeddings for parameterizing two different distributions. In contrast, we propose learning a single community-aware node representation which is directly used for both tasks. In this way, we not only get rid of an extraneous node embedding but also reduce the computational cost.\nIn this paper, we propose an efficient generative model called VECODER for jointly learning both community detection and node representation. The underlying intuition behind VECODER is that every node can be a member of one or more communities. However, the node embeddings should be learned in such a way that connected nodes are “closer” to each other than unconnected nodes. Moreover, connected nodes should have similar community assignments. 
Formally, we assume that for the i-th node, the node embeddings zi are generated from a prior distribution p(z). Given zi, the community assignments ci are sampled from p(ci|zi), which is parameterized by node and community embeddings. In order to generate an edge (i, j), we sample another node embedding zj from p(z) and the respective community assignment cj from p(cj |zj). Afterwards, the node embeddings and the respective community assignments of node pairs are fed to a decoder. The decoder ensures that embeddings of both the nodes and the communities of connected nodes share high similarity. This enables learning such node embeddings that are useful for both community detection and node representation tasks.

We validate the effectiveness of our approach on several real-world graph datasets. In Sec. 4, we show empirically that VECODER is able to outperform the baseline methods, including the direct competitors, on all three tasks, i.e., node classification, overlapping community detection and non-overlapping community detection. Furthermore, we compare the computational cost of training different algorithms. VECODER is up to 40x more time-efficient than its competitors. We also conduct a hyperparameter sensitivity analysis which demonstrates the robustness of our approach. Our main contributions are summarized below:

• We propose an efficient generative model called VECODER for joint community detection and node representation learning.

• We adopt a novel approach and argue that a single node embedding is sufficient for learning both the representation of the node itself and its context.

• Training VECODER is extremely time-efficient in comparison to its competitors." }, { "heading": "2 RELATED WORK", "text": "Community Detection. Early community detection algorithms are inspired by clustering algorithms (Xie et al., 2013). For instance, spectral clustering (Tang & Liu, 2011) is applied to the graph Laplacian matrix for extracting the communities. Similarly, several matrix factorization based methods have been proposed to tackle the community detection problem. For example, Bigclam (Yang & Leskovec (2013)) treats the problem as a non-negative matrix factorization (NMF) task. It aims to recover the node-community affiliation matrix and learns the latent factors which represent community affiliations of nodes. Another method, CESNA (Yang et al. (2013)), extends Bigclam by modelling the interaction between the network structure and the node attributes. The performance of matrix factorization methods is limited due to the capacity of the bi-linear models. Some generative models, like vGraph (Sun et al., 2019), Circles (Leskovec & Mcauley (2012)) etc., have also been proposed to detect communities in a graph.

Node Representation Learning. Many successful algorithms which learn node representations in an unsupervised way are based on random walk objectives (Perozzi et al., 2014; Tang et al., 2015; Grover & Leskovec, 2016; Hamilton et al., 2017). Some known issues with random-walk based methods (e.g. DeepWalk, node2vec etc.) are: (1) they sacrifice the structural information of the graph by putting over-emphasis on the proximity information (Ribeiro et al., 2017) and (2) their performance depends strongly on hyperparameters (walk length, number of hops etc.) (Perozzi et al., 2014; Grover & Leskovec, 2016). Gilmer et al. (2017) recently showed that graph convolutional encoder models greatly reduce the need for using the random-walk based training objectives.
This is because the graph convolutions enforce that the neighboring nodes have similar representations. Some interesting GCN based approaches include graph autoencoders, e.g. GAE and VGAE (Kipf & Welling (2016b)), and DGI (Velickovic et al., 2019).

Joint community detection and node representation learning. In the literature, several attempts have been made to tackle both these tasks in a single framework. Most of these methods propose an alternating optimization process, i.e. learn node embeddings and improve community assignments with them and vice versa (Cavallari et al., 2017; Tu et al., 2018). Some approaches, like CNRL (Tu et al., 2018) and ComE (Cavallari et al., 2017), are inspired by random walks, thus inheriting the shortcomings of random walks. Others, like GEMSEC (Rozemberczki et al. (2019)), are limited to the detection of non-overlapping communities. There also exist some generative models like CommunityGAN (Jia et al. (2019)) and vGraph (Sun et al. (2019)) that jointly learn community assignments and node embeddings. Some methods have high computational complexity, i.e. quadratic in the number of nodes in a graph, e.g. M-NMF (Wang et al. (2017)) and DNR (Yang et al., 2016a). CNRL, ComE and vGraph require learning two embeddings for each node for simultaneously tackling the two tasks. Unlike them, VECODER learns a single community-aware node representation which is directly used for both tasks.

It is pertinent to highlight that although both vGraph and VECODER adopt a variational approach, the underlying models are quite different. vGraph assumes that each node can be represented as a mixture of multiple communities and is described by a multinomial distribution over communities, whereas VECODER models the node embedding by a single distribution. For a given node, vGraph first draws a community assignment and then a connected neighbor node is generated based on the assignment. In contrast, VECODER draws the node embedding from the prior distribution and then the community assignment is conditioned on a single node only. In simple terms, vGraph also needs edge information in the generative process whereas VECODER does not require it. VECODER relies on the decoder to ensure that embeddings of the connected nodes and their communities share high similarity with each other." }, { "heading": "3 METHODOLOGY", "text": "" }, { "heading": "3.1 PROBLEM FORMULATION", "text": "Suppose an undirected graph G = (V, E) with the adjacency matrix A ∈ R^{N×N} and a matrix X ∈ R^{N×F} of F-dimensional node features, N being the number of nodes. Given K as the number of communities, we aim to jointly learn the node embeddings and the community embeddings following a variational approach such that: (1) one or more communities can be assigned to every node and (2) the node embeddings can be used for both community detection and node classification." }, { "heading": "3.2 VARIATIONAL MODEL", "text": "Generative Model: Let us denote the latent node embedding and community assignment for the i-th node by the random variables zi ∈ R^d and ci respectively. The generative model is given by:

p(A) = ∫ ∑_c p(Z, c, A) dZ, (1)

where c = [c1, c2, · · · , cN ] and the matrix Z = [z1, z2, · · · , zN ] stacks the node embeddings. The joint distribution in (1) is mathematically expressed as

p(Z, c, A) = p(Z) pθ(c|Z) pθ(A|c, Z), (2)

where θ denotes the model parameters. Let us denote the elements of A by aij. Following existing approaches (Kipf & Welling, 2016b; Khan et al., 2020), we consider zi to be i.i.d. random variables.
Furthermore, assuming ci|zi to be i.i.d. random variables, the joint distributions in (2) can be factorized as

p(Z) = ∏_{i=1}^{N} p(zi) (3)

pθ(c|Z) = ∏_{i=1}^{N} pθ(ci|zi) (4)

pθ(A|c, Z) = ∏_{i,j} pθ(aij |ci, cj , zi, zj), (5)

where Eq. (5) assumes that the edge decoder pθ(aij |ci, cj , zi, zj) depends only on ci, cj , zi and zj .

Inference Model: We aim to learn the model parameters θ such that log(pθ(A)) is maximized. In order to ensure computational tractability, we introduce the approximate posterior

qφ(Z, c|I) = ∏_i qφ(zi, ci|I) = ∏_i qφ(zi|I) qφ(ci|zi, I), (6)

where I = (A, X) if node features are available, otherwise I = A. We maximize the corresponding ELBO bound (for derivation, refer to the supplementary material), given by

L_ELBO ≈ − ∑_{i=1}^{N} D_KL(qφ(zi|I) || p(zi)) − ∑_{i=1}^{N} (1/M) ∑_{m=1}^{M} D_KL(qφ(ci|zi^{(m)}, I) || pθ(ci|zi^{(m)})) + ∑_{(i,j)∈E} E_{(zi,zj,ci,cj)∼qφ(zi,zj,ci,cj|I)} [ log( pθ(aij |ci, cj , zi, zj) ) ], (7)

where D_KL(·||·) represents the KL-divergence between two distributions. The distribution qφ(zi, zj , ci, cj |I) in the third term of Eq. (7) is factorized into two conditionally independent distributions, i.e.

qφ(zi, zj , ci, cj |I) = qφ(zi, ci|I) qφ(zj , cj |I). (8)" }, { "heading": "3.3 DESIGN CHOICES", "text": "In Eq. (3), p(zi) is chosen to be the standard Gaussian distribution for all i. The corresponding approximate posterior qφ(zi|I) in Eq. (6), used as the node embeddings encoder, is given by

qφ(zi|I) = N( µi(I), diag(σi²(I)) ). (9)

The parameters of qφ(zi|I) can be learnt by any encoder network, e.g. a graph convolutional network (Kipf & Welling (2016a)), a graph attention network (Veličković et al. (2017)), GraphSAGE (Hamilton et al. (2017)) or even two matrices to learn µi(I) and diag(σi²(I)). Samples are then generated using the reparametrization trick (Doersch (2016)).

For parameterizing pθ(ci|zi) in Eq. (4), we introduce community embeddings {g1, · · · , gK}; gk ∈ R^d. The distribution pθ(ci|zi) is then modelled as the softmax of the dot products of zi with gk, i.e.

pθ(ci = k|zi) = exp(⟨zi, gk⟩) / ∑_{ℓ=1}^{K} exp(⟨zi, gℓ⟩). (10)

The corresponding approximate posterior qφ(ci = k|zi, I) in Eq. (6) is affected by the node embedding zi as well as the neighborhood. To design this, our intuition is to consider the similarity of gk with the embedding zi as well as with the embeddings of the neighbors of the i-th node. The overall similarity with the neighbors is mathematically formulated as the average of the dot products of their embeddings. Afterwards, a hyperparameter α is introduced to control the bias between the effect of zi and the set Ni of the neighbors of the i-th node. Finally, a softmax is applied as follows:

qφ(ci = k|zi, G) = exp( α⟨zi, gk⟩ + (1 − α) (1/|Ni|) ∑_{j∈Ni} ⟨zj , gk⟩ ) / ∑_{ℓ=1}^{K} exp( α⟨zi, gℓ⟩ + (1 − α) (1/|Ni|) ∑_{j∈Ni} ⟨zj , gℓ⟩ ). (11)

Hence, Eq. (11) ensures that graph structure information is employed to learn community assignments instead of relying on an extraneous node embedding as done in (Sun et al., 2019; Cavallari et al., 2017). Finally, the choice of the edge decoder in Eq. (5) is motivated by the intuition that nodes connected by edges have a high probability of belonging to the same community and vice versa. Therefore we model the edge decoder as:

pθ(aij |ci = ℓ, cj = m, zi, zj) = ( σ(⟨zi, gm⟩) + σ(⟨zj , gℓ⟩) ) / 2. (12)

For better reconstructing the edges, Eq. (12) makes use of the community embeddings, node embeddings and community assignment information simultaneously.
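To make these design choices concrete, here is a minimal NumPy sketch of Eqs. (10)–(12), together with the inference rules that Section 3.4 below denotes Eqs. (13) and (14) (an illustrative re-implementation with assumed shapes, not the authors' code; the encoder, Gumbel-softmax sampling and training loop are omitted):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def community_posterior(Z, G, A, alpha):
    """Z: [N, d] node embeddings, G: [K, d] community embeddings,
    A: [N, N] adjacency. Returns q(c_i = k | z_i, graph) of Eq. (11)."""
    dots = Z @ G.T                                    # [N, K] entries <z_i, g_k>
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1)
    neigh = (A @ dots) / deg                          # mean of <z_j, g_k> over N_i
    return softmax(alpha * dots + (1 - alpha) * neigh, axis=1)

def edge_prob(Z, G, i, j, c_i, c_j):
    """Decoder of Eq. (12) for edge (i, j) given sampled assignments."""
    return 0.5 * (sigmoid(Z[i] @ G[c_j]) + sigmoid(Z[j] @ G[c_i]))

def assignments(q_c, eps):
    """Eq. (13): hard assignments; Eq. (14): overlapping assignments."""
    hard = q_c.argmax(axis=1)
    overlapping = [np.where(row / row.max() >= eps)[0] for row in q_c]
    return hard, overlapping
```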
This joint use of node embeddings, community embeddings and community assignments helps in learning better node representations by leveraging the global information about the graph structure via community detection. On the other hand, it also forces the community assignments to exploit the local graph structure via node embeddings and edge information." }, { "heading": "3.4 PRACTICAL ASPECTS", "text": "The third term in Eq. (7) is estimated in practice using the samples generated by the approximate posterior. This term is equivalent to the negative of the binary cross-entropy (BCE) loss between observed edges and reconstructed edges. Since the community assignment follows a categorical distribution, we use Gumbel-softmax (Jang et al. (2016)) for backpropagation of the gradients. As for the second term of Eq. (7), it is also enough to set M = 1, i.e. use only one sample per input node.

For inference, a non-overlapping community assignment can be obtained for the i-th node as

Ci = argmax_{k∈{1,··· ,K}} qφ(ci = k|zi, I). (13)

To get overlapping community assignments for the i-th node, we can threshold its weighted probability vector at ε, a hyperparameter, as follows:

Ci = { k | qφ(ci = k|zi, I) / max_ℓ qφ(ci = ℓ|zi, I) ≥ ε }, ε ∈ [0, 1]. (14)" }, { "heading": "3.5 COMPLEXITY", "text": "Computation of the dot products for all combinations of node and community embeddings takes O(NKd) time. Solving Eq. (11) further requires the calculation of the mean of the dot products over the neighborhood of every node, which takes O(|E|K) computations overall as we traverse every edge for every community. Finally, we need a softmax over all communities for every node in Eq. (10) and Eq. (11), which takes O(NK) time. Eq. (12) takes O(|E|) time for all edges as we have already calculated the dot products. As a result, the overall complexity becomes O(|E|K + NKd). This complexity is quite low compared to other algorithms designed to achieve similar goals (Cavallari et al., 2017; Wang et al., 2017; Yang et al., 2016a)." }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 DATASETS", "text": "We have selected 18 different datasets ranging from 270 to 126,842 edges. For non-overlapping community detection and node classification, we use the 5 citation datasets (Bojchevski & Günnemann (2017); Yang et al. (2016b)). The remaining datasets (Leskovec & Mcauley (2012); Yang & Leskovec (2015)), used for overlapping community detection, are taken from the SNAP repository (Leskovec & Krevl (2014)). Following (Sun et al., 2019), we take the 5 biggest ground truth communities for youtube, amazon and dblp. Moreover, we also analyse the case of a large number of communities. For this purpose, we prepare two subsets of the amazon dataset by randomly selecting 500 and 1000 communities from the 2000 smallest communities in the amazon dataset." }, { "heading": "4.2 BASELINES", "text": "For overlapping community detection, we compare with the following competitive baselines: MNMF (Wang et al., 2017) learns the community membership distribution by using joint non-negative matrix factorization with modularity based regularization. BIGCLAM (Yang & Leskovec (2013)) also formulates community detection as a non-negative matrix factorization (NMF) task. It simultaneously optimizes the model likelihood of observed links and learns the latent factors which represent community affiliations of nodes. CESNA (Yang et al. (2013)) extends BIGCLAM by statistically modelling the interaction between the network structure and the node attributes.
Circles (Leskovec & Mcauley (2012)) introduces a generative model for community detection in ego-networks by learning node similarity metrics for every community. SVI (Gopalan & Blei (2013)) formulates the membership of nodes in multiple communities by a Bayesian model of networks. vGraph (Sun et al. (2019)) simultaneously learns node embeddings and community assignments by modelling the nodes as being generated from a mixture of communities. vGraph+, a variant, further incorporates regularization to weight local connectivity. ComE (Cavallari et al. (2017)) jointly learns community and node embeddings by using a Gaussian mixture model formulation. CNRL (Tu et al., 2018) enhances the random walk sequences (generated by DeepWalk, node2vec etc.) to jointly learn community and node embeddings. CommunityGAN (ComGAN) is a generative adversarial model for learning node embeddings such that the entries of the embedding vector of each node refer to the membership strength of the node to different communities. Lastly, we compare the results with the communities obtained by applying k-means to the learned embeddings of DGI (Velickovic et al., 2019).

For non-overlapping community detection and node classification, in addition to MNMF, DGI, CNRL, CommunityGAN, vGraph and ComE, we compare VECODER with the following baselines: DeepWalk (Perozzi et al. (2014)) makes use of SkipGram (Mikolov et al. (2013)) and truncated random walks on the network to learn node embeddings. LINE (Tang et al. (2015)) learns node embeddings while attempting to preserve first and second order proximities of nodes. Node2Vec (Grover & Leskovec (2016)) learns the embeddings using biased random walks while aiming to preserve network neighborhoods of nodes. Graph Autoencoder (GAE) (Kipf & Welling (2016b)) extends the idea of autoencoders to graph datasets. We also include its variational counterpart, i.e. VGAE. GEMSEC is a sequence sampling-based learning model which aims to jointly learn the node embeddings and clustering assignments." }, { "heading": "4.3 SETTINGS", "text": "For overlapping community detection, we learn mean and log-variance matrices of 16-dimensional node embeddings. We set α = 0.9 and ε = 0.3 in all our experiments. Following Kipf & Welling (2016b), we first pre-train a variational graph autoencoder. We perform gradient descent with the Adam optimizer (Kingma & Ba (2014)) and learning rate = 0.01. Community assignments are obtained using Eq. (14). For the baselines, we employ the results reported by Sun et al. (2019). For evaluating the performance, we use the F1-score and Jaccard similarity.

For non-overlapping community detection, since the default implementations of most of the baselines use 128-dimensional embeddings, we use d = 128 for fair comparison. Eq. (13) is used for community assignments. For vGraph, we use the code provided by the authors. We employ normalized mutual information (NMI) and adjusted Rand index (ARI) as evaluation metrics.

For node classification, we follow the training split used in various previous works (Yang et al., 2016b; Kipf & Welling, 2016a; Velickovic et al., 2019), i.e. 20 nodes per class for training. We train logistic regression using the LIBLINEAR (Fan et al. (2008)) solver as our classifier and report the evaluation results on the rest of the nodes. For the algorithms that do not use node features, we train the classifier by appending the raw node features to the learnt embeddings. For evaluation, we use F1-macro and F1-micro scores.

All the reported results are the average over five runs.
Further implementation details can be found in the code: https://anonymous.4open.science/r/1d95bf8f-8ce3-4870-a454-07db463b419f." }, { "heading": "4.4 DISCUSSION OF RESULTS", "text": "In the following, we discuss the results to gain some important insights into the problem.
Tables 2 and 3 summarize the results of the performance comparison for the overlapping community detection task.
First, we note that our proposed method VECODER outperforms the competitive methods on all datasets in terms of Jaccard similarity. VECODER also outperforms its competitors on 12 out of 13 datasets in terms of F1-score. It is the second-best method on the 13th dataset (fb0). These results demonstrate the capability of VECODER to learn multiple community assignments quite well and hence reinforce our intuition behind the design of Eq. (11).
Second, we observe that there is no consistently performing algorithm among the competitive methods. That is, excluding VECODER, the best performance is achieved by vGraph/vGraph+ on 5, ComGAN on 4 and ComE on 3 out of 13 datasets in terms of F1-score. A similar trend can be seen in Jaccard similarity. Third, it is noted that all the methods which achieve second-best performance solve the tasks of community detection and node representation learning jointly. This supports our claim that treating the two tasks jointly results in better performance.
Fourth, we observe that the vGraph+ results are generally better than those of vGraph. This is because vGraph+ incorporates a regularization term in the loss function which is based on the Jaccard coefficients of connected nodes as edge weights. However, it should be noted that this preprocessing step is computationally expensive for densely connected graphs.
Tab. 4 shows the results on non-overlapping community detection. First, we observe that MNMF, DeepWalk, LINE and Node2Vec provide a good baseline for the task. However, these methods are not able to achieve comparable performance on any dataset relative to the frameworks that treat the two tasks jointly. Second, VECODER consistently outperforms all the competitors in the NMI and ARI metrics, except for CiteSeer where it achieves the second-best ARI. Third, we observe that GCN-based models, i.e. GAE, VGAE and DGI, show competitive performance. That is, they achieve second-best performance on all the datasets except CiteSeer. In particular, DGI achieves the second-best NMI results in 3 out of 5 datasets, and in 2 out of 5 datasets in terms of ARI. Nonetheless, the DGI results are not very competitive in Tab. 2 and Tab. 3, showing that while DGI can be a good choice for learning node embeddings for attributed graphs with non-overlapping communities, it is not the best option for non-attributed graphs or overlapping communities.
The results for node classification are presented in Tab. 5. VECODER achieves the best F1-micro and F1-macro scores on 4 out of 5 datasets. We also observe that the GCN-based models, i.e. GAE, VGAE and DGI, show competitive performance, following the trend in the results of Tab. 4. Furthermore, we note that the node classification results of CommunityGAN (ComGAN) are quite poor. We think a potential reason behind this is that the node embeddings are constrained to have the same dimension as the number of communities. Hence, different components of the learned node embeddings simply represent the membership strengths of nodes for different communities. The linear classifiers may find it difficult to separate such vectors."
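The non-overlapping evaluation above can be reproduced with standard library calls; the sketch below computes NMI and ARI with scikit-learn. Both scores are invariant to a relabelling of the communities, so the predicted community indices need not match the ground-truth ones. The helper name is ours, not from the paper's code.

```python
import numpy as np
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

def evaluate_nonoverlapping(y_true, y_pred):
    """NMI and ARI for non-overlapping community detection (Tab. 4 metrics)."""
    return (normalized_mutual_info_score(y_true, y_pred),
            adjusted_rand_score(y_true, y_pred))

# A permuted-but-identical partition scores 1.0 on both metrics.
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([1, 1, 2, 2, 0, 0])
print(evaluate_nonoverlapping(y_true, y_pred))  # (1.0, 1.0)
```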
}, { "heading": "4.5 HYPERPARAMETER SENSITIVITY", "text": "We study the dependence of VECODER on and α by evaluating on four datasets of different sizes: fb698(N = 61), fb1912(N = 747), amazon1000(N=1540) and youtube(N = 5346). We sweep for = {0.1, 0.2, · · · , 0.9}. For demonstrating effect of α, we fix = 0.3 and sweep for α = {0.1, 0.2, · · · , 0.9}. The average results of five runs for and α are given in Fig. 1a and Fig. 1b respectively. Overall VECODER is quite robust to the change in the values of and α. In case of , we see a general trend of decrease in performance when the threshold is set quite high e.g. > 0.7. This is because the datasets contain overlapping communities and a very high will cause the algorithm to give only the most probable community assignment instead of potentially providing multiple communities per node. However, for a large part of sweep space, the results are almost consistent. When is fixed and α is changed, the results are mostly consistent except when α is set to a low value. Eq. (11) shows that in such a case the node itself is almost neglected and VECODER tends to assign communities based upon neighborhood only, which may cause a decrease in the performance. This effect is most visible in amazon1000 dataset because it has only 1.54 points on average per community i.e. there is a good chance for neighbours of a point of being in different communities. Therefore, only depending upon the neighbors will most likely result in poor results.\nNode2Vec LINE DeepWalk vGraph ComE CNRL VECᴏDᴇR" }, { "heading": "4.6 TRAINING TIME", "text": "Now we compare the training times of different algorithms in Fig. 2. As some of the baselines are more resource intensive than others, we select aws instance type g4dn.4xlarge for fair comparison of training times. For vGraph, we train for 1000 iterations and for VECODER for 1500 iterations. For all other algorithms we use the default parameters as used in section 4.3. We observe that the methods that simply output the node embeddings take relatively less time compared to the algorithms that jointly learn node representations and community assignments e.g VECODER , vGraph and CNRL. Among these algorithms VECODER is the most time efficient. It consistently trains in less time compared to its direct competitors. For instance, it is about 12 times faster than ComE for CiteSeer-full and about 40 times faster compared to vGraph for Cora-full dataset. This provides evidence for lower computational complexity of VECODER in Section 3.5." }, { "heading": "5 CONCLUSION", "text": "We propose a scalable generative method VECODER to simultaneously perform community detection and node representation learning. Our novel approach learns a single community-aware node embedding for both the representation of the node and its context. VECODER is scalable due to its low complexity, i.e. O(|E|K + NKd). The experiments on several graph datasets show that VECODER consistently outperforms all the competitive baselines on node classification, overlapping community detection and non-overlapping community detection tasks. Moreover, training the VECODER is highly time-efficient than its competitors." } ]
2,020
null
SP:774027f8c53b842fa8ef0569dc1c9b2eaa82872b
[ "This paper extends the variational deep embedding VaDE model (a VAE-based clustering method) to integrate pairwise constraints between objects, i.e., must-link and cannot-link. The constraints are integrated a priori as a condition. That is, the prior over the cluster labels is conditioned on the constraints. The whole model, referred to as Constrained VaDE (CVaDE), takes the form of a conditional VAE tailored for constrained clustering. Experiments are curried out on various real-world datasets, and the proposed method is compared to VaDE as well as to recent and classical constrained clustering methods. ", "This work proposes CVaDE which is an extension of variational based deep clustering model (VaDE) with additional incorporation of prior clustering preferences as supervision. These priors guide the underlying clustering process towards a user-desirable partitioning of input data. The priors are provided in the form of pairwise constraints indicating which pair of samples belongs to same or different class. Clustering process is modelled using variational Bayes in which the clustering constraints are incorporated into prior probabilities with varying degree of uncertainty. The empirical results shows that in comparison to unconstrained clustering the small amount of pairwise constraints significantly improves clustering performance. Further, it demonstrates CVaDE's robustness to noise, generation capability as well as successful incorporation of different desirable preferences to drive clustering performance towards completely different partitioning." ]
Clustering with constraints has gained significant attention in the field of semi-supervised machine learning as it can leverage partial prior information on a growing amount of unlabelled data. Following recent advances in deep generative models, we derive a novel probabilistic approach to constrained clustering that can be trained efficiently in the framework of stochastic gradient variational Bayes. In contrast to existing approaches, our model (CVaDE) uncovers the underlying distribution of the data conditioned on prior clustering preferences, expressed as pairwise constraints. The inclusion of such constraints allows the user to guide the clustering process towards a desirable partition of the data by indicating which samples should or should not belong to the same class. We provide extensive experiments to demonstrate that CVaDE shows superior clustering performance and robustness compared to state-of-the-art deep constrained clustering methods on a variety of data sets. We further demonstrate the usefulness of our approach on challenging real-world medical applications and face image generation.
[]
[ { "authors": [ "S. Basu", "I. Davidson", "K. Wagstaff" ], "title": "Constrained clustering: Advances in algorithms, theory, and applications", "venue": null, "year": 2008 }, { "authors": [ "Mikhail Bilenko", "S. Basu", "R. Mooney" ], "title": "Integrating constraints and metric learning in semisupervised clustering", "venue": "In ICML ’04,", "year": 2004 }, { "authors": [ "Marcelo Blatt", "Shai Wiseman", "Eytan Domany" ], "title": "Superparamagnetic clustering of data", "venue": "Phys. Rev. Lett.,", "year": 1996 }, { "authors": [ "G. Chen" ], "title": "Deep transductive semi-supervised maximum margin clustering", "venue": "ArXiv, abs/1501.06237,", "year": 2015 }, { "authors": [ "Nat Dilokthanakul", "Pedro A.M. Mediano", "Marta Garnelo", "Matthew C.H. Lee", "Hugh Salimbeni", "Kai Arulkumaran", "Murray Shanahan" ], "title": "Deep unsupervised clustering with gaussian mixture variational autoencoders", "venue": null, "year": 2016 }, { "authors": [ "S. Fogel", "Hadar Averbuch-Elor", "D. Cohen-Or", "J. Goldberger" ], "title": "Clustering-driven deep embedding with pairwise constraints", "venue": "IEEE computer graphics and applications,", "year": 2019 }, { "authors": [ "Yen-Chang Hsu", "Z. Kira" ], "title": "Neural network-based clustering using pairwise constraints", "venue": "ArXiv, abs/1511.06321,", "year": 2015 }, { "authors": [ "Zhuxi Jiang", "Yin Zheng", "Huachun Tan", "Bangsheng Tang", "Hanning Zhou" ], "title": "Variational deep embedding: An unsupervised and generative approach to clustering", "venue": null, "year": 2017 }, { "authors": [ "Diederik P. Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "In Yoshua Bengio and Yann LeCun (eds.), 2nd International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Diederik P. Kingma", "Shakir Mohamed", "Danilo Jimenez Rezende", "M. Welling" ], "title": "Semi-supervised learning with deep generative models", "venue": "In NIPS,", "year": 2014 }, { "authors": [ "A. Kopf", "Vincent Fortuin", "Vignesh Ram Somnath", "M. Claassen" ], "title": "Mixture-of-experts variational autoencoder for clustering and generating from similarity-based", "venue": "representations. ArXiv,", "year": 2019 }, { "authors": [ "T. Lange", "Martin H.C. Law", "Anil K. Jain", "J. Buhmann" ], "title": "Learning with constrained and unlabelled data", "venue": "IEEE Computer Society Conference on Computer Vision and Pattern Recognition", "year": 2005 }, { "authors": [ "Martin H.C. Law", "A. Topchy", "Anil K. Jain" ], "title": "Clustering with soft and group constraints", "venue": "In SSPR/SPR,", "year": 2004 }, { "authors": [ "Martin H.C. Law", "A. Topchy", "Anil K. Jain" ], "title": "Model-based clustering with probabilistic constraints", "venue": "In SDM,", "year": 2005 }, { "authors": [ "Yann LeCun", "Corinna Cortes", "CJ Burges" ], "title": "Mnist handwritten digit database", "venue": "ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist,", "year": 2010 }, { "authors": [ "David D. Lewis", "Yiming Yang", "Tony G. Rose", "Fan Li" ], "title": "Rcv1: A new benchmark collection for text categorization research", "venue": "J. Mach. Learn. Res.,", "year": 2004 }, { "authors": [ "Xiaopeng Li", "Zhourong Chen", "Leonard K.M. Poon", "N.L. Zhang" ], "title": "Learning latent superstructures in variational autoencoders for deep multidimensional clustering", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Z. Lu", "T. 
Leen" ], "title": "Semi-supervised learning with penalized probabilistic clustering", "venue": "In NIPS,", "year": 2004 }, { "authors": [ "Laura Manduchi", "M. Hüser", "G. Rätsch", "Vincent Fortuin" ], "title": "Variational psom: Deep probabilistic clustering with self-organizing", "venue": "maps. ArXiv,", "year": 2019 }, { "authors": [ "Erxue Min", "Xifeng Guo", "Q. Liu", "G. Zhang", "Jianjing Cui", "Jun Long" ], "title": "A survey of clustering with deep learning: From the perspective of network architecture", "venue": "IEEE Access,", "year": 2018 }, { "authors": [ "Yazhou Ren", "Kangrong Hu", "Xinyi Dai", "Lili Pan", "Steven C.H. Hoi", "Zenglin Xu" ], "title": "Semi-supervised deep embedded clustering", "venue": null, "year": 2019 }, { "authors": [ "Danilo Jimenez Rezende", "S. Mohamed", "Daan Wierstra" ], "title": "Stochastic backpropagation and approximate inference in deep generative models", "venue": "In ICML,", "year": 2014 }, { "authors": [ "N. Shental", "Aharon Bar-Hillel", "T. Hertz", "D. Weinshall" ], "title": "Computing gaussian mixture models with em using equivalence constraints", "venue": "In NIPS,", "year": 2003 }, { "authors": [ "Ankita Shukla", "Gullal Singh Cheema", "Saket Anand" ], "title": "Semi-supervised clustering with neural networks", "venue": "arXiv: Learning,", "year": 2018 }, { "authors": [ "M. Smieja", "Lukasz Struski", "Mário A.T. Figueiredo" ], "title": "A classification-based approach to semisupervised clustering with pairwise constraints", "venue": "Neural networks : the official journal of the International Neural Network Society,", "year": 2020 }, { "authors": [ "Allan Stisen", "Henrik Blunck", "Sourav Bhattacharya", "Thor Siiger Prentow", "Mikkel Baun Kjærgaard", "Anind Dey", "Tobias Sonne", "Mads Møller Jensen" ], "title": "Smart devices are different: Assessing and mitigatingmobile sensing heterogeneities for activity recognition", "venue": "In Proceedings of the 13th ACM Conference on Embedded Networked Sensor Systems,", "year": 2015 }, { "authors": [ "K. Wagstaff", "Claire Cardie" ], "title": "Clustering with instance-level constraints", "venue": "In AAAI/IAAI,", "year": 2000 }, { "authors": [ "K. Wagstaff", "Claire Cardie", "S. Rogers", "S. Schrödl" ], "title": "Constrained k-means clustering with background knowledge", "venue": "In ICML,", "year": 2001 }, { "authors": [ "F.Y. Wu" ], "title": "The potts model", "venue": "Rev. Mod. Phys.,", "year": 1982 }, { "authors": [ "Han Xiao", "Kashif Rasul", "Roland Vollgraf" ], "title": "Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms, 2017", "venue": null, "year": 2017 }, { "authors": [ "Junyuan Xie", "Ross Girshick", "Ali Farhadi" ], "title": "Unsupervised deep embedding for clustering analysis", "venue": "Proceedings of Machine Learning Research,", "year": 2016 }, { "authors": [ "L. Yang", "N. Cheung", "J. Li", "Jun Fang" ], "title": "Deep clustering by gaussian mixture variational autoencoders with graph embedding", "venue": "IEEE/CVF International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "C. Zhang", "Judith Bütepage", "H. Kjellström", "S. Mandt" ], "title": "Advances in variational inference", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2019 }, { "authors": [ "H. Zhang", "S. Basu", "I. 
Davidson" ], "title": "A framework for deep constrained clustering - algorithms and advances", "venue": "In ECML/PKDD,", "year": 2019 }, { "authors": [ "Jeffrey Zhang", "Sravani Gajjala", "Pulkit Agrawal", "Geoffrey Tison", "Laura Hallock", "Lauren Beussink", "Mats Lassen", "Eugene Fan", "Mandar Aras", "ChaRandle Jordan", "Kirsten Fleischmann", "Michelle Melisko", "Atif Qasim", "Sanjiv Shah", "Ruzena Bajcsy", "Rahul Deo" ], "title": "Fully automated echocardiogram interpretation in clinical practice: Feasibility and diagnostic", "venue": "accuracy. Circulation,", "year": 2018 }, { "authors": [ "Zhifei Zhang", "Yang Song", "H. Qi" ], "title": "Age progression/regression by conditional adversarial autoencoder", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "The ever-growing amount of data and the time cost associated with its labeling has made clustering a relevant task in the field of machine learning. Yet, in many cases, a fully unsupervised clustering algorithm might naturally find a solution which is not consistent with the domain knowledge (Basu et al., 2008). In medicine, for example, clustering could be driven by unwanted bias, such as the type of machine used to record the data, rather than more informative features. Moreover, practitioners often have access to prior information about the types of clusters that are sought, and a principled method to guide the algorithm towards a desirable configuration is then needed. Constrained clustering, therefore has a long history in machine learning as it enforces desirable clustering properties by incorporating domain knowledge, in the form of constraints, into the clustering objective.\nFollowing recent advances in deep clustering, constrained clustering algorithms have been recently used in combination with deep neural networks (DNN) to favor a better representation of highdimensional data sets. The methods proposed so far mainly extend some of the most widely used deep clustering algorithms, such as DEC (Xie et al., 2016), to include a variety of loss functions that force the clustering process to be consistent with the given constraints (Ren et al., 2019; Shukla et al., 2018; Zhang et al., 2019b). Although they perform well, none of the above methods model the data generative process. As a result, they can neither uncover the underlying structure of the data, nor control the strength of the clustering preferences, nor generate new samples (Min et al., 2018).\nTo address the above issues, we propose a novel probabilistic approach to constrained clustering, the Constrained Variational Deep Embedding (CVaDE), that uncovers the underlying data distribution conditioned on domain knowledge, expressed in the form of pairwise constraints. Our method extends previous work in unsupervised variational deep clustering (Jiang et al., 2017; Dilokthanakul et al., 2016) to incorporate clustering preferences as Bayesian prior probabilities with varying degrees of uncertainty. This allows systematical reasoning about parameter uncertainty (Zhang et al., 2019a), thereby enabling the ability to perform Bayesian model validation, outlier detection and data generation. By integrating prior information in the generative process of the data, our model can guide the clustering process towards the configuration sought by the practitioners.\nOur main contributions are as follows: (i) We propose a constrained clustering method (CVaDE) to incorporate given clustering preferences, with varying degrees of certainty, within the Variational\nAuto-Encoder (VAE) framework. (ii) We provide a thorough empirical assessment of our model. In particular, we show that (a) a small fraction of prior information remarkably increases the performance of CVaDE compared to unsupervised variational clustering methods, (b) our model shows superior clustering performance compared to state-of-the-art deep constrained clustering models on a wide range of data sets and, (c) our model proves to be robust against noise as it can easily incorporate the uncertainty of the given constraints. 
(iii) We show that our model can drive the clustering performance towards different desirable configurations, depending on the constraints used, and that it successfully generates new samples on challenging real-world image data." }, { "heading": "2 THEORETICAL BACKGROUND & RELATED WORK", "text": "Constrained Clustering. A constrained clustering problem differs from the classical clustering scenario in that the user has access to some pre-existing knowledge about the desired partition of the data. The constraints are usually expressed as pairwise constraints (Wagstaff & Cardie, 2000), consisting of must-links and cannot-links, which indicate whether two samples are believed to belong to the same cluster or to different clusters. Such pairwise relations contain less information than the labels used in classification tasks but are usually easier to obtain. Traditional clustering methods have since been extended to enforce pairwise constraints (Lange et al., 2005). COP-KMEANS (Wagstaff et al., 2001) and MPCK-means (Bilenko et al., 2004) adapted the well-known K-means algorithm, while several methods proposed constrained versions of Gaussian mixture models (Shental et al., 2003; Law et al., 2004; 2005). Among them, penalized probabilistic clustering (PPC, Lu & Leen (2004)) is most related to our work, as it expresses the pairwise constraints as Bayesian priors over the assignment of data points to clusters. However, PPC, as well as the previous models, shows poor performance and high computational complexity on high-dimensional and large-scale data sets.
Deep Constrained Clustering. To overcome the limitations of the above models, constrained clustering algorithms have lately been used in combination with DNNs. Hsu & Kira (2015) train a DNN to minimize the Kullback-Leibler (KL) divergence between similar pairs of samples, while Chen (2015) performs semi-supervised maximum margin clustering of the learned features of a DNN. More recently, many extensions of the widely used DEC model (Xie et al., 2016) have been proposed to include a variety of loss functions that enforce pairwise constraints. Among them, SDEC (Ren et al., 2019) includes a distance loss function that forces data points with a must-link to be close in the latent space and vice-versa. C-IDEC (Zhang et al., 2019b) instead uses a KL divergence loss, extending the work of Shukla et al. (2018). Other works have focused on discriminative clustering methods that self-generate pairwise constraints from either Siamese networks or KNN graphs (Smieja et al., 2020; Fogel et al., 2019). As none of the approaches proposed so far is based on generative models, the above methods fail to uncover the underlying data distribution. Additionally, DEC-based architectures rely on heavy pretraining of the autoencoder, resulting in no theoretical guarantee that the learned latent space is indeed suitable for clustering (Min et al., 2018).
VAE-based deep clustering. Many models have been proposed in the literature to perform unsupervised clustering through deep generative models (Li et al., 2019; Yang et al., 2019; Manduchi et al., 2019; Kopf et al., 2019). Among them, the Variational Deep Embedding (VaDE, Jiang et al. (2017)) and the Gaussian Mixture Variational Autoencoder (GMM-VAE, Dilokthanakul et al. (2016)) propose a variant of the VAE (Kingma & Welling (2014); Rezende et al. (2014)) in which the prior is a Gaussian mixture distribution.
With this assumption, they construct an inference model that can be directly optimised in the framework of stochastic gradient variational Bayes. However, variational deep clustering methods, such as the VaDE, cannot incorporate domain knowledge and clustering preferences. Even though a semi-supervised version of the VAE has been proposed by Kingma et al. (2014), the latter cannot be naturally applied to clustering. For this reason, we aim at extending the above methods to incorporate clustering preferences in the form of constraints, modeled as Bayesian priors, to guide the clustering process towards a desirable configuration." }, { "heading": "3 CONSTRAINED VARIATIONAL DEEP EMBEDDING", "text": "In the following section, we propose a novel constrained clustering model (CVaDE) to incorporate clustering preferences, with varying degrees of certainty, in a VAE-based deep clustering setting. In particular, we use the VaDE (Jiang et al., 2017) generative assumptions of the data, conditioned on the domain knowledge. We then illustrate how our model can be trained efficiently in the framework of stochastic gradient variational Bayes by optimizing the Conditional Variational Lower Bound. Additionally, we define concrete prior formulations to incorporate our preferences, with a focus on pairwise constraints." }, { "heading": "3.1 THE GENERATIVE ASSUMPTIONS", "text": "Let us consider a data set X = \{x_i\}_{i=1}^N consisting of N samples with x_i \in \mathbb{R}^M that we wish to cluster into K groups according to some prior information encoded as G. For example, we may know a priori that certain samples should be clustered together, with different degrees of certainty. Hence G encodes both our prior knowledge on the data set and the degree of confidence. We assume the data is generated from a random process consisting of three steps. First, the cluster assignments c = \{c_i\}_{i=1}^N, with c_i \in \{1, \dots, K\}, are sampled from a distribution conditioned on the prior information, c \sim p(c|G). Next, for each cluster assignment c_i, a continuous latent embedding, z_i \in \mathbb{R}^D, is sampled from a Gaussian distribution whose mean and variance depend on the selected cluster c_i. Finally, the sample x_i is generated from a distribution conditioned on z_i. Given c_i, the generative process can be summarized as:
z_i \sim p(z_i|c_i) = \mathcal{N}(z_i \mid \mu_{c_i}, \sigma^2_{c_i} I) (1)
x_i \sim p_\theta(x_i|z_i) = \begin{cases} \mathcal{N}(x_i \mid \mu_{x_i}, \sigma^2_{x_i} I) \text{ with } [\mu_{x_i}, \sigma^2_{x_i}] = f(z_i; \theta) & \text{if } x_i \text{ is real-valued} \\ \text{Ber}(\mu_{x_i}) \text{ with } \mu_{x_i} = f(z_i; \theta) & \text{if } x_i \text{ is binary} \end{cases} (2)
where \mu_{c_i} and \sigma^2_{c_i} are the mean and variance of the Gaussian distribution corresponding to cluster c_i in the latent space, and the function f(z; \theta) is a neural network, called the decoder, parametrized by \theta. Without prior information, that is, when p(c|G) = p(c) = \prod_i p(c_i) = \prod_i \text{Cat}(c_i|\pi), the cluster assignments are independent and identically distributed, as they follow a categorical distribution with mixing parameters \pi. In that case, the generative assumptions described above are equal to those of Jiang et al. (2017) and the parameters of the model can be learned using the unsupervised VaDE method (see Appendix C). In the following, we explore the case when p(c|G) \neq p(c).
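For illustration, one ancestral-sampling draw from the generative process of Eqs. 1 and 2 (binary case) might look as follows; the decoder argument is a hypothetical stand-in for the network f(·; θ), and all names are ours.

```python
import numpy as np

def sample_from_generative_process(mu_c, logvar_c, decoder, c):
    """Draw (z, x) for a fixed cluster assignment c: z ~ N(mu_c, sigma_c^2 I)
    per Eq. 1, then x ~ Ber(f(z; theta)) per the binary case of Eq. 2."""
    d = mu_c.shape[1]
    z = mu_c[c] + np.exp(0.5 * logvar_c[c]) * np.random.randn(d)
    mu_x = decoder(z)                                       # Bernoulli means in [0, 1]
    x = (np.random.rand(*mu_x.shape) < mu_x).astype(float)  # x ~ Ber(mu_x)
    return z, x
```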
We achieve this by maximizing the marginal log-likelihood conditioned on G, that is:
\log p(X|G) = \log \int_Z \sum_c p(X, Z, c|G), (3)
where Z = \{z_i\}_{i=1}^N is the collection of the latent embeddings corresponding to the data set X. The conditional joint probability is derived from Eqs. 1 and 2 and can be factorized as:
p(X, Z, c|G) = p_\theta(X|Z)\, p(Z|c)\, p(c|G) = p(c|G) \prod_{i=1}^{N} p_\theta(x_i|z_i)\, p(z_i|c_i). (4)
Since the conditional log-likelihood is intractable, we derive a lower bound of the log marginal conditional probability of the data, which we call the Conditional ELBO (C-ELBO, \mathcal{L}_C):
\mathcal{L}_C(X|G) = \mathbb{E}_{q_\phi(Z,c|X)} \log \frac{p(X, Z, c|G)}{q_\phi(Z, c|X)} (5)
Similarly to Jiang et al. (2017) and Dilokthanakul et al. (2016), we employ the following amortized mean-field variational distribution:
q_\phi(Z, c|X) = q_\phi(Z|X)\, p(c|Z) = \prod_{i=1}^{N} q_\phi(z_i|x_i)\, p(c_i|z_i) \quad \text{with} \quad p(c_i|z_i) = \frac{p(z_i|c_i)\, p(c_i)}{\sum_k p(z_i|k)\, p(k)}, (6)
where q_\phi(z_i|x_i) is a Gaussian distribution with mean \mu(x_i) and variance \sigma^2(x_i) I, which are the outputs of a neural network, called the encoder, parametrized by \phi, and p(c_i = k) is denoted by p(k) for simplicity. It is important to note that, in this formulation, the variational distribution does not depend on G. This approximation is used to retain a mean-field variational distribution when the cluster assignments, conditioned on the prior information, are not independent (Sec. 3.4.1), that is, when p(c|G) \neq \prod_i p(c_i|G)." }, { "heading": "3.3 ROLE OF THE PRIOR INFORMATION", "text": "To highlight how the prior information G influences the clustering objective, we rewrite Eq. 5 as:
\mathcal{L}_C(X|G) = \mathbb{E}_{q_\phi(Z|X)}[\log p_\theta(X|Z)] - D_{KL}(q_\phi(Z, c|X) \,\|\, p(Z, c|G)). (7)
The first term is called the reconstruction term, similarly to the VAE. The second term, on the other hand, is the Kullback-Leibler (KL) divergence between the variational posterior and the constrained Gaussian mixture prior. By maximizing the C-ELBO, the variational posterior mimics the true conditional probability of the latent embeddings and the cluster assignments. This results in enforcing the latent embeddings to follow a Gaussian mixture which agrees with the clustering preferences.
Using Eq. 6, the Conditional ELBO can be further factorized as:
\mathcal{L}_C(X|G) = \mathbb{E}_{p(c|Z)}[\log p(c|G)] + \mathbb{E}_{q_\phi(Z|X)}[\log p_\theta(X|Z)] + \mathbb{E}_{q_\phi(Z|X)\, p(c|Z)}[\log p(Z|c)] - \mathbb{E}_{q_\phi(Z,c|X)}[\log q_\phi(Z, c|X)], (8)
where the last three terms are not affected by G and can be rewritten using the SGVB estimator and the reparameterization trick (Kingma & Welling, 2014) to be trained efficiently using stochastic gradient descent (see Appendix B). The first term, on the other hand, is investigated in Sec. 3.4." }, { "heading": "3.4 CONDITIONAL PRIOR PROBABILITY", "text": "We incorporate our clustering preferences through the conditional probability p(c|G). In particular, we construct the conditional prior probability to be:
p(c|G) = \frac{\prod_i \pi_{c_i} g_i(c)}{\sum_c \prod_j \pi_{c_j} g_j(c)} = \frac{1}{\Omega(\pi)} \prod_i \pi_{c_i} g_i(c), (9)
where \pi = \{\pi_k\}_{k=1}^K are the weights associated with each cluster, c_i is the cluster assignment of sample x_i, \Omega(\pi) is the normalization factor, and g_i(c) is a weighting function that assumes large values if c_i agrees with our belief with respect to c and low values otherwise." }, { "heading": "3.4.1 PAIRWISE CONSTRAINTS", "text": "We hereby focus on expressing the conditional prior distribution in the context of pairwise constrained clustering, and we do so by adapting the work of Lu & Leen (2004) to our variational framework.
The weighting function g_i(c) is then defined as:
g_i(c) = \prod_{j \neq i} \exp(W_{i,j}\, \delta_{c_i c_j}), (10)
where \delta is the Kronecker \delta-function and W \in \mathbb{R}^{N \times N} is a symmetric matrix containing the pairwise preferences and confidences. In particular, W_{i,j} = 0 if we have no prior information on samples x_i and x_j, W_{i,j} > 0 if there is a must-link constraint (the two samples should be clustered together), and W_{i,j} < 0 if there is a cannot-link constraint (the two samples should not be clustered together). The value |W_{i,j}| \in [0, \infty) reflects the degree of certainty in the constraint. For example, if W_{i,j} \to -\infty, then x_i and x_j must be assigned to different clusters, since otherwise p(c|G) \to 0 (hard constraint). On the other hand, smaller values indicate a soft preference as they admit some degree of freedom in the model. A heuristic to select |W_{i,j}| is presented in Sec. 4. Interestingly, the probability p(c|G) with pairwise constraints can be seen as the posterior of the superparamagnetic clustering method (Blatt et al., 1996), with the loss function given by a fully connected Potts model (Wu, 1982). Differently from our method, Blatt et al. (1996) cluster the data according to the pairwise correlation functions \mathbb{E}_{p(c|G)}[\delta_{c_i c_j}], which are estimated with MCMC methods.
Finally, we incorporate the conditional prior distribution in Eq. 8. The first term can be written as:
\mathbb{E}_{p(c|Z)}[\log p(c|G)] = \mathbb{E}_{p(c|Z)} \log \left[ \frac{1}{\Omega(\pi)} \prod_i \pi_{c_i} \prod_{j \neq i} \exp(W_{i,j}\, \delta_{c_i c_j}) \right] (11)
= -\log \Omega(\pi) + \sum_i \mathbb{E}_{p(c_i|z_i)}[\log \pi_{c_i}] + \sum_i \sum_{j \neq i} \mathbb{E}_{p(c_i|z_i)} \mathbb{E}_{p(c_j|z_j)}[W_{i,j}\, \delta_{c_i c_j}] (12)
= -\log \Omega(\pi) + \sum_i \sum_k p(c_i = k|z_i) \log \pi_k + \sum_i \sum_{j \neq i} \sum_k p(c_i = k|z_i)\, p(c_j = k|z_j)\, W_{i,j}. (13)
Maximizing Eq. 13 w.r.t. \pi poses computational problems due to the normalization factor \Omega(\pi). Crude approximations are investigated in (Basu et al., 2008); however, we choose to fix the parameters \pi_k = 1/K to make z uniformly distributed in the latent space, as in previous works (Dilokthanakul et al., 2016). By doing so, the normalization factor can be treated as a constant. The analysis of different approaches to learn the weights \pi is left for future work. The C-ELBO with a pairwise constrained prior can then be optimized using Monte Carlo sampling and stochastic gradient descent." }, { "heading": "3.4.2 FURTHER POSSIBLE CONSTRAINTS", "text": "Given the flexibility of our general framework, different types of constraints can be included in the formulation of the weighting functions g_i(c). In particular, we can perform semi-supervised learning by setting the prior information as:
g_i(c) = g(c_i) = \exp(W_{i,c_i}) \quad \text{with} \quad W \in \mathbb{R}^{N \times K}, (14)
where W_{i,k} indicates whether the sample x_i should be assigned to cluster k.
Additionally, one could also include triple constraints by modifying the weighting function to be:
g_i(c) = \prod_{j,k \neq i} \exp(W_{i,j,k}\, \delta_{c_i c_j c_k}) \quad \text{with} \quad W \in \mathbb{R}^{N \times N \times N} \text{ symmetric}, (15)
where W_{i,j,k} = 0 if we do not have any prior information, W_{i,j,k} > 0 indicates that the samples x_i, x_j and x_k should be clustered together, and W_{i,j,k} < 0 that they should belong to different clusters. The analysis of these different constraint formulations is outside the scope of our work, but they may represent interesting directions for future work." }, { "heading": "4 EXPERIMENTS", "text": "In the following, we provide a thorough empirical assessment of our proposed method (CVaDE) with pairwise constraints using a wide range of data sets. First, we evaluate our model's performance compared to both unsupervised variational deep clustering methods and state-of-the-art constrained clustering methods.
Then, we present extensive evidence of the ability of our model to handle noisy constraint information. We additionally perform experiments on a medical application to prove that our model can reach different desirable partitions of the data, depending on the constraints used, even with real-world, noisy data. Finally, we show that CVaDE successfully generates new data, using the learned generative process of Sec. 3.1, on a challenging face image data set.
Baselines & implementation details. As baselines, we include the traditional constrained K-means (MPCK-means, Bilenko et al. (2004)) and two recent deep constrained clustering methods based on DEC (SDEC, Ren et al. (2019), and C-IDEC, Zhang et al. (2019b)), as they achieve state-of-the-art performance in constrained clustering. We also compare our model to the unsupervised variational deep clustering method VaDE (Jiang et al., 2017). In implementing our model, we were careful to maintain a fair comparison with the baselines. In particular, we adopted the same encoder and decoder feed-forward architecture across all methods, with four layers of 500, 500, 2000, and D units respectively, where D = 10 unless stated otherwise. The VAE is pretrained for 10 epochs, while the DEC-based baselines need a more complex layer-wise pretraining of the autoencoder, which involves 50 epochs of pretraining for each layer and 100 epochs of fine-tuning. The pairwise constraints are chosen randomly within the training set by sampling two data points and assigning a must-link if they have the same label and a cannot-link otherwise. Unless stated otherwise, the values of |W_{i,j}| are set to 10^4 for all data sets, and N pairwise constraints are used for both our model and the constrained clustering baselines, where N is the number of samples of the considered data set. Note that N pairwise constraints can be obtained by using only √N labeled data points. To allow for fast iteration, we simplify the last term of Eq. 13 by allowing the search for pairwise constraints to be performed only inside the considered batch. We observed empirically that, with a batch size of 1024, this approximation does not affect the clustering performance. Further details on the other hyper-parameter settings can be found in Appendix E.1.
Constrained clustering. We test the clustering performance of our model compared with the baselines on four different data sets: MNIST (LeCun et al., 2010), Fashion MNIST (Xiao et al., 2017), Reuters (Xie et al., 2016) and HHAR (Stisen et al., 2015) (see Appendix A). Note that we preprocessed the Reuters data by computing the tf-idf features on the 2000 most frequent words on a random subset of 10000 documents and by selecting 4 root categories (Xie et al., 2016). Accuracy and Normalized Mutual Information (NMI) are used as evaluation metrics. The results are shown in Table 1. We observe that our model reaches state-of-the-art performance on the Fashion MNIST and Reuters data sets. On the MNIST and HHAR data sets, on the other hand, it has comparable clustering performance to C-IDEC; however, it is generally more stable across different data sets.
Constrained clustering with noisy labels. In real-world applications, it is often the case that the additional information comes from different sources with different noise levels. As an example, pairwise annotations could be obtained both by very experienced domain experts and by less-experienced users.
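A minimal sketch of the constraint-generation procedure just described (random pairs from the training set, must-links for equal labels, cannot-links otherwise, with |W_{i,j}| = 10^4), using a sparse dictionary for the symmetric matrix W; the names are illustrative.

```python
import numpy as np

def sample_pairwise_constraints(labels, n_pairs, weight=1e4, seed=0):
    """Sample n_pairs random pairwise constraints from labeled training data.
    W[(i, j)] = +weight encodes a must-link, -weight a cannot-link.
    Assumes n_pairs is far below the total number of possible pairs."""
    rng = np.random.default_rng(seed)
    W = {}
    while len(W) < n_pairs:
        i, j = rng.choice(len(labels), size=2, replace=False)
        sign = 1.0 if labels[i] == labels[j] else -1.0
        W[(min(i, j), max(i, j))] = sign * weight  # store each pair once (W is symmetric)
    return W
```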
Hence, the ability to integrate constraints with different degrees of certainty into the clustering algorithm is of significant practical importance. In this experiment, we consider the case in which the given pairwise constraints have three different noise levels, q \in \{0.1, 0.2, 0.3\}, where q determines the fraction of pairwise constraints with flipped signs (that is, when a must-link is turned into a cannot-link and vice-versa). In Fig. 1 we show the clustering performance of our model compared to the strongest baseline derived from the previous section, C-IDEC. For all data sets, we decrease the value of the pairwise confidence of our method using the heuristic |W_{i,j}| = \alpha \log\left(\frac{1-q}{q}\right) with \alpha = 3500. Also, we use grid search to choose the hyper-parameters of C-IDEC for the different noise levels (in particular, we set the penalty weight to 0.1, 0.005, and 0.001, respectively). CVaDE clearly achieves better performance in terms of accuracy and NMI on all three noise levels for all data sets. In particular, the higher the noise level, the greater the performance difference. We can conclude that our model is more robust than its main competitor. Additionally, C-IDEC cannot model different noise levels within the same data set, while our model can easily include different sources of information with different degrees of uncertainty.
Heart Echo. We evaluate the effectiveness of our model in a real-world application by using infant heart echo cardiogram videos. The data set consists of 305 infant echo cardiogram videos from 65 patient visits obtained from a large University Children's Hospital. Each visit may consist of videos taken from several different angles (called views), denoted by [LA, KAKL, KAPAP, KAAP, 4CV]. We cropped the videos by isolating the cone of the echo cardiogram, resized them to 64 × 64 pixels and split them into individual frames, obtaining a total of N = 20000 images. We focused on investigating two constrained clustering problems using this data set. First, we cluster the echo video frames by view. This is a relevant task, as in many datasets heart echo videos are not explicitly labeled (Zhang et al., 2018). Then, we cluster the echo video frames by infant maturity at birth, following the WHO definition of premature birth categories ("Preterm"). We believe that these two clustering tasks demonstrate that our model admits a degree of control in choosing the underlying structure of the learned clusters. As the experiments demonstrate, by providing different pairwise constraints, it is possible to guide the clustering process towards a preferred configuration, depending on what the practitioners are seeking in the data. For both experiments, we compare the performance of our method, CVaDE, with the unsupervised VaDE method and C-IDEC. Additionally, we include a variant of both our method and VaDE in which we use convolutional layers (CNN-CVaDE and CNN-VaDE); for details on the implementation we refer to Appendix E.2. The results are shown in Table 2. The CVaDE model outperforms both baselines by a significant margin in both accuracy and NMI in all clustering experiments. This demonstrates that the addition of domain knowledge is particularly effective for medical purposes. Additionally, we observe that C-IDEC performs poorly on real-world noisy data. We believe this is due to the heavy pretraining of the autoencoder required by DEC-based methods, as it does not always enforce that the learned latent space is suitable for clustering.
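The confidence heuristic used in the noisy-label experiments above is simple enough to state directly; the sketch below reproduces |W_{i,j}| = α log((1 − q)/q) with α = 3500, showing that noisier constraint sources receive softer weights.

```python
import numpy as np

def constraint_confidence(q, alpha=3500.0):
    """Magnitude of the pairwise weight for a source whose constraints
    are flipped with probability q: |W_ij| = alpha * log((1 - q) / q)."""
    return alpha * np.log((1.0 - q) / q)

for q in (0.1, 0.2, 0.3):
    print(q, round(constraint_confidence(q)))  # ~7690, ~4852, ~2966
```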
Additionally, we investigate the relationship between the number of constraints and clustering performance for the view detection task. We observed that with only 5000 pairwise constraints, which could be obtained with less than 80 labels, our model achieves results comparable to those of Table 2 (see Appendix D.3 for details).
Face Image Generation. We evaluate the generative capabilities of our model using the UTKFace dataset (Zhang et al., 2017). This dataset contains over 20000 images of male and female faces of individuals aged from 1 to 118 years, with multiple ethnicities represented. We use a convolutional network for the VAE (the implementation details are described in Appendix E.3). As a first task, we cluster the data using the gender prior information, in the form of 2N pairwise constraints, which requires labels for 0.7% of the data set. Figs. 2a/2b show the PCA decomposition of the embedding space of both CVaDE and the unsupervised VaDE method. As a second task, we select a sub-sample of individuals between 18 and 50 years of age (approx. 11000 samples), and cluster by ethnicity (White, Black, Indian, Asian) using 2N pairwise constraints (Figs. 2c/2d). We believe that one possible reason for bias in current machine learning models is the under-representation of various ethnic groups in training data sets. The constrained CVaDE method could be used to identify such imbalances, while requiring only a relatively small number of labeled training samples. This would allow easy identification of such instances of bias in training data sets at a relatively low cost. Furthermore, the ability to generate samples from these detected clusters potentially allows automatic data set augmentation. With the inclusion of domain knowledge, we observe a neat division of the
As a result, the proposed model can be applied to a variety of applications where the difficulty of obtaining labelled data prevents the use of fully supervised algorithms." }, { "heading": "APPENDIX", "text": "" }, { "heading": "A DATA SETS", "text": "The data sets used in the experiments are the followings:\n• MNIST: It consists of 70000 handwritten digits. The images are centered and of size 28 by 28 pixels. We reshaped each image to a 784- dimensional vector (LeCun et al., 2010).\n• Fashion MNIST: A data set of Zalando’s article images consisting of a training set of 60,000 examples and a test set of 10,000 examples (Xiao et al., 2017).\n• HHAR: The Heterogeneity Human Activity Recognition (HHAR) dataset contains 10299 sensor records from smart phones and smart watches. All samples are partitioned into 6 categories of human activities and each sample is of 561 dimensions (Stisen et al., 2015).\n• Reuters: It contains 810000 English news stories (Lewis et al., 2004). Following the work of Xie et al. (2016), we used 4 root categories: corporate/industrial, government/social, markets, and economics as labels and discarded all documents with multiple labels, which results in a 685071-article dataset. We computed tf-idf features on the 2000 most frequent words to represent all articles. A random subset of 10000 documents is then sampled.\n• Newborn echo cardiograms: The dataset consists of 305 infant echo cardiogram videos from 65 patient visits obtained from a large children hospital. Each visit may consist of videos taken from several different angles, denoted by [LA, KAKL, KAPAP, KAAP, 4CV]. We cropped the videos by isolating the cone of the echo cardiogram, we resized them to 64x64 pixels and split them into individual frames obtaining a total of N = 20000 images.\n• UTKFace: This dataset contains over 20000 images of male and female face of individuals from 1 to 118 years old, with multiple ethnicities represented (Zhang et al., 2017)." }, { "heading": "B CONDITIONAL ELBO DERIVATIONS", "text": "In this section, we provide the detailed derivation of the Conditional ELBO with pairwise constraints and we describe how it can be trained efficiently with Monte Carlo sampling. Specifically, the CELBO, LC(X|G), is the upper bound of the marginal log-likelihood conditioned onG:\nlog p(X|G) = log ∫ Z ∑ c p(X,Z, c|G) ≥ Eqφ(Z,c|X) log p(X,Z, c|G) qφ(Z, c|X) = LC(X|G). (16)\nBy plugging in the marginal log-likelihood of Eq. 3 and the variational distribution of Eq. 6, the C-ELBO can be rewritten as:\nLC(X|G) = Eqφ(Z,c|X)[log pθ(X|Z)p(Z|c)p(c|G)]− Eqφ(Z,c|X)[log qφ(Z, c|X)] (17) = Eqφ(Z|X)[log pθ(X|Z)] + Eqφ(Z|X)p(c|Z)[log p(Z|c)]\n+ Ep(c|Z)[log p(c|G)]− Eqφ(Z|X)[log qφ(Z|X)]− Ep(c|Z)[log p(c|Z)] (18)\nwhere we used the fact that the variational distribution can be factorized as qφ(Z, c|X) = qφ(Z|X)p(c|Z). 
Given that q_\phi(Z|X)\, p(c|Z) = \prod_i q_\phi(z_i|x_i)\, p(c_i|z_i) and using Eq. 13, we can further factorize Eq. 18 (dropping the constant -\log \Omega(\pi)):
\mathcal{L}_C(X|G) = \sum_{i=1}^{N} \Big[ \mathbb{E}_{q_\phi(z_i|x_i)}[\log p_\theta(x_i|z_i)] + \mathbb{E}_{q_\phi(z_i|x_i)\, p(c_i|z_i)}[\log p(z_i|c_i)] + \mathbb{E}_{p(c_i|z_i)}[\log \pi_{c_i}] - \mathbb{E}_{q_\phi(z_i|x_i)}[\log q_\phi(z_i|x_i)] - \mathbb{E}_{p(c_i|z_i)}[\log p(c_i|z_i)] \Big] + \sum_i \sum_{j \neq i} \mathbb{E}_{p(c_i|z_i)} \mathbb{E}_{p(c_j|z_j)}[W_{i,j}\, \delta_{c_i c_j}] (19)
As p(c_i|z_i) is discrete, \mathbb{E}_{p(c_i|z_i)}[\cdot] = \sum_k p(c_i = k|z_i)[\cdot], and the above equation can then be written as:
\mathcal{L}_C(X|G) = \sum_{i=1}^{N} \Big[ \mathbb{E}_{q_\phi(z_i|x_i)}[\log p_\theta(x_i|z_i)] + \mathbb{E}_{q_\phi(z_i|x_i)}\Big[\sum_k p(c_i = k|z_i) \log p(z_i|c_i = k)\Big] + \sum_k p(c_i = k|z_i) \log \pi_k - \mathbb{E}_{q_\phi(z_i|x_i)}[\log q_\phi(z_i|x_i)] - \sum_k p(c_i = k|z_i) \log p(c_i = k|z_i) \Big] + \sum_i \sum_{j \neq i} \sum_k p(c_i = k|z_i)\, p(c_j = k|z_j)\, W_{i,j} (20)
Using the SGVB estimator, we can approximate the above equation as:
\mathcal{L}_C(X|G) \approx \sum_{i=1}^{N} \frac{1}{L} \sum_{l=1}^{L} \Big[ \log p_\theta(x_i|z_i^{(l)}) + \sum_k p(c_i = k|z_i^{(l)}) \log p(z_i^{(l)}|c_i = k) + \sum_k p(c_i = k|z_i^{(l)}) \log \pi_k - \log q_\phi(z_i^{(l)}|x_i) - \sum_k p(c_i = k|z_i^{(l)}) \log p(c_i = k|z_i^{(l)}) \Big] + \sum_i \sum_{j \neq i} \sum_k p(c_i = k|z_i)\, p(c_j = k|z_j)\, W_{i,j}, (21)
where L is the number of Monte Carlo samples in the SGVB estimator; it is set to L = 1 in all experiments.
C VARIATIONAL DEEP EMBEDDING
The Variational Deep Embedding method, VaDE (Jiang et al., 2017), assumes an observed sample x_i is generated by the following generative process:
c_i \sim \text{Cat}(1/K), \quad z_i \sim p(z_i|c_i) = \mathcal{N}(z_i \mid \mu_{c_i}, \sigma^2_{c_i} I) (22)
x_i \sim p(x_i|z_i) = \begin{cases} \mathcal{N}(x_i \mid \mu_{x_i}, \sigma^2_{x_i} I) \text{ with } [\mu_{x_i}, \sigma^2_{x_i}] = f(z_i; \theta) & \text{if } x_i \text{ is real-valued} \\ \text{Ber}(\mu_{x_i}) \text{ with } \mu_{x_i} = f(z_i; \theta) & \text{if } x_i \text{ is binary} \end{cases} (23)
where K is the predefined number of clusters, \mu_c and \sigma^2_c are the mean and variance of the Gaussian distribution corresponding to cluster c in the latent space, and the function f(z; \theta) is a neural network, called the decoder, parametrized by \theta, similarly to a VAE. To infer both the parameters of the Gaussian mixture model and the decoder, the VaDE maximises the likelihood of the data X, that is:
\log p(X) = \log \int_Z \sum_c p(X, Z, c) \geq \mathbb{E}_{q(Z,c|X)} \log \frac{p(X, Z, c)}{q(Z, c|X)} = \mathcal{L}_{ELBO}
The variational distribution is chosen to be:
q_\phi(Z, c|X) = \prod_i q_\phi(z_i, c_i|x_i) = \prod_i q_\phi(z_i|x_i)\, p(c_i|z_i) \quad \text{with} \quad p(c_i|z_i) = \frac{p(z_i|c_i)\, p(c_i)}{\sum_k p(z_i|k)\, p(k)} (24)
where q_\phi(z_i|x_i) is a Gaussian distribution with mean \mu(x_i) and variance \sigma^2(x_i) I, which are the outputs of a neural network, called the encoder, parametrized by \phi.
The ELBO can then be formulated as:
\mathcal{L}_{ELBO}(X) = \mathbb{E}_{q_\phi(Z|X)}[\log p_\theta(X|Z)] - D_{KL}(q_\phi(Z, c|X) \,\|\, p(Z, c)). (25)" }, { "heading": "D FURTHER EXPERIMENTS", "text": "" }, { "heading": "D.1 COMPARISON WITH C-IDEC", "text": "In Table 3, we present the quantitative results in terms of Adjusted Rand Index (ARI) and the percentage of satisfied constraints (SC) of both our model, CVaDE, and the strongest baseline, C-IDEC, with N constraints." }, { "heading": "D.2 NOISY LABELS", "text": "In Tables 4/5/6/7, we present the quantitative results in terms of Accuracy and Normalized Mutual Information of both our model, CVaDE, and the strongest baseline, C-IDEC, with N noisy constraints (presented visually in Fig. 1). In particular, the results are computed for q \in \{0.1, 0.2, 0.3\}, where q determines the fraction of pairwise constraints with flipped signs (that is, when a must-link is turned into a cannot-link and vice-versa)." }, { "heading": "D.3 HEART ECHO", "text": "PCA Decomposition. In Figure 4 we present a PCA decomposition of the embedded space learned by both the CVaDE and VaDE baseline, from the experiment presented in Section 4.
It can be observed that the CVaDE model, using N constraints, is able to learn an embedded space which clusters by view much more effectively.
Impact of Constraints. To understand how the number of provided constraints impacts the performance of CVaDE, we performed an experiment using the heart echo dataset in which progressively more constraints were provided to the model during training. Figure 5 demonstrates that the clustering performance of CVaDE improves as more pairwise constraints are provided. This trend continues until 5000 pairwise constraints are provided (requiring 0.35% of the dataset to be labeled), at which point the performance reaches a plateau.
E IMPLEMENTATION DETAILS" }, { "heading": "E.1 HYPER-PARAMETER SETTINGS", "text": "In Table 8 we specify the hyper-parameter settings of both our model, CVaDE, and the unsupervised baseline, VaDE. Given the semi-supervised setting, we did not focus on fine-tuning the hyper-parameters but rather chose standard configurations for all data sets. We observed that our model is robust against changes in the hyper-parameters, except for the batch size. The latter requires a high value, as we simplify the last term of Eq. 13 by allowing the search for pairwise constraints to be performed only inside the considered batch. For VaDE, we used the same hyper-parameter settings as in the original paper (Jiang et al., 2017)." }, { "heading": "E.2 HEART ECHO", "text": "In addition to the model described in Section 4, we also used a VGG-like convolutional neural network. This model is implemented in TensorFlow, using two VGG blocks (with a 3 × 3 kernel size) of 32 and 64 filters for the encoder, followed by a single fully-connected layer reducing down to an embedding of dimension 10. The decoder has a symmetric architecture.
The VAE is pretrained for 10 epochs, following which our model is trained for 300 epochs using a learning rate of 0.001 (with an exponential decay of 0.00001) and a batch size of 1024. Refer to the accompanying code for further details." }, { "heading": "E.3 FACE IMAGE GENERATION", "text": "The face image generation experiments using the UTKFace dataset described in Section 4 were carried out using VGG-like convolutional neural networks implemented in TensorFlow. In particular, the input image size of 64×64×3 allowed two VGG blocks (with a 3×3 kernel size) of 64 and 128 filters for the encoder, followed by a single fully-connected layer reducing down to an embedding of dimension 50. The decoder has a symmetric architecture.
The VAE is pretrained for 100 epochs, following which our model is trained for 1000 epochs using a learning rate of 0.001 (with a decay of 0.00001) and a batch size of 1024. Refer to the accompanying code for further details." } ]
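As a sketch of the VGG-like encoder described in Appendices E.2 and E.3, written with the Keras functional API since a TensorFlow implementation is stated; the number of convolutions per block and the activations are our assumptions, and the decoder would mirror this architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers

def vgg_block(x, filters):
    """One VGG-style block: two 3x3 convolutions followed by max-pooling."""
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.MaxPool2D()(x)

def build_encoder(input_shape=(64, 64, 1), filters=(32, 64), latent_dim=10):
    """VGG-like encoder producing the mean and log-variance of the latent
    embedding (heart echo setting: blocks of 32 and 64 filters, latent
    dimension 10; for UTKFace, use (64, 64, 3), (64, 128) and 50)."""
    inputs = tf.keras.Input(shape=input_shape)
    x = inputs
    for f in filters:
        x = vgg_block(x, f)
    x = layers.Flatten()(x)
    mu = layers.Dense(latent_dim)(x)
    logvar = layers.Dense(latent_dim)(x)
    return tf.keras.Model(inputs, [mu, logvar])
```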
2,020
null
SP:95782322a8951193e0690262f6a90d2ed5ed7463
[ "This paper studies, through a provable approach, whether abstaining (i.e., refusing to answer) can be beneficial for achieving small adversarial/robust error in settings where the input is potentially adversarially perturbed. The paper proves a separation between the power of models with and without abstain. In particular, it is shown that for a certain adversarial model (more about this below) when we force the model to answer without an abstain option, it will have high adversarial error, but when abstain is allowed, it can have small adversarial error as well as small abstention rate in certain settings. The paper then studies algorithms for robust contrastive learning in which they map the inputs into high-dimensional spaces and then aim to classify them using an abstain-enabled model based on 1-NN. The paper studies ways to adjust the parameters of the model as the data comes in an online fashion (divided into batches). They show how to achieve sublinear regret in such settings. They then compare linear classifiers with their own (1-NN style) classifiers and show advantages in robustness with such models when abstaining is allowed.", "This paper proves some fundamental facts about classifiers that can't abstain (provide a non-classification) and their robustness to adversarial perturbations. In Sec. 4, they provide a result that such classifiers are always vulnerable to adversarial perturbations in a technical sense. In particular, there will always be a class in which most training examples can be randomly perturbed in a way that an incorrect label will result nearly half the time. In Sec 5, they propose a modified nearest-neighbor classification algorithm, with two parameters that control abstention and \"noise removal\". They provide upper bounds on error in a random subspace attack scheme, and refine/loosen these results in several more specific/general scenarios. In Secs. 6 & 7, they discuss methods to tune the two parameters and provide experimental evidence of their theoretical results." ]
We formally define a feature-space attack where the adversary can perturb datapoints by arbitrary amounts but in restricted directions. By restricting the attack to a small random subspace, our model provides a clean abstraction for non-Lipschitz networks which map small input movements to large feature movements. We prove that classifiers with the ability to abstain are provably more powerful than those that cannot in this setting. Specifically, we show that no matter how well-behaved the natural data is, any classifier that cannot abstain will be defeated by such an adversary. However, by allowing abstention, we give a parameterized algorithm with provably good performance against such an adversary when classes are reasonably well-separated in feature space and the dimension of the feature space is high. We further use a data-driven method to set our algorithm parameters to optimize over the accuracy vs. abstention trade-off with strong theoretical guarantees. Our theory has direct applications to the technique of contrastive learning, where we empirically demonstrate the ability of our algorithms to obtain high robust accuracy with only small amounts of abstention in both supervised and self-supervised settings. Our results provide a first formal abstention-based gap, and a first provable optimization for the induced trade-off in an adversarial defense setting.
[]
[ { "authors": [ "Rima Alaifari", "Giovanni S Alberti", "Tandri Gauksson" ], "title": "ADef: an iterative algorithm to construct adversarial deformations", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Philip Bachman", "R Devon Hjelm", "William Buchwalter" ], "title": "Learning representations by maximizing mutual information across views", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Maria-Florina Balcan", "Vaishnavh Nagarajan", "Ellen Vitercik", "Colin White" ], "title": "Learning-theoretic foundations of algorithm configuration for combinatorial partitioning problems", "venue": "In Annual Conference on Learning Theory,", "year": 2017 }, { "authors": [ "Maria-Florina Balcan", "Travis Dick", "Ellen Vitercik" ], "title": "Dispersion for data-driven algorithm design, online learning, and private optimization", "venue": "In Annual Symposium on Foundations of Computer Science,", "year": 2018 }, { "authors": [ "Maria-Florina Balcan", "Tuomas Sandholm", "Ellen Vitercik" ], "title": "A general theory of sample complexity for multi-item profit maximization", "venue": "In ACM Conference on Economics and Computation,", "year": 2018 }, { "authors": [ "Maria-Florina Balcan", "Travis Dick", "Dravyansh Sharma" ], "title": "Learning piecewise Lipschitz functions in changing environments", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2020 }, { "authors": [ "James Bergstra", "Yoshua Bengio" ], "title": "Random search for hyper-parameter optimization", "venue": "The Journal of Machine Learning Research,", "year": 2012 }, { "authors": [ "Arjun Nitin Bhagoji", "Daniel Cullina", "Chawin Sitawarin", "Prateek Mittal" ], "title": "Enhancing robustness of machine learning systems via data transformations", "venue": "In Annual Conference on Information Sciences and Systems,", "year": 2018 }, { "authors": [ "Avrim Blum", "Travis Dick", "Naren Manoj", "Hongyang Zhang" ], "title": "Random smoothing might be unable to certify `1 robustness for high-dimensional images", "venue": "Journal of Machine Learning Research,", "year": 2020 }, { "authors": [ "Wieland Brendel", "Jonas Rauber", "Alexey Kurakin", "Nicolas Papernot", "Behar Veliqi", "Marcel Salathé", "Sharada P Mohanty", "Matthias Bethge" ], "title": "Adversarial vision challenge", "venue": null, "year": 1808 }, { "authors": [ "Tom B Brown", "Nicholas Carlini", "Chiyuan Zhang", "Catherine Olsson", "Paul Christiano", "Ian Goodfellow" ], "title": "Unrestricted adversarial examples", "venue": "arXiv preprint arXiv:1809.08352,", "year": 2018 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Adversarial examples are not easily detected: Bypassing ten detection methods", "venue": "In ACM Workshop on Artificial Intelligence and Security,", "year": 2017 }, { "authors": [ "Kamalika Chaudhuri", "Sanjoy Dasgupta" ], "title": "Rates of convergence for the cluster tree", "venue": "In Advances in Neural Information Processing Systems,", "year": 2010 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Logan Engstrom", "Brandon Tran", "Dimitris Tsipras", "Ludwig Schmidt", "Aleksander Madry" ], "title": "A rotation and a translation suffice: Fooling CNNs with simple transformations", "venue": "arXiv 
preprint arXiv:1712.02779,", "year": 2017 }, { "authors": [ "Dafydd Evans", "Antonia J Jones", "Wolfgang M Schmidt" ], "title": "Asymptotic moments of near–neighbour distance distributions", "venue": "Proceedings of the Royal Society of London. Series A: Mathematical, Physical and Engineering Sciences,", "year": 2002 }, { "authors": [ "Nicholas Frosst", "Nicolas Papernot", "Geoffrey Hinton" ], "title": "Analyzing and improving representations with the soft nearest neighbor loss", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Aditya Ganeshan", "R Venkatesh Babu" ], "title": "FDA: Feature disruptive attack", "venue": "In IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Yonatan Geifman", "Ran El-Yaniv" ], "title": "Selective classification for deep neural networks", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Justin Gilmer", "Ryan P Adams", "Ian Goodfellow", "David Andersen", "George E Dahl" ], "title": "Motivating the rules of the game for adversarial example research", "venue": "arXiv preprint arXiv:1807.06732,", "year": 2018 }, { "authors": [ "Kathrin Grosse", "Praveen Manoharan", "Nicolas Papernot", "Michael Backes", "Patrick McDaniel" ], "title": "On the (statistical) detection of adversarial examples", "venue": "arXiv preprint arXiv:1702.06280,", "year": 2017 }, { "authors": [ "Rishi Gupta", "Tim Roughgarden" ], "title": "A PAC approach to application-specific algorithm selection", "venue": "SIAM Journal on Computing,", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Kaiming He", "Haoqi Fan", "Yuxin Wu", "Saining Xie", "Ross Girshick" ], "title": "Momentum contrast for unsupervised visual representation learning", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Olivier J Hénaff", "Aravind Srinivas", "Jeffrey De Fauw", "Ali Razavi", "Carl Doersch", "SM Eslami", "Aaron van den Oord" ], "title": "Data-efficient image recognition with contrastive predictive coding", "venue": null, "year": 2019 }, { "authors": [ "R Devon Hjelm", "Alex Fedorov", "Samuel Lavoie-Marchildon", "Karan Grewal", "Phil Bachman", "Adam Trischler", "Yoshua Bengio" ], "title": "Learning deep representations by mutual information estimation and maximization", "venue": "arXiv preprint arXiv:1808.06670,", "year": 2018 }, { "authors": [ "Hossein Hosseini", "Yize Chen", "Sreeram Kannan", "Baosen Zhang", "Radha Poovendran" ], "title": "Blocking transferability of adversarial examples in black-box learning systems", "venue": "arXiv preprint arXiv:1703.04318,", "year": 2017 }, { "authors": [ "Shengyuan Hu", "Tao Yu", "Chuan Guo", "Wei-Lun Chao", "Kilian Q Weinberger" ], "title": "A new defense against adversarial images: Turning a weakness into a strength", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Prannay Khosla", "Piotr Teterwak", "Chen Wang", "Aaron Sarna", "Yonglong Tian", "Phillip Isola", "Aaron Maschinot", "Ce Liu", "Dilip Krishnan" ], "title": "Supervised contrastive learning", "venue": "arXiv preprint arXiv:2004.11362,", "year": 2020 }, { "authors": [ "Cassidy Laidlaw", "Soheil Feizi" ], "title": "Playing it safe: Adversarial robustness with an abstain option", "venue":
"arXiv preprint arXiv:1911.11253,", "year": 2019 }, { "authors": [ "Xin Li", "Fuxin Li" ], "title": "Adversarial examples detection in deep networks with convolutional filter statistics", "venue": "In IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Xingjun Ma", "Bo Li", "Yisen Wang", "Sarah M Erfani", "Sudanthi Wijewickrema", "Grant Schoenebeck", "Dawn Song", "Michael E Houle", "James Bailey" ], "title": "Characterizing adversarial subspaces using local intrinsic dimensionality", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Dongyu Meng", "Hao Chen" ], "title": "MagNet: a two-pronged defense against adversarial examples", "venue": "In ACM SIGSAC conference on computer and communications security,", "year": 2017 }, { "authors": [ "Jan Hendrik Metzen", "Tim Genewein", "Volker Fischer", "Bastian Bischoff" ], "title": "On detecting adversarial perturbations", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Seyed-Mohsen Moosavi-Dezfooli", "Alhussein Fawzi", "Pascal Frossard" ], "title": "Deepfool: a simple and accurate method to fool deep neural networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Anh Nguyen", "Jason Yosinski", "Jeff Clune" ], "title": "Deep neural networks are easily fooled: High confidence predictions for unrecognizable images", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Aaron van den Oord", "Yazhe Li", "Oriol Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": "arXiv preprint arXiv:1807.03748,", "year": 2018 }, { "authors": [ "Aditi Raghunathan", "Sang Michael Xie", "Fanny Yang", "John Duchi", "Percy Liang" ], "title": "Understanding and mitigating the tradeoff between robustness and accuracy", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Adi Shamir", "Itay Safran", "Eyal Ronen", "Orr Dunkelman" ], "title": "A simple explanation for the existence of adversarial examples with small hamming distance", "venue": null, "year": 1901 }, { "authors": [ "David Stutz", "Matthias Hein", "Bernt Schiele" ], "title": "Confidence-calibrated adversarial training: Generalizing to unseen attacks", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Jiawei Su", "Danilo Vasconcellos Vargas", "Kouichi Sakurai" ], "title": "One pixel attack for fooling deep neural networks", "venue": "IEEE Transactions on Evolutionary Computation,", "year": 2019 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "arXiv preprint arXiv:1312.6199,", "year": 2013 }, { "authors": [ "Yonglong Tian", "Dilip Krishnan", "Phillip Isola" ], "title": "Contrastive multiview coding", "venue": "In European Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Dimitris Tsipras", "Shibani Santurkar", "Logan Engstrom", "Alexander Turner", "Aleksander Madry" ], "title": 
"Robustness may be at odds with accuracy", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Zhirong Wu", "Yuanjun Xiong", "Stella X Yu", "Dahua Lin" ], "title": "Unsupervised feature learning via non-parametric instance discrimination", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Chaowei Xiao", "Jun-Yan Zhu", "Bo Li", "Warren He", "Mingyan Liu", "Dawn Song" ], "title": "Spatially transformed adversarial examples", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Qiuling Xu", "Guanhong Tao", "Siyuan Cheng", "Lin Tan", "Xiangyu Zhang" ], "title": "Towards feature space adversarial attack", "venue": "arXiv preprint arXiv:2004.12385,", "year": 2020 }, { "authors": [ "Weilin Xu", "David Evans", "Yanjun Qi" ], "title": "Feature squeezing: Detecting adversarial examples in deep neural networks", "venue": "In Network and Distributed Systems Security Symposium,", "year": 2017 }, { "authors": [ "Yao-Yuan Yang", "Cyrus Rashtchian", "Hongyang Zhang", "Ruslan Salakhutdinov", "Kamalika Chaudhuri" ], "title": "Adversarial robustness through local Lipschitzness", "venue": "In Advances in neural information processing systems,", "year": 2020 }, { "authors": [ "Hongyang Zhang", "Yaodong Yu", "Jiantao Jiao", "Eric P Xing", "Laurent El Ghaoui", "Michael I Jordan" ], "title": "Theoretically principled trade-off between robustness and accuracy", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Chengxu Zhuang", "Alex Lin Zhai", "Daniel Yamins" ], "title": "Local aggregation for unsupervised learning of visual embeddings", "venue": "In IEEE International Conference on Computer Vision,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "A substantial body of work has shown that deep networks can be highly susceptible to adversarial attacks, in which minor changes to the input lead to incorrect, even bizarre classifications (Nguyen et al., 2015; Moosavi-Dezfooli et al., 2016; Su et al., 2019; Brendel et al., 2018; Shamir et al., 2019). Much of this work has considered `p-norm adversarial examples, but there has also been recent interest in exploring adversarial models beyond bounded `p-norm (Brown et al., 2018; Engstrom et al., 2017; Gilmer et al., 2018; Xiao et al., 2018; Alaifari et al., 2019). What these results have in common is that changes that either are imperceptible or should be irrelevant to the classification task can lead to drastically different network behavior.\nOne reason for this vulnerability to adversarial attack is the non-Lipschitzness property of typical neural networks: small but adversarial movements in the input space can often produce large perturbations in the feature space. In this work, we consider the question of whether non-Lipschitz networks are intrinsically vulnerable, or if they could still be made robust to adversarial attack, in an abstract but (we believe) instructive adversarial model. In particular, suppose an adversary, by making an imperceptible change to an input x, can cause its representation F (x) in feature space (the penultimate layer of the network) to move by an arbitrary amount: will such an adversary always win? Clearly if the adversary can modify F (x) by an arbitrary amount in an arbitrary direction, then yes. But what if the adversary can modify F (x) by an arbitrary amount but only in a random direction (which it cannot control)? In this case, we show an interesting dichotomy: if the classifier must output a classification on any input it is given, then yes the adversary will still win, no matter how well-separated the classes are in feature space and no matter what decision surface the classifier uses. However, if the classifier is allowed to abstain, then it can defeat such an adversary so long as natural data of different classes are reasonably well-separated in feature space. Our results hold for generalizations of these models as well, such as adversaries that can modify feature representations in random low-dimensional subspaces, or directions that are not completely random. More broadly, our results provide a theoretical explanation for the importance of allowing abstaining, or selective classification, in the presence of adversarial attack.\nApart from providing a useful abstraction for non-Lipschitz feature embeddings, our model may be viewed as capturing an interesting class of real attacks. There are various global properties of an image, such as brightness, contrast, or rotation angle whose change might be “perceptible but not relevant” to classification tasks. Our model could also be viewed as an abstraction of attacks of that nature. Feature space attacks of other forms, where one can perturb abstract features denoting styles, including interpretable styles such as vivid colors and sharp outlines and uninterpretable ones, have also been empirically studied in (Xu et al., 2020; Ganeshan & Babu, 2019).\nAn interesting property of our model is that it is critical to be able to refuse to predict: any algorithm which always predicts a class label—therefore without an ability to abstain—is guaranteed to perform poorly. 
This provides a first formal hardness result about abstention in adversarial defense, and also a first provable negative result on feature-space attacks. We therefore allow the algorithm to output "don't know" for some examples, which, as a by-product of our algorithm, serves as a detection mechanism for adversarial examples. It also results in an interesting trade-off between robustness and accuracy: by controlling how frequently we refuse to predict, we are able to trade (robust) precision off against recall. We also provide results on how to provably optimize such a trade-off using a data-driven algorithm. Our theoretical advances are backed by empirical evidence in the context of contrastive learning (He et al., 2020; Chen et al., 2020; Khosla et al., 2020)." }, { "heading": "1.1 OUR CONTRIBUTIONS", "text": "Our work tackles the problem of defending against adversarial perturbations in a random feature subspace, and advances the theory and practice of robust machine learning in multiple ways.\n• We introduce a formal model that captures feature-space attacks and the effect of non-Lipschitzness of deep networks, which can magnify input perturbations.\n• We begin our analysis with a hardness result concerning defending against an adversary without the option of "don't know". We show that all classifiers that partition the feature space into two or more classes—thus without an ability to abstain—are provably vulnerable to adversarial examples for at least one class of examples, with probability nearly one half.\n• We explore the power of the abstention option: a variant of the nearest-neighbor classifier with the ability to abstain is provably robust against adversarial attacks, even in the presence of outliers in the training data set. We characterize the conditions under which the algorithm does not output "don't know" too often.\n• We leverage and extend dispersion techniques from data-driven decision making, and present a novel data-driven method for learning data-specific optimal hyperparameters in our defense algorithms to simultaneously obtain high robust accuracy and low abstention rates. Unlike typical hyperparameter tuning, our approach provably converges to a global optimum.\n• Experimentally, we show that our proposed algorithm achieves certified adversarial robustness on representations learned by supervised and self-supervised contrastive learning. Our method significantly outperforms algorithms without the ability to abstain." }, { "heading": "2 RELATED WORK", "text": "Adversarial robustness with abstention options. Classification with an abstention option (a.k.a. selective classification (Geifman & El-Yaniv, 2017)) is a relatively underexplored direction in adversarial machine learning. Hosseini et al. (2017) augmented the output class set with a NULL label and trained the classifier to reject adversarial examples by classifying them as NULL; Stutz et al. (2020) and Laidlaw & Feizi (2019) obtained robustness by rejecting low-confidence adversarial examples according to confidence thresholding or predictions on the perturbations of adversarial examples. Another line of research related to our method is the detection of adversarial examples (Grosse et al., 2017; Li & Li, 2017; Carlini & Wagner, 2017; Ma et al., 2018; Meng & Chen, 2017; Metzen et al., 2017; Bhagoji et al., 2018; Xu et al., 2017; Hu et al., 2019). However, theoretical understanding behind the empirical success of adversarial defenses with an abstention option remains elusive.\nData-driven decision making.
Data-driven algorithm selection refers to choosing a good algorithm from a parameterized family of algorithms for the given data. It is known as "hyperparameter tuning" to machine learning practitioners and typically involves a "grid search", "random search" (Bergstra & Bengio, 2012) or gradient-based search, with no guarantees of convergence to a global optimum. It was formally introduced to the theory of computing community by Gupta & Roughgarden (2017) as a learning paradigm, and was further extended in (Balcan et al., 2017). The key idea is to model the problem of identifying a good algorithm from data as a statistical learning problem. The technique has found useful application in providing provably better algorithms for several domains including clustering, mechanism design, and mixed integer programs, and in providing guarantees like differential privacy and adaptive online learning (Balcan et al., 2018a;b; 2020). For learning in an adversarial setting, we provide the first demonstration of the effectiveness of data-driven algorithm selection in a defense method to optimize over the accuracy-abstention trade-off with strong theoretical guarantees." }, { "heading": "3 PRELIMINARIES", "text": "Notation. We will use bold lower-case letters such as x and y to represent vectors, lower-case letters such as x and y to represent scalars, and calligraphic capital letters such as X, Y and D to represent spaces and distributions. Specifically, we denote by x ∈ X the sample instance, and by y ∈ Y the label, where X ⊆ R^{n1} and Y indicate the image and label spaces, respectively. Denote by F : X → R^{n2} the feature embedding which maps an instance to a high-dimensional vector in the latent space F(X). It can be parameterized, e.g., by deep neural networks. We will frequently use v ∈ R^{n2} to represent an adversarial perturbation in the feature space. Denote by dist(·, ·) the distance between any two vectors in the image or feature space. Examples of distances include dist(x1, x2) = ‖x1 − x2‖, the distance induced by a vector norm. We use B(x, τ) to represent a neighborhood of x: {x′ : dist(x, x′) ≤ τ} in the image or feature space. We will frequently denote by D_X the distribution of instances in the input space, by D_{X|y} the distribution of instances in the input space conditioned on the class y, by D_{F(X)} the distribution of features, and by D_{F(X)|y} the distribution of features conditioned on the class y." }, { "heading": "3.1 RANDOM FEATURE SUBSPACE THREAT MODEL", "text": "In principle, an adversarial example for a given labeled example (x, y) is a data point x′ that causes a classifier to output a different label on x′ than the true label y. Probably the most popular adversarial examples are norm-bounded perturbations in the input space. Despite a large literature devoted to defending against norm-bounded adversaries by improving the Lipschitzness of the neural network as a function mapping from input space to feature space (Zhang et al., 2019; Yang et al., 2020), it is typically not true that a small perturbation in the input space implies only a small modification in the feature space. In this paper, we study a threat model where an adversary can modify the data by a large amount in the feature space. Note that because this large modification in feature space is assumed to come from a small perturbation in input space, we always assume that the true correct label y is the same for x′ as for x.
Our model highlights the power of abstention in adversarial learning: there is a provable separation between classifiers that have an abstention option and those that do not under our threat model.\nOur threat model. In the setting of (robust) representation learning, we are given a set of training instances x1, ..., xm ∈ X. Let x be an n1-dimensional test input for classification. The input is embedded into a high n2-dimensional feature space using a deep neural network F. We predict the class of x by a prediction function on F(x) which can potentially output "don't know". The adversary may corrupt F(x) such that the modified feature vector is restricted to a random n3-dimensional affine subspace denoted by S + {F(x)}, while the perturbation magnitude may be arbitrarily large. The adversary is given access to everything, including F, x, S and the true label of x. Throughout the paper, we will use the terms adversary and adversarial example to refer to this threat model.\nAlgorithm 1 ROBUSTCLASSIFIER(τ, σ)\n1: Input: A test feature F(x) (potentially an adversarial example), a set of training features F(xi) and their labels yi, i ∈ [m], a threshold parameter τ, a separation parameter σ.\n2: Preprocessing: Delete training examples F(xi) if min_{j∈[m], yi≠yj} dist(F(xi), F(xj)) < σ.\n3: Output: A predicted label of F(x), or "don't know".\n4: if min_{i∈[m]} dist(F(x), F(xi)) < τ then\n5: Return y_{argmin_{i∈[m]} dist(F(x), F(xi))}\n6: else\n7: Return "don't know"" }, { "heading": "3.2 A META-ALGORITHM FOR INFERENCE-TIME ROBUSTNESS", "text": "Given a test point x, let r denote the shortest distance between F(x) and any training embedding F(xi) of a different label. Throughout the paper, we consider the prediction rule that classifies an unseen (and potentially adversarially modified) example with the class of its nearest training example provided that the distance between them is at most τ; otherwise the algorithm outputs "don't know" (see Algorithm 1 and Figure 2). The adversary is able to corrupt F(x) by a carefully-crafted perturbation along a random direction, i.e., F(x) + v, where v is an adversarial vector of arbitrary length in a random n3-dimensional subspace of R^{n2}. The parameter τ trades the success rate off against the abstention rate; when τ → ∞, our algorithm is equivalent to the nearest-neighbor algorithm. We also preprocess to remove outliers and points too close to them.\nFigure 1: Adversarial misclassification for nearest-neighbor predictor.
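As a concrete reading of Algorithm 1, the following is a minimal NumPy sketch of ROBUSTCLASSIFIER(τ, σ): outlier removal at separation σ, then thresholded nearest-neighbor prediction that abstains beyond radius τ. It is a sketch for illustration, not the authors' implementation; returning None stands in for "don't know".

```python
import numpy as np

def robust_classifier(test_feat, train_feats, train_labels, tau, sigma):
    """Algorithm 1 sketch: 1-NN with abstention (None = "don't know")."""
    # Pairwise distances between all training features.
    dists = np.linalg.norm(
        train_feats[:, None, :] - train_feats[None, :, :], axis=-1)
    # Preprocessing: delete any training point whose nearest neighbor
    # from a *different* class is closer than sigma.
    diff_class = train_labels[:, None] != train_labels[None, :]
    cross = np.where(diff_class, dists, np.inf).min(axis=1)
    feats, labels = train_feats[cross >= sigma], train_labels[cross >= sigma]
    if len(feats) == 0:
        return None
    # Predict with the nearest remaining neighbor if it lies within tau.
    d = np.linalg.norm(feats - test_feat, axis=1)
    i = int(np.argmin(d))
    return labels[i] if d[i] < tau else None

# Tiny usage example with two well-separated synthetic classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2)) + np.array([[4.0, 0.0]] * 10 + [[0.0, 4.0]] * 10)
y = np.array([0] * 10 + [1] * 10)
print(robust_classifier(np.array([4.0, 0.0]), X, y, tau=1.5, sigma=0.5))
```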
Figure 2: Adversarial misclassification for nearest-neighbor predictor." }, { "heading": "4 NEGATIVE RESULTS WITHOUT AN ABILITY TO ABSTAIN", "text": "Several negative results are known for defending against adversarial examples beyond norm-bounded settings. For example, Shamir et al. (2019) provably show the existence of targeted adversarial examples with small Hamming distance in the input space to their clean examples. For feature-space attacks, several empirical negative results are known (Xu et al., 2020; Ganeshan & Babu, 2019). We present a hardness result concerning defenses without an ability to abstain, and prove that such defenses are inevitably doomed against our feature-space attacks. Theorem 4.1. For any classifier that partitions R^{n2} into two or more classes, any data distribution D, any δ > 0 and any feature embedding F, there must exist at least one class y*, such that for at least a 1 − δ probability mass of examples x from class y* (i.e., x is drawn from D_{X|y*}), for a random unit-length vector v, with probability at least 1/2 − δ there is some γ0 > 0 such that F(x) + γ0·v is not labeled y* by the classifier. In other words, there must be at least one class y* such that for at least a 1 − δ probability mass of points x of class y*, the adversary wins with probability at least 1/2 − δ.\nProof. Without loss of generality, we assume that the feature embedding F is the identity mapping. Define r_δ to be a radius such that for every class y, at least a 1 − δ probability mass of examples x of class y lie within distance r_δ of the origin. Let R = r_δ·√n2/δ. R is defined to be large enough such that if we take a ball of radius R and move it by a distance r_δ, at least a 1 − δ fraction of the volume of the new ball is inside the intersection with the old ball.
Now, let B be the ball of radius R centered at the origin. Let vol(B) denote the volume of B and let vol_y(B) denote the volume of the subset of B that is assigned label y by the classifier. Let y* be any label such that vol_{y*}(B)/vol(B) ≤ 1/2. Such a class y* exists because we do not have the option to output "don't know". Now by the definition of y*, a point z picked uniformly at random from B has probability at least 1/2 of being classified differently from y*. This implies that, by the definition of R, if x is within distance r_δ of the origin, then a point z_x that is picked uniformly at random in the ball B_x of radius R centered at x has probability at least 1/2 − δ of being classified differently from y*. This immediately implies that if we choose a random unit-length vector v, then with probability at least 1/2 − δ, there exists γ0 > 0 such that x + γ0·v is classified differently from y*, since we can think of choosing v by first sampling z_x from B_x and then defining v = (z_x − x)/‖z_x − x‖₂. So, the theorem follows from the fact that, by the definition of r_δ, at least a 1 − δ probability mass of examples x from class y* are within distance r_δ of the origin.\n\nWe remark that our lower bound applies to any classifier and exploits the fact that a classifier without abstention must label the entire feature space. For a simple linear decision boundary (center of Figure 3), a perturbation in any direction (except parallel to the boundary) can cross the boundary with an appropriate magnitude. The left and right figures show that if we try to 'bend' the decision boundary to 'protect' one of the classes, the other class is still vulnerable. Our argument formalizes and generalizes this intuition, and shows that there must be at least one vulnerable class irrespective of how you may try to shape the class boundaries, where the adversary succeeds in a large fraction of directions.\nTheorem 4.1 implies that all classifiers that partition R^{n2} into two or more classes—thus without an ability to abstain—are vulnerable to adversarial examples for at least one class of data with probability nearly one half. Although much effort has been devoted to empirically investigating the power of "don't know" in adversarial robustness, theoretical understanding behind the empirical success of these methods remains elusive. To the best of our knowledge, our work is the first that provably demonstrates the power of "don't know" in the algorithmic design of adversarially robust classifiers." }, { "heading": "5 POSITIVE RESULTS WITH AN ABILITY TO ABSTAIN", "text": "Theorem 4.1 gives a hardness result for robust classification without abstention. In this section, we explore the power of abstaining and show that classifiers with an ability to abstain are provably robust.\nGiven a test instance x ∼ D_X, recall that r denotes the shortest distance between F(x) ∈ R^{n2} and any training embedding F(xi) ∈ R^{n2} with a different label. The adversary is allowed to corrupt F(x) with an arbitrarily large perturbation in a uniformly-distributed subspace S of dimension n3. Consider the prediction rule that classifies the unseen example F(x) ∈ R^{n2} with the class of its nearest training example provided that the distance between them is at most τ; otherwise the algorithm outputs "don't know" (see Algorithm 1 with σ = 0). Denote by E^x_adv(f) := E_{S∼𝕊} 1{∃e ∈ S + {F(x)} ⊆ R^{n2} s.t. f(e) ≠ y and f(e) does not abstain} the robust error of a given classifier f for classifying instance x. Our analysis leads to the following positive results on this algorithm.
Theorem 5.1. Let x ∼ D_X be a test instance, m be the number of training examples and r be the shortest distance between F(x) and F(xi) where xi is a training point from a different class. Suppose τ = o(r·√(1 − n3/n2)). The robust error of Algorithm 1, E^x_adv(ROBUSTCLASSIFIER(τ, 0)), is at most m·(cτ/(r·√(1 − n3/n2)))^(n2−n3) + m·c0^(n2−n3), where c > 0 and 0 < c0 < 1 are absolute constants.\nProof Sketch. We begin our analysis with the case of n3 = 1. Suppose we have a training example x′ of another class, and suppose F(x) and F(x′) are at distance D in the feature space. Because τ = o(D), the probability that the adversary can move F(x) to within distance τ of F(x′) should be roughly the ratio of the surface area of a sphere of radius τ to the surface area of a sphere of radius D, which is at most O((τ/D)^(n2−1)) ≤ O((τ/r)^(n2−1)). The analysis for the general case of n3 follows from a peeling argument: note that the random subspace in which the adversarial vector is restricted to lie can be constructed by first sampling a vector v1 uniformly at random from a unit sphere in the ambient space R^{n2} centered at 0; fixing v1, we then sample a vector v2 uniformly at random from a unit sphere in the null space of span{v1}; we repeat this procedure n3 times and let span{v1, v2, ..., vn3} be the desired adversarial subspace. For each step of the construction, we apply the same argument as that of n3 = 1 with D = Ω(r·√((n2 − i)/n2)) with high probability, if we project F(x) and F(x′) onto a random subspace of dimension n2 − i. Finally, a union bound over the m training points completes the proof. □\nTrade-off between success probability and abstention rate. Theorem 5.1 captures the trade-off between the success probability of an algorithm and the abstention rate: a smaller value of τ increases the success probability of the algorithm, while it also encourages Algorithm 1 to output "don't know" more often. A related line of research to this observation is the trade-off between robustness and accuracy: Zhang et al. (2019); Tsipras et al. (2019) showed that there might be no predictor in the hypothesis class that has low natural and robust errors; even when such a predictor exists for well-separated data (Yang et al., 2020), Raghunathan et al. (2020) showed that the natural error could increase with adversarial training if we only have a finite number of samples. To connect the two trade-offs, we note that a high success probability of ROBUSTCLASSIFIER(τ, 0) in Algorithm 1 tends to prevent the algorithm from predicting wrong labels for adversarial examples, while the associated high abstention rate encourages the algorithm to output "don't know" even for natural examples, thus leading to a trivial, inaccurate classifier." }, { "heading": "5.1 A MORE GENERAL ADVERSARY WITH BOUNDED DENSITY", "text": "We extend our results to a more general class of adversaries, which have a bounded (κ-bounded) density over the space of linear subspaces of a fixed dimension n3; the adversary can perturb a test feature vector arbitrarily in the sampled adversarial subspace. Theorem 5.2. Consider the setting of Theorem 5.1, with an adversary having a κ-bounded distribution over the space of linear subspaces of a fixed dimension n3 for perturbing the test point. If E(τ, r) denotes the bound on the error rate in Theorem 5.1 for ROBUSTCLASSIFIER(τ, 0) in Algorithm 1, then the error bound of the same algorithm against the κ-bounded adversary is O(κ·E(τ, r))."
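To make the n3 = 1 step of the proof sketch above explicit, the following is a short LaTeX rendering of the spherical-cap bound; constants are not tracked, and the final inequality uses D ≥ r.

```latex
% Directions v for which the ray from F(x) enters the tau-ball around F(x')
% form a spherical cap of angular radius arcsin(tau / D):
\Pr_{v \sim \mathrm{Unif}(S^{n_2 - 1})}
  \big[\, \exists\, \gamma > 0 :\ \| F(x) + \gamma v - F(x') \| < \tau \,\big]
  \;=\; \frac{\mathrm{area}\big(\text{cap of angle } \arcsin(\tau/D)\big)}
             {\mathrm{area}\big(S^{n_2 - 1}\big)}
  \;\le\; O\!\Big(\big(\tfrac{\tau}{D}\big)^{n_2 - 1}\Big)
  \;\le\; O\!\Big(\big(\tfrac{\tau}{r}\big)^{n_2 - 1}\Big).
```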
}, { "heading": "5.2 OUTLIER REMOVAL AND IMPROVED UPPER BOUND", "text": "The upper bounds above assume that the data is well-separated in the feature space. For noisy data and good-but-not-perfect embeddings, the condition may not hold. In Theorem E.1 (in Appendix E) we show that we obtain almost the same upper bound on failure probability under weaker assumptions by exploiting the noise removal threshold ." }, { "heading": "5.3 CONTROLLING ABSTENTION RATE ON NATURAL DATA", "text": "We show that we can control the frequency of outputting “don’t know”, when the data are nicely distributed according to the following generative assumption. Intuitively, it says that for every label class one can cover most of the distribution of the class with (potentially overlapping) balls of a fixed radius, each having a small lower bound on the density contained. This holds for well-clustered datasets (as is typical for feature data) for a sufficiently large radius. Assumption 1. We assume that at least 1 fraction of mass of the marginal distribution DF (X )|y over Rn2 can be covered by N balls B1, B2, ... BN of radius ⌧/2 and of mass PrDF (X) [Bk] C0 m ⇣ n2 log m + log 4N ⌘ , where C0 > 0 is an absolute constant and , 2 (0, 1).\nOur analysis leads to the following guarantee on the abstention rate. Theorem 5.3. Suppose that F (x1), ..., F (xm) are m training instances i.i.d. sampled from marginal distribution DF (X ). Under Assumption 1, with probability at least 1 /4 over the sampling, we have Pr([mi=1B(F (xi), ⌧)) 1 .\nTheorem 5.3 implies that when Pr[Bk] N and m = ⌦( n2N log n2N ), with probability at least 1 /4 over the sampling, we have Pr([mi=1B(F (xi), ⌧)) 1 . Therefore, with high probability, the algorithm will output “don’t know” only for an fraction of natural data." }, { "heading": "6 LEARNING DATA-SPECIFIC OPTIMAL THRESHOLDS", "text": "Given an embedding function F and a classifier f⌧ which outputs either a predicted class if the nearest neighbor is within distance ⌧ of a test point or abstains from predicting, we want to evaluate the performance of f⌧ on a test set T against an adversary which can perturb a test feature vector in a random subspace S ⇠ S. To this end, we define Eadv(⌧) := ES⇠S 1|T | P (x,y)2T 1{9e 2 S +F (x) ✓ Rn2 s.t. f(e) 6= y and f⌧ (e) does not abstain} as the robust error on the test set T , and Dnat(⌧) := 1 |T | P\n(x,y)2T 1{f⌧ (F (x)) abstains} as the abstention rate on the natural data. Eadv(⌧) and Dnat(⌧) are monotonic in ⌧ . The robust error Eadv(⌧) is optimal at ⌧ = 0, while we abstain from prediction all the time (i.e., Dnat(⌧) = 1). A simple approach is to fix an upper limit d⇤ on Dnat(⌧), which corresponds to the maximum abstention rate on natural data under our budget. Then it is straightforward to search for the optimal ⌧⇤ such that Dnat(⌧⇤) ⇡ d⇤ by using nearest neighbor distances of test points. For ⌧ < ⌧⇤ we have a higher abstention rate, and when ⌧ > ⌧⇤ we have a higher robust error rate. A potential problem with this approach is that Dnat(⌧) is non-Lipschitz, so small variation in ⌧ can possibly make the abstention rate significantly higher than d⇤.\nAn alternative objective which captures the trade-off between abstention rate and accuracy is defined as g(⌧) := Eadv(⌧) + cDnat(⌧), where c is a positive constant. If, for example, we are willing to take a one percent increase of the abstention rate for a two percent drop in the error rate, we could set c to be 12 . 
We can optimize g(τ) in a data-driven fashion and obtain a theoretical guarantee on convergence to a global optimum. In the following, we consider the case where the test examples appear in an online fashion in small batches of size b, and we set the threshold τ adaptively by a low-regret algorithm. We note in Corollary 6.3, using online-to-batch conversion, that our results imply a uniform convergence bound for the objective g(τ) in the supervised setting. Details of the proofs in this section can be found in Appendix H.\nThe significance of data-driven design in this setting is underlined by the following two observations. Firstly, as noted above, optimization over τ is difficult due to the non-Lipschitz nature of D_nat(τ) and the intractability of characterizing the objective function g(τ) exactly due to E_adv(τ). Secondly, the optimal value of τ can be a complex function of the data geometry and sampling rate. We illustrate this by exact computation of the optimal τ for a simple intuitive setting: consider a binary classification problem where the features lie uniformly on two one-dimensional manifolds embedded in two dimensions (i.e., n2 = 2, see Figure 4). Assume that the adversary perturbs in a uniformly random direction (n3 = 1). For this setting, in Appendix J we show that Theorem 6.1. Let τ* := argmin_{τ∈R+} g(τ) and β = 2πcr/D. For the setting considered above, if we further assume D = o(r) and m = ω(log β), then there is a unique value of τ* in [0, D/2). Furthermore, we have τ* = Θ(D·log(βm)/m) if βm > 1; otherwise, τ* = 0.\nThe remaining section summarizes our main theoretical results.\nTheorem 6.2. Assume τ is o(min{m^(−1/n2), r}), and the data distribution is continuous, κ-bounded, positive and has bounded partial derivatives. If τ is set using a continuous version of the multiplicative updates algorithm (Algorithm 2 in Appendix H; Balcan et al., 2018a), then with probability at least 1 − δ, the expected regret in T rounds is bounded by O(√(n2·T·log(RTmb/r^(n2−n3)))), where R is a bound on the largest distance between any two training points, b is the batch size, and r is the smallest distance between points of different labels. Corollary 6.3. Suppose we run the online algorithm of Theorem 6.2 on a validation set of size T, and use a randomized threshold τ̂ on the test set drawn from a uniform distribution over the thresholds τ1, ..., τT used in online learning. If the threshold which minimizes g(τ) is τ*, then with probability greater than 1 − δ, we have |E[g(τ̂)] − g(τ*)| ≤ O(√((n2/T)·log(RTmb/r^(n2−n3)))).\nRemark 1. The results can be generalized to a bounded-density adversary (Corollary H.3). Remark 2. The above analysis can be extended to the problem of optimizing over σ by formulating the objective as a function of two parameters, g(τ, σ) := E_adv(τ, σ) + c·D_nat(τ, σ), within a range σ ∈ [r, s]. For fixed τ, both E_adv(τ, σ) and D_nat(τ, σ) are piecewise constant and monotonic in σ. The proof of Lipschitzness of the pieces can be adapted easily to the case σ ≥ r (Lemma H.2). Discontinuities in E_adv(τ, ·) and D_nat(τ, ·) can be bounded using the upper bound s for σ (Lemma H.4). Finally, the number of discontinuities in g(τ, σ) in a ball of radius w can be upper bounded by the product of the number of discontinuities in g(τ, ·) and g(·, σ) in intervals of width w." }, { "heading": "7 EXPERIMENTS ON CONTRASTIVE LEARNING", "text": "Theorem 5.1 sheds light on the algorithmic design of robustly learning the feature embedding F.
In order to preserve robustness against adversarial examples for a given test point x, the theorem suggests minimizing, in the feature space, τ—the closest distance between F(x) and any training feature F(xi) of the same label—and maximizing r—the closest distance between F(x) and any training feature F(xi) of a different label. This is conceptually consistent with the spirit of the nearest-neighbor algorithm, a.k.a. contrastive learning when we replace the max operator with the softmax operator for differentiable training:\nmin_F −(1/m)·Σ_{i∈[m]} log( Σ_{j∈[m], j≠i, yi=yj} exp(−‖F(xi) − F(xj)‖²/T) / Σ_{k∈[m], k≠i} exp(−‖F(xi) − F(xk)‖²/T) ), (1)\nwhere T > 0 is the temperature parameter. Loss (1) is also known as the soft-nearest-neighbor loss in the context of supervised learning (Frosst et al., 2019), or the InfoNCE loss in the setting of self-supervised learning (He et al., 2020)." }, { "heading": "7.1 CERTIFIED ADVERSARIAL ROBUSTNESS AGAINST EXACT COMPUTATION OF ATTACKS", "text": "We verify the robustness of Algorithm 1 when the representations are learned by contrastive learning. Given an embedding function F and a classifier f which either outputs a predicted class or abstains from predicting, recall that we define the natural and robust errors, respectively, as E_nat(f) := E_{(x,y)∼D} 1{f(F(x)) ≠ y and f(F(x)) does not abstain}, and E_adv(f) := E_{(x,y)∼D, S∼𝕊} 1{∃e ∈ S + {F(x)} ⊆ R^{n2} s.t. f(e) ≠ y and f(e) does not abstain}, where S ∼ 𝕊 is a random adversarial subspace of R^{n2} with dimension n3. D_nat(f) := E_{(x,y)∼D} 1{f(F(x)) abstains} is the abstention rate on the natural examples. Note that the robust error is always at least as large as the natural error.\nSelf-supervised contrastive learning setup. Our experimental setup follows that of SimCLR (Chen et al., 2020). We use the ResNet-18 architecture (He et al., 2016) for representation learning with a two-layer projection head of width 128. The dimension of the representations is 512. We set batch size 512, temperature T = 0.5, and an initial learning rate of 0.5 which is followed by cosine learning rate decay. We sequentially apply four simple augmentations: random cropping followed by resizing back to the original size, random flipping, random color distortion, and randomly converting the image to grayscale with a probability of 0.2. In the linear evaluation protocol, we set batch size 512 and learning rate 1.0 to learn a linear classifier in the feature space by empirical risk minimization.\nSupervised contrastive learning setup. Our experimental setup follows that of Khosla et al. (2020). We use the ResNet-18 architecture for representation learning with a two-layer projection head of width 128. The dimension of the representations is 512. We set batch size 512, temperature T = 0.1, and an initial learning rate of 0.5 which is followed by cosine learning rate decay. We sequentially apply four simple augmentations: random cropping followed by resizing back to the original size, random flipping, random color distortion, and randomly converting the image to grayscale with a probability of 0.2. In the linear evaluation protocol, we set batch size 512 and learning rate 5.0 to learn a linear classifier in the feature space by empirical risk minimization.\nIn both the self-supervised and supervised setups, we compare the robustness of the linear protocol with that of our defense protocol in Algorithm 1 under exact computation of adversarial examples using a convex optimization program in n3 dimensions with m constraints.
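For intuition about what the exact attack must decide, here is a minimal NumPy sketch of a sufficient-condition check for the random-subspace adversary against the thresholded 1-NN rule: it tries, for each wrong-class training point, the feature vector in the affine subspace F(x) + span(V) closest to that point. This heuristic check is illustrative only and is not the convex program or Algorithm 4 referenced here; V is assumed to be an orthonormal basis of the sampled subspace.

```python
import numpy as np

def subspace_attack_succeeds(f_x, y, train_feats, train_labels, V, tau):
    """Sufficient check: is some e in f_x + span(V) misclassified by 1-NN at radius tau?"""
    for j in np.flatnonzero(train_labels != y):
        delta = train_feats[j] - f_x
        # Closest point to F(x_j) within the affine subspace f_x + span(V).
        e = f_x + V @ (V.T @ delta)
        d = np.linalg.norm(train_feats - e, axis=1)
        i = int(np.argmin(d))
        if d[i] < tau and train_labels[i] != y:
            return True   # e is classified (not abstained) with a wrong label
    return False
```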
Algorithm 4 in the appendix provides an efficient implementation of the attack.\nExperimental results. We summarize our results in Table 1. Compared with the linear protocol, our algorithms have much lower robust error. Note that even if abstention is added based on the distance from the linear boundary, sufficiently large perturbations will ensure that the adversary can always succeed. For an approximate adversary which can be efficiently implemented for large n3, see Appendix L.2." }, { "heading": "7.2 ROBUSTNESS-ABSTENTION TRADE-OFF", "text": "The threshold parameter τ captures the trade-off between the robust accuracy A_adv := 1 − E_adv and the abstention rate D_nat on the natural data. We report both metrics for different values of τ for supervised and self-supervised contrastive learning. The supervised setting enjoys higher adversarial accuracy and a smaller abstention rate for fixed τ's due to the use of extra label information. We plot A_adv against D_nat for Algorithm 1 as the hyperparameters vary. For small τ, both the accuracy and the abstention rate approach 1.0. As the threshold increases, the abstention rate decreases rapidly and our algorithm enjoys good accuracy even with small abstention rates. For τ → ∞ (i.e., nearest-neighbor search), the abstention rate on the natural data D_nat is 0% but the robust accuracy is also roughly 0%. Increasing σ (for small σ) gives us higher robust accuracy for the same abstention rate. Too large a σ may also lead to degraded performance." } ]
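Returning to the training objective, here is a minimal NumPy sketch of the soft-nearest-neighbor loss of Eq. (1), written as a plain forward computation (a training implementation would compute the same quantity in an autodiff framework); names are illustrative.

```python
import numpy as np

def soft_nearest_neighbor_loss(feats, labels, T=0.5):
    """Eq. (1): mean over i of -log( same-class mass / all-points mass )."""
    sq = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(axis=-1)
    sim = np.exp(-sq / T)
    np.fill_diagonal(sim, 0.0)          # exclude the j = i and k = i terms
    same = labels[:, None] == labels[None, :]
    num = (sim * same).sum(axis=1)      # same-label neighbors only
    den = sim.sum(axis=1)               # all other points
    return float(-np.mean(np.log(num / den)))

feats = np.random.default_rng(0).normal(size=(8, 4))
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(soft_nearest_neighbor_loss(feats, labels, T=0.5))
```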
2020
null
SP:9977ed83006cd0ccbf385f26220aa9395a723157
[ "The authors present a method for tackling the problem of over-smoothing in graph convolutional networks. Specifically, this is achieved by explicitly modelling a latent graph which, ideally, would be a graph which connects an observation to all other observations of the same class and no observations of a different class. In practice, there is only an uncertain picture of this latent graph as in many applications the labels must be estimated for unlabelled observations. The authors present an EM variational algorithm for approximating both this latent graph and using it to improve the estimation of a GCN. The authors demonstrate that the proposed method performs favourably on a battery of test against an array of existing methods for solving the node classification problem. ", "This paper proposes a method to alleviate the over-smoothing problem of GNNs. The key idea is to generate a latent graph structure via leveraging stochastic block model to approximate the observed graph structure and label information. The learned latent graph is expected to have a clear community structure with dense intra-class edges and sparse inter-class edges, so that labels of unlabeled nodes are better predicted based on the latent structure. The whole framework is well designed as an MLE problem, with EBLO solved by an alternate EM style algorithm. Both E-step and M-step are assumed to enhance each other's performance, but this point is not clearly validated in the experiments. Also, it is good to see some discussions on the relationship between the proposed framework and dropedge and adaedge methods. Overall, the idea makes sense in terms of joint topology optimization (via SBM) and node classification. The methodology is designed well as an MLE problem. The paper writes well and the experimental results demonstrate effectiveness to some extent." ]
Over-smoothing has emerged as a severe problem for node classification with graph convolutional networks (GCNs). In the view of message passing, the over-smoothing issue is caused by the observed noisy graph topology, which propagates information along inter-class edges and, consequently, over-mixes the features of nodes in different classes. In this paper, we propose a novel architecture, namely VEM-GCN, to address this problem by employing the variational EM algorithm to jointly optimize the graph topology and learn desirable node representations for classification. Specifically, variational EM approximates a latent adjacency matrix parameterized by the assortative-constrained stochastic block model (SBM) to enhance intra-class connection and suppress inter-class interaction in the observed noisy graph. In the variational E-step, the graph topology is optimized by approximating the posterior probability distribution of the latent adjacency matrix with a neural network learned from node embeddings. In the M-step, node representations are learned using the graph convolutional network based on the refined graph topology for the downstream task of classification. VEM-GCN is demonstrated to outperform existing strategies for tackling over-smoothing and optimizing graph topology in node classification on seven benchmark datasets.
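As a rough illustration of the alternating scheme this abstract describes, below is a heavily simplified sketch of one variational EM round in Python/PyTorch. The edge-probability parameterization, the hard thresholding, and the plain cross-entropy M-step objective are all assumptions made for illustration; the paper's actual objective is an ELBO that also involves the assortative-constrained SBM prior.

```python
import torch

def e_step(node_embeddings, edge_mlp):
    # Approximate posterior edge probabilities q(A_ij) from pairs of embeddings.
    z = node_embeddings
    n = z.size(0)
    pairs = torch.cat([z[:, None, :].expand(n, n, -1),
                       z[None, :, :].expand(n, n, -1)], dim=-1)
    return torch.sigmoid(edge_mlp(pairs)).squeeze(-1)   # (n, n) in [0, 1]

def m_step(features, edge_probs, gcn, labels, train_mask, optimizer):
    # Train the GCN for node classification on the refined topology.
    refined_adj = (edge_probs > 0.5).float()   # hard threshold, for brevity only
    logits = gcn(features, refined_adj)
    loss = torch.nn.functional.cross_entropy(logits[train_mask],
                                             labels[train_mask])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```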
[]
[ { "authors": [ "Aleksandar Bojchevski", "Stephan Günnemann" ], "title": "Deep Gaussian embedding of graphs: Unsupervised inductive learning via ranking", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Léon Bottou" ], "title": "Large-scale machine learning with stochastic gradient descent", "venue": "In Proceedings of COMPSTAT’2010, pp", "year": 2010 }, { "authors": [ "Deli Chen", "Yankai Lin", "Wei Li", "Peng Li", "Jie Zhou", "Xu Sun" ], "title": "Measuring and relieving the oversmoothing problem for graph neural networks from the topological view", "venue": "In Proceedings of the 34th AAAI Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Ming Chen", "Zhewei Wei", "Zengfeng Huang", "Bolin Ding", "Yaliang Li" ], "title": "Simple and deep graph convolutional networks", "venue": "In Proceedings of the 37th International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Yu Chen", "Lingfei Wu", "Mohammed J Zaki" ], "title": "Iterative deep graph learning for graph neural networks: Better and robust node embeddings", "venue": "In Proceedings of the 34th Conference on Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Matthias Fey" ], "title": "Just jump: Dynamic neighborhood aggregation in graph neural networks", "venue": "In ICLR Workshop on Representation Learning on Graphs and Manifolds,", "year": 2019 }, { "authors": [ "Luca Franceschi", "Mathias Niepert", "Massimiliano Pontil", "Xiao He" ], "title": "Learning discrete structures for graph neural networks", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Hongyang Gao", "Zhengyang Wang", "Shuiwang Ji" ], "title": "Large-scale learnable graph convolutional networks", "venue": "In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2018 }, { "authors": [ "Samuel Gershman", "Noah Goodman" ], "title": "Amortized inference in probabilistic reasoning", "venue": "In Proceedings of the Annual Meeting of the Cognitive Science Society,", "year": 2014 }, { "authors": [ "Justin Gilmer", "Samuel S. Schoenholz", "Patrick F. Riley", "Oriol Vinyals", "George E. Dahl" ], "title": "Neural message passing for quantum chemistry", "venue": "In Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Liyu Gong", "Qiang Cheng" ], "title": "Exploiting edge features for graph neural networks", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition", "year": 2019 }, { "authors": [ "Daniel Gribel", "Thibaut Vidal", "Michel Gendreau" ], "title": "Assortative-constrained stochastic block models", "venue": "arXiv preprint arXiv:2004.11890,", "year": 2020 }, { "authors": [ "Aditya Grover", "Jure Leskovec" ], "title": "node2vec: Scalable feature learning for networks", "venue": "In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2016 }, { "authors": [ "William L. 
Hamilton", "Rex Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "In Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Arman Hasanzadeh", "Ehsan Hajiramezanali", "Shahin Boluki", "Nick Duffield", "Mingyuan Zhou", "Krishna Narayanan", "Xiaoning Qian" ], "title": "Bayesian graph neural networks with adaptive connection sampling", "venue": "In Proceedings of the 37th International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Paul W. Holland", "Kathryn Blackmond Laskey", "Samuel Leinhardt" ], "title": "Stochastic blockmodels: First steps", "venue": "Social Networks,", "year": 1983 }, { "authors": [ "Eric Jang", "Shixiang Gu", "Ben Poole" ], "title": "Categorical reparameterization with gumbel-softmax", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Xiaodong Jiang", "Pengsheng Ji", "Sheng Li" ], "title": "CensNet: Convolution with edge-node switching in graph neural networks", "venue": "In Proceedings of the 28th International Joint Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Wei Jin", "Yao Ma", "Xiaorui Liu", "Xianfeng Tang", "Suhang Wang", "Jiliang Tang" ], "title": "Graph structure learning for robust graph neural networks", "venue": "In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2020 }, { "authors": [ "Yilun Jin", "Guojie Song", "Chuan Shi" ], "title": "GraLSP: Graph neural networks with local structural patterns", "venue": "In Proceedings of the 34th AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In 3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Diederik P. Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "In 2nd International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Thomas N. 
Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Johannes Klicpera", "Aleksandar Bojchevski", "Stephan Günnemann" ], "title": "Predict then propagate: Graph neural networks meet personalized PageRank", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Johannes Klicpera", "Stefan Weißenberger", "Stephan Günnemann" ], "title": "Diffusion improves graph learning", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Qimai Li", "Zhichao Han", "Xiao-Ming Wu" ], "title": "Deeper insights into graph convolutional networks for semi-supervised learning", "venue": "In Proceedings of the 32nd AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Ziyao Li", "Liang Zhang", "Guojie Song" ], "title": "GCN-LASE: towards adequately incorporating link attributes in graph convolutional networks", "venue": "In Proceedings of the 28th International Joint Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Jiaqi Ma", "Weijing Tang", "Ji Zhu", "Qiaozhu Mei" ], "title": "A flexible generative framework for graphbased semi-supervised learning", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Laurens van der Maaten", "Geoffrey Hinton" ], "title": "Visualizing data using t-SNE", "venue": "Journal of Machine Learning Research,", "year": 2008 }, { "authors": [ "Julian McAuley", "Christopher Targett", "Qinfeng Shi", "Anton Van Den Hengel" ], "title": "Image-based recommendations on styles and substitutes", "venue": "In Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval,", "year": 2015 }, { "authors": [ "Andrew Kachites McCallum", "Kamal Nigam", "Jason Rennie", "Kristie Seymore" ], "title": "Automating the construction of internet portals with machine learning", "venue": "Information Retrieval,", "year": 2000 }, { "authors": [ "Federico Monti", "Davide Boscaini", "Jonathan Masci", "Emanuele Rodolà", "Jan Svoboda", "Michael M. 
Bronstein" ], "title": "Geometric deep learning on graphs and manifolds using mixture model CNNs", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Galileo Namata", "Ben London", "Lise Getoor", "Bert Huang" ], "title": "Query-driven active surveying for collective classification", "venue": "In 10th International Workshop on Mining and Learning with Graphs,", "year": 2012 }, { "authors": [ "Radford M Neal", "Geoffrey E Hinton" ], "title": "A view of the em algorithm that justifies incremental, sparse, and other variants", "venue": "In Learning in graphical models,", "year": 1998 }, { "authors": [ "Yin Cheng Ng", "Nicolò Colombo", "Ricardo Silva" ], "title": "Bayesian semi-supervised learning with graph Gaussian processes", "venue": "In Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Mathias Niepert", "Mohamed Ahmed", "Konstantin Kutzkov" ], "title": "Learning convolutional neural networks for graphs", "venue": "In Proceedings of the 33rd International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Meng Qu", "Yoshua Bengio", "Jian Tang" ], "title": "GMNN: Graph Markov neural networks", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Leonardo F.R. Ribeiro", "Pedro H.P. Saverese", "Daniel R. Figueiredo" ], "title": "struc2vec: Learning node representations from structural identity", "venue": "In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2017 }, { "authors": [ "Yu Rong", "Wenbing Huang", "Tingyang Xu", "Junzhou Huang" ], "title": "DropEdge: Towards deep graph convolutional networks on node classification", "venue": "In 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Prithviraj Sen", "Galileo Namata", "Mustafa Bilgic", "Lise Getoor", "Brian Galligher", "Tina Eliassi-Rad" ], "title": "Collective classification in network data", "venue": "AI Magazine,", "year": 2008 }, { "authors": [ "Oleksandr Shchur", "Maximilian Mumme", "Aleksandar Bojchevski", "Stephan Günnemann" ], "title": "Pitfalls of graph neural network evaluation", "venue": "In Relational Representation Learning Workshop", "year": 2018 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: A simple way to prevent neural networks from overfitting", "venue": "Journal of Machine Learning Research,", "year": 1929 }, { "authors": [ "Louis Tiao", "Pantelis Elinas", "Harrison Nguyen", "Edwin V. 
Bonilla" ], "title": "Variational graph convolutional networks", "venue": "In Graph Representation Learning Workshop (NeurIPS", "year": 2019 }, { "authors": [ "Petar Veličković", "Guillem Cucurull", "Arantxa Casanova", "Adriana Romero", "Pietro Liò", "Yoshua Bengio" ], "title": "Graph attention networks", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Louis-Pascal Xhonneux", "Meng Qu", "Jian Tang" ], "title": "Continuous graph neural networks", "venue": "In Proceedings of the 37th International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Keyulu Xu", "Chengtao Li", "Yonglong Tian", "Tomohiro Sonobe", "Ken ichi Kawarabayashi", "Stefanie Jegelka" ], "title": "Representation learning on graphs with jumping knowledge networks", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Liang Yang", "Zesheng Kang", "Xiaochun Cao", "Di Jin", "Bo Yang", "Yuanfang Guo" ], "title": "Topology optimization based graph convolutional network", "venue": "In Proceedings of the 28th International Joint Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Ze Ye", "Kin Sum Liu", "Tengfei Ma", "Jie Gao", "Chao Chen" ], "title": "Curvature graph network", "venue": "In 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Donghan Yu", "Ruohong Zhang", "Zhengbao Jiang", "Yuexin Wu", "Yiming Yang" ], "title": "Graph-revised convolutional network", "venue": "In Joint European Conference on Machine Learning and Knowledge Discovery in Databases,", "year": 2020 }, { "authors": [ "Kai Zhang", "Yaokang Zhu", "Jun Wang", "Jie Zhang" ], "title": "Adaptive structural fingerprints for graph attention networks", "venue": "In 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Yingxue Zhang", "Soumyasundar Pal", "Mark Coates", "Deniz Üstebay" ], "title": "Bayesian graph convolutional neural networks for semi-supervised classification", "venue": "In Proceedings of the 33rd AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Lingxiao Zhao", "Leman Akoglu" ], "title": "PairNorm: Tackling oversmoothing in GNNs", "venue": "In 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Cheng Zheng", "Bo Zong", "Wei Cheng", "Dongjin Song", "Jingchao Ni", "Wenchao Yu", "Haifeng Chen", "Wei Wang" ], "title": "Robust graph representation learning via neural sparsification", "venue": "In Proceedings of the 37th International Conference on Machine Learning,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Complex graph-structured data are ubiquitous in the real world, ranging from social networks to chemical molecules. Inspired by the remarkable performance of convolutional neural networks (CNNs) in processing data with regular grid structures (e.g., images), a myriad of studies on GCNs have emerged to execute “convolution” in the graph domain (Niepert et al., 2016; Kipf & Welling, 2017; Gilmer et al., 2017; Hamilton et al., 2017; Monti et al., 2017; Gao et al., 2018). Many of these approaches follow a neighborhood aggregation mechanism (a.k.a., message passing scheme) that updates the representation of each node by iteratively aggregating the transformed messages sent from its neighboring nodes. Commencing with the pioneering works (Kipf & Welling, 2017; Gilmer et al., 2017), numerous strategies have been developed to improve the vanilla message passing scheme such as introducing self-attention mechanism (Veličković et al., 2018; Zhang et al., 2020), incorporating local structural information (Zhang et al., 2020; Jin et al., 2019; Ye et al., 2020), and leveraging the link attributes (Gong & Cheng, 2019; Li et al., 2019; Jiang et al., 2019).\nDespite significant success in many fundamental tasks of graph-based machine learning, message passing-based GCNs almost all process the observed graph structure as ground truth and might suffer from the over-smoothing problem (Li et al., 2018), which would seriously affect the node classification performance. Given the observed noisy graph topology (i.e., excessive inter-class edges are linked while many intra-class edges are missing), when multiple message passing layers are stacked to enlarge the receptive field (the maximum hop of neighborhoods), features of neighboring nodes in different classes would be dominant in message passing. Thus, node representations would be corrupted by the harmful noise and affect the discrimination of graph nodes. The over-smoothing phenomenon in GCNs has already been studied from different aspects. Li et al. (2018) first interpreted over-smoothing from the perspective of Laplacian smoothing, while Xu et al. (2018) and Klicpera et al. (2019a) associated it with the limit distribution of random walk. Furthermore, Chen et al. (2020a) developed quantitative metrics to measure the over-smoothness from the topological\nview. They argued that the key factor leading to over-smoothing is the noise passing between nodes of different categories and the classification performance of GCNs is positively correlated with the proportion of intra-class node pairs in all edges.\nIn this paper, we propose VEM-GCN, a novel architecture to address the over-smoothing problem with topology optimization for uncertain graphs. Considering that a “clearer” graph with more intra-class edges and fewer inter-class edges would improve the node classification performance of GCNs (Yang et al., 2019; Chen et al., 2020a), VEM-GCN approaches a latent adjacency matrix parameterized by the assortative-constrained stochastic block model (SBM) where nodes share the same label are linked and inter-class edges should be cut off. To jointly refine the latent graph structure and learn desirable node representations for classification, variational EM algorithm (Neal & Hinton, 1998) is adopted to optimize the evidence lower bound (ELBO) of the likelihood function. 
In the inference procedure (E-step), graph topology is optimized by approximating the posterior probability distribution of the latent adjacency matrix with a neural network learned from node embeddings. In the learning procedure (M-step), a conventional GCN is trained to maximize the log-likelihood of the observed node labels based on the learned latent graph structure. The E-step and M-step optimize the graph topology and improve the classification of unlabeled nodes in an alternating fashion.\nThe proposed VEM-GCN architecture is flexible and general. In the E-step, the neural network can support arbitrary desirable node embeddings generated by algorithms such as node2vec (Grover & Leskovec, 2016), struc2vec (Ribeiro et al., 2017), and GCNs, or the raw node attributes. The GCN in the M-step can also be substituted with arbitrary graph models. Furthermore, recent strategies for relieving the over-smoothing issue, i.e., AdaEdge (Chen et al., 2020a) and DropEdge (Rong et al., 2020), are shown to be the specific cases of VEM-GCN under certain conditions. For empirical evaluation, we conduct extensive experiments on seven benchmarks for node classification, including four citation networks, two Amazon co-purchase graphs, and one Microsoft Academic graph. Experimental results demonstrate the effectiveness of the proposed VEM-GCN architecture in optimizing graph topology and mitigating the over-smoothing problem for GCNs." }, { "heading": "2 BACKGROUND AND RELATED WORKS", "text": "Problem Setting. This paper focuses on the task of graph-based transductive node classification. A simple attributed graph is defined as a tuple Gobs = (V,Aobs,X), where V = {vi}Ni=1 is the node set, Aobs = [ aobsij ] ∈ {0, 1}N×N is the observed adjacency matrix, and X ∈ RN×f represents the collection of attributes with each row corresponding to the features of an individual node. Given the labels Yl = [yic] ∈ {0, 1}|Vl|×C for a subset of graph nodes Vl ⊂ V assigned to C classes, the task is to infer the classes Yu = [yjc] ∈ {0, 1}|Vu|×C for the unlabeled nodes Vu = V\\Vl based on Gobs. Graph Convolutional Networks (GCNs). The core of most GCNs is message passing scheme, where each node updates its representation by iteratively aggregating features from its neighborhoods. Denote with W(l) the learnable weights in the l-th layer, N (i) the set of neighboring node indices for node vi, and σ(·) the nonlinear activation function. A basic message passing layer takes the following form:\nh (l+1) i = σ (∑ j∈N (i)∪{i} α (l) ij W (l)h (l) j ) . (1)\nHere, h(l)j is the input features of node vj in the l-th layer, W (l)h (l) j is the corresponding transformed message, and α(l)ij is the aggregation weight for the message passing from node vj to node vi. Existing GCNs mainly differ in the mechanism for computing α(l)ij (Kipf & Welling, 2017; Veličković et al., 2018; Ye et al., 2020; Hamilton et al., 2017; Zhang et al., 2020).\nStochastic Block Model (SBM). SBM (Holland et al., 1983) is a generative model for producing graphs with community structures. It parameterizes the edge probability between each node pair by\nāij |yi,yj ∼ {\nBernoulli (p0) , if yi = yj Bernoulli (p1) , if yi 6= yj , (2)\nwhere āij is an indicator variable for the edge linking nodes vi and vj , yi and yj denote their corresponding communities (classes), p0 and p1 are termed community link strength and cross-\ncommunity link probability, respectively. The case where p0 > p1 is called an assortative model, while the case p0 < p1 is called disassortative. 
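As a concrete illustration of this generative process, the following is a minimal NumPy sketch of Eq. 2 (illustrative only; the function name and the toy label vector are ours, not part of the original formulation):

import numpy as np

def sample_sbm(labels, p0, p1, seed=0):
    # Sample a symmetric, self-loop-free adjacency matrix from the SBM of Eq. 2:
    # intra-class pairs are linked with probability p0, inter-class pairs with p1.
    rng = np.random.default_rng(seed)
    n = len(labels)
    same = labels[:, None] == labels[None, :]        # intra-class indicator
    probs = np.where(same, p0, p1)                   # per-pair Bernoulli parameter
    draws = rng.random((n, n)) < probs
    upper = np.triu(draws, k=1)                      # keep each pair once, drop diagonal
    return (upper | upper.T).astype(np.int8)         # symmetrize

labels = np.repeat(np.arange(3), 5)                  # toy graph: 15 nodes, 3 communities
A_assortative = sample_sbm(labels, p0=0.8, p1=0.1)   # p0 > p1: assortative
A_clean = sample_sbm(labels, p0=1.0, p1=0.0)         # fully intra-class-connected graph

The degenerate case p0 = 1, p1 = 0 produced by the last call anticipates the assortative-constrained choice adopted next.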
In this paper, we leverage an assortative-constrained SBM (Gribel et al., 2020) with p0 = 1 and p1 = 0 to model the latent graph for a clear topology.\nOver-smoothing. Real-world graphs often possess high sparsity and are corrupted by certain noise that leads to inter-class misconnection and missing intra-class edges. Over-smoothing is mainly caused by the indistinguishable features of nodes in different classes produced by the message passing along inter-class edges. Various strategies have been developed to alleviate this problem. JK-Net (Xu et al., 2018) utilizes skip connection for adaptive feature aggregation and DNA (Fey, 2019) further makes improvements based on the attention mechanism. PPNP and APPNP (Klicpera et al., 2019a) modify the message passing scheme by personalized PageRank (PPR) to avoid reaching the limit distribution of random walk. CGNN (Xhonneux et al., 2020) addresses over-smoothing in a similar manner as PPR. Zhao & Akoglu (2020) introduced a graph layer normalization scheme termed PairNorm to maintain the total pairwise distance between nodes unchanged across layers. GCNII (Chen et al., 2020b) extends GCN with Initial residual and Identity mapping. However, these methods cannot fundamentally address the over-smoothing issue, as they all view the observed graph as ground truth and the features of nodes in different classes would still be over-mixed along the inter-class edges. AdaEdge (Chen et al., 2020a) constantly refines the graph topology by adjusting the edges in a self-training-like fashion. However, AdaEdge only adjusts the edges linking nodes classified with high confidence, which leads to limited improvement or degradation in classification performance due to the incorrect operations for misclassified nodes. DropEdge (Rong et al., 2020) randomly removes a certain fraction of edges to reduce message passing. Despite enhanced robustness, DropEdge does not essentially optimize the graph topology. BBGDC (Hasanzadeh et al., 2020) generalizes Dropout (Srivastava et al., 2014) and DropEdge by adaptive connection sampling.\nUncertain Graphs and Topology Optimization. Learning with uncertain graphs is another related research area, where the observed graph structure is supposed to be derived from noisy data rather than ground truth. Bayesian approaches are typical methods that introduce uncertainty to network analysis. Zhang et al. (2019) developed BGCN that considers the observed graph as a sample from a parametric family of random graphs and makes maximum a posteriori (MAP) estimate of the graph parameters. Tiao et al. (2019) also viewed graph edges as Bernoulli random variables and used variational inference to optimize the posterior distribution of the adjacency matrix by approximating the pre-defined graph priors. Some other Bayesian methods have also been developed to combine GCNs with probabilistic models (Ng et al., 2018; Ma et al., 2019). However, without explicit optimization for the graph structure, they only improve the robustness under certain conditions such as incomplete edges, active learning, and adversarial attacks. For explicit topology optimization, Franceschi et al. (2019) presented LDS to parameterize edges as independent Bernoulli random variables and learn discrete structures for GCNs by solving a bilevel programming. However, LDS requires an extra validation set for training and suffers from limited scalability. 
TO-GCN (Yang et al., 2019) only adds the intra-class edges derived from the labeled nodes, which causes topology imbalance between Vu and Vl. GDC (Klicpera et al., 2019b) refines the adjacency matrix with graph diffusion to consider the links between high-order neighborhoods. However, the added edges might still be noisy to hamper the classification. GRCN (Yu et al., 2020) modifies the original adjacency matrix by adding a residual matrix with each element measuring the similarity between two corresponding node embeddings, and IDGL (Chen et al., 2020c) iteratively learns the graph structure in a similar manner. Pro-GNN (Jin et al., 2020) introduces low rank and sparsity constraints to recover a clean graph in defending adversarial attacks. NeuralSparse (Zheng et al., 2020) uses the Gumbel Softmax trick (Jang et al., 2017) to sample k neighbors from the original neighborhoods for each node but does not consider recovering missing intra-class edges. Different from the aforementioned methods, VEM-GCN aims at relieving the over-smoothing issue. We introduce a learned latent graph based on the assortative-constrained SBM to explicitly enhance intra-class connection and suppress inter-class interaction with the variational EM algorithm." }, { "heading": "3 METHODOLOGY", "text": "In this section, we develop the VEM-GCN architecture for transductive node classification. VEMGCN leverages the variational EM algorithm to achieve topology optimization, and consequently, address the over-smoothing issue by reducing noisy interactions between nodes in different classes. Specifically, E-step approximates the posterior probability distribution of the latent adjacency ma-\ntrix to optimize the graph structure, and M-step maximizes the evidence lower bound of the loglikelihood function based on the refined graph. We first introduce our motivation and provide an overview of the proposed VEM-GCN architecture. Subsequently, we elaborate the mechanisms of the variational E-step and M-step, respectively." }, { "heading": "3.1 MOTIVATION AND OVERVIEW", "text": "Motivation. As mentioned above, a graph with its nodes densely connected within their own communities (classes) has lower risk of over-smoothing. Under this consideration, the optimal adjacency matrix for GCN is à = YY> (Yang et al., 2019; Chen et al., 2020a), where Y ∈ RN×C is the matrix of one-hot-encoded ground-truth labels. However, since we have to infer Yu for the unlabeled nodes Vu, their true labels are not available for calculating Ã. Thus, we introduce a latent graph Alatent learned from Gobs through another neural network to help generate a topology clearer than Aobs for GCNs. It is obvious that à is equivalent to a SBM with p0 = 1 and p1 = 0, and therefore we base the posterior probability distribution of the latent graph on this assumption.\nOverview. The basic principle behind our proposed VEM-GCN architecture is maximum likelihood estimation (MLE) in a latent variable model, i.e., to maximize the log-likelihood function of the observed node labels Eqφ(Alatent|Gobs)[log pθ(Yl|Gobs)] based on the approximate posterior distribution qφ(Alatent|Gobs) of the latent graph Alatent. According to variational inference, the evidence lower bound (ELBO) is optimized instead:\nlog pθ(Yl|Gobs) ≥ LELBO(θ, φ; Yl,Gobs) = Eqφ(Alatent|Gobs)[log pθ(Yl,Alatent|Gobs)− log qφ(Alatent|Gobs)], (3)\nwhere the equality holds when qφ(Alatent|Gobs) = pθ(Alatent|Yl,Gobs). 
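This bound is an instance of Jensen's inequality; writing the step out makes the E-step target explicit:

log pθ(Yl|Gobs) = log Σ_Alatent qφ(Alatent|Gobs) · [pθ(Yl,Alatent|Gobs) / qφ(Alatent|Gobs)] ≥ E_qφ(Alatent|Gobs)[log pθ(Yl,Alatent|Gobs) − log qφ(Alatent|Gobs)],

and the slack of the bound is exactly KL[qφ(Alatent|Gobs) ‖ pθ(Alatent|Yl,Gobs)], which vanishes under the equality condition just stated. Minimizing this KL term is precisely the objective of the inference procedure in Section 3.2.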
Note that qφ can be arbitrary desirable distributions on Alatent and we use a neural network to parameterize it in this work. To jointly optimize the latent graph topology Alatent and the ELBO LELBO(θ, φ; Yl,Gobs), we adopt the variational EM algorithm to solve it (refer to Appendix A for the full algorithm)." }, { "heading": "3.2 E-STEP", "text": "In the inference procedure (E-step), θ is fixed and the goal is to optimze qφ(Alatent|Gobs) to approximate the true posterior distribution pθ(Alatent|Yl,Gobs). Under the condition of SBM, we assume each edge of the latent graph to be independent. Thus, qφ(Alatent|Gobs) can be factorized by:\nqφ(Alatent|Gobs) = ∏ i,j qφ(a latent ij |Gobs). (4)\nUnlike LDS (Franceschi et al., 2019) using O(N2) Bernoulli random variables to characterize the optimized graph with N nodes, we parameterize qφ(alatentij |Gobs) through a neural network shared by all the possible node pairs (i.e., amortized variational inference (Gershman & Goodman, 2014)), as shown in Eq. 5. Hence, our method shows scalability for large-scale graphs and is easier to train.\nzi = NN(ei), qφ(alatentij = 1|Gobs) = sigmoid(ziz>j ), (5) where ei is the node embedding of node vi, which can be derived from any desirable network embedding methods (e.g., node2vec (Grover & Leskovec, 2016), struc2vec (Ribeiro et al., 2017), and GCNs) or the raw node attributes xi (the i-th row of X), and zi is the transformed features of node vi. NN(·) denotes a neural network and we use a Multi-Layer Perceptron (MLP) in this work. The probability for linking a node pair is defined as the inner-product of their transformed features activated by a sigmoid function.\nTo approximate the posterior probability distribution of Alatent, we rewrite pθ(Alatent|Yl,Gobs) as: pθ(Alatent|Yl,Gobs) = ∑ Yu pθ(Alatent,Yu|Yl,Gobs)\n= Epθ(Yu|Yl,Gobs)[pθ(Alatent|Yl,Yu,Gobs)]. (6)\nHere, pθ(Alatent|Yl,Yu,Gobs) is parameterized by the aforementioned assortative-constrained SBM (i.e., pθ(alatentij = 1|yi,yj) = yiy>j for the one-hot-encoded node label y), pθ(Yu|Yl,Gobs) is the\npredicted categorical distributions for the unlabeled nodes derived in the previous M-step. Consequently, we can sample Ŷu ∼ pθ(Yu|Yl,Gobs) to estimate the expectation in Eq. 6 and leverage stochastic gradient descent (SGD) to minimize the reverse KL-divergence between the approximate posterior distribution qφ(Alatent|Gobs) and the target pθ(Alatent|Yl,Gobs). Under appropriate assumptions, qφ will converge to pθ(Alatent|Yl,Gobs) as the iteration step of SGD t → ∞ (Bottou, 2010). Thus, we can obtain the following objective function in the variational E-step for optimizing φ:\nLE = − ∑ i,j ∑ alatentij ∈{0,1} λ(alatentij )pθ(a latent ij |yi,yj) log qφ(alatentij |Gobs), (7)\nwhere y is the ground truth label for node in labeled set Vl, otherwise sampled from pθ(Yu|Yl,Gobs) for the nodes without given labels in each training step, and λ(alatentij ) is the weighting hyperparameter to alleviate class imbalance between the inter-class edges and the intra-class edges." }, { "heading": "3.3 M-STEP", "text": "In the learning procedure (M-step), φ is fixed and θ is updated to maximize the ELBO in Eq. 3. By factorizing pθ(Yl,Alatent|Gobs) = pθ1(Yl|Alatent,Gobs)pθ2(Alatent|Gobs) with θ = {θ1, θ2}, we have:\nLELBO = Eqφ(Alatent|Gobs)[log pθ1(Yl|Alatent,Gobs)]−KL[qφ(Alatent|Gobs)‖pθ2(Alatent|Gobs)]. (8)\nHere, pθ1(Yl|Alatent,Gobs) in the first term can be parameterized by arbitrary GCN models described by Eq. 1 that infer the node labels from Alatent and X. 
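For instance, a single such layer — Eq. 1 with the aggregation weights instantiated by the symmetric normalization of the vanilla GCN (Eq. 13 in Appendix A) — can be sketched as follows (a dense-tensor PyTorch illustration for readability; practical implementations use sparse matrix products):

import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    # One layer of Eq. 1 with alpha_ij = 1 / sqrt(deg(i) * deg(j)),
    # i.e., the normalized propagation used by the vanilla GCN.
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, h, adj):
        a = adj + torch.eye(adj.size(0), device=adj.device)   # add self-loops
        norm = a.sum(dim=1).rsqrt()                           # deg^{-1/2}
        p = norm[:, None] * a * norm[None, :]                 # D^{-1/2} (A + I) D^{-1/2}
        return torch.relu(p @ self.lin(h))                    # aggregate, then sigma

Stacking two such layers and replacing the final ReLU with a softmax over classes recovers Eq. 13.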
We use the vanilla GCN (Kipf & Welling, 2017) in this work (see Eq. 13 in Appendix A). The second term is the KL-divergence between qφ(Alatent|Gobs) and the prior pθ2(Alatent|Gobs), which can be optimized by setting θ2 = φ to force KL[qφ(Alatent|Gobs)‖pθ2(Alatent|Gobs)] = 0. Actually, pθ2(Alatent|Gobs) is of little interest to the final node classification task and we just need to maximize Eqφ(Alatent|Gobs)[log pθ1(Yl|Alatent,Gobs)] in the M-step.\nConsidering the fact that the observed graph structure Aobs should not be fully discarded and the approximation qφ(Alatent|Gobs) derived in the previous E-step is sometimes not very accurate, we use qφ(Alatent|Gobs) to refine Aobs, and substitute qφ with the following q̄φ in practice:\nq̄φ(a latent ij = 1|Gobs) = p, if qφ > ε10, if qφ < ε2p · aobsij , otherwise , (9) where p ∈ (0, 1], ε1 is close to one (commonly 0.999), and ε2 is close to zero (commonly 0.01). Eq. 9 implies that, for edges predicted by qφ to be linked with high confidence (the value after sigmoid or the maximum value after softmax), they should be added to the observed graph with probability p. Edges predicted by qφ to be cut off with high confidence should be removed from the observed graph. Otherwise, we maintain the original graph structure with probability p.\nSimilar to the E-step, we can sample the latent adjacency matrix Âlatent ∼ q̄φ(Alatent|Gobs) (note that we pre-train pθ1 using Aobs) and leverage SGD to minimize the cross-entropy error between the GCN’s predictions pθ1(Yl|Âlatent,Gobs) and the ground-truth labels Yl for optimizing θ:\nLM = − ∑ vi∈Vl C∑ c=1 yic log pθ1(yic|Âlatent,Gobs). (10)\nIn the test procedure, the final predictions for Yu are Eq̄φ(Alatent|Gobs)[pθ1(Yu|Alatent,Gobs)], which can be approximated by Monte-Carlo sampling:\npθ(Yu|Yl,Gobs) = 1\nS S∑ i=1 pθ1(Yu|Ailatent,Gobs), with Ailatent ∼ q̄φ(Alatent|Gobs), (11)\nwhere the number of samples S and the probability p in q̄φ are tuned hyperparameters.\nThe two neural networks qφ and pθ are trained in an alternating fashion to reinforce each other. Topology optimization in the E-step improves the performance of the GCN in the M-step, and with more unlabeled nodes being correctly classified, qφ will better approximate the optimal graph Ã." }, { "heading": "3.4 DISCUSSIONS", "text": "In this subsection, we discuss the relationship between VEM-GCN and two recent works for tackling over-smoothing, i.e., DropEdge (Rong et al., 2020) and AdaEdge (Chen et al., 2020a). We show that these two methods are specific cases of VEM-GCN under certain conditions. More detailed comparisons with other related works (e.g., SBM-related GCNs) are discussed in Appendix B.\nVEM-GCN vs. DropEdge. DropEdge randomly removes a certain fraction of edges in each training step. The authors proved that this strategy can retard the convergence speed of over-smoothing. However, it does not address the over-smoothing issue at the core, since the graph topology is not fundamentally optimized and noisy messages still pass along inter-class edges. Considering the scenario where a node has few interactions with its community but many cross-community links, DropEdge cannot improve the discrimination of this stray node, since it does not recover the missing intra-class edges. We find that VEM-GCN degenerates to DropEdge, if we skip the E-step and just maximize Eq̄φ(Alatent|Gobs)[log pθ1(Yl|Alatent,Gobs)] with q̄φ(alatentij = 1|Gobs) = p · aobsij . VEM-GCN vs. AdaEdge. AdaEdge also constantly adjusts the graph topology in the training procedure. 
It adds the edge between two nodes which are predicted by the GCN as the same class with high confidence, and removes edges in a similar manner. If we skip the E-step and set q̄φ as Eq. 12, VEM-GCN and AdaEdge can be equivalent.\nq̄φ(a latent ij = 1|Gobs) = 1, if y′i = y ′ j and conf(y ′ i), conf(y ′ j) > τ1\n0, if y′i 6= y′j and conf(y′i), conf(y′j) > τ2 aobsij , otherwise , (12)\nwhere y′ is the prediction made by GCN, conf(·) denotes the corresponding confidence, τ1 and τ2 are two thresholds. Eq. 12 implies that, this self-training-like fashion only adjusts the edges whose interacting nodes have already been classified with high confidence. Therefore, the performance improvement is limited and would even get worse for some misclassified nodes, as it might wrongly add inter-class edges to the observed graph Aobs and remove helpful intra-class connections." }, { "heading": "4 EXPERIMENTS", "text": "To evaluate our VEM-GCN architecture, we conduct extensive experiments on seven benchmark datasets. Under the same setting as DropEdge (Rong et al., 2020) and a label-scarce setting (i.e., low label rate), we compare the performance of VEM-GCN against a variety of state of the arts for tackling over-smoothing, uncertain graphs and topology optimization in GCNs. We further give the visualization results of topology optimization and quantitative analysis to verify the effectiveness of VEM-GCN in relieving the over-smoothing issue (complexity analysis is provided in Appendix E.3)." }, { "heading": "4.1 EXPERIMENTAL SETUP", "text": "Datasets and Baselines. We adopt seven well-known benchmark datasets to validate the proposed method. Cora (Sen et al., 2008), Cora-ML (McCallum et al., 2000; Bojchevski & Günnemann, 2018), Citeseer (Sen et al., 2008), and Pubmed (Namata et al., 2012) are four citation network benchmarks, where nodes represent documents and edges are citations between documents. Amazon Photo and Amazon Computers are two segments from the Amazon co-purchase graph (McAuley et al., 2015), in which nodes represent goods and edges indicate that two goods are frequently bought together. In the Microsoft Academic graph (Shchur et al., 2018), nodes are authors and edges represent their co-authorship. All graphs use bag-of-words encoded representations as node attributes. An overview of the dataset statistics is summarized in Appendix C.\nSince VEM-GCN aims at addressing the over-smoothing problem with topology optimization, we evaluate the node classification performance of our method against various strategies for tackling over-smoothing, uncertain graphs and topology optimization in GCNs. For addressing the oversmoothing issue, five methods are considered: DropEdge (Rong et al., 2020), DropICE, AdaEdge (Chen et al., 2020a), PairNorm (Zhao & Akoglu, 2020), and BBGDC (Hasanzadeh et al., 2020), in which DropICE is implemented by removing the inter-class edges derived from Vl. For tackling uncertain graphs, we compare against several Bayesian approaches including BGCN (Zhang et al., 2019), VGCN (Tiao et al., 2019), and G3NN (Ma et al., 2019). For topology optimization, LDS\n(Franceschi et al., 2019), GDC (Klicpera et al., 2019b), TO-GCN (Yang et al., 2019), GRCN (Yu et al., 2020), and IDGL (Chen et al., 2020c) are the baselines. GMNN (Qu et al., 2019) is also taken as a baseline, as it also employs variational EM for transductive node classification.\nWe conduct node classification under two experimental settings, i.e., full-supervised and label-scarce settings. 
The full-supervised setting follows DropEdge (Rong et al., 2020), where each dataset is split into 500 nodes for validation, 1000 nodes for test and the rest for training. The label-scarce setting assigns labels to only a few nodes and selects 500 nodes for validation, while the rest are used for test. Under the label-scarce setting, we compare VEM-GCN with the baselines except for LDS, as LDS always uses the validation set for training, which is unfair for learning with limited training samples. DropICE is also omitted since the number of the removed inter-class edges derived from Vl is very small in the label-scarce setting and thus DropICE only obtains similar performance as the vanilla GCN. Considering that the classification performance is highly influenced by the split of the dataset (Shchur et al., 2018), we run all the models with the same 5 random data splits for each evaluation. To further ensure the credibility of the results, we perform 10 random weight initializations for each data split and report the average test accuracy for both experimental settings.\nModel Configurations. For a fair comparison, we evaluate all the methods under the same GCN backbone and the same training procedure. To be concrete, the graph model used in all baselines and our VEM-GCN (pθ1 in the M-step) is a vanilla GCN (Kipf & Welling, 2017) with the number of hidden units set as 32. Besides, we train the GCN backbone of all the methods for each dataset with the same dropout rate of 0.5, the same weight decay, the same learning rate of 0.01, the same optimizer (Adam (Kingma & Ba, 2015)), the same maximum training epoch of 1500, and the same early stopping strategy based on the validation loss with a patience of 50 epochs (for deeper models with more than 2 layers, we set the patience as 100 epochs). Note that IDGL empirically needs more training epochs to converge and we set its maximum training epoch as 10000 with a patience of 1000 epochs. As for qφ in the E-step, the input node embeddings are the attributes averaged over the neighborhood of each node and the network architecture is a four-layer MLP with hidden units of size 128, 64, 64, and 32, respectively. Please refer to Appendix D for more details of the implementations and hyperparameter settings for each dataset." }, { "heading": "4.2 RESULTS AND ANALYSIS", "text": "Full-supervised Setting. Table 1 summarizes the classification results. The highest accuracy in each column is highlighted in bold. Note that the results of BBGDC and LDS on three large graphs (i.e., Pubmed, Amazon Computers, and MS Academic) and IDGL on MS Academic graph are missing due to the out-of-memory error. Table 1 demonstrates that none of the baselines outperform the vanilla GCN in all cases, while VEM-GCN consistently improves the test accuracy of the GCN\nTable 2: Average test accuracy (%) and over-smoothness under the label-scarce setting (10 labels per class) with varying layers. For both metrics, the larger the better. A: Accuracy. 
S: Over-smoothness.\nDataset Method 2 layers 4 layers 6 layers 8 layers 10 layers\nCora\nVanilla GCN [A / S] 74.5 / 2.39 73.2 / 2.09 68.8 / 2.04 66.7 / 2.01 46.9 / 1.73 DropEdge [A / S] 75.0 / 2.01 75.5 / 2.59 58.1 / 2.10 42.7 / 1.84 32.1 / 1.46 AdaEdge [A / S] 74.9 / 2.47 73.9 / 2.42 72.0 / 2.47 68.9 / 2.05 54.6 / 1.86 VEM-GCN [A / S] 77.7 / 2.51 78.0 / 3.57 78.4 / 2.51 78.1 / 3.11 78.1 / 2.94\nCiteseer\nVanilla GCN [A / S] 61.0 / 1.82 56.7 / 1.83 53.7 / 1.75 44.9 / 1.66 26.7 / 1.51 DropEdge [A / S] 60.3 / 1.83 56.5 / 1.95 50.1 / 1.52 35.5 / 1.24 23.7 / 1.13 AdaEdge [A / S] 61.5 / 1.84 58.7 / 1.83 53.1 / 1.87 49.1 / 1.72 43.5 / 1.62 VEM-GCN [A / S] 64.2 / 1.89 64.2 / 2.01 63.7 / 1.85 63.8 / 1.93 63.7 / 1.83\nCora-ML Vanilla GCN [A / S] 83.4 / 2.87 81.3 / 3.15 77.7 / 2.46 66.4 / 2.11 44.9 / 1.73 DropEdge [A / S] 83.1 / 2.85 81.4 / 3.17 77.6 / 2.42 43.4 / 2.59 37.1 / 2.39 AdaEdge [A / S] 83.3 / 2.98 81.0 / 3.21 78.1 / 2.89 70.2 / 2.50 53.5 / 2.43 VEM-GCN [A / S] 84.4 / 3.78 84.4 / 3.88 84.4 / 3.70 84.4 / 4.45 84.3 / 4.12\nFigure 1: Convergence analysis.\nTable 3: Average test accuracy (%) under different label rates on the Amazon Photo dataset.\n# labels per class 5 10 20 30\nVanilla GCN 86.7 88.8 90.6 91.8 AdaEdge 86.5 88.9 90.5 91.8 DropEdge 86.5 88.8 90.5 91.6 PairNorm 78.6 83.7 86.2 88.1 BBGDC 87.3 88.4 90.1 90.4 TO-GCN 83.2 85.4 86.7 88.2 GDC 85.4 88.1 90.2 91.0 GRCN 84.1 88.3 90.4 91.9 IDGL 86.8 89.2 91.0 91.4 BGCN 85.1 87.1 89.1 91.1 VGCN 85.9 88.5 90.5 91.3 G3NN 86.3 88.8 90.6 90.8 GMNN 87.9 89.4 91.2 92.3\nVEM-GCN 89.2 90.5 91.8 92.8\nbackbone by noticeable margins. Specifically, we find the following facts under the full-supervised setting: (1) For tackling over-smoothing, AdaEdge, DropEdge and PairNorm demonstrate limited improvement on several datasets, while BBGDC and DropICE almost collapse for all cases. (2) LDS, TO-GCN, GDC, GRCN and IDGL cannot guarantee that their topology optimization could achieve performance gains for the GCN backbone. (3) Only adding intra-class edges (TO-GCN) or removing inter-class edges (DropICE) derived from Vl might cause topology imbalance between Vu and Vl. The GCN trained on Vl with enhanced graph topology would fail on Vu with the original graph topology. (4) Bayesian approaches and GMNN can only achieve comparable performance with the vanilla GCN in almost all cases. Overall, these facts imply that VEM-GCN significantly benefits from the large labeled data to generate a clearer topology and achieve better performance.\nLabel-scarce Setting. We randomly select 10 labeled nodes per class as the training set and evaluate the performance of VEM-GCN with varying layers. Table 2 shows the test accuracy and the oversmoothness measurements of the learned node embeddings (input node features of the last layer). The metric to measure the over-smoothness is defined in Appendix D.3 and supplementary results of VEM-GCN on additional datasets are shown in Appendix E.1. As can be seen in Table 2, the vanilla GCN severely suffers from the over-smoothing issue, while VEM-GCN can achieve performance gains even with deeper layers (e.g., on the Cora dataset). DropEdge and AdaEdge can relieve the over-smoothing issue to some extent, but the performance still decreases drastically when stacking more GCN layers. The results of over-smoothness measurements indicate that VEM-GCN indeed produces more separable node embeddings across different classes to address the over-smoothing\nproblem. 
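For reference, the over-smoothness score S reported above is the ratio of the average inter-class to the average intra-class distance between row-normalized node embeddings, as formalized in Appendix D.3 (where the second operand inside the norm of Eq. 15 is evidently hj/‖hj‖2). A compact NumPy sketch of the metric (the quadratic-memory pairwise computation is for illustration only):

import numpy as np

def over_smoothness(h, labels):
    # h: (N, d) node embeddings (inputs to the last layer); labels: (N,) classes.
    # Returns avg. inter-class / avg. intra-class distance; larger means more
    # separable, less over-smoothed features.
    h = h / np.linalg.norm(h, axis=1, keepdims=True)             # row-normalize
    d = np.linalg.norm(h[:, None, :] - h[None, :, :], axis=-1)   # pairwise distances
    same = labels[:, None] == labels[None, :]
    off_diag = ~np.eye(len(h), dtype=bool)
    intra = d[same & off_diag].mean()
    inter = d[~same].mean()
    return inter / intra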
We further take Amazon Photo as an example dataset to validate VEM-GCN under different label rates. Similar trend as Table 1 can be found in Table 3.\nConvergence Analysis and Visualization Results. VEM-GCN leverages the variational EM algorithm for optimization. In this subsection, we analyze the convergence of VEM-GCN. Figure 1 depicts the accuracy improvement curve of pθ1 during the EM iterations under the full-supervised setting. We find that VEM-GCN requires only a few iterations to converge. We further take CoraML as an example to give the corresponding visualization results of graph topology optimization. Figure 2 show that the observed graph is very sparse and contains a few intra-class edges, while the optimized graph recover many missing intra-class edges to relieve the over-smoothing problem. Note that, although the refined graph is much denser than the observed graph, the hyperparameter p (0.05 here) in q̄φ helps maintain the sparsity of the latent adjacency matrix in the training procedure. Thus, the M-step can still be implemented efficiently using sparse-dense matrix multiplications." }, { "heading": "5 CONCLUSION", "text": "In this paper, we present a novel architecture termed VEM-GCN for addressing the over-smoothing problem in GCNs with graph topology optimization. By introducing a latent graph parameterized by the assortative-constrained stochastic block model and utilizing the variational EM algorithm to jointly optimize the graph structure and the likelihood function, VEM-GCN outperforms a variety of state-of-the-art methods for tackling over-smoothing, uncertain graphs, and topology optimization in GCNs. For future work, we expect further improvements for the VEM-GCN architecture to deal with more complex graphs such as hypergraphs and heterogeneous graphs." }, { "heading": "A ALGORITHM", "text": "For a fair comparison, we adopt the vanilla GCN (Kipf & Welling, 2017) as the backbone for all baselines and our proposed VEM-GCN architecture. A two-layer GCN is in the following form:\npθ1(Y|A,X) = softmax ( Ḋ− 1 2 ȦḊ− 1 2 ReLU ( Ḋ− 1 2 ȦḊ− 1 2 XΘ(0) ) Θ(1) ) , (13)\nwhere X is the input node attribute matrix, IN is the identity matrix, Ȧ = A + IN is the adjacency matrix with added self-loops, Ḋ is its corresponding diagonal degree matrix, and θ1 = {Θ(0),Θ(1)} are the learnable weight parameters. Algorithm 1 describes the proposed VEM-GCN architecture." }, { "heading": "B FURTHER DISCUSSIONS", "text": "In addition to recent strategies for tackling over-smoothing issues, we further distinguish VEM-GCN from the SBM-related GCNs and VGCN and GMNN (that introduce variational inference).\nComparison with SBM-related GCNs. Stochastic block model (SBM) has been combined with GCNs in several recent works (i.e., BGCN (Zhang et al., 2019) and G3NN (Ma et al., 2019)). However, these architectures are totally different from VEM-GCN in motivations, objective functions and training methods. BGCN (Zhang et al., 2019) jointly infers the node labels and the parameters of SBM using only Aobs, which ignores the dependence of the graph on X and Yl. Then the\nAlgorithm 1 VEM-GCN Input: Observed graph Gobs and labels Yl for the labeled nodes Vl. Parameter: φ in the E-step and θ in the M-step. Output: Predicted labels Yu for the unlabeled nodes Vu.\n1: Pre-train pθ with Aobs and Yl to get initial pθ(Yu|Yl,Gobs). 2: for EM iteration t = 1, . . . , T do 3: E-step: 4: for training step s1 = 1, . . . , S1 do 5: Sample Ŷu ∼ pθ(Yu|Yl,Gobs) for the unlabeled nodes Vu. 
6: Set pθ(Alatent |Yl,Gobs) = pθ(Alatent |Yl, Ŷu,Gobs) according to Eq. 6. 7: Update qφ to optimize the objective function in Eq. 7 with SGD. 8: end for 9: M-step:\n10: Obtain q̄φ(Alatent |Gobs) according to Eq. 9. 11: for training step s2 = 1, . . . , S2 do 12: Sample Âlatent ∼ q̄φ(Alatent |Gobs) for the latent adjacency matrix. 13: Update pθ to maximize the log-likelihood log pθ(Yl|Âlatent ,Gobs) with SGD. 14: end for 15: Predict categorical distributions pθ(Yu|Yl,Gobs) according to Eq. 11. 16: end for 17: return Final predicted labels for Vu based on pθ(Yu|Yl,Gobs).\nadjacency matrices sampled from the inferred SBM are used to train the GCN. Different from VEMGCN, BGCN neither explicitly promotes intra-class connection nor demotes inter-class interaction. It only achieves robustness under certain conditions such as adversarial attacks, benefiting from the uncertainty brought by the inferred SBM. G3NN is a flexible generative model, where the graph generated by SBM is based on the predictions of an additional MLP learned from only X and Yl. The predictions for the unlabeled nodes are still based on Gobs (i.e., the input adjacency matrix of the GCN is still Aobs). By contrast, VEM-GCN aims at addressing the over-smoothing issue with topology optimization. In VEM-GCN, the M-step trains a GCN to obtain the predictions of the unlabeled nodes based on Alatent, X, and Yl. We then estimate the posterior distribution on Alatent based on Yl and the predictions for the unlabeled nodes under the SBM assumption. Subsequently, the E-step optimizes the graph topology by training another auxiliary neural network with node embeddings as input to approximate the posterior distribution of Alatent. The E-step and M-step are optimized in an alternating fashion to improve each other.\nVEM-GCN vs. VGCN. VGCN (Tiao et al., 2019) also introduces a latent graph Alatent and optimizes LELBO in Eq. 3. However, it directly optimizes the ELBO in a VAE (Kingma & Welling, 2014) fashion and the posterior distribution of Alatent is set to approximate the pre-defined graph priors p(apriorij = 1) = ρ1a obs ij +ρ2(1−aobsij ) with 0 < ρ1, ρ2 < 1 using the re-parameterization trick. VGCN is to achieve robustness under fake link attacks and only shows comparable performance with GCN under the standard transductive learning setting (i.e., inferring Yu based on the original Gobs). By contrast, VEM-GCN does not introduce priors over graphs. We optimize the graph topology by explicitly enhancing intra-class connection and suppressing inter-class interaction using SBM and variational EM to relieve the over-smoothing issue.\nVEM-GCN vs. GMNN. Graph Markov Neural Network (GMNN) (Qu et al., 2019) also employs variational EM for node classification, but it is totally different from our method in motivations and objective functions. GMNN focuses on modeling the joint distribution of object (node) labels. Therefore, GMNN views Yu as latent variables and optimizes the following ELBO:\nlog pθ(Yl|X) ≥ Eqφ(Yu|X)[log pθ(Yl,Yu|X)− log qφ(Yu|X)]. (14)\nIn the E-step, GMNN parameterizes qφ(Yu|X) with a GCN and qφ(Yu|X) is optimized to approximate the posterior distribution pθ(Yu|Yl,X). In the M-step, GMNN utilizes another GCN to model the conditional distribution pθ(yi|yNB(i),X) for each node vi ∈ V (NB(i) is the neighbor set of node vi) with a conditional random field and maximizes the corresponding likelihood. On the contrary, VEM-GCN is proposed to relieve the over-smoothing issue. 
VEM-GCN optimizes the\nlatent graph to approximate its posterior distribution based on SBM in the E-step, and trains a GCN based on the latent graph in the M-step." }, { "heading": "C DATASET STATISTICS", "text": "We utilize seven node classification benchmarks in this paper, including four citation networks (i.e., Citeseer, Pubmed, Cora, and Cora-ML), two Amazon co-purchase graphs (i.e., Amazon Photo and Amazon Computers), and one Microsoft Academic graph, as summarized below.\n• Citation Networks. Cora, Citeseer, Pubmed can be downloaded from the official source code of GCN (Kipf & Welling, 2017) publicly available at https://github.com/ tkipf/gcn/tree/master/gcn/data, and Cora-ML can be downloaded from the source code of (A)PPNP (Klicpera et al., 2019a) publicly available at https:// github.com/klicperajo/ppnp/tree/master/ppnp/data.\n• Amazon Co-purchase Graph. The Amazon Photo and Amazon Computers datasets from the Amazon co-purchase graph can be publicly downloaded from https://github. com/shchur/gnn-benchmark/tree/master/data (Shchur et al., 2018).\n• Microsoft Academic Graph. The MS Academic graph can be downloaded from the source code of (A)PPNP (Klicpera et al., 2019a) publicly available at https://github.com/ klicperajo/ppnp/tree/master/ppnp/data.\nAn overview of the dataset statistics is listed in Table 4. Note that for these open datasets, three (Cora, Citeseer, Pubmed) are given in the form of undirected graphs, while four (Cora-ML, Amazon Photo, Amazon Computers, MS Academic) are directed graphs. GCN treats all these datasets as undirected graphs (i.e., a′ij = [aij + aji > 0], where [·] denotes Iverson bracket)." }, { "heading": "D FURTHER EXPERIMENTAL DETAILS", "text": "D.1 IMPLEMENTATIONS\nThe implementation of VEM-GCN consists of two alternating steps in each iteration, including a variational E-step and an M-step. In the variational E-step, a simple four-layer MLP is implemented for qφ, where the numbers of neuron units of each layer are 128, 64, 64, and 32, respectively. We use tanh as the nonlinear activation function for the hidden layers. In the M-step, pθ1 is a vanilla GCN with the number of hidden units set as 32, and we use the official source code from https://github.com/tkipf/gcn/tree/master/gcn. All the baselines and our VEMGCN architecture are trained on a single NVIDIA GTX 1080 Ti GPU with 11GB memory.\nWe just utilize the raw node attributes X as the input to qφ. Note that we can also support any other desirable network embedding method and these experiments is left for the future work. Considering the fact that the bag-of-words representations of X is often noisy, we average the features of each node over its neighborhoods to smooth the input signal. Let Ârow = (D + γIN )−1(A + γIN ) denote the “self-enhanced” adjacency matrix with row normalization (we use γ = 1.5 in this paper), and [A‖B] be the concatenation of matrices A and B along the last dimension. Consequently, we summarize the specific input to qφ for all the datasets as below.\n• For Cora, Citeseer, and MS Academic, we use X′ = ÂrowX as the input of qφ.\n• For Pubmed and Amazon Photo, we use X′ = [ X‖ÂrowX‖Â2rowX ] as the input of qφ.\n• For Cora-ML and Amazon Computers, we use X′ = [ X‖ÂrowX ] as the input of qφ.\nFor the sampling in the E-step, we find that it is not always stable to draw the sampled labels Ŷu ∼ pθ (Yu|Yl,Gobs). 
To alleviate this problem, we maintain Ŷu ← argmaxy(pθ (Yu|Yl,Gobs)) with probability pe, and sample Ŷu ∼ pθ (Yu|Yl,Gobs) with probability (1 − pe) to improve the performance in practice. For training of the E-step, we utilize SGD with momentum (of 0.99).\nIn the test procedure, we perform inference in two ways: (1) keep p in Eq. 9 the same as the training procedure (commonly p < 1), sample S (S > 1) adjacency matrices, and predict the classes for the unlabeled nodes according to Eq. 11; (2) set p = 1 (i.e., the latent adjacency matrix is now deterministic) and S = 1. We report the best test accuracy obtained using these two sampling methods on each dataset and find that (2) almost always performs better.\nD.2 HYPERPARAMETER SETTINGS\nTable 5 summarizes the hyperparameters adopted for the full-supervised setting on the seven benchmark datasets. For the label-scarce setting, the hyperparameters are the same, except for ε1 and ε2 that need to be carefully tuned in the search space: ε1 ∈ {0.95, 0.99, 0.995, 0.999, 0.9995, 0.9999}, ε2 ∈ {0.001, 0.005, 0.01, 0.05, 0.1}.\nD.3 METRIC FOR MEASURING OVER-SMOOTHNESS\nTo address the over-smoothing issue, one would prefer to reduce the intra-class distance to make node features in the same class similar, and increase the inter-class distance to produce distinguishable representations for nodes in different classes. Hence, we use the ratio of average inter-class distance to average intra-class distance (Euclidean distance of input node features in the last layer) to measure the over-smoothness. Formally, given the learned node embeddings H = {hi}Ni=1 (the input node features of the last layer), we first calculate the distance matrix D = [dij ] ∈ RN×N with each entry defined as\ndij = ∥∥∥∥ hi‖hi‖2 − hi‖hi‖2 ∥∥∥∥\n2\n, (15)\nwhere ‖ · ‖2 denotes Euclidean norm. Next, we define the inter-class mask matrix and intra-class mask matrix as follows:\nMinter = −à + 1N×N , (16) Mintra = Ã− IN , (17)\nwhere à = YY> is the optimal graph and 1N×N is a matrix of size N × N with all entries set to 1. Then we can obtain the inter-class distance matrix Dinter = [dinterij ] ∈ RN×N and the intraclass distance matrix Dintra = [dintraij ] ∈ RN×N by element-wise multiplication D with the mask\nmatrices:\nDinter = D ◦Minter, (18) Dintra = D ◦Mintra, (19)\nwhere ◦ denotes element-wise multiplication. Next we get the average inter-class distance ADinter and the average intra-class distance ADintra, with which we measure the over-smoothness as their ratio R:\nADinter =\n∑ i,j d\ninter ij∑\ni,j 1(d inter ij )\n, (20)\nADintra =\n∑ i,j d\nintra ij∑\ni,j 1(d intra ij )\n, (21)\nR = ADinter\nADintra , (22)\nwhere 1(x) = 1 if x > 0 otherwise 0." }, { "heading": "E FURTHER EXPERIMENTAL RESULTS", "text": "E.1 CLASSIFICATION RESULTS UNDER THE LABEL-SCARCE SETTING\nSupplementary experiments for Table 2 are shown in Table 6.\nE.2 VISUALIZATION RESULTS\nFigure 2 demonstrates the topology optimization results on the Cora-ML dataset under the fullsupervised setting. For the lable-scarce setting, the visualization results are shown in Figure 3.\nWe further take Cora-ML as an example dataset to provide t-SNE (Maaten & Hinton, 2008) visualizations of the learned node embeddings (input node features of the last layer) extracted by the vanilla GCN and our proposed VEM-GCN with varying layers under the label-scarce setting (10 labeled nodes per class). The results are shown in Figures 4 and 5. 
As can be seen, VEM-GCN indeed generates more separable node embeddings for nodes in different classes (colors) for classification. In particular, a ten-layer vanilla GCN severely suffers from the over-smoothing issue where node features in different classes are over-mixed and thus indistinguishable, while our VEM-GCN architecture with a ten-layer GCN as the backbone still achieves comparable performance with a two-layer model.\nE.3 COMPLEXITY ANALYSIS\nVEM-GCN is very flexible and general. There is no constraint on the design of the two neural networks qφ and pθ. Therefore, the architecture can be combined with arbitrary GCN models and\nnode embedding methods. Moreover, we generalize some existing state-of-the-art strategies for tackling the over-smoothing problem (i.e., DropEdge and AdaEdge). In comparison to the vanilla GCN, VEM-GCN achieves these benefits with topology optimization at the cost of efficiency. As illustrated in Sections 3.2 and 3.3, the M-step is a traditional training procedure for optimizing the GCNs. Although Alatent recovers more intra-class edges than Aobs, the parameter p in Eq. 9 helps maintain the sparsity of the latent graph in each step of the training procedure. Thus the M-step shares almost the same complexity as GCN. The E-step introduces an extra procedure that trains a MLP for graph structure optimization. However, to address the over-smoothing issue at the core, we argue that topology optimization is necessary. Actually, efficiency issue is a common problem for some Bayesian approaches and topology optimization methods. Despite decreased efficiency compared with the vanilla GCN, VEM-GCN optimizes Alatent with amortized variational inference (i.e., training a MLP shared by all the node pairs with mini-batch SGD in the E-step), which is faster than BGCN (Zhang et al., 2019) (Bayesian method) and LDS (Franceschi et al., 2019) (topology optimization) and is scalable for large-scale graphs. For training on the Amazon Photo dataset with a NVIDIA GTX 1080 Ti GPU, VEM-GCN is about 3× faster than LDS and 4× faster than BGCN.\nE.4 VEM-GCNII\nAs mentioned above, our VEM-GCN architecture is flexible and general. In the E-step, the neural network can support arbitrary desirable node embeddings and the GCN in the M-step can be substituted with any graph models. This subsection further verifies the effectiveness of our proposed method by trying different models for pθ1(Yl|Alatent,Gobs). We also apply the same node embeddings as illustrated in Appendix D. Trying more effective node embeddings is not the focus of this paper and is left for the future work.\nRecently, Chen et al. (2020b) proposed a simple and deep GCN model termed GCNII to address the over-smoothing issue. GCNII improves the vanilla GCN via Initial residual and Identity mapping:\nH(l+1) = σ (( (1− α(l))P̃H(l) + α(l)H(0) )( (1− β(l))IN + β(l)W(l) )) , (23)\nwhere H(0) = XW(0) is the output of the first layer (a fully connected layer), P̃ = Ḋ− 1 2 ȦḊ− 1 2 is the normalized adjacency matrix, α(l) and β(l) are two hyperparameters.\nWe utilize GCNII as the backbone that results in the VEM-GCNII architecture. We use the official source code from https://github.com/chennnM/GCNII and employ the hyperparameter\nsettings reported in (Chen et al., 2020b) that achieve the best performance on the three citation networks (i.e., Cora, Citeseer, and Pubmed) under the label-scarce setting. For the other four datasets, we roughly tuned the hyperparameters but found that GCNII does not outperform the vanilla GCN. 
Thus, we only evaluate GCNII and VEM-GCNII on Cora, Citeseer, and Pubmed with 10 labeled nodes per class as the training set. Experimental results are shown in Table 7. As can be seen, VEM-GCNII consistently boosts the performance of GCNII, further verifying the effectiveness and flexibility of our proposed architecture." } ]
2020
null
SP:e0a53b0c2398f49df1c8c053acb1dc4bc64a0729
[ "The authors tackle the problem of zero-shot learning, that is, the recognition of classes and categories for which no visual data are available, but only semantic embedding, providing a description of the classes in terms of auxiliary textual descriptions. To this aim, authors propose a method dubbed Image-Guided Semantic Classification in which a two-stream network (fed by either visual and semantic embedding) learns a compatibility function whose recognition performance is enhanced by means of calibrated stacking (Chao et al. 2016). ", "This paper proposes a simple yet effective method for zero-shot learning. In the method, a network is learned to predict the compatibility function weight given the input of the image. The predicted weight is then applied to semantic attributes and the final class label is predicted by the maximum compatibility score. The method is evaluated on benchmark datasets and illustrates competitive performance." ]
We present a new visual-semantic embedding method for generalized zero-shot learning. Existing embedding-based methods aim to learn the correspondence between an image classifier (visual representation) and its class prototype (semantic representation) for each class. Inspired by the binary relevance method for multilabel classification, we learn the mapping between an image and its semantic classifier. Given an input image, the proposed Image-Guided Semantic Classification (IGSC) method creates a label classifier, which is applied to all label embeddings to determine whether a label belongs to the input image. Therefore, the semantic classifier is conditioned on the image and is generated during inference. We also show that IGSC is a unifying framework for two state-of-the-art deep-embedding methods. We validate our approach with four standard benchmark datasets.
[]
[ { "authors": [ "Zeynep Akata", "Florent Perronnin", "Zaı̈d Harchaoui", "Cordelia Schmid" ], "title": "Label-embedding for attribute-based classification", "venue": "In Proc. of IEEE International Conference on Computer Vision and Pattern Recognition,", "year": 2013 }, { "authors": [ "Zeynep Akata", "Scott E. Reed", "Daniel Walter", "Honglak Lee", "Bernt Schiele" ], "title": "Evaluation of output embeddings for fine-grained image classification", "venue": "In Proc. of IEEE International Conference on Computer Vision and Pattern Recognition,", "year": 2015 }, { "authors": [ "Z. Al-Halah", "M. Tapaswi", "R. Stiefelhagen" ], "title": "Recovering the missing link: Predicting classattribute associations for unsupervised zero-shot learning", "venue": "In Proc. of IEEE International Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Yashas Annadani", "Soma Biswas" ], "title": "Preserving semantic relations for zero-shot learning", "venue": "In Proc. of IEEE International Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Jimmy Lei Ba", "Kevin Swersky", "Sanja Fidler", "Ruslan Salakhutdinov" ], "title": "Predicting deep zeroshot convolutional neural networks using textual descriptions", "venue": "In Proc. of IEEE International Conference on Computer Vision and Pattern Recognition,", "year": 2015 }, { "authors": [ "M. Bucher", "S. Herbin", "F. Jurie" ], "title": "Generating visual representations for zero-shot classification", "venue": "In arXiv preprint arXiv:1708.06975,", "year": 2017 }, { "authors": [ "Soravit Changpinyo", "Wei-Lun Chao", "Boqing Gong", "Fei Sha" ], "title": "Synthesized classifiers for zeroshot learning", "venue": "In Proc. of IEEE International Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Wei-Lun Chao", "Soravit Changpinyo", "Boqing Gong", "Fei Sha" ], "title": "An empirical study and analysis of generalized zero-shot learning for object recognition in the wild", "venue": "In Proc. of European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "Long Chen", "Hanwang Zhang", "Wei Liu Jun Xiao", "Shih-Fu Chang" ], "title": "Zero-shot visual recognition using semantics-preserving adversarial embedding networks", "venue": "In Proc. of IEEE International Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Mohamed Elhoseiny", "Yizhe Zhu", "Han Zhang", "Ahmed Elgammal" ], "title": "Link the head to the “beak”: Zero shot learning from noisy text description at part precision", "venue": "In Proc. of IEEE International Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Ali Farhadi", "Ian Endres", "Derek Hoiem", "David A. Forsyth" ], "title": "Describing objects by their attributes", "venue": "In Proc. of IEEE International Conference on Computer Vision and Pattern Recognition,", "year": 2009 }, { "authors": [ "R. Felix", "B.G.V. Kumar", "I.D. Reid", "G. Carneiro" ], "title": "Multimodal cycle-consistent generalized zero-shot learning", "venue": "In Proc. of IEEE International Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Andrea Frome", "Gregory S. 
Corrado", "Jonathon Shlens", "Samy Bengio", "Jeffrey Dean", "Marc’Aurelio Ranzato", "Tomas Mikolov" ], "title": "Devise: A deep visual-semantic embedding model", "venue": "In Proc. of Neural Information Processing Systems,", "year": 2013 }, { "authors": [ "Yanwei Fu", "Timothy M. Hospedales", "T.Y. Xiang", "Shaogang Gong" ], "title": "Transductive multi-view zero-shot learning", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2015 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Proc. of Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Dat Huynh", "Ehsan Elhamifar" ], "title": "Fine-grained generalized zero-shot learning via dense attributebased attention", "venue": "In Proc. of IEEE International Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "D. Jayaraman", "K. Grauman" ], "title": "Zero-shot recognition with unreliable attributes", "venue": "In Proc. of Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Xu Jia", "Bert De Brabandere", "Tinne Tuytelaars", "Luc V. Gool" ], "title": "Dynamic filter networks", "venue": "In Proc. of Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Huajie Jiang", "Ruiping Wang", "Shiguang Shan", "Xilin Chen" ], "title": "Learning class prototypes via structure alignment for zero-shot recognition", "venue": "In Proc. of European Conference on Computer Vision,", "year": 2018 }, { "authors": [ "P. Kankuekul", "A. Kawewong", "S. Tangruamsub", "O. Hasegawa" ], "title": "Online incremental attributebased zero-shot learning", "venue": "In Proc. of IEEE International Conference on Computer Vision and Pattern Recognition,", "year": 2012 }, { "authors": [ "Elyor Kodirov", "Tao Xiang", "Shaogang Gong" ], "title": "Semantic autoencoder for zero-shot learning", "venue": "In Proc. of IEEE International Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Christoph H. Lampert", "Hannes Nickisch", "Stefan Harmeling" ], "title": "Learning to detect unseen object classes by between-class attribute transfer", "venue": "In Proc. of IEEE International Conference on Computer Vision and Pattern Recognition,", "year": 2009 }, { "authors": [ "Christoph H. Lampert", "Hannes Nickisch", "Stefan Harmeling" ], "title": "Attribute-based classification for zero-shot visual object categorization", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2014 }, { "authors": [ "H. Larochelle", "D. Erhan", "Y. Bengio" ], "title": "Zero-data learning of new tasks", "venue": "In Proc. of AAAI Conference on Artificial Intelligence,", "year": 2008 }, { "authors": [ "Yan Li", "Zhen Jia", "Junge Zhang", "Kaiqi Huang", "Tieniu Tan" ], "title": "Deep semantic structural constraints for zero-shot learning", "venue": "In Proc. of AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Shichen Liu", "Mingsheng Long", "Jianmin Wang", "Michael I. Jordan" ], "title": "Generalized zero-shot learning with deep calibration network", "venue": "In Proc. 
of Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Yang Long", "Li Liu", "Ling Shao", "Fumin Shen", "Guiguang Ding", "Jungong Han" ], "title": "From zero-shot learning to conventional supervised classification: Unseen visual data synthesis", "venue": "In Proc. of IEEE International Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Peirong Ma", "Xiao Hu" ], "title": "A variational autoencoder with deep embedding model for generalized zero-shot learning", "venue": "In Proc. of AAAI Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "G. A" ], "title": "MarcoBaroni. Hubness and pollution: Delving into cross-space mapping for zero-shot learning", "venue": "In Proc. of the Association for Computational Linguistics,", "year": 2016 }, { "authors": [ "T. Mikolov", "I. Sutskever", "K. Chen", "G.S. Corrado", "J. Dean" ], "title": "Distributed representations of words and phrases and their compositionality", "venue": "In Proc. of Neural Information Processing Systems,", "year": 2013 }, { "authors": [ "Mohammad Norouzi", "Tomas Mikolov", "Samy Bengio", "Yoram Singer", "Jonathon Shlens", "Andrea Frome", "Gregory S. Corrado", "Jeffrey Dean" ], "title": "Zero-shot learning by convex combination of semantic embeddings", "venue": "In Proc. of International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Mark Palatucci", "Dean Pomerleau", "Geoffrey E. Hinton", "Tom M. Mitchell" ], "title": "Zero-shot learning with semantic output codes", "venue": "In Proc. of Neural Information Processing Systems,", "year": 2009 }, { "authors": [ "Genevieve Patterson", "James Hays" ], "title": "Sun attribute database: Discovering, annotating, and recognizing scene attributes", "venue": "In Proc. of IEEE International Conference on Computer Vision and Pattern Recognition,", "year": 2012 }, { "authors": [ "J. Pennington", "R. Socher", "C. Manning" ], "title": "Glove: Global vectors for word representation", "venue": "In Proc. of Empirical Methods in Natural Language Processing,", "year": 2014 }, { "authors": [ "Ruizhi Qiao", "Lingqiao Liu", "Chunhua Shen", "Anton van den Hengel" ], "title": "Less is more: Zero-shot learning from online textual documents with noise suppression", "venue": "In Proc. of IEEE International Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Miloš Radovanović", "Alexandros Nanopoulos", "Mirjana Ivanović" ], "title": "Hubs in space: Popular nearest neighbors in high-dimensional data", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2010 }, { "authors": [ "Bernardino Romera-Paredes", "Philip H.S. Torr" ], "title": "An embarrassingly simple approach to zero-shot learning", "venue": "In Proc. of IEEE International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "E. Schonfeld", "S. Ebrahimi", "S. Sinha", "T. Darrell", "Z. Akata" ], "title": "Generalized zero- and few-shot learning via aligned variational autoencoders", "venue": "In Proc. of IEEE International Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Yutaro Shigeto", "Ikumi Suzuki", "Kousuke Hara", "Masashi Shimbo", "Yuji Matsumoto" ], "title": "Ridge regression, hubness, and zero-shot learning", "venue": "In Proc. 
of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases,", "year": 2015 }, { "authors": [ "Richard Socher", "Milind Ganjoo", "Hamsa Sridhar", "Osbert Bastani", "Christopher D. Manning", "Andrew Y. Ng" ], "title": "Zero-shot learning through cross-modal transfer", "venue": "In Proc. of Neural Information Processing Systems,", "year": 2013 }, { "authors": [ "Laurens van der Maaten", "Geoffrey Hinton" ], "title": "Visualizing data using t-sne", "venue": "Journal of Machine Learning Research,", "year": 2008 }, { "authors": [ "Vinay Kumar Verma", "Piyush Rai" ], "title": "A simple exponential family framework for zero-shot learning", "venue": "In Proc. of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases,", "year": 2017 }, { "authors": [ "Vinay Kumar Verma", "Gundeep Arora", "Ashish Mishra", "Piyush Rai" ], "title": "Generalized zero-shot learning via synthesized examples", "venue": "In Proc. of IEEE International Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Vinay Kumar Verma", "Dhanajit Brahma", "Piyush Rai" ], "title": "Meta-learning for generalized zero-shot learning", "venue": "In Proc. of AAAI Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Wei Wang", "Chunyan Miao", "Shuji Hao" ], "title": "Zero-shot human activity recognition via nonlinear compatibility based method", "venue": "In Proc. of International Conference on Web Intelligence,", "year": 2017 }, { "authors": [ "Wei Wang", "Vincent Wenchen Zheng", "Han Yu", "Chunyan Miao" ], "title": "A survey of zero-shot learning: Settings, methods, and applications", "venue": "ACM Transactions on Intelligent Systems and Technology,", "year": 2019 }, { "authors": [ "Peter Welinder", "Steve Branson", "Takeshi Mita", "Catherine Wah", "Florian Schroff", "Serge J. Belongie", "Pietro Perona" ], "title": "Caltech-ucsd birds 200", "venue": "In Caltech, Tech. Rep. CNS-TR2010-001,", "year": 2010 }, { "authors": [ "Yongqin Xian", "Zeynep Akata", "Gaurav Sharma", "Quynh N. Nguyen", "Matthias Hein", "Bernt Schiele" ], "title": "Latent embeddings for zero-shot classification", "venue": "In Proc. of IEEE International Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Yongqin Xian", "Tobias Lorenz", "Bernt Schiele", "Zeynep Akata" ], "title": "Feature generating networks for zero-shot learning", "venue": "In Proc. of IEEE International Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Yongqin Xian", "Christoph H. Lampert", "Bernt Schiele", "Zeynep Akata" ], "title": "Zero-shot learning—a comprehensive evaluation of the good, the bad and the ugly", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2019 }, { "authors": [ "Yongqin Xian", "Saurabh Sharma", "Bernt Schiele", "Zeynep Akata" ], "title": "f-vaegan-d2: A feature generating framework for any-shot learning", "venue": "In Proc. of IEEE International Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Guo-Sen Xie", "Li Liu", "Xiaobo Jin", "Fan Zhu", "Zheng Zhang", "Jie Qin", "Yazhou Yao", "L.M. Shao" ], "title": "Attentive region embedding network for zero-shot learning", "venue": "In Proc. of IEEE International Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Yongxin Yang", "Timothy M. 
Hospedales" ], "title": "A unified perspective on multi-domain and multi-task learning", "venue": "In Proc. of International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Yunlong Yu", "Zhong ji", "Jungong Han", "Zhongfei Zhang" ], "title": "Episode-based prototype generating network for zero-shot learning", "venue": "In Proc. of IEEE International Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Yang Zhang", "Boqing Gong", "Mubarak Shah" ], "title": "Fast zero-shot image tagging", "venue": "In Proc. of IEEE International Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Ziming Zhang", "Venkatesh Saligrama" ], "title": "Zero-shot learning via semantic similarity embedding", "venue": "In Proc. of IEEE International Conference on Computer Vision and Pattern Recognition,", "year": 2015 }, { "authors": [ "Fang Zhao", "Jian Zhao", "Shuicheng Yan", "Jiashi Feng" ], "title": "Dynamic conditional networks for few-shot learning", "venue": "In Proc. of European Conference on Computer Vision,", "year": 2018 }, { "authors": [ "Pengkai Zhu", "Hanxiao Wang", "Venkatesh Saligrama" ], "title": "Generalized zero-shot recognition based on visually semantic embedding", "venue": "In Proc. of IEEE International Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Yizhe Zhu", "Mohamed Elhoseiny", "Bingchen Liu", "Xi Peng", "Ahmed Elgammal" ], "title": "A generative adversarial approach for zero-shot learning from noisy texts", "venue": "In Proc. of IEEE International Conference on Computer Vision and Pattern Recognition,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "As a feasible solution for addressing the limitations of supervised classification methods, zeroshot learning (ZSL) aims to recognize objects whose instances have not been seen during training (Larochelle et al., 2008; Palatucci et al., 2009). Unseen classes are recognized by associating seen and unseen classes through some form of semantic space; therefore, the knowledge learned from seen classes is transferred to unseen classes. In the semantic space, each class has a corresponding vector representation called a class prototype. Class prototypes can be obtained from human-annotated attributes that describe visual properties of objects (Farhadi et al., 2009; Lampert et al., 2014) or from word embeddings learned in an unsupervised manner from text corpus (Mikolov et al., 2013; Pennington et al., 2014; Devlin et al., 2018).\nA majority of ZSL methods can be viewed using the visual-semantic embedding framework, as displayed in Figure 1 (a). Images are mapped from the visual space to the semantic space in which all classes reside, or images and labels are projected to a latent space (Yang & Hospedales, 2015; Liu et al., 2018). Then, the inference is performed in this common space (Akata et al., 2013; Frome et al., 2013; Socher et al., 2013), typically using cosine similarity or Euclidean distance. Another perspective of embedding-based methods is to construct an image classifier for each unseen class by learning the correspondence between a binary one-versus-rest image classifier (i.e., visual representation of a class) and its class prototype in the semantic space (i.e., semantic representation of a class) (Wang et al., 2019). Once this correspondence function is learned, a binary one-versus-rest image classifier can be constructed for an unseen class with its prototype (Wang et al., 2019). For example, a commonly used choice for such correspondence is the bilinear function (Frome et al., 2013; Akata et al., 2013; 2015; Romera-Paredes & Torr, 2015; Li et al., 2018). Considerable efforts have been made to extend the linear function to nonlinear ones (Xian et al., 2016; Wang et al., 2017; Elhoseiny et al., 2017; Qiao et al., 2016). Figure 1 (b) illustrates this perspective.\nLearning the correspondence between an image classifier and a class prototype has the following drawbacks. First, the assumption of using a single image classifier for each class is restrictive because the manner for separating classes in both visual and semantic spaces would not be unique. We argue that semantic classification should be conducted dynamically conditioned on an input image. For example, the visual attribute wheel may be useful for classifying most car images. Nevertheless, cars with missing wheels should also be correctly recognized using other visual attributes. Therefore, instance-specific semantic classifiers are more preferable than category-specific ones because the classifier weights can be adaptively determined based on image content. Second, the scale of\ntraining data for learning the correspondence is constrained to be the number of class labels. In other words, a training set with C labels has only C visual-semantic classifier pairs to build the correspondence. This may hinder the robustness of deep models that usually require large-scale training data. 
Finally, although class embeddings have rich semantic meanings, each class is represented by only a single class prototype, which determines where images of that class inevitably collapse (MarcoBaroni, 2016; Fu et al., 2015). The mapped semantic representations of images may collapse to hubs, which are close to many other points in the semantic space, rather than being similar to the true class label (MarcoBaroni, 2016).\nIn this paper, we present a new method, named Image-Guided Semantic Classification (IGSC), to address these problems. IGSC aims to learn the correspondence between an image and its corresponding label classifier, as illustrated in Figure 1 (c). In contrast to existing methods focusing on the learning of visual (or semantic) representations (Zhang et al., 2016; Frome et al., 2013; Socher et al., 2013), IGSC analyzes the input image and seeks combinations of variables in the semantic space (e.g., combinations of attributes) that distinguish a class (belonging to the input) from other classes. The proposed IGSC method has the following characteristics:\n• IGSC learns the correspondence between an image in the visual space and a classifier in the semantic space. The correspondence can be learned with training pairs on the scale of the number of training images rather than the number of classes.\n• IGSC performs learning to learn in an end-to-end manner. Label classification is conducted by a semantic classifier whose weights are generated on the fly. This model is simple yet powerful because of its adaptive nature.\n• IGSC unifies visual attribute detection and label classification. This is achieved via the design of a conditional network (the proposed classifier learning method), in which label classification is the main task of interest and the conditional input image provides additional information about the specific situation.\n• IGSC alleviates the hubness problem. The correspondence between an image and a semantic classifier learned from seen classes can be transferred to recognize unseen concepts.\nWe evaluated IGSC with experiments conducted on four public benchmark datasets, including SUN (Patterson & Hays, 2012), CUB (Welinder et al., 2010), AWA2 (Lampert et al., 2014), and aPY (Farhadi et al., 2009). Experimental results demonstrate that the proposed method achieves promising performance compared with current state-of-the-art methods. The remainder of the paper is organized as follows: We briefly review related work in Section 2. Section 3 presents the proposed framework. The experimental results and conclusions are provided in Sections 4 and 5, respectively." }, { "heading": "2 RELATED WORK", "text": "Zero-shot learning has evolved rapidly during the last decade, and therefore documenting the extensive literature within limited pages is rarely possible. In this section, we review a few representative zero-shot learning methods and refer readers to (Xian et al., 2019a; Wang et al., 2019) for a comprehensive survey. One pioneering stream of ZSL uses attributes to infer the label of an image belonging to one of the unseen classes (Lampert et al., 2014; Al-Halah et al., 2016; Norouzi et al., 2014; Jayaraman & Grauman, 2014; Kankuekul et al., 2012). The attributes of an image are first predicted, and the class label is then inferred by searching for the class with the most similar set of attributes. For example, the Direct Attribute Prediction (DAP) model (Lampert et al., 2009) estimates the posterior of each attribute for an image by learning probabilistic attribute classifiers. 
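As a concrete reference for this attribute-based paradigm (completed by the probabilistic inference step described in the next sentences), a minimal DAP-style sketch follows; the attribute-independence assumption and the omission of attribute priors are simplifications, and all array names are hypothetical.

```python
import numpy as np

def dap_predict(attr_probs, class_attrs):
    """attr_probs: (M,) posteriors p(a_m = 1 | x) from per-attribute classifiers.
    class_attrs: (C, M) binary attribute signatures of the classes.
    Returns the class whose signature is most probable under the posteriors."""
    # Likelihood of each class signature, treating attributes as independent.
    lik = class_attrs * attr_probs + (1 - class_attrs) * (1 - attr_probs)  # (C, M)
    return int(np.argmax(np.log(lik + 1e-12).sum(axis=1)))
```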
A test sample is then classified by each attribute classifier in turn, and the class label is predicted by probabilistic estimation. Similar to these attribute-based methods, the proposed method has the merit of modeling the relationships among classes. However, IGSC unifies the two steps of learning attribute classifiers and inferring the class from the detected attributes. Furthermore, the attribute classifiers are jointly learned in IGSC.\nA broad family of ZSL methods applies an embedding framework that directly learns a mapping from the visual space to the semantic space (Palatucci et al., 2009; Akata et al., 2013; 2015; Romera-Paredes & Torr, 2015). The visual-to-semantic mapping can be linear (Frome et al., 2013) or nonlinear (Socher et al., 2013). For example, DeViSE (Frome et al., 2013) learns a linear mapping between the image and semantic spaces using an efficient ranking loss formulation. Cross-Modal Transfer (CMT) (Socher et al., 2013) uses a neural network with two hidden layers to learn a nonlinear projection from the image feature space to the word vector space. More recently, deep neural network models have been proposed to mirror learned semantic relations among classes in the visual domain at the image (Annadani & Biswas, 2018) or part (Zhu et al., 2018a) level. IGSC is also an embedding-based ZSL method. IGSC differs significantly from existing methods in that it learns the correspondence between an image and its semantic classifier, enabling different ways of separating class prototypes in the semantic space.\nRecent ZSL models adopt the generative adversarial network (GAN) (Goodfellow et al., 2014) or other generative models for synthesizing unseen examples (Bucher et al., 2017; Long et al., 2017; Jiang et al., 2018; Verma et al., 2018; Xian et al., 2018; Zhu et al., 2018b; Xian et al., 2019b; Verma et al., 2020; Yu et al., 2020; Ma & Hu, 2020) or for reconstructing training images (Chen et al., 2018). The synthesized images obtained at the training stage can be fed to conventional classifiers so that ZSL is converted into a conventional supervised learning problem (Long et al., 2017). The transformation from attributes to image features requires generative models such as denoising autoencoders (Bucher et al., 2017), GANs (Xian et al., 2018; Zhu et al., 2018b), or their variants (Verma et al., 2018; Felix et al., 2018; Xian et al., 2019b; Yu et al., 2020; Ma & Hu, 2020). Despite the outstanding performance reported in these papers, such works leverage some form of unseen class information during training. In real-world applications involving recognition in the wild, neither the image samples nor the semantic representations of novel classes may be available during model learning. The proposed method is agnostic to all unseen class information during training. Furthermore, the proposed method is much simpler in architecture design and has a much smaller model size compared with the generative methods." }, { "heading": "3 APPROACH", "text": "" }, { "heading": "3.1 PROBLEM DESCRIPTION", "text": "Given a training set $S = \{(x_n, y_n), n = 1 \ldots N\}$, with $y_n \in \mathcal{Y}_s$ being a class label in the seen class set, the goal of ZSL is to learn a classifier $f: \mathcal{X} \rightarrow \mathcal{Y}$ that generalizes to predict the correct label of any image $x$, which lies not only in $\mathcal{Y}_s$ but also in the unseen class set $\mathcal{Y}_u$. 
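For contrast with the IGSC model introduced next, the classic visual-to-semantic embedding baseline reviewed in Section 2 (e.g., DeViSE) can be sketched as follows; the embedding dimensions and the use of cosine similarity are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VisualToSemantic(nn.Module):
    """Embedding-based ZSL baseline: project theta(x) into the semantic
    space, then label with the nearest class prototype."""
    def __init__(self, img_dim=2048, sem_dim=300):
        super().__init__()
        self.proj = nn.Linear(img_dim, sem_dim)  # linear map, as in DeViSE

    def forward(self, theta_x, prototypes):
        # theta_x: (B, img_dim) image embeddings; prototypes: (C, sem_dim)
        z = F.normalize(self.proj(theta_x), dim=-1)
        p = F.normalize(prototypes, dim=-1)
        return (z @ p.t()).argmax(dim=-1)  # index of the nearest prototype
```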
In the prevailing family of compatibility learning ZSL (Xian et al., 2019a; Ba et al., 2015), the prediction is made via:\n$$\hat{y} = f(x; W) = \arg\max_{y \in \mathcal{Y}} F(x, y; W). \quad (1)$$\nIn particular, if $\mathcal{Y} = \mathcal{Y}_u$, this is the conventional ZSL setting; if $\mathcal{Y} = \mathcal{Y}_s \cup \mathcal{Y}_u$, this is the generalized zero-shot learning (GZSL) setting, which is more practical for real-world applications. The compatibility function $F(\cdot)$, parameterized by $W$, is used to associate visual and semantic information. In the visual space, each image $x$ has a vector representation, denoted by $\theta(x)$. Similarly, each class label $y$ has a vector representation in the semantic space (called the class prototype), denoted by $\phi(y)$. In short, $\theta(x)$ and $\phi(y)$ are the image and class embeddings, both of which are given." }, { "heading": "3.2 IMAGE-GUIDED SEMANTIC CLASSIFICATION MODEL", "text": "The compatibility function in this work is realized by implementing two functions, $h(\theta(x); W)$ and $g(\phi(y); M)$, as illustrated in Figure 2. The first function $h(\cdot)$ receives an image embedding as input and returns parameters $M$ characterizing a label classifier:\n$$M = h(\theta(x); W). \quad (2)$$\nIn other words, $h(\cdot)$ learns the mapping between image representations and model (i.e., semantic classifier) representations. Each image has its own semantic classifier, and images of the same class may have different classifier weights. In contrast to existing methods, where the classifier weights are part of the model parameters and thereby static after training, the classifier weights in IGSC are dynamically generated at test time.\nThe image-to-classifier mapping can be either linear or nonlinear. Figure 2 shows an implementation of a nonlinear model that involves two fully connected layers and an output layer. The dimension of the output layer is set to accommodate the label classifier weights. We emphasize again that $W$ contains the only model parameters that must be learned from training data.\nThe second function $g(\cdot)$ is a label classifier, characterized by the parameters output by $h(\cdot)$. This function takes a label vector as input and returns a prediction score indicating the probability that the label belongs to the input image:\n$$s = g(\phi(y); M). \quad (3)$$\nLet $s_j$ denote the prediction score for a label $j$. In multi-class (single-label) image classification, the final compatibility score is obtained by normalizing the prediction scores to probabilistic values with softmax:\n$$F(x, y_j; W) = \frac{\exp(s_j)}{\sum_{k=1}^{|\mathcal{Y}|} \exp(s_k)}. \quad (4)$$\nAn image is assigned to the class with the highest compatibility score. In multi-label image classification, we replace softmax with a sigmoid activation function, and the prediction is made by choosing labels whose compatibility score is greater than a threshold.\nIt is worth noting that the mechanism of IGSC is similar to that of Dynamic Filter Networks (Jia et al., 2016), in which the filters are generated dynamically conditioned on an input. A similar mechanism also appears in (Zhao et al., 2018), which predicts a set of adaptive weights from conditional inputs to linearly combine the basis filters. The proposed method differs fundamentally in that both (Jia et al., 2016) and (Zhao et al., 2018) focus on learning image representations, while our method aims to learn model representations that are applied to a different modality (i.e., labels)." }, { "heading": "3.3 FORMS OF LABEL CLASSIFIERS", "text": "The image-guided label classifier can be either linear or nonlinear; it receives a label embedding and returns a prediction score for the label. 
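A minimal sketch of the pipeline in Eqs. 2-4, using the linear form of $g(\cdot)$ for brevity; the hidden sizes and embedding dimensions are illustrative assumptions rather than the exact configuration in the paper.

```python
import torch
import torch.nn as nn

class IGSC(nn.Module):
    """h(.) maps an image embedding to the parameters M = (m, b) of a label
    classifier g(.), which then scores every class prototype."""
    def __init__(self, img_dim=2048, sem_dim=300, hidden=512):
        super().__init__()
        self.h = nn.Sequential(                # two FC layers + output layer
            nn.Linear(img_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, sem_dim + 1),
        )

    def forward(self, theta_x, phi):
        # theta_x: (B, img_dim) image embeddings; phi: (C, sem_dim) prototypes
        M = self.h(theta_x)                    # Eq. 2: a classifier per image
        m, b = M[:, :-1], M[:, -1:]            # weight vector and bias
        s = m @ phi.t() + b                    # Eq. 3, applied to all C labels
        return s.softmax(dim=-1)               # Eq. 4: compatibility scores
```

Training then reduces to minimizing a standard classification loss over these compatibility scores, as formalized in Eq. 8 below.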
In this study we experiment with two variations of the label classifier. The linear label classifier is represented as:\n$$g(\phi(y); M) = m^\top\phi(y) + b, \quad (5)$$\nwhere $m \in \mathbb{R}^d$ is a weight vector, $b$ is a threshold, and $M = (m, b)$. The dimension $d$ is set to that of the label vector (e.g., $d = 300$ if using 300-dimensional word2vec (Mikolov et al., 2013)). Alternatively, the nonlinear label classifier is implemented using a two-layer neural network:\n$$g(\phi(y); M) = m_2^\top \tanh(M_1\phi(y) + b_1) + b_2, \quad (6)$$\nwhere $M_1 \in \mathbb{R}^{h \times d}$, $m_2 \in \mathbb{R}^h$, and $M = (M_1, b_1, m_2, b_2)$. The nonlinear classifier characterizes the $d$-dimensional semantic space using $h$ perceptrons and performs the classification task. We set $h$ to 30 in the experiments. As will be shown in Section 4, the nonlinear label classifier outperforms a linear one.\nFor GZSL, it is beneficial to enable calibrated stacking (Chao et al., 2016), which reduces the scores for seen classes. This leads to the following modification:\n$$\hat{y} = \arg\max_{y \in \mathcal{Y}_s \cup \mathcal{Y}_u} \left( g(\phi(y); M) - \gamma \mathbb{1}[y \in \mathcal{Y}_s] \right), \quad (7)$$\nwhere $\mathbb{1}[y \in \mathcal{Y}_s] \in \{0, 1\}$ indicates whether or not $y$ is a seen class and $\gamma$ is a calibration factor." }, { "heading": "3.4 LEARNING MODEL PARAMETERS", "text": "Recall that the objective of ZSL is to assign the correct label to an image. This is a typical classification problem. For a training sample $x_i$, let $y_i = (y_i^1, y_i^2, \ldots, y_i^{|\mathcal{Y}_s|}) \in \{0, 1\}^{|\mathcal{Y}_s|}$ denote the one-hot encoding of the ground-truth label and $p_i = (p_i^1, p_i^2, \ldots, p_i^{|\mathcal{Y}_s|})$ denote the compatibility scores of $x_i$ (Eq. 4), that is, $p_i^j = F(x_i, y_j; W)$. The model parameters $W$ are learned by minimizing the cross-entropy loss:\n$$L = -\sum_{i=1}^{N} \sum_{j=1}^{|\mathcal{Y}_s|} y_i^j \log(p_i^j) + (1 - y_i^j) \log(1 - p_i^j). \quad (8)$$\nWe also experimented with the hinge loss and achieved similar performance. The model parameters, including $W$ and those of the image/semantic embedding networks, can be jointly learned end-to-end; however, the results reported in Section 4 were obtained by freezing the weights of the feature extractors for a fair comparison. That is, all methods under comparison used the same image and semantic representations in the experiments." }, { "heading": "3.5 CONNECTION TO PREVIOUS METHODS", "text": "Finally, we show how two previous supervised visual-semantic embedding methods, DeViSE (Frome et al., 2013) and CMT (Socher et al., 2013), are special cases of our method.\nDeViSE (Frome et al., 2013) uses a projection layer (a linear transformation) that maps a visual vector to the semantic space and computes a dot-product similarity between the projected visual vector and the vector representation of the correct label. This behavior is identical to a special case of our method where both $h(\cdot)$ and $g(\cdot)$ are linear. CMT (Socher et al., 2013) uses a neural network with two hidden layers and the standard nonlinearity tanh to learn a nonlinear projection from the image feature space to the word vector space and computes the Euclidean distances between the L2-normalized vectors. This is identical to the special case of using a nonlinear $h(\cdot)$ and a linear $g(\cdot)$, except that we use ReLU instead of tanh in the nonlinear transformation." }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 DATASETS AND EXPERIMENTAL SETTING", "text": "We used four popular benchmark datasets, including coarse-grained and fine-grained ones, for evaluating the proposed method. The statistics of the datasets are summarized in Table 1. Please see 
We followed the new split provided by (Xian et al., 2019a) because this split ensured that classes at test should be strictly unseen at training.\nVisual and semantic embeddings. For a fair comparison, we used the 2048-dimensional ResNet101 features provided by (Xian et al., 2019a) as image representations. For label representations, we used the semantic embeddings provided by (Xian et al., 2019a), each of which is an L2-normalized attribute vector. Note that IGSC is flexible in that the visual and semantic embeddings, h(·) and g(·) functions can all be customized to meet specific needs.\nTraining details. We used Adaptive Moment Estimation (Adam) for optimizing the model. We augmented the data by random cropping and mirroring. The learning rate was set fixed to 10−5. Training time for a single epoch ranged from 91 seconds to 595 seconds (depending on which dataset was used). Training the models using four benchmark datasets roughly took 11 hours in total. The runtime was reported running on a machine with an Intel Core i7-7700 3.6-GHz CPU, NVIDIA’s GeForce GTX 1080Ti and 32 GB of RAM. The dimension in the nonlinear variant of the semantic classifier g(·) was set to 30 in the experiments.\nEvaluation protocols. We followed the standard evaluation metrics used in the literature. For ZSL, we used average per-class top-1 accuracy as the evaluation metric, where the prediction (Eq. 1) is successful if the predicted class is the correct ground truth. For GZSL, we reported accs (test images are from seen classes and the prediction labels are the union of seen and unseen classes) and accu (test images are from unseen classes and the prediction labels are the union of seen and unseen classes). We computed the harmonic mean (Xian et al., 2019a) of accuracy rates on seen classes accs and unseen classes accu:\nH = 2× accs × accu accs + accu . (9)\nThe harmonic mean offers a comprehensive metric in evaluating GZSL methods. The harmonic mean value is high only when both accuracy rates are high. We reported the average results of three random trials for each ZSL and GZSL experiment." }, { "heading": "4.2 ABLATION STUDY", "text": "First, we investigate the effects of different designs of the image-to-classifier mapping function h(·) and the label classifier g(·). We reported the results on the SUN benchmark; however, similar findings can be found using other datasets.\nTable 2 shows the results of the ablation experiment. In both settings (ZSL and GZSL), using a nonlinear image-to-classifier mapping (i.e., h(·)) is essential to the performance. A significant performance gain was observed when a nonlinear h(·) was applied. The combination of linear h(·) and nonlinear g(·) performed the worst. A possible reason is that a linear mapping does not have a sufficient capacity to model the relation between a visual feature and its corresponding semantic\nclassifier, and using a nonlinear g(·) exacerbates the overfitting problem of learned semantic classifiers to seen classes. While a nonlinear h(·) successfully modeled the mapping between a visual feature and its label classifier, using a nonlinear g(·) further improved the recognition performance, especially under the setting of GZSL." }, { "heading": "4.3 COMPARISONS WITH STATE-OF-THE-ART EMBEDDING-BASED APPROACHES", "text": "We compared the IGSC method with a variety of standard and generalized ZSL methods as reported in (Xian et al., 2019a). 
These methods can be categorized into 1) attribute-based methods: DAP (Lampert et al., 2009), IAP (Lampert et al., 2009), CONSE (Norouzi et al., 2014), SSE (Zhang & Saligrama, 2015), and SYNC (Changpinyo et al., 2016); and 2) embedding-based methods: CMT/CMT* (Socher et al., 2013), LATEM (Xian et al., 2016), ALE (Akata et al., 2013), DeViSE (Frome et al., 2013), SJE (Akata et al., 2015), ESZSL (Romera-Paredes & Torr, 2015), SAE (Kodirov et al., 2017), and GFZSL (Verma & Rai, 2017). Performance numbers for these methods are taken directly from (Xian et al., 2019a).\nPlease note that all methods under comparison, including the proposed method, are inductive with respect to both unseen images and unseen semantic vectors. Only labeled training instances and class prototypes of seen classes are available in this experimental setting. Alternatively, methods that are transductive with respect to unseen class prototypes and unlabeled unseen test instances can achieve better performance because more information is involved in model learning. Recent methods in the inductive setting are only inductive with respect to samples (Jiang et al., 2018; Felix et al., 2018; Xian et al., 2019b; Schonfeld et al., 2019; Verma et al., 2020; Yu et al., 2020; Ma & Hu, 2020; Huynh & Elhamifar, 2020).\nThese methods use unseen class labels during training, which differs from our setting; therefore, they are not compared.\nWe report the performance of the proposed IGSC method with and without calibrated stacking (Eq. 7): 1) IGSC uses the nonlinear-nonlinear combination; 2) IGSC+CS enables calibrated stacking. Table 3 shows the conventional ZSL results. IGSC outperforms the other methods on the CUB dataset and achieves comparable performance on the other datasets. Although GFZSL (Verma & Rai, 2017) has the best performance on SUN and AWA2, this method performs poorly under the GZSL setting.\nTable 4 shows the generalized ZSL results. In this experiment, recent inductive methods (Chen et al., 2018; Annadani & Biswas, 2018; Xie et al., 2019) are included for comparison. The semantics-preserving adversarial embedding network (SP-AEN) (Chen et al., 2018) is a GAN-based method that uses an adversarial objective to reconstruct images from semantic embeddings. The preserving semantic relations (PSR) method (Annadani & Biswas, 2018) is an embedding-based approach that utilizes the structure of the attribute space through a set of relations. Finally, the attentive region embedding network (AREN) (Xie et al., 2019) uses an attention mechanism to construct embeddings from the part level (i.e., local regions) and consists of two embedding streams that extract image regions for semantic transfer.\nJudging by the harmonic mean values, IGSC consistently outperformed the other competitive methods on three out of the four datasets. The performance gain validates the effectiveness of learning image-guided semantic classifiers. Compared with embedding-based methods, this novel paradigm not only has more training pairs (on the scale of the number of training images) for learning the correspondence between an image and its corresponding label classifier, but also allows different ways of separating classes based on the content of the input image. In comparison with attribute-based methods, which take a two-step pipeline of detecting attributes from an image and aggregating the detection results for label prediction, IGSC unifies the two steps. Compared with recent methods (Chen et al., 2018; Annadani & Biswas, 2018; Xie et al., 2019), IGSC is much simpler and therefore has greater flexibility. 
We have not integrated powerful components for GZSL such as feature generators and attention models, yet IGSC has achieved comparable (or superior) performance to existing sophisticated methods.\nAn additional reason for the improved performance is that the hubness problem may be alleviated in IGSC, which avoids nearest neighbor searches of class prototypes in the semantic space. We conducted an experiment to examine whether IGSC reduces hubness. To measure the degree of hubness, we used the skewness of the empirical $N_1$ distribution (Radovanović et al., 2010; Shigeto et al., 2015). We conducted this experiment on the SUN benchmark because it is the only dataset containing an equal number of test images per class. As skewness analyses are rarely reported in the recent literature, we implemented DeViSE (Frome et al., 2013) and compared it with the proposed method. The results are summarized in Table 5. IGSC produced smaller skewness values. One possible reason why hubness is alleviated is that the “matching” between a visual representation and a class prototype is more flexible in IGSC than in nearest neighbor search. A label is considered correct as long as its embedding is on the right side of the decision surface, which is learned conditioned on the input image embedding.\nTo better understand the strengths and weaknesses of the proposed method, we compare its performance with recent GAN-based methods. The notable f-VAEGAN-D2 (Xian et al., 2019b) achieved harmonic mean values of 41.3 on SUN and 53.6 on CUB. This GAN-based approach and its variants focus on the design of a data augmentation scheme where arbitrarily many synthetic features of both seen and unseen classes can be created to aid in improving the discriminative power of classifiers. Linear softmax classifiers are typically used in such approaches. Alternatively, the proposed embedding-based method focuses on the design of classifiers. These techniques may complement each other to advance methods for zero-shot learning." }, { "heading": "4.4 DISCUSSIONS", "text": "We discuss the model flexibility and visualize the classifier weights generated by IGSC.\nModel extendibility. The proposed IGSC model is flexible in that the visual and semantic embeddings and the $h(\cdot)$ and $g(\cdot)$ functions can all be customized to meet specific needs. We provide a proof-of-concept analysis, in which we investigate the effect of replacing Res-101 with Res-152. Table 6 shows the result. Performance improvements are observed when we use a deeper visual model. By elaborating the other components of IGSC, it seems reasonable to expect that this approach would yield even better results.\nModel visualization. Finally, we visualize the “model representations” of the label classifiers by applying t-SNE (van der Maaten & Hinton, 2008) to the dynamically generated classifier weights. Figure 3 displays the visualization results using the aPY dataset. Each point in the figure represents a label classifier generated on the fly by feeding a test image to the IGSC model. Colors indicate class labels. Although each image has its own label classifier, the IGSC method tends to generate similar model representations for images of the same class (Fig. 3 (a)). Please note that this framework allows the learning of a “single” label classifier for each class (class-level classifier), i.e., the model representations for images of the same class are identical. 
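The hubness measure used in the analysis above, the skewness of the empirical $N_1$ distribution, can be sketched as follows (a brute-force nearest-neighbor version suitable for small galleries; variable names are hypothetical):

```python
import numpy as np
from scipy.stats import skew

def n1_skewness(queries, gallery):
    """N_1[j] counts how often gallery item j is the 1-nearest neighbor of a
    query; a heavy right skew of this count distribution indicates hubs."""
    d = ((queries[:, None, :] - gallery[None, :, :]) ** 2).sum(-1)  # (Q, G)
    counts = np.bincount(d.argmin(axis=1), minlength=gallery.shape[0])
    return skew(counts)
```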
However, the results show that instance-level classifiers benefit the optimization of recognition accuracy (the points are scattered yet still fall around their class clusters)." }, { "heading": "5 CONCLUSION", "text": "We propose a unifying visual-semantic embedding model that transforms an image into a label classifier, which is then used to predict the correct label in the semantic space. Modeling the correspondence between an image and its label classifier enables a powerful GZSL method that achieves promising performance on four benchmark datasets. One future research direction we are pursuing is to extend the method to multi-label zero-shot learning, in which images are assigned multiple labels from an open vocabulary. This would take full advantage of the semantic space. Another direction is to explore model learning in a less restricted setting, which can be transductive for specific unseen classes or test instances." } ]
2,020
null
SP:2997e3ea21f2a8a5dbb7952ecabcc70dfc1e0c57
[ "This paper proposes to change the perturbation budget for adversarial attacks to a non-uniform setting where differet input pixels have different perturbation budgets. To achieve this, an additional network is trained to learn the perturbation budget for each part of the input. The approach seems to perform better than a uniform perturbation budget and also learns semantically meaningful budgets for the input.", "This paper address the problem of training robust neural networks with non-uniform perturbation budgets on different input pixels. In practice, a perturbation budget generator is introduced to generate the context-aware perturbation budget (i.e. conditioned on the input) for each pixel of the input image. A “robustness volume” constraint on generated perturbation budgets to control the robustness intensity is also proposed. Extensive experiments on MNIST and CIFAR10 demonstrate the proposed outperform SOTA method under various uniform perturbation budgets." ]
Existing methods for training robust neural networks generally aim to make models uniformly robust on all input dimensions. However, different input dimensions are not uniformly important to the prediction. In this paper, we propose a novel framework to train certifiably robust models and learn non-uniform perturbation budgets on different input dimensions, in contrast to using the popular $\ell_\infty$ threat model. We incorporate a perturbation budget generator into the existing certified defense framework, and perform certified training with the generated perturbation budgets. In contrast to the radius of the $\ell_\infty$ ball used in previous works, robustness intensity is measured by the robustness volume, which is the product of the perturbation budgets on all input dimensions. We evaluate our method on the MNIST and CIFAR-10 datasets and show that we can achieve lower clean and certified errors at relatively larger robustness volumes, compared to methods using uniform perturbation budgets. Furthermore, with two synthetic datasets constructed from MNIST and CIFAR-10, we also demonstrate that the perturbation budget generator can produce semantically meaningful budgets, which implies that the generator can capture contextual information and the sensitivity of different features in input images.
[]
[ { "authors": [ "Battista Biggio", "Igino Corona", "Davide Maiorca", "Blaine Nelson", "Nedim Šrndić", "Pavel Laskov", "Giorgio Giacinto", "Fabio Roli" ], "title": "Evasion attacks against machine learning at test time", "venue": "In Joint European conference on machine learning and knowledge discovery in databases,", "year": 2013 }, { "authors": [ "Patrick Bilic", "Patrick Ferdinand Christ", "Eugene Vorontsov", "Grzegorz Chlebus", "Hao Chen", "Qi Dou", "Chi-Wing Fu", "Xiao Han", "Pheng-Ann Heng", "Jürgen Hesser" ], "title": "The liver tumor segmentation benchmark (lits)", "venue": null, "year": 1901 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Adversarial examples are not easily detected: Bypassing ten detection methods", "venue": "In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security,", "year": 2017 }, { "authors": [ "Jeremy M Cohen", "Elan Rosenfeld", "J Zico Kolter" ], "title": "Certified adversarial robustness via randomized smoothing", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Sven Gowal", "Krishnamurthy Dvijotham", "Robert Stanforth", "Rudy Bunel", "Chongli Qin", "Jonathan Uesato", "Timothy Mann", "Pushmeet Kohli" ], "title": "On the effectiveness of interval bound propagation for training verifiably robust models", "venue": "arXiv preprint arXiv:1810.12715,", "year": 2018 }, { "authors": [ "Sven Gowal", "Jonathan Uesato", "Chongli Qin", "Po-Sen Huang", "Timothy Mann", "Pushmeet Kohli" ], "title": "An alternative surrogate loss for pgd-based adversarial testing", "venue": null, "year": 1910 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical Report TR-2009,", "year": 2009 }, { "authors": [ "Yann LeCun", "Corinna Cortes", "CJ Burges" ], "title": "Mnist handwritten digit database", "venue": "ATT Labs [Online]. 
Available: http://yann.lecun.com/exdb/mnist,", "year": 2010 }, { "authors": [ "Chen Liu", "Ryota Tomioka", "Volkan Cevher" ], "title": "On certifying non-uniform bound against adversarial attacks", "venue": "arXiv preprint arXiv:1903.06603,", "year": 2019 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Matthew Mirman", "Timon Gehr", "Martin Vechev" ], "title": "Differentiable abstract interpretation for provably robust neural networks", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Seyed-Mohsen Moosavi-Dezfooli", "Alhussein Fawzi", "Pascal Frossard" ], "title": "Deepfool: a simple and accurate method to fool deep neural networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Seyed-Mohsen Moosavi-Dezfooli", "Alhussein Fawzi", "Omar Fawzi", "Pascal Frossard" ], "title": "Universal adversarial perturbations", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Nicolas Papernot", "Patrick McDaniel", "Ian Goodfellow" ], "title": "Transferability in machine learning: from phenomena to black-box attacks using adversarial samples", "venue": "arXiv preprint arXiv:1605.07277,", "year": 2016 }, { "authors": [ "Hadi Salman", "Jerry Li", "Ilya Razenshteyn", "Pengchuan Zhang", "Huan Zhang", "Sebastien Bubeck", "Greg Yang" ], "title": "Provably robust deep learning via adversarially trained smoothed classifiers", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Ali Shafahi", "Mahyar Najibi", "Mohammad Amin Ghiasi", "Zheng Xu", "John Dickerson", "Christoph Studer", "Larry S Davis", "Gavin Taylor", "Tom Goldstein" ], "title": "Adversarial training for free", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Gagandeep Singh", "Timon Gehr", "Matthew Mirman", "Markus Püschel", "Martin Vechev" ], "title": "Fast and effective robustness certification", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "In ICLR,", "year": 2013 }, { "authors": [ "Eric Wong", "Zico Kolter" ], "title": "Provable defenses against adversarial examples via the convex outer adversarial polytope", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Eric Wong", "Frank Schmidt", "Jan Hendrik Metzen", "J Zico Kolter" ], "title": "Scaling provable adversarial defenses", "venue": "In NIPS,", "year": 2018 }, { "authors": [ "Eric Wong", "Leslie Rice", "J Zico Kolter" ], "title": "Fast is better than free: Revisiting adversarial training", "venue": "arXiv preprint arXiv:2001.03994,", "year": 2020 }, { "authors": [ "Kaidi Xu", "Zhouxing Shi", "Huan Zhang", "Yihan Wang", "Kai-Wei Chang", "Minlie Huang", "Bhavya Kailkhura", "Xue Lin", "Cho-Jui Hsieh" ], "title": "Provable, scalable and automatic perturbation analysis on general computational graphs, 2020", "venue": null, "year": 2020 }, { "authors": [ "Xuanang Xu", "Fugen Zhou", "Bo Liu", "Dongshan Fu", "Xiangzhi Bai" ], "title": "Efficient 
multiple organ localization in ct image using 3d region proposal network", "venue": "IEEE transactions on medical imaging,", "year": 2019 }, { "authors": [ "Jiancheng Yang", "Rui Shi", "Bingbing Ni" ], "title": "Medmnist classification decathlon: A lightweight automl benchmark for medical image analysis", "venue": "arXiv preprint arXiv:2010.14925,", "year": 2020 }, { "authors": [ "Runtian Zhai", "Chen Dan", "Di He", "Huan Zhang", "Boqing Gong", "Pradeep Ravikumar", "Cho-Jui Hsieh", "Liwei Wang" ], "title": "Macer: Attack-free and scalable robust training via maximizing certified radius, 2020", "venue": null, "year": 2020 }, { "authors": [ "Huan Zhang", "Tsui-Wei Weng", "Pin-Yu Chen", "Cho-Jui Hsieh", "Luca Daniel" ], "title": "Efficient neural network robustness certification with general activation functions", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Huan Zhang", "Hongge Chen", "Chaowei Xiao", "Bo Li", "Duane Boning", "Cho-Jui Hsieh" ], "title": "Towards stable and efficient training of verifiably robust neural networks", "venue": "In International Conference on Learning Representations,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "It has been demonstrated that deep neural networks, although achieving impressive performance on various tasks, are vulnerable to adverarial perturbations (Szegedy et al., 2013). Models with high accuracy on clean and unperturbed data can be fooled to have extremely poor performance when input data are adversarially perturbed. The existence of adversarial perturbations causes concerns in safety-critical applications such as self-driving cars, face recognition and medical diagnosis.\nA number of methods have been proposed for training robust neural networks that can resist to adversarial perturbations to some extent. Among them, adversarial training (Goodfellow et al., 2015; Madry et al., 2018) and certified defenses (Wong et al., 2018; Gowal et al., 2018; Zhang et al., 2020) are of the most reliable ones so far, and most of them are trying to make the network robust to any perturbation within an `p norm ball. Taking the commonly used `∞-ball defense as an example, robust training methods aim to make model robust to perturbation on any pixel, which means the model is uniformly robust on all the input dimensions. But is this a valid assumption we should make?\nAs we know, human perception is non-uniform (humans focus on important features even though these features can be sensitive to small noise) and context dependent (what part of image is important heavily depends on what is on the image). We expect a robust model to be close to human perception, rather than learn to defend against a particular fixed threat model, e.g., the traditional `∞-norm one. Intuitively, we expect a good model to be more sensitive to important features and less sensitive to unimportant features, and the importance of features should be context-dependent. Taking the MNIST hand-written digit classification problem as an example, the digit 9 can be transformed to 4 simply by modifying just a few pixels on its head, so those pixels should be considered more important, and enforcing them to be robust to a large perturbation may not be correct. On the other hand, the pixels on the frame of such an input image can be safely modified without changing the ground-truth label of the image. Therefore, a uniform budget in robust training can greatly hamper the performance of neural networks on certain tasks, and will force network to ignore some important features that are important for classification. Robustness certification with non-uniform perturbation budgets has been\ndiscussed in a prior work (Liu et al., 2019), but training robust models and learning context-dependent perturbation budgets has not been addressed in prior works, which is more challenging and important for obtaining robust models. A detailed discussion on our difference with Liu et al. (2019) is in Sec. 2.2.\nIn this paper, we propose the first method that can learn context-dependent non-uniform perturbation budgets in certified robust training, based on prior certified defense algorithms on `p-norm threat models (Zhang et al., 2020; Xu et al., 2020). To learn a context-dependent budget without introducing too many learnable parameters, we introduce a perturbation budget generator with an auxiliary neural network, to generate the context-dependent budgets based on the input image. We also impose constraints on the generator to make generated budgets satisfy target robustness volumes and ranges of budgets, where robustness volume is defined as the multiplication of budgets on all input dimensions. 
We then train the classifier with a linear-relaxation-based certified defense algorithm, auto LiRPA (Xu et al., 2020), generalized from CROWN-IBP (Zhang et al., 2020), to minimize the verified error under the given budget constraints. The gradients of the loss function can be back-propagated to the perturbation budgets, allowing the classification network and the budget generator to be trained jointly in robust training.

Our contributions can be summarized as follows:

• We propose a novel algorithm to train robust networks with contextual perturbation budgets rather than uniform ones. We show that it can be incorporated into certified defense methods with linear-relaxation-based robustness verification.

• We demonstrate that our method can effectively train both the classifier and the perturbation generator jointly, and we are able to train models with relatively larger robustness volumes that outperform those trained with uniform budgets.

• We also show that the learned perturbation budgets are semantically meaningful and align well with the importance of different pixels in the input image. We further confirm this with two synthetic tasks and datasets constructed from MNIST and CIFAR-10 respectively." }, { "heading": "2 BACKGROUND AND RELATED WORK", "text": "" }, { "heading": "2.1 TRAINING ROBUST NEURAL NETWORKS", "text": "Since the discovery of adversarial examples (Szegedy et al. (2013), Biggio et al. (2013)), a great number of works have been devoted to improving the robustness of neural networks from both the attack and defense perspectives (Moosavi-Dezfooli et al., 2016; Carlini & Wagner, 2017; Papernot et al., 2016; Moosavi-Dezfooli et al., 2017; Gowal et al., 2019). On a K-way classification task, training an adversarially robust neural network fw with weights w can generally be formulated as solving the following min-max optimization problem:

min_w E_{(x,y)∼D} max_{δ∈∆} L(fw(x + δ), y), (1)

where D is the data distribution, ∆ is a threat model defining the space of the perturbations, and L is a loss function.

Adversarial training (Goodfellow et al., 2015; Madry et al., 2018) applies adversarial attacks to solve the inner maximization problem and trains the neural network on generated adversarial examples, with efficiency advanced in some recent works (Shafahi et al., 2019; Wong et al., 2020). However, robustness improvements from adversarial training do not have provable guarantees.

Some other recent works seek to train networks that have provable robustness, namely certified defense methods. Such methods solve the inner maximization by computing certified upper bounds that provably hold true for any perturbation within the threat model, including abstract interpretation (Singh et al., 2018), interval bound propagation (Gowal et al., 2018; Mirman et al., 2018), randomized smoothing (Cohen et al., 2019; Salman et al., 2019; Zhai et al., 2020), and linear-relaxation-based methods (Wong & Kolter, 2018; Mirman et al., 2018; Wong et al., 2018; Zhang et al., 2020; Xu et al., 2020). However, nearly all existing certified defense methods treat all input features equally in the threat model, such as the ℓp-ball threat model ∆ = {δ : ‖δ‖p ≤ ε}, especially the ℓ∞ threat model commonly used in many previous works. But different input pixels are not uniformly important to the prediction, and thus we propose to learn contextual perturbation budgets for different pixels." }, { "heading": "2.2 ROBUSTNESS CERTIFICATION WITH NON-UNIFORM BUDGETS", "text": "Robustness certification with non-uniform budgets has recently been studied by Liu et al.
(2019). Their work aims to maximize the robustness volume while ensuring the network prediction is certifiably correct, and this optimization problem is solved using an augmented Lagrangian method. Notably, their method has several limitations: 1) The perturbation budgets are obtained by solving a constrained optimization problem for each input example, which is very time-consuming. This is not only too inefficient for training, as it would bring a large overhead to each training step, but also incapable of learning perturbation budgets jointly with the classifier. In contrast, we follow a significantly different scheme: we aim to maximize the accuracy under given target robustness volumes, and our perturbation budgets are obtained with a lightweight neural-network-based generator that can be trained jointly with the classifier in an end-to-end way. 2) Their work only focuses on certifying trained models, due to the inherent limitation of the method, while we address the problem of training robust neural networks and learning contextual perturbation budgets simultaneously. Consequently, we are able to empirically demonstrate that we can effectively train robust models with much larger robustness volumes and achieve lower errors under given target robustness volumes. 3) We further conduct experiments with synthetic tasks on MNIST and CIFAR-10 in Sec. 4.2 and Sec. 4.3 respectively to demonstrate that the learned perturbation budgets are semantically meaningful and can capture contextual information." }, { "heading": "3 PROPOSED METHOD", "text": "" }, { "heading": "3.1 PROBLEM SETTING", "text": "Variable threat model Unlike many previous works that use an ℓ∞ threat model with a uniform ε on all input dimensions, we allow different pixels to have different perturbation budgets, subject to the constraints defined below. For an n-dimensional input x, when the perturbation budget of pixel xi is εi, the threat model is

∆(ε1, ε2, · · · , εn) = {δ : |δi| ≤ εi, 1 ≤ i ≤ n}.

The ℓ∞ threat model is thereby a special case with εi = ε0 (∀1 ≤ i ≤ n). We define the robustness volume of a threat model ∆ as the product of all εi (1 ≤ i ≤ n):

V(∆) = ∏_{i=1}^{n} εi, i.e., log V(∆) = ∑_{i=1}^{n} log εi. (2)

In principle, we have a target robustness volume, V0 = ε0ⁿ, and fw is considered to be provably robust under this target robustness volume on instance (x, y) if and only if:

∃∆ ∈ D, ∀k ≠ y: min_{δ∈∆} ([fw(x + δ)]y − [fw(x + δ)]k) > 0. (3)

This means that the predicted score of the ground-truth class y is certifiably larger than that of any other class k ≠ y under some threat model ∆. Instead of using a fixed ∆, in our framework ∆ can be taken from a threat model space D, which consists of threat models ∆(ε1, ε2, · · · , εn) satisfying the following two constraints:

Volume constraint: ∑_i log εi = n log ε0, (4)

Range constraint: εl ≤ εi ≤ εu, where εl = α̲ε0, εu = min(ᾱε0, 1). (5)

The volume constraint states that the robustness volume of ∆ is equal to the target robustness volume ε0ⁿ (written in the log domain above). For the range constraint, the relative factors α̲ (lower) and ᾱ (upper) control the perturbation budget range of each pixel, namely [εl, εu]. This range constraint can be set to guarantee a minimum robustness on every pixel and also to prevent the model from becoming overly invariant on any pixel.
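As an illustration, the following minimal sketch checks both constraints for a given budget vector. The function and variable names (satisfies_constraints, alpha_lo, alpha_hi) are our own, not taken from the paper's released code:

```python
import math
import torch

def satisfies_constraints(eps, eps0, alpha_lo, alpha_hi, tol=1e-4):
    # eps: budget vector (eps_1, ..., eps_n) of shape (n,); eps0: target per-pixel budget;
    # alpha_lo, alpha_hi: the relative range factors of Eq. (5).
    n = eps.numel()
    eps_l = alpha_lo * eps0                    # lower limit of each budget
    eps_u = min(alpha_hi * eps0, 1.0)          # upper limit (inputs live in [0, 1])
    in_range = bool(((eps >= eps_l) & (eps <= eps_u)).all())
    # Volume constraint of Eq. (4), checked in the log domain for numerical stability.
    volume_ok = abs(eps.log().sum().item() - n * math.log(eps0)) < tol
    return in_range and volume_ok
```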
Perturbation budget generation A classifier can be accompanied by a perturbation budget generator ε(x), which tries to find perturbation budgets ε1, ε2, · · · , εn and the corresponding threat model ∆(x) that minimize the verified loss of the classifier while satisfying constraints (4) and (5), so that (3) is more likely to hold true with the ∆ generated by ε(x). We state the optimization problem in the next paragraph.

Robust classification Accompanied by a perturbation budget generator, we aim to learn a robust classifier fw with the following min-max optimization problem:

min_w E_{(x,y)∼D} max_{δ∈∆(x)} L(fw(x + δ), y). (6)

Note that a key difference between this problem and the traditional one in (1) is that the threat model ∆(x) is now variable and dependent on the input x. This ∆(x) is generated by ε(x), under the given volume and range constraints. We evaluate the robustness of a classifier fw by computing an average verified error over all the test instances, where the verified correctness on each instance is evaluated as in (3), with ∆ taken as the generated ∆(x)." }, { "heading": "3.2 ALGORITHM FRAMEWORK", "text": "Figure 1 illustrates our algorithm framework. Given an input x, the perturbation budget generator outputs perturbation budgets ε(x) based on an auxiliary network gθ(x) parameterized by θ. Then a robustness verifier takes the unperturbed input x and the perturbation budgets ε(x) as input and computes a robust loss function Lrob(x, ε(x)). During training, this loss function is back-propagated to compute its gradients w.r.t. the parameters of the classifier fw and also the perturbation budgets ε(x), and the gradients w.r.t. ε(x) are further propagated to the parameters of gθ(x). fw and gθ are jointly trained to minimize the robust loss. During testing, the verifier also outputs a verified error for evaluation by computing the lower bound of the margin, m(x, ε(x)) (see Sec. 3.4)." }, { "heading": "3.3 PERTURBATION BUDGET GENERATOR", "text": "For the perturbation budget generator ε(x), we generate perturbation budgets in two steps. In the first step, we generate initial perturbation budgets that are only required to meet the range constraint. In the second step, we refine the perturbation budgets to meet the volume constraint. These two steps are detailed below.

Initial perturbation budgets We first use a neural network gθ(x) ∈ [0, 1]ⁿ to generate an initial budget distribution. Specifically, we assign [ε̃(x)]i = exp(log εl + [gθ(x)]i (log εu − log εl)) as the initial budget of the i-th pixel, and thereby [ε̃(x)]i ∈ [εl, εu]. In our work, we employ a two-layer convolutional model with a sigmoid activation in the last layer as gθ(x).

Refining the perturbation budgets We then refine the perturbation budgets to make them meet the volume constraint. We compute the log volume of these budgets, log V = n log εl + (log εu − log εl) ∑_i [gθ(x)]i, and compare it with the target log robustness volume log V0 = n log ε0. If log V < log V0, we need to allocate the missing amount (log V0 − log V) across the log perturbation budgets of different pixels, while still ensuring that each budget is no larger than εu.
This allocation is done proportionally to the distance between the current log perturbation budget log[ε̃(x)]i and the upper limit log εu, for each pixel i, and the refined budget [ε(x)]i is obtained with:

log[ε(x)]i = log[ε̃(x)]i + (n log ε0 − log V) · (log εu − log[ε̃(x)]i) / (n log εu − log V), for log V < log V0 = n log ε0.

If log V > log V0, we remove the redundant perturbation budget. Similar to the previous case, this refinement is done proportionally to the distance between log[ε̃(x)]i and the lower limit log εl:

log[ε(x)]i = log[ε̃(x)]i − (log V − n log ε0) · (log[ε̃(x)]i − log εl) / (log V − n log εl), for log V > log V0 = n log ε0.

A minimal code sketch of this two-step generation is given below.
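The following PyTorch sketch implements the two steps above, assuming the output of gθ(x) has been flattened into a vector; it is our illustration of the described procedure, not the authors' released implementation. All operations are differentiable, which is what allows the generator to be trained jointly with the classifier:

```python
import math
import torch

def generate_budgets(g_out, eps0, alpha_lo, alpha_hi):
    # g_out: output of the auxiliary network g_theta(x), flattened to shape (n,),
    # with values in [0, 1] (sigmoid output). Returns budgets eps(x) of shape (n,).
    n = g_out.numel()
    log_l = math.log(alpha_lo * eps0)
    log_u = math.log(min(alpha_hi * eps0, 1.0))
    # Step 1: initial budgets, interpolated in the log domain (range constraint holds).
    log_eps = log_l + g_out * (log_u - log_l)
    # Step 2: refine toward the target log-volume n * log(eps0).
    log_V = log_eps.sum()
    log_V0 = n * math.log(eps0)
    if log_V < log_V0:    # add the missing volume, proportional to headroom below log_u
        log_eps = log_eps + (log_V0 - log_V) * (log_u - log_eps) / (n * log_u - log_V)
    elif log_V > log_V0:  # remove the excess volume, proportional to slack above log_l
        log_eps = log_eps - (log_V - log_V0) * (log_eps - log_l) / (log_V - n * log_l)
    return log_eps.exp()  # differentiable w.r.t. g_out
```

Summing the refined log-budgets confirms that the result matches n log ε0 exactly, while each budget stays within [εl, εu].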
" }, { "heading": "3.4 CERTIFIED TRAINING WITH NON-UNIFORM PERTURBATION BUDGETS", "text": "For the robustness verifier, we adopt auto LiRPA (Xu et al., 2020). It is based on linear relaxations and generalized from CROWN-IBP (Zhang et al., 2020), and it also provides a loss fusion technique for more efficient robust training. In CROWN-IBP, and also in the default setting of auto LiRPA, there are generally two passes for each input batch, excluding the gradient back-propagation pass. In the first pass, Interval Bound Propagation (IBP) (Mirman et al., 2018; Gowal et al., 2018) is used to compute the output bounds of intermediate layers, which are required by the linear relaxations. Given an input x and a uniform perturbation budget ε, the input of the IBP pass is the pair of lower and upper bounds of the input image, [Clip(x − ε), Clip(x + ε)], where Clip(·) stands for clipping the image bounds into the domain [0, 1]ⁿ. In the second pass, the CROWN algorithm (Zhang et al., 2018) is applied to the output layer to compute its linear bounds w.r.t. the perturbed input (x + δ), utilizing the intermediate bounds from IBP for linearly relaxing activation functions. Let h(x + δ) represent the output layer; then its linear bounds w.r.t. (x + δ) and its concrete (final) bounds without (x + δ) are (Zhang et al., 2018):

A̲x + b̲ − ε‖A̲‖1 ≤ A̲(x + δ) + b̲ ≤ h(x + δ) ≤ Ā(x + δ) + b̄ ≤ Āx + b̄ + ε‖Ā‖1, (7)

where A̲, Ā, b̲, b̄ are the parameters of the linear bounds obtained by CROWN, and ‖·‖1 is applied row-wise. We refer readers to Xu et al. (2020) for the detailed algorithm.

When non-uniform perturbation budgets are used, ε(x) from the perturbation generator is used rather than a scalar ε. This causes differences mainly in the input lower and upper bounds for the IBP pass and in the concretization of the linear bounds in (7).

In inference, we compute the lower bound of the minimum margin between the ground-truth label y and any other label i ≠ y:

m(x, ε(x)) = min_{δ∈∆(x)} m(x + δ), where m(x + δ) = min_{i≠y} ([fw(x + δ)]y − [fw(x + δ)]i), (8)

by taking the margin functions as the output layer in robustness verification. m(x, ε(x)) is the lower bound of the margin for all possible perturbations δ ∈ ∆(x). The classifier is verifiably robust on this instance and the threat model ∆(x) from the generated perturbation budgets if m(x, ε(x)) > 0, as defined in (3).

During training, we use loss fusion (Xu et al., 2020) to compute and optimize the robust loss Lrob(x, ε(x)):

Lrob(x, ε(x)) ≥ max_{δ∈∆(x)} L(fw(x + δ), y), (9)

where the robust loss is an upper bound of the right-hand side, obtained from the verifier. The auto LiRPA verifier is differentiable, and thus we are able to back-propagate the gradients of the loss function w.r.t. the parameters of both the classifier and the perturbation budget generator for training. In contrast, this joint training appears to be more difficult for adversarial training methods such as the Projected Gradient Descent (PGD) attack (Madry et al., 2018). The PGD loss is technically differentiable with respect to the perturbation budgets. However, taking N-step PGD training as an example, each step adds a perturbation of magnitude roughly ε/N, so ε is used by every step of PGD. Consequently, to obtain the correct gradient w.r.t. ε, we would need to back-propagate through all N PGD update steps, which makes the gradient complicated and hard to compute.
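To illustrate how per-pixel budgets enter the verification, the following sketch concretizes CROWN-style linear bounds as in (7) with a budget vector in place of a scalar ε, and applies the decision rule of (3)/(8). Names and shapes are our illustrative assumptions:

```python
import torch

def concretize(A_lo, b_lo, A_hi, b_hi, x, eps):
    # Concretize linear bounds with per-pixel budgets (cf. Eq. (7)).
    # A_lo, A_hi: (K, n) coefficient matrices; b_lo, b_hi: (K,); x, eps: (n,).
    # With a scalar eps this reduces to the familiar eps * ||A||_1 term, row-wise.
    lb = A_lo @ x + b_lo - (A_lo.abs() * eps).sum(dim=1)
    ub = A_hi @ x + b_hi + (A_hi.abs() * eps).sum(dim=1)
    return lb, ub

def is_certified(margin_lb):
    # Per Eqs. (3) and (8): certified iff every margin lower bound is positive.
    return bool((margin_lb > 0).all())
```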
The experiments demonstrates the effectiveness of our method for training more robust models under non-uniform perturbation budgets.\nWe note that our models have higher verified errors when evaluated on uniform budgets, but evaluation on uniform budgets is not the major goal of this paper, nor it is the ultimate goal of robust machine learning, as we have argued that non-uniform budgets are more practical. Nevertheless, in case that one expects to maintain a performance on uniform budgets while training with non-uniform learned budgets, this can be achieved by setting α which controls the minimum budget of each pixel, as we demonstrate in Appendix C.\nWe also visualize our learned perturbation budget distributions in Figure 2 and Figure 3 for MNIST and CIFAR-10 respectively. Our perturbation budget generator is able to identify important and sensitive features in the input image and assign relatively smaller budgets to the corresponding pixels, while insensitive pixels are assigned with relatively larger perturbation budgets. Note that the perturbation budgets also highlight sensitive pixels that are missing from input images. Therefore, learned perturbations contain important features from both the ground truth class and the classes that are close to it. For example, as visualized in Figure 2, perturbation budgets of digit 4 combine features from both 4 and 9, and digit 5 looks like 5 overlapped with 8." }, { "heading": "4.2 EXPERIMENTS ON WATERMARKED MNIST", "text": "To demonstrate another benefit of using different perturbation budgets for different pixels, we artificially constructed a dataset from MNIST, namely the Watermarked MNIST. For each image in the original MNIST dataset, as shown in the first row of Figure 4, we add a small watermark to the left or right margin of the image, which decides the gold label jointly with the main digit in the image. The watermark is darker than the main digit, and thus we expect to see the perturbation generator assign smaller perturbation budgets to the watermark, to retain the information carried by the watermark, while training uniform budgets is likely to fail for breaking the watermark. Specifically, the watermark is a digit of zero or one, chosen randomly, from an image with the corresponding label in the original dataset. We scale down the image to 12× 12, and we crop out 2 pixels on each border and retain the inner 8× 8 part. The opacity of the watermark is reduced to 30%. We randomly put the watermark either on the left or right 28 × 8 border, and the vertical position span is randomly chosen from [0, 7], [1, 8], · · · , [20, 27]. The gold label of the new image is the main digit itself, if the watermark is 0, otherwise the gold label is the main digit plus 10. In this way, we construct a 20-class digit classification problem.\nWe then perform certified defense on this synthetic dataset, with 0 = 0.4. For our learned perturbation budget generator, we set α = 0.125, α = 2.5. We compare the results with the method using uniform perturbation budgets, as shown in Table 3. When uniform perturbation budgets are used, the clean error and the verified error are very large (higher than 50%). This is because the values of the pixels with the watermark is around 30% of those in an original digit, and thus when a perturbation budget of ≥ 0.3 is uniformly added to all the pixels, it is enough to perturb the image and make the watermark disappear, and thus the model cannot achieve low errors. 
In contrast, as shown in Figure 4, by learning contextual perturbation budgets, the budget generator tends to assign relatively small budgets to pixels with the watermark. Pixels with the main digit tend to have budgets that are somewhat larger, and there are empty pixels which receive even larger budgets. With such perturbation budgets, we achieve much lower verified error (19.04 vs. 59.29 at ε0 = 0.4) and clean error (3.48 vs. 51.75 at ε0 = 0.4) under the same robustness volume, as shown in Table 3. This experiment demonstrates the importance of learning different perturbation budgets for different pixels and the effectiveness of our proposed method." }, { "heading": "4.3 EXPERIMENTS ON DOUBLED CIFAR-10", "text": "To further demonstrate the ability of the perturbation generator to identify important input features, we create a harder synthetic dataset, namely Doubled CIFAR-10, based on CIFAR-10. An original input image in CIFAR-10 has a size of 32 × 32. In this setting, each new input image consists of two original images, on the left and the right respectively. Two images combined in this way yield a new one with a size of 32 × 64. We also add a 4 × 64 border to the top of the new image and make the border either red or green. If the border is red, the gold label is taken as the gold label of the left original image; otherwise it is the gold label of the right original image. The border serves as a hint for the model to tell which original 32 × 32 image it should look at for prediction. Thereby the size of each new image is 36 × 64. With such a dataset, we can further study whether the perturbation budget generator can identify important features in the input image, since now one of the two component images in each new image does not affect the gold label, and thus the model is expected to assign large perturbation budgets to this component image.

We use ε0 = 16/255, α̲ = 0.0625, ᾱ = 1.5. Results are shown in Table 4. Again, the errors of the model with learned perturbation budgets are much lower than those of the model with uniform budgets. We also visualize the learned perturbation budgets in Figure 5. We observe that when the border is red, large perturbation budgets are assigned to the right side, which is consistent with the fact that the gold label is determined only by the left side in this case. This observation also holds true for the case when the border is green. Moreover, we observe that the perturbation budget distributions of the other side are semantically meaningful. This experiment further demonstrates that our proposed method is able to identify important features in the input and learn semantically meaningful distributions of perturbation budgets." }, { "heading": "5 CONCLUSIONS", "text": "In this paper, we propose a novel approach to learn contextual perturbation budgets and successfully incorporate it into linear-relaxation-based certified training methods. We show its superiority over methods with uniform perturbation budgets via experiments on MNIST and CIFAR-10. We also demonstrate that the perturbation budget generator can capture semantic information and align well with features in the input image, using the synthetically constructed Watermarked MNIST and Doubled CIFAR-10 datasets. Our proposed method can be used in settings where the perturbation budget of a dataset cannot be determined as easily as in toy datasets, and it can handle context-dependent perturbations."
}, { "heading": "C CONTROLLING PERFORMANCE ON UNIFORM PERTURBATION BUDGETS", "text": "In our experiment results shown in Table 1 and Table 2, we find that models trained with our learned budgets tend to have higher verified errors when the models are evaluated with uniform budgets. Although we have argued that evaluation on uniform budgets is not our major goal and it is less practical than learned non-uniform budgets, in this section, we demonstrate that it is still possible to maintain a performance on uniform budgets while training with learned budgets by controlling α. We train a model on MNIST with 0 = 0.5 and α = 0.6, which means the minimum budget of each pixel is 0.3. As a result, verified error on non-uniform budgets is 10.46, while the verified error on uniform = 0.3 budget is 9.96, which is close to the 9.40 verified error reported in CROWN-IBP (Zhang et al., 2020) on the same “DM-small” model trained with uniform budgets. And if 0 = 0.6, α = 0.5, the verified error on uniform = 0.3 is 11.22 which is also not far away from 9.40, while now our model has a robustness on a volume as large as 0.6n." } ]
2020
LEARNING CONTEXTUAL PERTURBATION BUDGETS
SP:42b2a4961b167d02370a0924d0666be1bf962110
[ "The paper proposes a new regularization for the dictionary in the learned convolutional sparse coding model of Sreter & Giryes '18. The main contribution is that the dictionary is regularized to be composed of 1) a fixed low-pass filter and 2) a set of learned filters to occupy the complementary high-frequency space. A second contribution is that the thresholding in the network is adjustable according to the estimated noise level in the image. ", "The paper proposes a denoising method with a neural network inspired from convolutional dictionary learning. In the proposed method, one atom of the dictionary is constrained to be a low frequency filters and all other atoms are to be high frequency atoms. The authors also propose to make the threshold depends on the noise level to better adapt to different noise level and to use strided convolution to reduce the computational cost of the method. The method is then evaluated on images from BSD68." ]
Sparse representation via a learned dictionary is a powerful prior for natural images. In recent years, unrolled sparse coding algorithms (e.g. LISTA) have proven to be useful for constructing interpretable deep-learning networks that perform on par with state-of-the-art models on image-restoration tasks. In this study we are concerned with extending the work of such convolutional dictionary learning (CDL) models. We propose to construct strided convolutional dictionaries with a single analytic low-pass filter and a set of learned filters regularized to occupy the complementary frequency space. By doing so, we address the necessary modeling assumptions of natural images with respect to convolutional sparse coding and reduce the mutual coherence and redundancy of the learned filters. We show improved denoising performance at reduced computational complexity when compared to other CDL methods, and competitive results when compared to popular deep-learning models. We further propose to parameterize the thresholds in the soft-thresholding operator of LISTA to be proportional to the estimated noise-variance from an input image. We demonstrate that this parameterization enhances robustness to noise-level mismatch between training and inference.
[]
[ { "authors": [ "Pierre Ablin", "Thomas Moreau", "Mathurin Massias", "Alexandre Gramfort" ], "title": "Learning step sizes for unfolded sparse coding", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Michal Aharon", "Michael Elad", "Alfred Bruckstein" ], "title": "K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation", "venue": "IEEE Transactions on Signal Processing,", "year": 2006 }, { "authors": [ "Dmitry Batenkov", "Yaniv Romano", "Michael Elad" ], "title": "On the global-local dichotomy in sparsity modeling", "venue": "In Compressed Sensing and its Applications,", "year": 2017 }, { "authors": [ "Ilker Bayram", "Ivan W. Selesnick" ], "title": "A subband adaptive iterative shrinkage/thresholding algorithm", "venue": "IEEE Transactions on Signal Processing,", "year": 2010 }, { "authors": [ "Amir Beck", "Marc Teboulle" ], "title": "A fast iterative shrinkage-thresholding algorithm for linear inverse problems", "venue": "SIAM Journal on Imaging Sciences,", "year": 2009 }, { "authors": [ "S Grace Chang", "Bin Yu", "Martin Vetterli" ], "title": "Adaptive wavelet thresholding for image denoising and compression", "venue": "IEEE Transactions on Image Processing,", "year": 2000 }, { "authors": [ "Xiaohan Chen", "Jialin Liu", "Zhangyang Wang", "Wotao Yin" ], "title": "Theoretical linear convergence of unfolded ISTA and its practical weights and thresholds", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Kostadin Dabov", "Alessandro Foi", "Vladimir Katkovnik", "Karen Egiazarian" ], "title": "Image denoising by sparse 3-D transform-domain collaborative filtering", "venue": "IEEE Transactions on Image Processing,", "year": 2007 }, { "authors": [ "D. Donoho", "I. Johnstone" ], "title": "Ideal spatial adaptation by wavelet shrinkage", "venue": "Biometrika, 81:425–455,", "year": 1994 }, { "authors": [ "David Donoho", "Iain M. Johnstone" ], "title": "Adapting to unknown smoothness via wavelet shrinkage", "venue": "Journal of the American Statistical Association,", "year": 1995 }, { "authors": [ "Michael Elad", "J.L. Starck", "P. Querre", "D.L. Donoho" ], "title": "Simultaneous cartoon and texture image inpainting using morphological component analysis (MCA)", "venue": "Applied and Computational Harmonic Analysis,", "year": 2005 }, { "authors": [ "Karol Gregor", "Yann LeCun" ], "title": "Learning fast approximations of sparse coding", "venue": "In Proceedings of the 27th International Conference on Machine Learning,", "year": 2010 }, { "authors": [ "Roger Grosse", "Rajat Raina", "Helen Kwong", "Andrew Y. Ng" ], "title": "Shift-Invariant Sparse Coding for Audio", "venue": "Classification. Cortex,", "year": 2007 }, { "authors": [ "Kenzo Isogawa", "Takashi Ida", "Taichiro Shiodera", "Tomoyuki Takeguchi" ], "title": "Deep shrinkage convolutional neural network for adaptive noise reduction", "venue": "IEEE Signal Processing Letters,", "year": 2017 }, { "authors": [ "Dohyun Kim", "Daeyoung Park" ], "title": "Element-wise adaptive thresholds for learned iterative shrinkage thresholding algorithms", "venue": "IEEE Access,", "year": 2020 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Clement Lalanne", "Maxence Rateaux", "Laurent Oudre", "Matthieu P. 
Robert", "Thomas Moreau" ], "title": "Extraction of Nystagmus Patterns from Eye-Tracker Data with Convolutional Sparse Coding", "venue": "In 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC),", "year": 2020 }, { "authors": [ "Bruno Lecouat", "Jean Ponce", "Julien Mairal" ], "title": "Fully trainable and interpretable non-local sparse models for image restoration, 2020", "venue": null, "year": 2020 }, { "authors": [ "Julien Mairal", "Francis Bach", "Jean Ponce", "Guillermo Sapiro" ], "title": "Online dictionary learning for sparse coding", "venue": "In Proceedings of the 26th International Conference on Machine Learning,", "year": 2009 }, { "authors": [ "Julien Mairal", "Francis Bach", "Jean Ponce", "Guillermo Sapiro", "Andrew Zisserman" ], "title": "Non-local sparse models for image restoration", "venue": "In Proceedings of 12th IEEE International Conference on Computer Vision, pp", "year": 2009 }, { "authors": [ "Julien Mairal", "Francis Bach", "Jean Ponce" ], "title": "Sparse Modeling for Image and Vision Processing", "venue": "Foundations and Trends® in Computer Graphics and Vision,", "year": 2014 }, { "authors": [ "Stephane Mallat" ], "title": "A Wavelet Tour of Signal Processing: The Sparse Way", "venue": "Elsevier Science,", "year": 2008 }, { "authors": [ "David Martin", "Charless Fowlkes", "Doron Tal", "Jitendra Malik" ], "title": "A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics", "venue": "In Proceedings of Eighth IEEE International Conference on Computer Vision,", "year": 2001 }, { "authors": [ "Sreyas Mohan", "Zahra Kadkhodaie", "Eero P. Simoncelli", "Carlos Fernandez-Granda" ], "title": "Robust and interpretable blind image denoising via bias-free convolutional neural networks", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Thomas Moreau", "Alexandre Gramfort" ], "title": "Distributed convolutional dictionary learning (dicodile): Pattern discovery in large images and signals", "venue": "arXiv preprint arXiv:1901.09235,", "year": 2019 }, { "authors": [ "Balas Kausik Natarajan" ], "title": "Sparse approximate solutions to linear systems", "venue": "SIAM Journal on Computing,", "year": 1995 }, { "authors": [ "Bruno A Olshausen", "David J Field" ], "title": "Emergence of simple-cell receptive field properties by learning a sparse code for natural images", "venue": null, "year": 1996 }, { "authors": [ "Vardan Papyan", "Jeremias Sulam", "Michael Elad" ], "title": "Working locally thinking globally: Theoretical guarantees for convolutional sparse coding", "venue": "IEEE Transactions on Signal Processing,", "year": 2017 }, { "authors": [ "Tobias Plotz", "Stefan Roth" ], "title": "Benchmarking denoising algorithms with real photographs", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Praveen Kumar Pokala", "Prakash Kumar Uttam", "Chandra Sekhar Seelamantula" ], "title": "ConFirmNet: Convolutional FirmNet and application to image denoising and inpainting", "venue": "In Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2020 }, { "authors": [ "Matan Protter", "Michael Elad" ], "title": "Image sequence denoising via sparse and redundant representations", "venue": "IEEE Transactions on Image Processing,", "year": 2008 }, { "authors": [ "Ivan Selesnick" ], 
"title": "Sparse regularization via convex analysis", "venue": "IEEE Transactions on Signal Processing,", "year": 2017 }, { "authors": [ "Dror Simon", "Michael Elad" ], "title": "Rethinking the CSC model for natural images", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Hillel Sreter", "Raja Giryes" ], "title": "Learned convolutional sparse coding", "venue": "In Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2018 }, { "authors": [ "Jeremias Sulam", "Michael Elad" ], "title": "Expected patch log likelihood with a sparse prior", "venue": "In International Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition,", "year": 2015 }, { "authors": [ "Ivana Toić", "Pascal Frossard" ], "title": "Dictionary learning: what is the right representation for my signal", "venue": "IEEE Signal Processing Magazine,", "year": 2011 }, { "authors": [ "Brendt Wohlberg" ], "title": "Sporco: A python package for standard and convolutional sparse representations", "venue": "In Proceedings of the 15th Python in Science Conference, Austin, TX, USA,", "year": 2017 }, { "authors": [ "Kailun Wu", "Yiwen Guo", "Ziang Li", "Changshui Zhang" ], "title": "Sparse coding with gated learned ISTA", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Kai Zhang", "Wangmeng Zuo", "Yunjin Chen", "Deyu Meng", "Lei Zhang" ], "title": "Beyond a gaussian denoiser: Residual learning of deep CNN for image denoising", "venue": "IEEE Transactions on Image Processing,", "year": 2017 }, { "authors": [ "Kai Zhang", "Wangmeng Zuo", "Lei Zhang" ], "title": "FFDNet: Toward a fast and flexible solution for CNNbased image denoising", "venue": "IEEE Transactions on Image Processing,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Sparsity in a transform domain is an important and widely applicable property of natural images. This property can be exploited in a variety of tasks such as signal representation, feature extraction, and image processing. For instance, consider restoring an image from a degraded version (noisy, blurry, or missing pixels). These inverse problems are generally ill-posed and require utilizing adequate prior knowledge, for which sparsity has proven extremely effective (Mairal et al., 2014).\nIn recent years, such problems have been tackled with deep neural network architectures that achieve superior performance but are not well-understood in terms of their building blocks. In this study, we are interested in utilizing the knowledge from classical signal processing and spare coding literature to introduce a learned framework which is interpretable and that can perform on-par with state-ofthe-art deep-learning methods. We choose to explore this method under the task of natural image denoising, in line with much of the recent literature (Sreter & Giryes, 2018; Simon & Elad, 2019; Lecouat et al., 2020). As a benefit of this interpretability, we are able to extend the framework for a blind-denoising setting using ideas from signal processing.\nIn sparse representation we seek to approximate a signal as a linear combination of a few vectors from a set of vectors (usually called dictionary atoms). Olshausen & Field (1996), following a neuroscientific perspective, proposed to adapt the dictionary to a set of training data. Later, dictionary learning combined with sparse coding was investigated in numerous applications (Mairal et al., 2009a; Protter & Elad, 2008). More specifically, for a set ofN image patches (reshaped into column vectors) X = [x1, · · · ,xN ] ∈ Rm×N , we seek to find the dictionary D∗ ∈ Rm×k and the sparse representation Z∗ = [z∗1 , · · · , z∗N ] ∈ Rk×N such that\nD∗, Z∗ = arg min D,Z N∑ i=1 ‖zi‖0 subject to:Dzi = xi, ∀i = 1, · · · , N. (1)\nThis formulation is not tractable for large signals since minimizing the `0-pseudo-norm involves a combinatorial optimization (Natarajan, 1995). To address this complication, a popular technique is to relax the problem by using the `1-norm as a surrogate (Sreter & Giryes, 2018). When dealing with inverse problems such as denoising, learning the dictionary from the degraded signal has shown effective (Toić & Frossard, 2011). Let yi = xi + ni ∈ Rm represent the noisy signal where ni follows an additive white Gaussian distribution, N ( 0, σ2nI ) . Then, the relaxed formulation can be written as\nmin D,Z N∑ i=1 ‖zi‖1 s.t. N∑ i=1 1 2 ‖Dzi − yi‖22 ≤ or minD,Z N∑ i=1 1 2 ‖Dzi − yi‖22 + λ‖zi‖1 (2)\nwhere λ is a regularization parameter and is nontrivialy related to the representation error . We will refer to this as the basis-pursuit denoising (BPDN) formulation of dictionary learning. Many iterative algorithms have been proposed in the literature to solve this problem (Mairal et al., 2014). A majority of these algorithms split the problem into a step updating the dictionary followed by a step solving for the sparse codes.\nNote that learning a dictionary over independent image patches neglects the dependencies between these patches. As a result, the models involving patch processing are inherently sub-optimal (Batenkov et al., 2017; Simon & Elad, 2019). 
Although enforcing local priors on merged images (Sulam & Elad, 2015) and utilizing self-similarity between patches (Mairal et al., 2009b) have been proposed as ideas to mitigate this flaw, ideally a global shift-invariant model is more appropriate. By constraining the dictionary to have a Toeplitz structure, the Convolutional Sparse Coding (CSC) model has been introduced, which replaces the local patch processing with a global convolution (Grosse et al., 2007; Papyan et al., 2017).

Algorithms for solving the CSC model are also discussed in (Moreau & Gramfort, 2019; Wohlberg, 2017). In this study, we are interested in interpretable CSC-based deep-learning models. A metric known as the mutual-coherence is well known to be related to the representation capability of the dictionary and is of special concern when using the CSC model with natural images (Simon & Elad, 2019). We take an alternative route to Simon & Elad (2019) in addressing the mutual-coherence of CSC-based deep-learning models, which is both less computationally expensive and improves the denoising performance. We continue the discussion about CSC-based deep-learning models in Sec. 1.1.

Another important aspect of the sparse representation is the sparse coding algorithm. For a given signal y ∈ R^m and dictionary D, the iterative soft-thresholding algorithm (ISTA) (Beck & Teboulle, 2009) finds the solution to the BPDN functional, z∗ = arg min_z (1/2)‖Dz − y‖22 + λ‖z‖1, by repeating the following iteration until a convergence criterion is reached:

z(k+1) = S_{λη(k)} ( z(k) − η(k)Dᵀ( Dz(k) − y ) ), where Sθ(x) = sgn(x)(|x| − θ)₊, θ ≥ 0. (3)

Here, η(k) is the step-size of the descent algorithm at iteration k. Note that performing sparse coding with an iterative method like ISTA for all patches is computationally expensive and slow. To resolve this issue, Gregor & LeCun (2010) proposed to approximate the sparse coding via a learned differentiable encoder, dubbed LISTA. Further extensions of LISTA, both in terms of practice and theory, have been studied in the literature (Wu et al., 2019; Chen et al., 2018). More recently, combining LISTA with dictionary learning has been a research highlight (Sreter & Giryes, 2018; Simon & Elad, 2019; Lecouat et al., 2020). We refer to this type of model, which leverages LISTA for convolutional dictionary learning, as a CDL model.
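For reference, here is a minimal PyTorch sketch of the ISTA iteration (3) with a constant step size; this is our illustration of the textbook algorithm, not code from the paper:

```python
import torch

def soft_threshold(x, theta):
    # S_theta(x) = sgn(x) * (|x| - theta)_+
    return torch.sign(x) * torch.clamp(x.abs() - theta, min=0.0)

def ista(D, y, lam, n_iters=100):
    # Solves min_z 0.5 * ||D z - y||_2^2 + lam * ||z||_1 via Eq. (3), with a
    # constant step size eta = 1/L, L = ||D||_2^2 (Lipschitz constant of the gradient).
    L = torch.linalg.matrix_norm(D, 2) ** 2
    z = torch.zeros(D.shape[1], dtype=D.dtype)
    for _ in range(n_iters):
        z = soft_threshold(z - (1.0 / L) * D.T @ (D @ z - y), lam / L)
    return z
```

LISTA replaces this fixed iteration with a small number of learned, untied steps, trading generality for speed on a given signal class.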
" }, { "heading": "1.1 RELATED WORKS", "text": "In this study, we are interested in the CDL model that concatenates a LISTA network with a linear convolutional synthesis dictionary. Let D be a convolutional dictionary with M filters (and their integer shifts). We denote the filters in D by dj, where j ∈ {1, · · · , M}. Let Zi denote the sparse code for the data sample yi = xi + ni, where i ∈ {1, 2, · · · , N} and n ∼ N(0, σ2nI). The subband signal in Zi corresponding to dj is denoted zji. Then the convolutional dictionary learning problem is written as

minimize_{dj, Zi} ∑_{i=1}^{N} (1/2)‖yi − ∑_{j=1}^{M} dj ∗ zji‖22 + λ∑_{j=1}^{M} ‖zji‖1. (4)

Sreter & Giryes (2018) introduce the approximate convolutional sparse coding (ACSC) framework for “task-driven convolutional sparse coding”, combining a convolutional extension of LISTA with a linear convolutional decoder. The proposed framework offers a strategy for training an approximate convolutional sparse coding network and a corresponding convolutional dictionary in an end-to-end fashion. They demonstrate competitive performance against classical patch-based methods such as K-SVD (Aharon et al., 2006) on image denoising and image inpainting. Our proposed baseline model (CDLNet) differs from the ACSC model by its use of mean-subtraction preprocessing, small-strided convolutions, and a norm-constraint imposed on the synthesis dictionary.

Simon & Elad (2019) extend the framework of Sreter & Giryes (2018) by considering the role of stride in the stable recovery of signals and propose the “CSCNet” framework. They argue that the CSC model for image representation in a sparse domain is limited by the inclusion of “smooth filters”, which are required to represent the piecewise smooth characteristics of natural images. This limitation manifests itself in the maximum cross-correlation between atoms of the dictionary, known as the mutual-coherence. They empirically show that using a relatively large stride, while processing shifted duplicates of the input, improves the denoising performance of the model. Although using a large stride reduces the mutual coherence of the learned filters, all possible shifts of the image need to be processed and averaged, yielding a model very similar to patch-processing. We propose a frequency regularization strategy that mitigates the problem of smoothly-varying filters and does not require shift-averaging.

Note that the parameter λ in equation 4 depends on the desired sparsity, relative to the noise-level, and is directly related to the threshold values in ISTA. Sreter & Giryes (2018) propose to learn different thresholds for each channel, effectively changing the regularizer term in equation 4 to ∑_{j=1}^{M} ‖λjzji‖1. Inspired by the benefit of the minimax-concave (MC) penalty (Selesnick, 2017) over the ℓ1 norm, Pokala et al. (2020) propose “ConFirmNet”, where a firm-thresholding function is used in the network. Kim & Park (2020) propose a signal-adaptive threshold scheme for LISTA, where the threshold is decreased if the previous estimate of an element is large.

Mohan et al. (2020) explore the role of bias-vectors in the convolution operators of popular deep-learning networks. They advocate for eliminating the biases completely to improve generalization in blind denoising, where there is a mismatch between training and inference noise levels. Isogawa et al. (2017) propose altering the biases of deep neural networks by scaling them with the input noise standard-deviation. Their method is ultimately a non-blind denoising scheme, as they use the ground-truth noise statistics during training and inference. In contrast, we propose a blind-denoising scheme that is motivated by the interpretation of the biases in LISTA as thresholds and employs a scaling by the noise variance (in the last layer of LISTA), estimated from the input signal during both training and inference. The performance of different denoising techniques on other noise distributions has also been studied in the literature, which is not the focus of this study (Abdelhamed et al., 2018; Plotz & Roth, 2017)." }, { "heading": "1.2 CONTRIBUTION OF THIS STUDY", "text": "The unrolled convolutional sparse coding and dictionary learning frameworks have led to the field dubbed “interpretable deep-learning”. The networks constructed in such a way have the benefit of interpretability and decreased parameter count while performing quite closely to other state-of-the-art deep-learning models. In this study we further extend such frameworks.
We propose utilizing a strided convolutional dictionary with a fixed low-pass channel and a set of frequency-regularized learned filters (Section 2.2). Our experimental results demonstrate that such frequency regularization and a small stride lead to more interpretable dictionary filters than in the prior work. Consequently, by limiting the number of low-pass atoms in the dictionary and using small-strided convolutions, we address the modeling assumptions associated with the convolutional sparse coding model (Section 2.1.1). Additionally, leveraging the interpretability of our network, we propose to parameterize the soft-thresholding operator in LISTA such that the thresholds are proportional to the estimated input noise-level for a given image (Section 2.3). Experimentally, we show improved denoising performance at reduced computational complexity compared to other frameworks (Section 3.2). Furthermore, our parameterization of the learned thresholds greatly improves robustness to noise-level mismatch between training and inference and increases the generalizability of the network (Section 3.3)." }, { "heading": "2 PROPOSED FRAMEWORK", "text": "" }, { "heading": "2.1 CONVOLUTIONAL DICTIONARY LEARNING NETWORK (CDLNET)", "text": "We seek to solve the natural image denoising problem via the convolutional dictionary learning model on the BPDN functional,

minimize_{dj, Zi} ∑_{i=1}^{N} (1/2)‖yi − ∑_{j=1}^{M} dj ∗ zji‖22 + ∑_{j=1}^{M} ‖λjzji‖1 subject to: ‖dj‖22 ≤ 1, ∀j ∈ {1, · · · , M}. (5)

A norm constraint is imposed on the dictionary atoms to remove the arbitrary scaling of coefficients, as in Mairal et al. (2014). We propose the following learned CDL model, dubbed CDLNet, which involves a LISTA module followed by a learned convolutional synthesis dictionary, D:

x̂ = Dz(K), z(k+1) = S_{θ(k)} ( z(k) − A(k)(B(k)z(k) − y) ), k = 0, . . . , K − 1, z(0) = 0, (6)

where ISTA has been unrolled for K steps. Here, A(k) and B(k) are small-strided convolutional analysis and synthesis operators, respectively. We untie the parameters at each iteration of LISTA following the theoretical analysis of Chen et al. (2018). A threshold vector 0 ≤ θ(k) ∈ R^M is learned, corresponding to the M subbands of the convolutional sparse code at iteration k.

The reconstructed signal is given by x̂. The total set of learnable parameters is Θ = {{A(k), B(k), θ(k)}_{k=0}^{K−1}, {dj}_{j=1}^{M}}. Note that a traditional LISTA network requires supervised training on sparse codes computed from ISTA. In contrast, CDLNet can learn to approximate sparse coding and the dictionary in an unsupervised fashion by minimizing a suitable loss function designed for the image reconstruction task (Sreter & Giryes, 2018) (i.e. unsupervised in the code-domain, but supervised in the signal-domain). In this sense the network mimics the common dictionary learning strategy of alternating between computing sparse codes and updating the dictionary; however, the sparse coding is done via a learned algorithm with fast inference. An alternative unsupervised LISTA training strategy, which minimizes the BPDN functional (equation 2), was presented in Ablin et al. (2019). As in (Sreter & Giryes, 2018; Simon & Elad, 2019), we employ the ℓ2 loss between the restored image and its ground-truth clean image throughout this study.
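The following is a minimal PyTorch sketch of the forward pass in equation 6; the class and attribute names, the initialization, and the 'same'-padding choice are our illustrative assumptions (even-sized grayscale inputs assumed), not the released implementation:

```python
import torch
import torch.nn.functional as F

class CDLNetSketch(torch.nn.Module):
    # K unrolled ISTA steps with untied, strided convolutional analysis/synthesis
    # pairs (A^(k), B^(k)) and a final synthesis dictionary D, as in Eq. (6).
    def __init__(self, M=32, K=20, ksz=7, stride=2):
        super().__init__()
        w = lambda: torch.nn.Parameter(torch.randn(M, 1, ksz, ksz) / ksz)
        self.A = torch.nn.ParameterList([w() for _ in range(K)])
        self.B = torch.nn.ParameterList([w() for _ in range(K)])
        self.D = w()                                              # synthesis dictionary
        self.theta = torch.nn.Parameter(0.1 * torch.ones(K, M))   # per-subband thresholds
        self.stride, self.pad = stride, ksz // 2

    def synth(self, z, w):  # strided synthesis operator (transposed convolution)
        return F.conv_transpose2d(z, w, stride=self.stride,
                                  padding=self.pad, output_padding=1)

    def forward(self, y):   # y: (N, 1, H, W), mean-subtracted noisy image
        z = 0.0
        for k in range(len(self.A)):
            residual = self.synth(z, self.B[k]) - y if torch.is_tensor(z) else -y
            z = z - F.conv2d(residual, self.A[k], stride=self.stride, padding=self.pad)
            # soft-thresholding S_theta(.), one learned threshold per subband
            z = torch.sign(z) * torch.relu(z.abs() - self.theta[k].view(1, -1, 1, 1))
        return self.synth(z, self.D)  # x_hat = D z^(K)
```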
As Simon & Elad (2019) discuss, such low-pass atoms pose a problem for BPDN. A sufficient condition for the faithful recovery of the `0 sparse code from an `1 basis pursuit can be given in terms of the dictionary’s mutual coherence, µ(D). Note that for matrix A with normalized columns ai, we have µ(A) = maxi 6=j |a>i aj |. For the convolutional dictionary, the atoms of D are composed of the shifts of its filters, {dj}Mj=1. This poses a problem in that the inner product between any of such low-pass filters and their own integer-translates will greatly increase the mutual coherence and potentially harm the reconstruction performance of the system.\nSreter & Giryes (2018) do not address this issue in the ACSC framework. Simon & Elad (2019) propose to use large strides on the order of the filter size, along with averaging reconstructions from shifted input signals – effectively returning to a patch-based approach. In CDLNet we use small\nstrided convolutions (stride=2, in both horizontal and vertical directions) without an averaging reconstruction scheme. Furthermore, we use a preset low-pass filter, and parameterize other filters to be in the complimentary frequency space of the low-pass. We empirically show that the combination of the proposed regularization scheme and small stride reduces the mutual coherence of the dictionary, improves denoising performance of the model, and reduces the computational cost." }, { "heading": "2.2 FREQUENCY REGULARIZATION OF A CONVOLUTIONAL DICTIONARY", "text": "In this section we propose a method for regularizing the synthesis dictionary to contain only a single low-pass filter. Note that in the BPDN formulation, the hyperparameter λ determines a trade-off between data-fidelity to the observation, y, and sparsity of the transform domain coefficients, z. Following Sreter & Giryes (2018), we extend this to a vector, λ ∈ RM , to reflect prior knowledge on the expected levels of sparsity in different subbands of the decomposition. The learned thresholds, θ(k) ultimately reflect these weights, representing sparsity priors on each subband. In the case of natural image reconstruction, their piecewise smooth nature necessitates a subband decomposition which contains an approximation signal, for which a sparsity prior is ill-suited.\nTo address these assumptions, we designate the first channel of the sparse code as the approximation signal and fix its corresponding synthesis filter to an analytic low-pass filter. Note that total variation regularization of the low-pass signal has been previously studied (Elad et al., 2005; Lalanne et al., 2020), however, we’re concerned with regularizing the dictionary elements for reasons concerning mutual coherence. Knowing in which subband the approximation signal lives allows us to remove it from the soft-thresholding operation (θ(k)0 = 0), thereby removing any misplaced assumption of sparsity. Further, we wish to ensure no additional low-pass filters are learned during training so that we are not inadvertently violating the sparsity assumptions of the model (i.e. thresholding other lowfrequency subbands) and reduce the mutual coherence of dictionary. This restriction on the number of low-pass filters has the added benefit of improving stable recovery bounds of the dictionary as discussed in Section 2.1.1.\nThe issue of learning high-pass/band-pass filters is both non-trivial and ill-posed. 
" }, { "heading": "2.3 BLIND DENOISING: NOISE-ADAPTIVE LEARNED THRESHOLDS", "text": "As presented, the CDLNet model, and any similar network utilizing LISTA, is not amenable to generalizing denoising performance across a set of noise levels. Note that the threshold values in the soft-thresholding operator are directly proportional to the expected sparsity and the noise level in each subband (Bayram & Selesnick, 2010). As a result, the sparsity hyperparameter λ, and consequently the threshold values, should be functions of the noise variance, i.e. θ(k) = θ(k)(σ2n). We thus propose to parameterize the thresholds in the last layer of CDLNet as θ(K) = ν(K)σ̂2n, where σ̂2n is the noise variance estimated from the input noisy image, and ν(K) is a vector containing the learned scaling factors for the different subbands. We employ a commonly used estimator, σ̂n ≈ Median(|c|)/0.6745, where c denotes the diagonal-detail Wavelet subband of the input image (Chang et al., 2000; Mallat, 2008; Donoho & Johnstone, 1994; 1995). The proposed parameterization of the thresholds is inspired by the MAP estimate of orthogonal Wavelet denoising under a Laplace distribution prior on the high-frequency coefficients and a Gaussian distribution prior on the noise (Bayram & Selesnick, 2010). This parameterization enables the proposed CDLNet to handle varying input noise-levels while maintaining its integrity as an unfolded dictionary learning model.
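A minimal sketch of this noise-adaptive thresholding follows, assuming a single-level orthonormal Haar transform for the diagonal-detail subband (our choice of wavelet; names are illustrative):

```python
import torch
import torch.nn.functional as F

def estimate_noise_std(y):
    # MAD estimator of Sec. 2.3: sigma_hat = Median(|c|) / 0.6745, where c is the
    # diagonal-detail (HH) subband of a single-level Haar transform. y: (N, 1, H, W).
    hh = torch.tensor([[0.5, -0.5], [-0.5, 0.5]], dtype=y.dtype, device=y.device)
    c = F.conv2d(y, hh.view(1, 1, 2, 2), stride=2)      # diagonal detail coefficients
    return c.abs().flatten(1).median(dim=1).values / 0.6745

def adaptive_thresholds(nu_K, y):
    # Last-layer thresholds theta^(K) = nu^(K) * sigma_hat^2, per image in the batch.
    sigma = estimate_noise_std(y)                        # shape (N,)
    return nu_K.view(1, -1, 1, 1) * (sigma ** 2).view(-1, 1, 1, 1)
```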
}, { "heading": "3 EXPERIMENTAL SETUPS AND RESULTS", "text": "Models: are trained via stochastic gradient descent on the `2-loss with parameter constraints,\nminimize Θ={{A(k),B(k),θ(k)}K−1k=0 ,{dj} M j=1} ‖x− x̂(y; Θ)‖22 subject to: θ(k) ≥ 0 ∀k, ‖dj‖22 ≤ 1 ∀j,\nwhere the parameter constraints are enforced by projection onto the constraint set after each gradient step. Models of different capacity are trained by varying the number of unrollings K and number of subbands M . Filters are of size 7 × 7. CDLNet is used to refer to our proposed base-model, differing from other mentioned CDL methods (ACSC (Sreter & Giryes, 2018) and CSCNet (Simon & Elad, 2019)) by its use of stride-2 convolutions, mean-subtraction of input signals, and the above projection operations during training. A 3×3 isotropic Gaussian filter (σ=0.6) is used as the analytic low-pass filter for frequency-regularized models, denoted FCDLNet. We use (F)CDLNet+Blind to refer to networks with noise-adaptive thresholds as in section 2.3. In blind denoising cases, the noise level is estimated using the estimator in section 2.3 both during training and inference. Implementation and trained models are provided here1.\nDataset: All CDLNet models and variants are trained on the BSD432 dataset (Martin et al., 2001). Random crops of size 128×128 are flipped, rotated, and batched online during training. Independent identically distributed Gaussian noise is drawn from σn ∈ σtrainn uniformly within each batch and added to the ground-truth signal. As preprocessing, all images are normalized by 255 to have range of [0, 1] and mean of each image is subtracted. Testing is performed on the associated BSD68 test-set (Martin et al., 2001).\nTraining: is performed with the Adam optimizer (Kingma & Ba, 2015), using its default settings in PyTorch. Mini-batches consist of 10 samples. A learning rate of 1e-3 is set at the start of training and reduced by a factor of 0.95 every 50 epochs. Training is run until convergence. As advised by Lecouat et al. (2020), backtracking is used to correct for model divergence by reloading the most recent checkpoint within the last 10 epochs and reducing the learning rate by a factor of 0.8.\nInitialization: A single set of M filters are initialized by drawing from a standard normal distribution and subsequently normalized w.r.t each filter. This corresponds to our expectation that most filters will learn to be approximately zero-mean and spatially localized. We found that this initialization greatly improves convergence speed over drawing from a standard uniform distribution. All convolution operators are initialized with this same weight. Following Simon & Elad (2019), we then normalize A(k) by the spectral norm L = ‖A(k)B(k)‖2, which corresponds to initializing the step-sizes of ISTA to η(k) = 1/L. Thresholds are initialized to θ(k) = 1e-1/L." }, { "heading": "3.1 EFFECT OF FREQUENCY REGULARIZATION AND STRIDE ON LEARNED DICTIONARIES", "text": "To validate the effectiveness of small-stride and the proposed frequency regularization on the learned synthesis dictionary, we train three CDLNet models containing convolutions with (a) no stride, (b) stride 2, and (c) stride 2 with frequency regularization. For all modelsM=32, K=20, and σtrainn =25. Figure 2 shows the learned filters in the spatial and frequency domain. Without stride, the learned dictionary consists of some “noise-like” filters with non-localized frequency responses and a few directional filters. 
}, { "heading": "3.1 EFFECT OF FREQUENCY REGULARIZATION AND STRIDE ON LEARNED DICTIONARIES", "text": "To validate the effectiveness of small stride and the proposed frequency regularization on the learned synthesis dictionary, we train three CDLNet models containing convolutions with (a) no stride, (b) stride 2, and (c) stride 2 with frequency regularization. For all models, M=32, K=20, and σ_n^train=25. Figure 2 shows the learned filters in the spatial and frequency domains. Without stride, the learned dictionary consists of some “noise-like” filters with non-localized frequency responses and a few directional filters. The stride 2 model (b) learns more directional filters and overall a dictionary with lower mutual coherence compared to (a). However, both (a) and (b) produce multiple low-frequency filters in unpredictable channels. With frequency regularization added in (c), we are able to control the subband in which our low-frequency information is located. The learned filters in (c) are all directional or texture high-pass filters, and the mutual coherence is decreased as predicted." }, { "heading": "3.2 DENOISING PERFORMANCE AGAINST OTHER FRAMEWORKS", "text": "In this section we demonstrate the efficacy of the proposed methods on single noise-level grayscale image denoising. We train two FCDLNet models of varying capacity (FCDLNet with M=64, K=10 and Big FCDLNet with M=169 and K=30)2. We compare these to the classic collaborative filtering method BM3D (Dabov et al., 2007), the popular convolutional neural network based methods FFDNet (Zhang et al., 2018) and DnCNN (Zhang et al., 2017), and the CDL method proposed by Simon & Elad (2019), CSCNet. All learned methods have been trained on the same dataset, BSD432. Average peak signal-to-noise ratio (PSNR) on the BSD68 test set is shown in Table 1. A visual comparison between the above-mentioned models and FCDLNet is presented in Figure 3.\nThe FCDLNet with trainable parameters on the order of CSCNet shows improved performance across noise levels. Interestingly, Big FCDLNet is observed to compete very well with state-of-the-art deep-learning denoising networks. This is done without the use of common deep-learning tricks such as batch normalization or residual learning (both of which are employed in DnCNN). The ability to train larger CDLNet models of competitive performance without such methods may suggest an appeal to more interpretable networks.\nThe average run-time at inference of the different models, averaged over Set-12 (Sreter & Giryes, 2018) images of size 512 × 512, is also given in Table 1. The timing experiments were conducted with an Intel Xeon E5 at 2.6GHz CPU, an Nvidia P40 GPU, and 4GB of RAM, running Linux version 3.10.0. We observe that by leveraging small-strided convolutions and forgoing the “shift-duplicate processing” of CSCNet, FCDLNet has significantly reduced (10x to 20x) computation time both on GPU and CPU compared to CSCNet, while having better denoising quality.\n2Corresponding filters are available here."
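Since all images are normalized to the range [0, 1] (Section 3), the PSNR values reported in Table 1 correspond to a peak value of 1. A minimal helper, written here as an illustration:

```python
import torch

def psnr(x, x_hat, peak=1.0):
    """PSNR in dB between ground truth x and estimate x_hat, with images
    assumed normalized to [0, 1] as in the preprocessing above."""
    mse = torch.mean((x - x_hat) ** 2)
    return 10.0 * torch.log10(peak ** 2 / mse)
```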
}, { "heading": "3.3 ROBUSTNESS TO NOISE LEVEL MISMATCH IN TRAINING AND INFERENCE", "text": "In this section we provide experimental results regarding the generalization of the networks across noise levels. The main focus is to investigate the effect of the proposed blind denoising framework (Section 2.3), especially for cases with a mismatch between the noise range during training (σ_n^train) and testing (σ_n^test).\nIn Figure 4 we show the average PSNR values for three different training noise ranges: (a) [0, 20], (b) [15, 35], and (c) [30, 50]. Networks are trained by uniformly sampling the noise level within the training range at each iteration. All networks have close to 120k learnable parameters with M=64 and K=20. The trained networks are then tested on noise levels σ_n^test ∈ [0, 50], and average PSNR is calculated over the BSD68 dataset.\nAs shown in Figure 4, all networks perform closely over the training noise range. On the other hand, when tested on noise levels outside the training range, the networks with adaptive thresholds (as in Section 2.3) perform better than the others. Although the input signal-to-noise ratio increases for noise levels below the training range, we observe that models without noise-adaptive thresholds have diminishing performance returns (note the plateau of CDLNet/FCDLNet in σ_n^test ∈ [0, 15] in (b) and σ_n^test ∈ [0, 30] in (c)). In contrast, the denoising behavior of models with noise-adaptive thresholds (CDLNet+Blind and FCDLNet+Blind) extends to the lower noise range. Similarly, we observe that models without noise-adaptive thresholds have a more significant performance drop compared to noise-adaptive models when generalizing above the training noise level. Another notable observation is that FCDLNet models perform better than their non-frequency-regularized counterparts at low noise levels, due to the proper treatment of the low-pass signal.\nWe also compare the generalization of the proposed networks against other CDL methods. Pokala et al. (2020) propose the ConFirmNet model, which uses firm-thresholding in LISTA and shows better performance compared to ACSC (Sreter & Giryes, 2018) when training and testing noise levels are different. Results from Pokala et al. (2020) are summarized and compared to our framework in Table 2. FCDLNet performs on par with ConFirmNet when σ_n^train = 20. To allow the proposed scaling parameters (ν^{(K)}) to properly fit the noise variance, we train over σ_n^train = [18, 22]. As seen in Table 2 and from our discussion above, simply training over a noise range gives marginal improvement. However, when combined with noise-adaptive thresholds (FCDLNet+Blind), we observe significant improvement in generalization over other methods." }, { "heading": "4 DISCUSSION AND CONCLUSION", "text": "In this study we investigated unrolled convolutional sparse coding and dictionary learning frameworks. These frameworks have the benefit of interpretability while maintaining similar performance compared to other state-of-the-art deep learning models. We proposed employing a strided convolutional dictionary constructed with a fixed low-pass filter and a set of learned frequency-regularized filters. As illustrated, small-strided and frequency-regularized convolutions give the benefit of reduced mutual coherence of the dictionary and properly address the modeling assumptions of convolutional sparse coding. We showed that the learned high-pass filters are more structured, covering different orientations and textures. In comparison to other CDL models of similar parameter count, our proposed framework showed improved denoising performance whilst reducing the computational cost. The learned dictionary filters are more interpretable, with lower mutual coherence. Additionally, experimental results with FCDLNet models of similar size to state-of-the-art denoising models showed competitive denoising performance.\nWe further investigated the generalizability of CDL networks in scenarios where a noise-level mismatch exists between training and inference. Leveraging the interpretability of CDLNet, we proposed to parameterize the thresholds in LISTA such that they are scaled based on the estimated input noise variance. 
Experimental results demonstrated that this reparameterization greatly improves robustness to noise-level mismatch between training and testing, increasing the generalizability of the network.\nIn future work we aim to explore possible extensions of the proposed models and to further leverage the interpretability of this framework. The proposed frequency regularization scheme provides the required grounds for multiresolution representation learning. Note that by further processing the fixed low-pass channel, one can achieve a multiresolution representation; in other frameworks, the low-pass information is represented in multiple, non-predetermined channels, making this extension challenging (see the discussion in Section 3.1). Further augmenting the thresholds of the CDLNet model, employed at each layer of LISTA, with both signal and noise adaptivity is a promising direction for improved generalization of the network. Additionally, investigating other noise distribution models is an exciting avenue of research." }, { "heading": "5 APPENDIX", "text": "" }, { "heading": "5.1 REMOVING THE LOW-PASS INFORMATION FROM THE INPUT IMAGES IS NOT EFFECTIVE FOR REDUCING THE MUTUAL COHERENCE", "text": "Instead of the proposed frequency regularization approach, it may be tempting to simply remove the low-pass information of the signal as a preprocessing step. More specifically, letting y denote the noisy image, one can process the high-pass signal y − h ∗ y with the proposed network without adding any frequency regularization. Note that this approach is not equivalent to the proposed frequency regularization scheme, as the removed low-pass channel (h ∗ y) is still noisy. Although the thresholds for the low-pass channel in FCDLNet are set to zero, this does not mean that the low-pass information is removed from the denoising framework. As a result, in FCDLNet, the low-pass filtering at each LISTA stage is ultimately a filtering of an incrementally cleaner image, producing a full subband decomposition with cleaned-up low- and high-frequency bands at the final stage of LISTA. Additionally, note that removing the low-frequency component of the signal does not stop the dictionary from learning multiple (redundant) low-pass filters. Even though the input signal does not contain low-frequency information, the filters are not necessarily regularized to take advantage of this property. As shown in Figure 5, we observed that the filters learned in this scheme include multiple low-pass channels." } ]
2020
null
SP:56eb9cca9680e7ac118f3baf29789f172715c7d0
[ "The authors propose a new framework for compositional lifelong learning. In the proposed approach, the composition and adaptation parts are separated when a lifelong learner faces a new task: first, learn the best way to compose all existing components for the new task (and train an optional new component if exiting components aren't sufficient to reach a good performance), and only then adapt the components parameters to better fit the new problem. This new framework is validated on extensive experiments, using three composition and 3 adaptation strategies from the literature on 9 datasets. The paper is pleasing to read, each choice is discussed and justified", "The paper introduces a framework for lifelong learning of compositional structures. The algorithm is loosely inspired by biological learning and consists of two main steps. The step of component selection relies on existing methods that can learn task-specific structure. In the next step (adaptation), the algorithm adapts the knowledge from the previous tasks to the current task and if that is insufficient to solve the task, new components are added. Adaptation step relies on existing methods for adapting the knowledge state given a new task in continual learning (component parameters are updated). Knowledge expansion (adding new components) uses component dropout, a method proposed by the authors which combines pruning and alternating backpropagation steps with and without the potential new component. The proposed method is beneficial in terms of computational complexity in comparison with the standard lifelong learning methods. The authors evaluate the method on three compositional structures and show that it outperforms the baselines. The paper includes visualisation of the learned components, extensive appendix with additional experiments and ablation studies, and a systematic overview of the prior work in learning compositional structures and lifelong learning." ]
A hallmark of human intelligence is the ability to construct self-contained chunks of knowledge and adequately reuse them in novel combinations for solving different yet structurally related problems. Learning such compositional structures has been a significant challenge for artificial systems, due to the combinatorial nature of the underlying search problem. To date, research into compositional learning has largely proceeded separately from work on lifelong or continual learning. We integrate these two lines of work to present a general-purpose framework for lifelong learning of compositional structures that can be used for solving a stream of related tasks. Our framework separates the learning process into two broad stages: learning how to best combine existing components in order to assimilate a novel problem, and learning how to adapt the set of existing components to accommodate the new problem. This separation explicitly handles the trade-off between the stability required to remember how to solve earlier tasks and the flexibility required to solve new tasks, as we show empirically in an extensive evaluation.
[ { "affiliations": [], "name": "Jorge A. Mendez" } ]
[ { "authors": [ "Ferran Alet", "Tomas Lozano-Perez", "Leslie P Kaelbling" ], "title": "Modular meta-learning", "venue": "In Proceedings of the 2nd Conference on Robot Learning (CoRL-19),", "year": 2018 }, { "authors": [ "Matko Bošnjak", "Tim Rocktäschel", "Jason Naradowsky", "Sebastian Riedel" ], "title": "Programming with a differentiable forth interpreter", "venue": "In Proceedings of the 34th International Conference on Machine Learning (ICML-17),", "year": 2017 }, { "authors": [ "Rudy Bunel", "Matthew Hausknecht", "Jacob Devlin", "Rishabh Singh", "Pushmeet Kohli" ], "title": "Leveraging grammar and reinforcement learning for neural program synthesis", "venue": "In 6th International Conference on Learning Representations", "year": 2018 }, { "authors": [ "Jonathon Cai", "Richard Shin", "Dawn Song" ], "title": "Making neural programming architectures generalize via recursion", "venue": "In 5th International Conference on Learning Representations", "year": 2017 }, { "authors": [ "Michael Chang", "Abhishek Gupta", "Sergey Levine", "Thomas L. Griffiths" ], "title": "Automatically composing representation transformations as a means for generalization", "venue": "In 7th International Conference on Learning Representations", "year": 2019 }, { "authors": [ "Thomas Elsken", "Jan Hendrik Metzen", "Frank Hutter" ], "title": "Neural architecture search: A survey", "venue": "Journal of Machine Learning Research,", "year": 2019 }, { "authors": [ "Chrisantha Fernando", "Dylan Banarse", "Charles Blundell", "Yori Zwols", "David Ha", "Andrei A Rusu", "Alexander Pritzel", "Daan Wierstra" ], "title": "PathNet: Evolution channels gradient descent in super neural networks", "venue": "arXiv preprint arXiv:1701.08734,", "year": 2017 }, { "authors": [ "Alexander L Gaunt", "Marc Brockschmidt", "Nate Kushman", "Daniel Tarlow" ], "title": "Differentiable programs with neural libraries", "venue": "In Proceedings of the 34th International Conference on Machine Learning (ICML-17),", "year": 2017 }, { "authors": [ "Aidan N Gomez", "Ivan Zhang", "Siddhartha Rao Kamalakara", "Divyam Madaan", "Kevin Swersky", "Yarin Gal", "Geoffrey E Hinton" ], "title": "Learning sparse networks using targeted dropout", "venue": null, "year": 1905 }, { "authors": [ "Geoffrey E Hinton", "Nitish Srivastava", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan R Salakhutdinov" ], "title": "Improving neural networks by preventing co-adaptation of feature detectors", "venue": "arXiv preprint arXiv:1207.0580,", "year": 2012 }, { "authors": [ "Ronghang Hu", "Jacob Andreas", "Marcus Rohrbach", "Trevor Darrell", "Kate Saenko" ], "title": "Learning to reason: End-to-end module networks for visual question answering", "venue": "In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV-17),", "year": 2017 }, { "authors": [ "David Isele", "Akansel Cosgun" ], "title": "Selective experience replay for lifelong learning", "venue": "In Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI-18),", "year": 2018 }, { "authors": [ "Justin Johnson", "Bharath Hariharan", "Laurens van der Maaten", "Judy Hoffman", "Li Fei-Fei", "C Lawrence Zitnick", "Ross Girshick" ], "title": "Inferring and executing programs for visual reasoning", "venue": "In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV-17),", "year": 2017 }, { "authors": [ "James Kirkpatrick", "Razvan Pascanu", "Neil Rabinowitz", "Joel Veness", "Guillaume Desjardins", "Andrei A Rusu", "Kieran Milan", "John Quan", "Tiago Ramalho", "Agnieszka 
Grabska-Barwinska" ], "title": "Overcoming catastrophic forgetting in neural networks", "venue": "Proceedings of the National Academy of Sciences (PNAS),", "year": 2017 }, { "authors": [ "Abhishek Kumar", "Hal Daumé III" ], "title": "Learning task grouping and overlap in multi-task learning", "venue": "In Proceedings of the 29th International Coference on International Conference on Machine Learning", "year": 2012 }, { "authors": [ "Seungwon Lee", "James Stokes", "Eric Eaton" ], "title": "Learning shared knowledge for deep lifelong learning using deconvolutional networks", "venue": "In Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI-19),", "year": 2019 }, { "authors": [ "Xilai Li", "Yingbo Zhou", "Tianfu Wu", "Richard Socher", "Caiming Xiong" ], "title": "Learn to grow: A continual structure learning framework for overcoming catastrophic forgetting", "venue": "In Proceedings of the 36th International Conference on Machine Learning (ICML-19),", "year": 2019 }, { "authors": [ "Zhizhong Li", "Derek Hoiem" ], "title": "Learning without forgetting", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI),", "year": 2017 }, { "authors": [ "Michael McCloskey", "Neal J Cohen" ], "title": "Catastrophic interference in connectionist networks: The sequential learning problem", "venue": "In Psychology of Learning and Motivation,", "year": 1989 }, { "authors": [ "Elliot Meyerson", "Risto Miikkulainen" ], "title": "Beyond shared hierarchies: Deep multitask learning through soft layer ordering", "venue": "In 6th International Conference on Learning Representations", "year": 2018 }, { "authors": [ "Anusha Nagabandi", "Chelsea Finn", "Sergey Levine" ], "title": "Deep online learning via meta-learning: Continual adaptation for model-based RL", "venue": "In 7th International Conference on Learning Representations", "year": 2019 }, { "authors": [ "Cuong V. Nguyen", "Yingzhen Li", "Thang D. Bui", "Richard E. 
Turner" ], "title": "Variational continual learning", "venue": "In 6th International Conference on Learning Representations", "year": 2018 }, { "authors": [ "Scott Reed", "Nando de Freitas" ], "title": "Neural programmers-interpreters", "venue": "In 4th International Conference on Learning Representations", "year": 2016 }, { "authors": [ "Clemens Rosenbaum", "Tim Klinger", "Matthew Riemer" ], "title": "Routing networks: Adaptive selection of non-linear functions for multi-task learning", "venue": "In 6th International Conference on Learning Representations", "year": 2018 }, { "authors": [ "Paul Ruvolo", "Eric Eaton" ], "title": "ELLA: An efficient lifelong learning algorithm", "venue": "In Proceedings of the 30th International Conference on Machine Learning", "year": 2013 }, { "authors": [ "Noam Shazeer", "Azalia Mirhoseini", "Krzysztof Maziarz", "Andy Davis", "Quoc Le", "Geoffrey Hinton", "Jeff Dean" ], "title": "Outrageously large neural networks: The sparsely-gated mixture-of-experts layer", "venue": "In 5th International Conference on Learning Representations", "year": 2017 }, { "authors": [ "Danfei Xu", "Suraj Nair", "Yuke Zhu", "Julian Gao", "Animesh Garg", "Li Fei-Fei", "Silvio Savarese" ], "title": "Neural task programming: Learning to generalize across hierarchical tasks", "venue": "In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA-18),", "year": 2018 }, { "authors": [ "JaeHong Yoon", "Jeongtae Lee", "Eunho Yang", "Sung Ju Hwang" ], "title": "Lifelong learning with dynamically expandable networks", "venue": "In 6th International Conference on Learning Representations", "year": 2018 }, { "authors": [ "Wojciech Zaremba", "Tomas Mikolov", "Armand Joulin", "Rob Fergus" ], "title": "Learning simple algorithms from examples", "venue": "In Proceedings of the 33rd International Conference on Machine Learning (ICML-16),", "year": 2016 }, { "authors": [ "Friedemann Zenke", "Ben Poole", "Surya Ganguli" ], "title": "Continual learning through synaptic intelligence", "venue": "In Proceedings of the 34th International Conference on Machine Learning", "year": 2017 }, { "authors": [ "Lee" ], "title": "2019), we evaluated the ratio of performance after each task was trained to after all tasks had been trained as a metric of knowledge retention. Results in Figure F.4 show that compositional methods exhibit substantially less catastrophic forgetting, particularly for the earlier tasks seen during training", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "A major goal of artificial intelligence is to create an agent capable of acquiring a general understanding of the world. Such an agent would require the ability to continually accumulate and build upon its knowledge as it encounters new experiences. Lifelong machine learning addresses this setting, whereby an agent faces a continual stream of diverse problems and must strive to capture the knowledge necessary for solving each new task it encounters. If the agent is capable of accumulating knowledge in some form of compositional representation (e.g., neural net modules), it could then selectively reuse and combine relevant pieces of knowledge to construct novel solutions.\nVarious compositional representations for multiple tasks have been proposed recently (Zaremba et al., 2016; Hu et al., 2017; Kirsch et al., 2018; Meyerson & Miikkulainen, 2018). We address the novel question of how to learn these compositional structures in a lifelong learning setting. We design a general-purpose framework that is agnostic to the specific algorithms used for learning and the form of the structures being learned. Evoking Piaget’s (1976) assimilation and accommodation stages of intellectual development, this framework embodies the benefits of dividing the lifelong learning process into two distinct stages. In the first stage, the learner strives to solve a new task by combining existing components it has already acquired. The second stage uses discoveries from the new task to improve existing components and to construct fresh components if necessary.\nOur proposed framework, which we depict visually in Appendix A, is capable of incorporating various forms of compositional structures, as well as different mechanisms for avoiding catastrophic forgetting (McCloskey & Cohen, 1989). As examples of the flexibility of our framework, it can incorporate naı̈ve fine-tuning, experience replay, and elastic weight consolidation (Kirkpatrick et al., 2017) as knowledge retention mechanisms, and linear combinations of linear models (Kumar & Daumé III, 2012; Ruvolo & Eaton, 2013), soft layer ordering (Meyerson & Miikkulainen, 2018), and a soft version of gating networks (Kirsch et al., 2018; Rosenbaum et al., 2018) as the compositional structures. We instantiate our framework with the nine combinations of these examples, and evaluate it on eight different data sets, consistently showing that separating the lifelong learning process into two stages increases the capabilities of the learning system, reducing catastrophic forgetting and achieving higher overall performance. Qualitatively, we show that the components learned by an algorithm that adheres to our framework correspond to self-contained, reusable functions." }, { "heading": "2 RELATED WORK", "text": "Lifelong learning In continual or lifelong learning, agents must handle a variety of tasks over their lifetimes, and should accumulate knowledge in a way that enables them to more efficiently learn to solve new problems. Recent efforts have mainly focused on avoiding catastrophic forgetting. At a high level, algorithms define parts of parametric models (e.g., deep neural networks) to be shared across tasks. As the agent encounters tasks sequentially, it strives to retain the knowledge that enabled it to solve earlier tasks. 
One common approach is to impose regularization to prevent parameters from deviating in directions that are harmful for performance on the early tasks (Kirkpatrick et al., 2017; Zenke et al., 2017; Li & Hoiem, 2017; Ritter et al., 2018). Another approach retains a small buffer of data from all tasks, and continually updates the model parameters utilizing data from all tasks, thereby maintaining the knowledge required to solve them (Lopez-Paz & Ranzato, 2017; Nguyen et al., 2018; Isele & Cosgun, 2018). A related technique is to learn a generative model to “hallucinate” data, reducing the memory footprint at the cost of using lower-quality data and increasing the cost of accessing data (Shin et al., 2017; Achille et al., 2018; Rao et al., 2019).\nThese approaches, although effective in avoiding the problem of catastrophic forgetting, make no substantial effort toward the discovery of reusable knowledge. One could argue that the model parameters are learned in such a way that they are reusable across all tasks. However, it is unclear what the reusability of these parameters means, and moreover the way in which parameters are reused is hard-coded into the architecture design. This latter issue is a major drawback when attempting to learn tasks with a high degree of variability, as the exact form in which tasks are related is often unknown. Ideally, the algorithm would be able to determine these relations autonomously.\nOther methods learn a set of models that are reusable across many tasks and automatically select how to reuse them (Ruvolo & Eaton, 2013; Nagabandi et al., 2019). However, such methods selectively reuse entire models, enabling knowledge reuse, but not explicitly in a compositional manner.\nCompositional knowledge A mostly distinct line of parallel work has explored the learning of compositional knowledge. The majority of such methods either learn the structure for piecing together a given set of components (Cai et al., 2017; Xu et al., 2018; Bunel et al., 2018) or learn the set of components given a known structure for how to compose them (Bošnjak et al., 2017).\nA more interesting case is when neither the structure nor the set of components are given, and the agent must autonomously discover the compositional structure underlying a set of tasks. Some approaches for solving this problem assume access to a solution descriptor (e.g., in natural language), which can be mapped by the agent to a solution structure (Hu et al., 2017; Johnson et al., 2017; Pahuja et al., 2019). However, many agents (e.g., service robots) are expected to learn in more autonomous settings, where this kind of supervision is not available. Other approaches instead learn the structure directly from optimization of a cost function (Rosenbaum et al., 2018; Kirsch et al., 2018; Meyerson & Miikkulainen, 2018; Alet et al., 2018; Chang et al., 2019). Many of these works can be viewed as instances of neural architecture search, a closely related area (Elsken et al., 2019).\nHowever, note that the approaches above assume that the agent will have access to a large batch of tasks, enabling it to evaluate numerous combinations of components and structures on all tasks simultaneously. More realistically, the agent faces a sequence of tasks in a lifelong learning fashion. Most work in this line assumes that each component can be fully learned by training on a single task, and then can be reused for other tasks (Reed & de Freitas, 2016; Fernando et al., 2017; Valkov et al., 2018). 
Unfortunately, this is infeasible in many real-world scenarios in which the agent has access to little data for each of the tasks. One notable exception was proposed by Gaunt et al. (2017), which improves early components with experience in new tasks, but is limited to very simplistic settings.\nUnlike prior work, our approach explicitly learns compositional structures in a lifelong learning setting. We do not assume access to a large batch of tasks or the ability to learn definitive components after training on a single task. Instead, we train on a small initial batch of tasks (four tasks, in our experiments), and then autonomously update the existing components to accommodate new tasks.\nOur framework also permits incorporating new components over time. Related work has increased network capacity in the non-compositional setting (Yoon et al., 2018) or in a compositional setting where previously learned parameters are kept fixed (Li et al., 2019). Another method enables adaptation of existing parameters (Rajasegaran et al., 2019), but requires expensively storing and training multiple models for each task to select the best one before adapting the existing parameters, and is designed for a specific choice of architecture, unlike our general framework." }, { "heading": "3 THE LIFELONG LEARNING PROBLEM", "text": "We frame lifelong learning as online multi-task learning. The agent will face a sequence of tasks T^{(1)}, ..., T^{(T)} over its lifetime. Each task will be a learning problem defined by a cost function L^{(t)}(f^{(t)}), where the agent must learn a prediction function f^{(t)} ∈ F : X^{(t)} → Y^{(t)} to minimize the cost, where F is a function class, and X^{(t)} and Y^{(t)} are the instance and label spaces, respectively. Each task's solution is parameterized by θ^{(t)}, such that f^{(t)} = f_{θ^{(t)}}. The goal of the lifelong learner is to find parameters θ^{(1)}, ..., θ^{(T)} that minimize the cost across all tasks: ∑_{t=1}^{T} L^{(t)}(f^{(t)}). The number of tasks, the order in which tasks will arrive, and the task relationships are all unknown.\nGiven limited data for each new task, the agent will strive to discover any relevant information to 1) relate it to previously stored knowledge in order to permit transfer and 2) store any new knowledge for future reuse. The agent may be evaluated on any previous task, requiring it to perform well on all tasks. In consequence, it must strive to retain knowledge from even the earliest tasks." }, { "heading": "4 THE LIFELONG COMPOSITIONAL LEARNING FRAMEWORK", "text": "Our framework for lifelong learning of compositional structures (illustrated in Appendix A) stores knowledge in a set of k shared components M = {m_1, ..., m_k} that are acquired and refined over the agent's lifetime. Each component m_i = m_{φ_i} ∈ M is a self-contained, reusable function parameterized by φ_i that can be combined with other components. The agent reconstructs each task's predictive function f^{(t)} via a task-specific structure s^{(t)} : X^{(t)} × M^k → F, with M^k being the set of possible sequences of k components, such that f^{(t)}(x) = s^{(t)}(x, M)(x), where s^{(t)} is parameterized by a vector ψ^{(t)}. Note that s^{(t)} yields a function from F. The structure functions select the components from M and the order in which to compose them to construct the model for each task (the f^{(t)}'s). 
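This interface between shared components and task-specific structures can be made explicit with a minimal sketch; the class and attribute names here are illustrative, not the authors' released code.

```python
import torch.nn as nn

class CompositionalModel(nn.Module):
    """Shared set of components M = {m_1, ..., m_k} plus a task-specific
    structure s^(t), so that f^(t)(x) = s^(t)(x, M)(x)."""
    def __init__(self, components: nn.ModuleList):
        super().__init__()
        self.components = components       # shared, adapted only occasionally

    def forward(self, x, structure):
        # structure: task-specific callable parameterized by psi^(t); it
        # selects and composes the shared components for this task.
        return structure(x, self.components)
```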
Specific examples of components and structures are described in Section 4.1.\nThe intuition behind our framework is that, at any point in time t, the agent will have acquired a set of components suitable for solving tasks it encountered previously (T^{(1)}, ..., T^{(t−1)}). If these components, with minor adaptations, can be combined to solve the current task T^{(t)}, then the agent should first learn how to reuse these components before making any modifications to them. The rationale for this idea of keeping components fixed during the early stages of training on the current task T^{(t)}, before the agent has acquired sufficient knowledge to perform well on T^{(t)}, is that premature modification could be catastrophically damaging to the set of existing components. Once the structure s^{(t)} has been learned, we consider that the agent has captured sufficient knowledge about the current task, and it would be sensible to update the components to better accommodate that knowledge. If, instead, it is not possible to capture the current task with the existing components, then new components should be added. These notions loosely mirror the stages of assimilation and accommodation in Piaget's (1976) theories on intellectual development, and so we adopt those terms. Algorithms under our framework take the form of Algorithm 1, split into the following steps.\n\nAlgorithm 1 Lifelong Compositional Learning\n  Initialize components M\n  while T^{(t)} ← getTask()\n    Freeze M\n    for i = 1, ..., structUpdates\n      Assimilation step on structure s^{(t)}\n      if i mod adaptFreq = 0\n        Freeze s^{(t)}, unfreeze M\n        for j = 1, ..., compUpdates\n          Adaptation step on M\n        Freeze M, unfreeze s^{(t)}\n    Add components via expansion\n    Store info. for future adaptation\n\nInitialization The components M should be initialized encouraging reusability, both across tasks and within different structural configurations of task models. The former signifies that the components should solve a particular sub-problem regardless of the objective of the task. The latter means that components may be reused multiple times within the structure for a single task's model, or at different structural orders across different tasks. For example, in deep nets, this means that the components could be used at different depths. We achieve this by training the first few tasks the agent encounters jointly to initialize M, keeping a fixed, but random, structure that reuses components to encourage reusability.\n\nAssimilation Algorithms for finding compositional knowledge vary in how they optimize each task's structure. In modular nets, component selection can be learned via reinforcement learning (Johnson et al., 2017; Rosenbaum et al., 2018; Chang et al., 2019; Pahuja et al., 2019), stochastic search (Fernando et al., 2017; Alet et al., 2018), or backpropagation (Shazeer et al., 2017; Kirsch et al., 2018; Meyerson & Miikkulainen, 2018). Our framework will use any of these approaches to assimilate the current task by keeping the components M fixed and learning only the structure s^{(t)}. Approaches supported by our framework must accept decoupling the learning of the structure from the learning of the components themselves; this requirement holds for all the above examples.\n\nAccommodation An effective approach should maintain performance on earlier tasks, while being flexible enough to incorporate new knowledge. To accommodate new knowledge from the current task, the learner may adapt existing components or expand to include new components:\n• Adaptation step Approaches for non-compositional structures have been to naïvely fine-tune models with data from the current task, to impose regularization to selectively freeze weights (Kirkpatrick et al., 2017; Ritter et al., 2018), or to store a portion of data from previous tasks and use experience replay (Lopez-Paz & Ranzato, 2017; Isele & Cosgun, 2018). We will instantiate our framework by using any of these methods to accommodate new knowledge into existing components once the current task has been assimilated. For this to be possible, we require that the method can be selectively applied to only the component parameters φ.\n• Expansion step Often, existing components, even with some adaptation, are insufficient to solve the current task. In this case, the learner would incorporate novel components, which should encode knowledge distinct from existing components and combine with those components to solve the new task. The ability to discover new components endows the learner with the flexibility required to learn over a lifetime. For this, we create component dropout, described in Section 4.2.\n\nConcrete instantiations of Algorithm 1 are described in Section 5.1, with pseudocode in Appendix B.
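As a compact sketch of Algorithm 1 in code: the callables in `steps` stand in for whichever assimilation, adaptation, expansion, and storage procedures a concrete instantiation uses (Section 5.1), `components` is assumed to be a torch nn.Module, and `get_task` is assumed to return None when the task stream ends. All names are illustrative placeholders.

```python
def lifelong_compositional_learning(components, get_task, steps, schedule):
    """Skeleton of Algorithm 1. `steps` maps 'init_structure', 'assimilate',
    'adapt', 'expand', and 'store' to instantiation-specific callables;
    `schedule` holds structUpdates, adaptFreq, and compUpdates."""
    while (task := get_task()) is not None:
        structure = steps["init_structure"](task)        # task-specific s^(t)
        for p in components.parameters():
            p.requires_grad_(False)                      # freeze M
        for i in range(1, schedule["structUpdates"] + 1):
            steps["assimilate"](structure, components, task)
            if i % schedule["adaptFreq"] == 0:
                for p in components.parameters():
                    p.requires_grad_(True)               # unfreeze M
                for _ in range(schedule["compUpdates"]):
                    steps["adapt"](components, task)     # s^(t) held fixed
                for p in components.parameters():
                    p.requires_grad_(False)              # re-freeze M
        steps["expand"](components, structure, task)     # maybe add component
        steps["store"](task)                             # e.g., replay samples
```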
}, { "heading": "4.1 COMPOSITIONAL STRUCTURES", "text": "We now present three compositional structures that can be learned within our framework.\n\nLinear combinations of models In the simplest setting, each component is a linear model, and they are composed via linear combinations. Specifically, we assume that X^{(t)} ⊆ R^d, and each task-specific function is given by f_{θ^{(t)}}(x) = θ^{(t)⊤}x, with θ^{(t)} ∈ R^d. The predictive functions are constructed from a set of linear component functions m_{φ_i}(x) = φ_i^⊤x, with φ_i ∈ R^d, by linearly combining them via a task-specific weight vector ψ^{(t)} ∈ R^k, yielding: f^{(t)}(x) = s_{ψ^{(t)}}(x, M)(x) = ψ^{(t)⊤}(Φ^⊤x), where we have constructed the matrix Φ = [φ_1, ..., φ_k] to collect all k components.\n\nSoft layer ordering In order to handle more complex models, we construct compositional deep nets that compute each layer's output as a linear combination of the outputs of multiple modules. As proposed by Meyerson & Miikkulainen (2018), we assume that each module is one layer, the number of components matches the network's depth, and all components share the input and output dimensions. Concretely, each component is a deep net layer m_{φ_i}(x) = σ(φ_i^⊤x), where σ is any nonlinear activation and φ_i ∈ R^{d̃×d̃}. A set of parameters ψ^{(t)} ∈ R^{k×k} weights the output of the components at each depth: s^{(t)} = D^{(t)} ◦ (∑_{i=1}^{k} ψ^{(t)}_{i,1} m_i) ◦ ··· ◦ (∑_{i=1}^{k} ψ^{(t)}_{i,k} m_i) ◦ E^{(t)}, where E^{(t)} and D^{(t)} are task-specific input and output transformations such that E^{(t)} : X^{(t)} → R^{d̃} and D^{(t)} : R^{d̃} → Y^{(t)}, and the weights are restricted to sum to one at each depth j: ∑_{i=1}^{k} ψ^{(t)}_{i,j} = 1.\n\nSoft gating In the presence of large data, it is often beneficial to modify the network architecture for each input x (Rosenbaum et al., 2018; Kirsch et al., 2018), unlike both approaches above, which use a constant structure for each task. We modify the soft layer ordering architecture by weighting each component's output at depth j by an input-dependent soft gating net s^{(t)}_j : X^{(t)} → R^k, giving a predictive function s^{(t)} = D^{(t)} ◦ (∑_{i=1}^{k} [s^{(t)}_1(x)]_i m_i) ◦ ··· ◦ (∑_{i=1}^{k} [s^{(t)}_k(x)]_i m_i) ◦ E^{(t)}. As above, we restrict the weights to sum to one at each depth: ∑_{i=1}^{k} [s^{(t)}_j(x)]_i = 1."
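A minimal PyTorch sketch of the soft layer ordering structure follows. The softmax normalization is one convenient way to enforce the sum-to-one constraint, the task-specific maps E^{(t)} and D^{(t)} are omitted, and all names are illustrative rather than the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftLayerOrdering(nn.Module):
    """At each depth j, the output is a convex combination of all k shared
    components, weighted by task-specific parameters psi^(t)."""
    def __init__(self, k, d_tilde):
        super().__init__()
        self.components = nn.ModuleList(
            [nn.Linear(d_tilde, d_tilde) for _ in range(k)])
        # psi[i, j]: weight of component i at depth j; in a full system this
        # parameter belongs to the task, not the shared model
        self.psi = nn.Parameter(torch.zeros(k, k))

    def forward(self, h):
        w = F.softmax(self.psi, dim=0)     # columns sum to one (per depth)
        for j in range(len(self.components)):
            h = sum(w[i, j] * torch.relu(m(h))
                    for i, m in enumerate(self.components))
        return h
```

The soft gating variant would replace the fixed psi matrix with a small network mapping the input x to depth-wise weights.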
}, { "heading": "4.2 EXPANSION OF THE SET OF COMPONENTS M VIA COMPONENT DROPOUT", "text": "To enable our deep learners to discover new components, we created an expansion step where the agent considers adding a single new component per task. In order to assess the benefit of the new\ncomponent, the agent learns two different networks: with and without the novel component. Dropout enables training multiple neural networks without additional storage (Hinton et al., 2012), and has been used to prune neural net nodes in non-compositional settings (Gomez et al., 2019). Our proposed dropout strategy deterministically alternates backpropagation steps with and without the new component, which we call component dropout. Intermittently bypassing the new component ensures that existing components can compensate for it if it is discarded. After training, we apply a post hoc criterion (in our experiments, a validation error check) to potentially prune the new component." }, { "heading": "4.3 COMPUTATIONAL COMPLEXITY", "text": "Approaches to lifelong learning tend to be computationally intensive, revisiting data or parameters from previous tasks at each training step. Our framework only carries out these expensive operations during (infrequent) adaptation steps. Table 1 summarizes the computational complexity per epoch of the algorithms described in Section 5.1. The assimilation step of our method with expansion (dynamic + compositional) is comparable to compositional baselines in the worst case (one new component per task), and our method without expansion (compositional) is always at least as fast." }, { "heading": "5 EXPERIMENTAL EVALUATION", "text": "" }, { "heading": "5.1 FRAMEWORK INSTANTIATIONS AND BASELINES", "text": "Instantiations We evaluated our framework with the three compositional structures of Section 4.1. All methods assimilate task T (t) via backpropagation on the structure’s parameters ψ(t). For each, we trained three instantiations of Algorithm 1, varying the method used for adaptation: • Naı̈ve fine-tuning (NFT) updates components via standard backpropagation, ignoring past tasks. • Elastic weight consolidation (EWC, Kirkpatrick et al., 2017) penalizes modifying model param-\neters via λ2 ∑T−1 t=1 ‖θ − θ(t)‖2F (t) , where F\n(t) is the Fisher information around θ(t). Backpropagation is carried out on the regularized loss, and we approximated F (t) with Kronecker factors. • Experience replay (ER) stores nm samples per task in a replay buffer, and during adaptation takes backpropagation steps with data from the replay buffer along with the current task’s data.\nWe explored variations with and without the expansion step: dynamic + compositional methods use component dropout to add new components, while compositional methods keep a fixed-size set.\nBaselines For every adaptation method listed above, we constructed two baselines. Joint baselines use compositional structures, but do not separate assimilation and accommodation, and instead update components and structures jointly. In contrast, no-components baselines optimize a single architecture to be used for all tasks, with additional task-specific input and output mappings, E(t) and D(t). The latter baselines correspond to the most common lifelong learning approach, which learns a monolithic structure shared across tasks, while the former are the naı̈ve extensions of those methods to a compositional setting. 
}, { "heading": "5.2 RESULTS", "text": "We evaluated these methods on tasks with no evident compositional structure, to demonstrate that there is no strict requirement for a certain type of compositionality. Appendix D introduces a simple compositional data set, and shows that our results naturally extend to that setting. We repeated experiments ten times with varying random seeds. For details on data sets and hyper-parameters, see Appendix E. Code and data sets are available at https://github.com/Lifelong-ML/Mendez2020Compositional.git. Additional results, beyond those presented in this section, are given in Appendix F.\n1While it is theoretically possible for EWC to operate in constant time w.r.t. T, practical implementations use per-task Kronecker factors due to the enormous computational requirements of the constant-time solution." }, { "heading": "5.2.1 LINEAR COMBINATIONS OF MODELS", "text": "We first evaluated linear combinations of models on three data sets used previously for evaluating linear lifelong learning (Ruvolo & Eaton, 2013). The Facial Recognition (FERA) data set tasks involve recognizing one of three facial expression action units for one of seven people, for a total of T = 21 tasks. The Landmine data set consists of T = 29 tasks, which require detecting land mines in radar images from different regions. Finally, the London Schools (Schools) data set contains T = 139 regression tasks, each corresponding to exam score prediction in a different school.\nTable 2 summarizes the results obtained with linear models. The compositional versions of ER, EWC, and NFT clearly outperformed all the joint versions, which learn the same form of models but by jointly optimizing structures and components. This suggests that the separation of the learning process into assimilation and accommodation stages enables the agent to better capture the structure of the problem. Interestingly, the no-components variants, which learn a single linear model for all tasks, performed better than the jointly trained versions in two out of the three data sets, and even outperformed our compositional algorithms in one. This indicates that the tasks in those two data sets (Landmine and Schools) are so closely related that a single model can capture them."
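The linear instantiation evaluated above composes the k linear components exactly as defined in Section 4.1. As a one-function sketch (variable names are illustrative):

```python
import torch

def linear_compositional_predict(x, Phi, psi_t):
    """Linear combination of linear models: f^(t)(x) = psi^(t)T (Phi^T x),
    where Phi = [phi_1, ..., phi_k] (shape d x k) stacks the k shared linear
    components and psi_t (shape k) is the task-specific combination vector."""
    return psi_t @ (Phi.T @ x)   # scalar prediction for a single input x
```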
}, { "heading": "5.2.2 DEEP COMPOSITIONAL LEARNING WITH SOFT LAYER ORDERING", "text": "We then evaluated how the different algorithms performed when learning deep nets with soft layer ordering, using five data sets. Binary MNIST (MNIST) is a common lifelong learning benchmark, where each task is a binary classification problem between a pair of digits. We constructed T = 10 tasks by randomly sampling the digits with replacement across tasks. The Binary Fashion MNIST (Fashion) data set is similar to MNIST, but images correspond to items of clothing. For these two data sets, all models used a task-specific input transformation layer E^{(t)} initialized at random and kept fixed throughout training, to ensure that the input spaces were sufficiently different (Meyerson & Miikkulainen, 2018). A more complex lifelong learning problem commonly used in the literature is Split CUB-200 (CUB), where the agent must classify bird species. We created T = 20 tasks by randomly sampling ten species for each, without replacement across tasks. All agents used a frozen ResNet-18 pre-trained on ImageNet as a feature extractor E^{(t)} shared across all tasks. For these first three data sets, all architectures were fully connected networks. To show that our framework supports more complex convolutional architectures, we used two additional data sets. We constructed a lifelong learning version of CIFAR-100 (CIFAR) with T = 20 tasks by randomly sampling five classes per task, without replacement across tasks. Finally, we used the Omniglot data set, which consists of T = 50 multi-class classification problems, each corresponding to detecting handwritten symbols in a given alphabet. The inputs to all architectures for these two data sets were the images directly, without any transformation E^{(t)}.\nResults in Table 3 show that all the algorithms conforming to our framework outperformed the joint and no-components learners. In four out of the five data sets, the dynamic addition of new components yielded either no or marginal improvements. However, on CIFAR it was crucial for the agent to be capable of detecting when new components were needed. This added flexibility enables our learners to handle more varied tasks, where new problems may not be solved without substantially new knowledge. Algorithms with adaptation outperformed the ablated compositional FM agent, showing that it is necessary to accommodate new knowledge into the set of components in order to handle a diversity of tasks. When FM was allowed to dynamically add new components (keeping old ones fixed), it yielded the best performance on MNIST and Fashion by adding far more components than methods with adaptation, as we show in Appendix F, as well as on CIFAR.\n\nTable 3: Average final accuracy across tasks using soft layer ordering. Standard errors after ±.\nBase | Algorithm     | MNIST       | Fashion     | CUB         | CIFAR       | Omniglot\nER   | Dyn. + Comp.  | 97.6 ± 0.2% | 96.6 ± 0.4% | 79.0 ± 0.5% | 77.6 ± 0.3% | 71.7 ± 0.5%\nER   | Compositional | 96.5 ± 0.2% | 95.9 ± 0.6% | 80.6 ± 0.3% | 58.7 ± 0.5% | 71.2 ± 1.0%\nER   | Joint         | 94.2 ± 0.3% | 95.1 ± 0.7% | 77.7 ± 0.5% | 65.8 ± 0.4% | 70.7 ± 0.3%\nER   | No Comp.      | 91.2 ± 0.3% | 93.6 ± 0.6% | 44.0 ± 0.9% | 51.6 ± 0.6% | 43.2 ± 4.2%\nEWC  | Dyn. + Comp.  | 97.2 ± 0.2% | 96.5 ± 0.4% | 73.9 ± 1.0% | 77.6 ± 0.3% | 71.5 ± 0.5%\nEWC  | Compositional | 96.7 ± 0.2% | 95.9 ± 0.6% | 73.6 ± 0.9% | 48.0 ± 1.7% | 53.4 ± 5.2%\nEWC  | Joint         | 66.4 ± 1.4% | 69.6 ± 1.6% | 65.4 ± 0.9% | 42.9 ± 0.4% | 58.6 ± 1.1%\nEWC  | No Comp.      | 66.0 ± 1.1% | 68.8 ± 1.1% | 50.6 ± 1.2% | 36.0 ± 0.7% | 68.8 ± 0.4%\nNFT  | Dyn. + Comp.  | 97.3 ± 0.2% | 96.4 ± 0.4% | 73.0 ± 0.7% | 73.0 ± 0.4% | 69.4 ± 0.4%\nNFT  | Compositional | 96.5 ± 0.2% | 95.9 ± 0.6% | 74.5 ± 0.7% | 54.8 ± 1.2% | 68.9 ± 0.9%\nNFT  | Joint         | 67.4 ± 1.4% | 69.2 ± 1.9% | 65.1 ± 0.7% | 43.9 ± 0.6% | 63.1 ± 0.9%\nNFT  | No Comp.      | 64.4 ± 1.1% | 67.0 ± 1.3% | 49.1 ± 1.6% | 36.6 ± 0.6% | 68.9 ± 1.0%\nFM   | Dyn. + Comp.  | 99.1 ± 0.0% | 97.3 ± 0.3% | 78.3 ± 0.4% | 78.4 ± 0.3% | 71.0 ± 0.4%\nFM   | Compositional | 84.1 ± 0.8% | 86.3 ± 1.3% | 80.1 ± 0.3% | 48.8 ± 1.6% | 63.0 ± 3.3%\n\nTo study how flexibly our agents learn new tasks and how stably they retain knowledge about earlier tasks, Figure 1 (top) shows accuracy gains immediately after each task was learned (forward) and after all tasks had been learned (final), w.r.t. no-components NFT (final). Compositional learners with no dynamically added components struggled to match the forward performance of joint baselines, indicating that learning the ordering over existing layers during much of training is less flexible than modifying the layers themselves, as expected. However, the added stability dramatically decreased forgetting w.r.t. joint methods. The dynamic addition of new layers yielded substantial improvements in the forward stage, while still reducing catastrophic forgetting w.r.t. baselines. Figure 2 shows the learning curves of MNIST and Fashion tasks using ER, the best adaptation method. Performance jumps in 100-epoch intervals show adaptation steps incorporating knowledge about the current task into the existing components without noticeably impacting earlier tasks' performance. Compositional and dynamic + compositional ER exhibit almost no performance drop after training on a task, whereas accuracy for the joint and no-components versions diminishes as the agent learns subsequent tasks. Most notably, as more tasks were seen by dynamic ER, the existing components became better able to assimilate new tasks, shown by the trend of increasing performance as the number of tasks increases. This suggests that the later tasks' accommodation stage can successfully determine which new knowledge should be incorporated into existing components (enabling them to better generalize across tasks), and which must be incorporated into a new component.\n\n[Figure 1: bar plots of forward and final average performance gain, with panels (a) ER, (b) EWC, and (c) NFT; bars D+C, C, J, and NC; rows for soft ordering (top) and soft gating (bottom).] Figure 1: Average gain w.r.t. no-components NFT across tasks and data sets immediately after training on each task (forward) and after all tasks had been trained (final), using soft ordering (top) and soft gating (bottom). Algorithms within our framework (C and D+C) outperformed baselines. Gaps between forward and final performance indicate that our framework exhibits less forgetting.\n\n[Figure 2: accuracy vs. number of epochs (0 to 600), with panels (a) ER Dyn. + Comp., (b) ER Compositional, (c) ER Joint, and (d) ER No Components.] Figure 2: Learning curves averaged across MNIST and Fashion using ER and soft ordering. Each curve shows a single task trained for 100 epochs and continually evaluated during and after training. Algorithms under our framework displayed no forgetting. For ER dynamic + compositional, as more tasks were seen and accommodated, assimilation performance of later tasks improved. Joint and no-components versions dropped performance of early tasks during the learning of later tasks.\n\nIn Appendix F, we show that our methods do not forget early tasks, and outperform baselines even in small data settings. We also found that most components learned by our methods are reused by multiple tasks, as desired. Moreover, analysis of various accommodation schedules revealed that infrequent adaptation leads to best results, informing future algorithm design choices."
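The forward and final quantities reported in Figure 1 can be computed from a matrix of per-task accuracies recorded over training. The sketch below assumes acc[i, t] stores the accuracy on task t measured right after training task i, and that gains are ratios w.r.t. the no-components NFT final accuracy; this array layout and the ratio form are our reading of the figure, stated here as assumptions.

```python
import numpy as np

def forward_final_gains(acc, baseline_final):
    """'Forward' uses the accuracy immediately after each task was trained;
    'final' uses the accuracy after all T tasks were trained."""
    T = acc.shape[0]
    forward = np.array([acc[t, t] for t in range(T)])
    final = acc[T - 1, :]
    return forward / baseline_final, final / baseline_final
```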
}, { "heading": "5.2.3 DEEP COMPOSITIONAL LEARNING WITH SOFT GATING", "text": "Finally, we tested our algorithms when the compositional structures were given by a soft gating net. Table 4 shows a substantial improvement of compositional algorithms w.r.t. baselines. We hypothesized that the gating net granted our assimilation step more flexibility, which is confirmed in Figure 1 (bottom): the forward accuracy of compositional methods was nearly identical to that of the jointly trained and no-components versions. This added flexibility enabled our simplest version of a compositional algorithm, FM, to perform better than the full versions of our algorithm on the CIFAR data set with convolutional gating nets, showing that even the components initialized with only a few tasks are sufficient for top lifelong learning performance. We attempted to run experiments with this method on the CUB data set, but found that all algorithms were incapable of generalizing to test data. This is consistent with findings in prior work, which showed that gating nets require vast amounts of data, unavailable in CUB (Rosenbaum et al., 2018; Kirsch et al., 2018)." }, { "heading": "5.2.4 DEEP COMPOSITIONAL LEARNING OF SEQUENCES OF DIVERSE TASKS", "text": "One of the key advantages of learning compositional structures is that they enable learning a more diverse set of tasks, by recombining components in novel ways to solve each problem. In this setting, non-compositional structures struggle to capture the diversity of the tasks in a single monolithic architecture. To verify that this is indeed the case, we created a novel data set that combines the MNIST, Fashion, and CUB data sets into a single Combined data set of T = 40 tasks. We trained all our instantiations and baselines with soft layer ordering, using the same architecture as used for CUB in Section 5.2.2. Agents were given no indication that each task came from a different data set, and they were all trained following the exact same setup of Section 5.2.2. Table 5 summarizes the results for ER-based algorithms, showing that our method greatly outperforms the baselines. In particular, ER with no components was completely incapable of learning the CUB tasks, showing that compositional architectures are required to handle this more complex setting. Even jointly training the components and structures performed poorly. Algorithms under our framework instead performed remarkably well, especially the complete version with dynamically added components. The remaining instantiations and baselines exhibited a similar behavior (see Appendix G)." }, { "heading": "5.3 VISUALIZATION OF THE LEARNED COMPONENTS", "text": "We now visually inspect the components learned by our framework to verify that they are indeed self-contained and reusable. Similar to Meyerson & Miikkulainen (2018), we created T = 10 generative tasks, where each pixel in an image of the digit “4” constitutes one data point, using the coordinates as features and the pixel intensity as the label. We trained a soft ordering net with k = 4 components via compositional ER, and shared the input and output transformations across tasks to ensure that the only differences across task models are due to the structure s^{(t)} of each task. Varying the intensity ψ^{(t)}_{i,j} with which component i is selected at each depth j gives information about the effect of the component in different contexts. Figure 3 shows generated digits as the intensity of component i = 0 varies at different depths, revealing that the discovered component learned to vary the thickness of the digit regardless of the task at hand, with a more extreme effect at the initial layers. 
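This probing procedure amounts to sweeping one structure weight while holding everything else fixed. A sketch, reusing the compositional interface sketched earlier and assuming direct access to the psi matrix and a coordinate grid covering a 28×28 image (both assumptions of this illustration):

```python
import torch

@torch.no_grad()
def probe_component(model, structure, comp_idx, depth, intensities, coords):
    """Render the generated digit as psi^(t)[comp_idx, depth] sweeps over a
    range of intensities; the re-normalization keeps the depth's weights
    summing to one, mirroring the structure constraint."""
    images = []
    for a in intensities:
        structure.psi.data[comp_idx, depth] = a
        structure.psi.data[:, depth] /= structure.psi.data[:, depth].sum()
        images.append(model(coords, structure).reshape(28, 28))
    return images
```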
Additional details and more comprehensive visualizations are included in Appendix H.\nCONCLUSION\nWe presented a general framework for learning compositional structures in a lifelong learning setting. The key piece of our framework is the separation of the learning process into two stages: assimilation of new problems with existing components, and accommodation of newly discovered knowledge into the set of components. These stages have connections to Piagetian theory of development, opening the door for future investigations that bridge between lifelong machine learning and developmental psychology. We showed the flexibility of our framework by capturing nine different concrete algorithms within our framework, and demonstrated empirically in an extensive evaluation that these algorithms are stronger lifelong learners than existing approaches. More concretely, we demonstrated that both learning traditional monolithic architectures and naı̈vely training compositional structures via existing methods lead to substantially degraded performance. Our framework is simple conceptually, easy to combine with existing continual or compositional learning techniques, and effective in trading off the flexibility and stability required for lifelong learning.\nIn this work, we showed the potential of compositional structures to enable strong lifelong learning. One major line of open work remains properly understanding how to measure the quality of the obtained compositional solutions, especially in settings without obvious decompositions, like those we consider in Section 5.2. While our visualizations in Section 5.3 and results in Table F.4 suggest that our method obtains reusable components, we currently lack a proper metric to assess the degree to which the learned structures are compositional. We leave this question open for future investigation." }, { "heading": "ACKNOWLEDGMENTS", "text": "We would like to thank Seungwon Lee, Boyu Wang, and Oleh Rybkin for their feedback on various versions of this draft. We would also like to thank the anonymous reviewers for their valuable feedback and suggestions. The research presented in this paper was partially supported by the DARPA Lifelong Learning Machines program under grant FA8750-18-2-0117, the DARPA SAIL-ON program under contract HR001120C0040, and the Army Research Office under MURI grant W911NF20-1-0080. The views and conclusions in this paper are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency, the Army Research Office, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein." }, { "heading": "A DEPICTION OF OUR COMPOSITIONAL LIFELONG LEARNING FRAMEWORK", "text": "Section 4 in the main paper presented our general-purpose framework for lifelong learning of compositional structures. Figure A.1 illustrates our proposed framework, split into four learning stages: 1) initialization of components, 2) assimilation of new tasks with existing components, 3) adaptation of existing components with new knowledge, and 4) expansion of the set of components." }, { "heading": "B FRAMEWORK INSTANTIATIONS", "text": "In this section, we describe in more detail the specific implementations used in the experiments of Section 5 in the main paper. 
To simplify evaluation, we fixed structUpdates, adaptFreq, and compUpdates such that the learning process would be split into multiple epochs of assimilation followed by a single final epoch of adaptation. This is the most extreme separation of the learning into assimilation and accommodation: no knowledge is accommodated into existing components until after assimilation has finished. We study the effects of this choice in Appendix F.\nAlgorithms B.5–B.10 summarize the implementations of all instantiations of our framework used in our experiments. Shared subroutines are included as Algorithms B.1–B.4, and we leave blank lines in compositional methods to highlight missing steps from their dynamic + compositional counter-\nparts. The learner first initializes the components by jointly training on the first Tinit tasks it encounters. At every subsequent time t, during the first (numEpochs−1) epochs, the agent assimilates the new task by training the task-specific structure parameters ψ(t) via backpropagation, learning how to combine existing components for task T (t). For dynamic + compositional methods, this assimilation step is done with component dropout, and simultaneously optimizes the parameters of the newly added component φk+1. The adaptation step varies according to the base lifelong learning method, applying techniques for avoiding forgetting to the whole set of component parameters Φ for one epoch. This step is done via component dropout for dynamic + compositional methods. Finally, dynamic + compositional methods discard the fresh component if it does not improve performance by more than a threshold τ on the current task T (t), and otherwise keep it for future training.\nAlgorithm B.1 Initialization\n1: s(t) ← randomInitialization() 2: init buff← init buff ∪ T (t).train 3: if t = Tinit − 1 4: for i = 1, . . . , numEpochs 5: for t̃,x← init buff 6: Φ← Φ− η∇ΦL(t̃)(f (t̃)(x)) 7: end for 8: end for . backprop on components 9: end if\nAlgorithm B.2 Expansion\n1: a1 ← accuracy(T (t).validation) 2: hideComponent(k + 1) 3: a2 ← accuracy(T (t).validation) 4: recoverComponent(k + 1) 5: if a1−a2a2 < τ . validation error check 6: discardComponent(k + 1) 7: k ← k + 1 8: end if\nAlgorithm B.3 Assimilation (Comp.)\n1: for i = 1, . . . , numEpochs− 1 2: for x← T (t).train 3: ψ(t) ← ψ(t) − η∇ψ(t)L(t)(f (t)(x))\n4: end for 5: end for . backprop on structure\nAlgorithm B.4 Assimilation (Dyn. + Comp.) 1: Φ← [Φ; randomVector()] . new comp. 2: ψ(t)k+1,1:k ← 1 3: for i = 1, . . . , numEpochs− 1 4: for x← T (t).train 5: ψ(t) ← ψ(t) − η∇ψ(t)L(t)(f (t)(x)) 6: φk+1←φk+1 − η∇φk+1L(t)(f (t)(x)) 7: hideComponent(k + 1) 8: ψ(t) ← ψ(t) − η∇ψ(t)L(t)(f (t)(x)) 9: recoverComponent(k + 1) 10: end for 11: end for . component dropout\nAlgorithm B.5 Compositional ER\n1: while T (t) ← getTask() 2: if t < Tinit 3: Call Algorithm B.1 . initialization 4: else 5: Call Algorithm B.3 . assimilation 6: for t̃,x← (t, T (t).train) ∪ buffer 7: Φ← Φ− η∇ΦL(t̃)(f (t̃)(x))\n8: end for . adaptation\n9: end if 10: buffer[t]← sample(T (t).train, nm) 11: end while\nAlgorithm B.6 Dynamic + Compositional ER\n1: while T (t) ← getTask() 2: if t < Tinit 3: Call Algorithm B.1 . initialization 4: else 5: Call Algorithm B.4 . assimilation 6: for t̃,x← (t, T (t).train) ∪ buffer 7: Φ← Φ− η∇ΦL(t̃)(f (t̃)(x)) 8: hideComponent(k + 1) 9: Φ← Φ− η∇ΦL(t̃)(f (t̃)(x))\n10: recoverComponent(k + 1) 11: end for . adaptation 12: Call Algorithm B.2 . 
expansion 13: end if 14: buffer[t]← sample(T (t).train, nm) 15: end while\nAlgorithm B.7 Compositional EWC\n1: while T (t) ← getTask() 2: if t < Tinit 3: Call Algorithm B.1 . initialization 4: else 5: Call Algorithm B.3 . assimilation 6: for x← T (t).train 7: A← ∑t−1 t̃ a (t̃)Φb(t̃) 8: g ← ∇ΦL(t)(f (t)(x)) + λ(A−B) 9: Φ← Φ− ηg\n10: end for . adaptation\n11: end if 12: a(t), b(t) ← KFAC(T (t).train,Φ) 13: B ← B − a(t)Φb(t) 14: end while\nAlgorithm B.8 Dynamic + Compositional EWC\n1: while T (t) ← getTask() 2: if t < Tinit 3: Call Algorithm B.1 . initialization 4: else 5: Call Algorithm B.4 . assimilation 6: for x← T (t).train 7: A← ∑t−1 t̃ a (t̃)Φb(t̃) 8: g ← ∇ΦL(t)(f (t)(x)) + λ(A−B) 9: Φ← Φ− ηg\n10: hideComponent(k + 1) 11: A← ∑t−1 t̃ a (t̃)Φb(t̃) 12: g ← ∇ΦL(t)(f (t)(x)) + λ(A−B) 13: Φ← Φ− ηg 14: recoverComponent(k + 1) 15: end for . adaptation 16: Call Algorithm B.2 . expansion 17: end if 18: a(t), b(t) ← KFAC(T (t).train,Φ) 19: B ← B − a(t)Φb(t) 20: end while\nAlgorithm B.9 Compositional NFT\n1: while T (t) ← getTask() 2: if t < Tinit 3: Call Algorithm B.1 . initialization 4: else 5: Call Algorithm B.3 . assimilation 6: for x← T (t).train 7: Φ← Φ− η∇ΦL(t)(f (t)(x))\n8: end for . adaptation\n9: end if 10: end while\nAlgorithm B.10 Dynamic + Compositional NFT\n1: while T (t) ← getTask() 2: if t < Tinit 3: Call Algorithm B.1 . initialization 4: else 5: Call Algorithm B.4 . assimilation 6: for x← T (t).train 7: Φ← Φ− η∇ΦL(t)(f (t)(x)) 8: hideComponent(k + 1) 9: Φ← Φ− η∇ΦL(t)(f (t)(x))\n10: recoverComponent(k + 1) 11: end for . adaptation 12: Call Algorithm B.2 . expansion 13: end if 14: end while" }, { "heading": "C COMPUTATIONAL COMPLEXITY DERIVATION", "text": "We now derive asymptotic bounds for the computational complexity of all baselines and instantiations of our framework presented in Section 4.3 in the main paper. We assume the network architecture uses fully connected layers, and soft layer ordering for compositional structures. Extending these results to convolutional layers and soft gating is straightforward.\nA single forward and backward pass through a standard fully connected layer of i inputs and o outputs requires O(io) computations, and is additive across layers. Assuming a binary classification net, the no-components architecture contains one input layer E(t) with d inputs and d̃ outputs, k layers with d̃ inputs and d̃ outputs, and one output layer D(t) with d̃ inputs and one output. Training such a net in the standard single-task setting then requires O ( dd̃+ d̃2k+ d̃ ) computations per input point. For a full epoch of training on a data set with n data points, the training cost would then be O ( nd̃ ( d̃k+d )) . This is exactly the computational cost of no-components NFT, since it ignores any information from past tasks during training, and leverages only the initialization of parameters.\nOn the other hand, a soft layer ordering net evaluates all k layers of size d̃ × d̃ at every one of the k depths in the network, resulting in a cost of O ( d̃2k2 ) for those layers. This results in an\noverall cost per epoch of O ( nd̃ ( d̃k2 + d )) for single-task training, and therefore also for joint NFT training. Since compositional methods do not use information from earlier tasks during assimilation, because they only train the task-specific structure s(t) during this stage, then the cost per epoch of assimilation is also O ( nd̃ ( d̃k2 + d )) . Dynamic + compositional methods can at most contain T\ncomponents if they add one new component for every seen task. 
This leads to a cost of O(d̃^2 kT) for the shared layers, and an overall cost per epoch of assimilation of O(nd̃(d̃kT + d)).

Kronecker-factored EWC requires computing two O(d̃ × d̃) matrices, a^(t) and b^(t), for every observed task. At each training iteration, EWC modifies the gradient of component i by adding λ Σ_{t=1}^{T} (a^(t) φ_i b^(t) − a^(t) φ_i^(t) b^(t)), where φ_i^(t) are the parameters of component i obtained after training on task T^(t). While the second term of this sum can be pre-computed and stored in memory, it is not possible to pre-compute the first term. Theoretically, one can apply Kronecker product properties to store a (prohibitively large) O(d̃^2 × d̃^2) matrix and avoid computing the per-task sum, but practical implementations avoid this and instead compute the sum for every task, at a cost of O(T d̃^3 k) per mini-batch. With O(n) mini-batches per epoch, we obtain an additional cost with respect to joint and no-components NFT of O(nT d̃^3 k). Note that this step is carried out after obtaining the gradients for each layer, and thus there is no additional k^2 term for joint EWC.

Deriving the complexity bound of ER simply requires extending the size of the batch of data from n to (T n_m + n) for a replay buffer size of n_m per task.

To put the computational complexity of dynamic + compositional methods into perspective, we compute the number of components required to solve T tasks. We consider networks with hard layer ordering, and assume that all T tasks can be represented by different orders over the same set of components. Given a network with k depths and k̃ components, it is possible to create k̃^k different layer orderings. If all T tasks require different orderings, then we require at least k̃ = T^(1/k) (the k-th root of T) components. Designing a lifelong learning algorithm that can provably attain this bound in the number of components, or any sublinear growth in T, remains an open problem.

For completeness, we note that the (very infrequent) adaptation steps for compositional methods incur the same computational cost as any epoch of joint methods. On the other hand, to obtain the cost of adaptation steps for dynamic + compositional methods, we need to replace the k^2 terms in the expressions for joint methods by kT, again noting that this corresponds to the worst case, where the agent adds a new component for every single task it encounters." }, { "heading": "D EVALUATION ON A TOY COMPOSITIONAL DATA SET", "text": "The results of Section 5.2 in the main paper were obtained on a suite of data sets that does not explicitly require any compositional structure. This deliberate choice enabled us to study the generality of our framework, and we found that algorithms that instantiate it work well across data sets with a range of feature representations, relations across tasks, number of tasks, and sample sizes. In this section, we introduce a data set that explicitly assumes a compositional structure that intuitively matches the assumptions of our soft layer ordering architecture, and we show that the results obtained for non-compositional data sets still hold for this class of problems.

We created the Objects data set with 48 classes, each corresponding to an object composed of a shape (circle, triangle, or square), color (orange, blue, pink, or green), and location (each of the four quadrants in the image). We generated n = 100 images of size 28 × 28 per class.
We uniformly sampled the center of the object from [cx − 3, cx + 3], [cy − 3, cy + 3], where cx and cy are the centers of the quadrant for each class, respectively. The RGB values were uniformly sampled from [r − 16, r + 16], [g − 16, g + 16], [b − 16, b + 16], where r, g, and b are the nominal RGB values for the color of the class. Finally, we uniformly sampled the size of the objects from [3, 7] pixels.

To test our approach in this setting, we created a lifelong version of the Objects data set by randomly splitting the data into 16 three-way classification tasks. 50% of the instances for each class were used as training data, 20% as validation data, and 30% as test data. We used soft ordering nets with k = 4 components of 64 fully connected hidden units shared across tasks, and a linear input transformation E(t) trained for each task. All agents trained for 100 epochs per task using a mini-batch of size 32, with compositional agents using 99 epochs for assimilation and a single epoch for adaptation. The regularization hyper-parameter for EWC was set to λ = 10^−3, and ER was given a replay buffer of size nm = 5. We ran 50 trials of each experiment with different random seeds controlling class splits for each task, training/validation/test splits for each class, and the ordering of tasks.

We evaluated all methods in four different settings. In the Random setting, classes were randomly split and ordered, matching the experimental setting of Section 5.2. The other, more challenging settings were created by holding out one shape, location, or color only for the final four (for color and location) or five (for shape) tasks, requiring the agents to adapt to never-seen components dynamically. Results in Table D.1 show that each of our methods outperforms all baselines in all settings, showcasing the ability of our framework to discover the underlying compositional structures." }, { "heading": "E EXPERIMENTAL SETTING", "text": "Below, we give additional details describing the experimental setting used in the main paper.

E.1 DATA SETS

The data sets used for linear experiments underwent the same processing and train/test split as in Ruvolo & Eaton (2013). For MNIST and Fashion, we randomly sampled pairs of digits to act as the positive and negative classes in each task, and allowed digits to be reused across tasks. For CUB and CIFAR, ten and five classes were used per task, respectively, without reusing classes across different tasks. CUB images were cropped by the provided bounding boxes and resized to 224×224. For these four data sets, we used the standard train/test split, and further divided the training set into 80% for training and 20% for validation. Finally, for Omniglot, we used each alphabet as one task, and split the data into 80% for training, 10% for validation, and 10% for test, for each task. For each of the ten trials, we varied the random seed which controlled the tasks (whenever the tasks were not fixed by definition), the random splits for training/validation/test, and the order in which the tasks were presented to the agent. Validation sets were only used by dynamic + compositional learners for selecting whether to keep a new component. Details are summarized in Table E.2.

E.2 NETWORK ARCHITECTURES

We used k = 4 components for all compositional algorithms with fixed k. This is the only architectural choice for linear models.
Below, we describe the architectures used for other experiments.

Table E.2: Data set details summary.

                FERA     Landmine  Schools  MNIST   Fashion  CUB        CIFAR     Omniglot
tasks           21       29        139      10      10       20         20        50
classes         2        2         —        2       2        10         5         14–55
features        100      9         27       784     784      512        32×32×3   105×105
feat. extract.  PCA      —         —        —       —        ResNet-18  —         —
train           225–499  222–345   11–125   ∼9500   ∼9500    ∼120       ∼2000     224–880
val             —        —         —        ∼2500   ∼2500    ∼30        ∼500      28–110
test            225–500  223–345   11–126   ∼2000   2000     ∼150       500       28–110

Soft layer ordering. We based our soft layer ordering architectures on those used by Meyerson & Miikkulainen (2018), whenever possible. For MNIST and Fashion, we used a random and fixed linear input transformation E(t) for each task, and each component was a fully connected layer of 64 units. For CUB, all tasks shared a fixed ResNet-18 pre-trained on ImageNet2 as an input transformation, followed by a task-specific input transformation E(t) given by a linear trained layer, and each component was a fully connected layer of 256 units. For CIFAR, there was no input transformation, and each component was a convolutional layer of 50 channels with 3 × 3 kernels and padding of 1 pixel, followed by a max-pooling layer of size 2 × 2. Finally, for Omniglot, there was also no input transformation, and each component was a convolutional layer of 53 channels with 3 × 3 kernels and no padding, followed by max-pooling of 2 × 2 patches. The input images to the convolutional nets in CIFAR and Omniglot were padded with all-zero channels in order to match the number of channels required by all component layers (50 and 53, respectively). All component layers were followed by ReLU activation and a dropout layer with dropout probability p = 0.5. The output of each network was a linear task-specific output transformation D(t) trained individually on each task. The architectures for jointly trained baselines were identical to these, and those for no-components baselines had the same layers but no mechanism to select the order of the layers.

Soft gating. The soft gating architectures closely mimicked the soft layer ordering architectures, all having the same input and output transformations, as well as the same components. The only difference was in the structure architectures. For fully connected nets, at each depth, the structure function s(t) was a linear layer that took as input the previous depth's output and whose output was a soft selection over the component layers for the current depth. For convolutional nets, there was one gating net per task with the same architecture as the main network. The structure s(t) was computed by passing the previous depth's output in the main network through the remaining depths in the gating network (e.g., the output of depth 2 in the original network was passed through depths 3 and 4 in the gating network to compute the structure over modules at depth 3).

E.3 ALGORITHM DETAILS

All agents trained for 100 epochs on each task, with a mini-batch of 32 samples. Compositional agents used the first 99 epochs solely for assimilation and the last epoch for adaptation. Dynamic + compositional agents followed this same process, but every assimilation step was done via component dropout; after the adaptation step, the agent kept the new component if its validation performance with the added component represented at least a 5% relative improvement over the performance without the additional component.
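The component-retention rule just described reduces to a simple relative-improvement check, sketched below; the function name, return convention, and the exact comparison direction are our assumptions for illustration, not details from the paper's code.

```python
def keep_new_component(acc_with, acc_without, tau=0.05):
    """Sketch of the expansion check (cf. Algorithm B.2): given validation
    accuracy with (a1) and without (a2) the freshly added component, the
    component is kept only if the relative improvement reaches tau."""
    return (acc_with - acc_without) / acc_without >= tau

# Example: 0.84 with vs. 0.78 without is a ~7.7% relative improvement,
# so the component would be kept at the default tau = 0.05.
assert keep_new_component(0.84, 0.78) is True
```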
Joint agents trained all components and the structure for the current task jointly during all 100 epochs, keeping the structure for the previous tasks fixed, while no-components agents trained the whole model at every epoch.

ER-based algorithms used a replay buffer of a single mini-batch per task. Similarly, EWC-based algorithms used a single mini-batch to compute the approximate Fisher information matrix required for regularization, and used a fixed regularization parameter λ = 10^−3.

To ensure a fair comparison, all algorithms, including our baselines, used the same initialization procedure by training the first Tinit = 4 tasks jointly, in order to encourage the network to generalize across tasks. For soft ordering nets, the order of modules for the initial tasks was initialized as a random one-hot vector for each task at each depth, ensuring that each component was selected at least once, and for soft gating nets, the gating nets were randomly initialized. The structures over initial tasks were kept fixed during training, modifying only the weights of the components.

2 The pre-trained ResNet-18 is provided by PyTorch, and we followed the pre-processing recommended at https://pytorch.org/docs/stable/torchvision/models.html.

[Figure F.2: Soft layer ordering accuracy, with final-performance bar charts for (a) MNIST, (b) Fashion, (c) CUB, (d) CIFAR, and (e) Omniglot comparing the Dyn. + Comp., Compositional, Joint, and No Comp. variants of ER, EWC, VAN, and FM, plus an inset of the final average performance gain (Forward and Final). Compositional agents outperformed baselines in most data sets for every adaptation method. Dyn. + comp. agents further improved performance, leading to our methods being strongest. Error bars denote standard errors.]

[Figure F.3: Smoothed learning curves (accuracy vs. number of epochs) with soft layer ordering using ER Dyn. + Comp., ER Compositional, ER Joint, and ER No Components on MNIST, Fashion, CUB, CIFAR, and Omniglot. Compositional methods did not exhibit decaying performance of early tasks, while joint and no-components baselines did.]" }, { "heading": "F ADDITIONAL RESULTS FOR QUANTITATIVE EXPERIMENTS", "text": "We now present detailed results that expand upon those presented in Section 5.2 in the main paper.

For completeness, we include expanded results from Figures 1 and 2 in the main paper, corresponding to soft layer ordering. Figure F.2 is a more detailed version of Figure 1, and shows the test accuracy immediately after each task was trained and after all tasks had been trained, separately for each data set.
Compositional algorithms conforming to our proposed framework achieve a better trade-off than others in flexibility and stability, leading to good adaptability to each task with little forgetting of previous tasks. Similarly, Figure F.3 shows learning curves similar to those in Figure 2 in the main paper, for each data set. Baselines that train components and structures jointly all exhibit a decay in the performance of earlier tasks as learning of future tasks progresses, whereas methods conforming to our framework do not. Results for soft gating nets display a similar behavior.

[Figure F.4: Catastrophic forgetting across data sets. Accuracy retention rate per task ID for (a) ER, (b) EWC, and (c) NFT with soft ordering, and (d) ER, (e) EWC, and (f) NFT with soft gating, comparing Dyn. + Comp., Compositional, Joint, and No Comp. variants. For data sets with more than ten tasks, we sampled ten interleaved tasks to match all the x-axes. Compositional algorithms had practically no forgetting, whereas jointly trained and no-components baselines forgot knowledge required to solve earlier tasks.]

Table F.3: Number of learned components. Standard errors reported after the ±.

Structure      Base  MNIST       Fashion     CUB         CIFAR       Omniglot
Soft ordering  ER    5.2 ± 0.3   4.9 ± 0.3   5.9 ± 0.3   19.1 ± 0.3  9.3 ± 0.3
               EWC   5.0 ± 0.3   4.7 ± 0.2   5.8 ± 0.2   19.6 ± 0.2  10.1 ± 0.3
               NFT   5.0 ± 0.2   4.8 ± 0.3   6.1 ± 0.3   17.7 ± 0.3  10.0 ± 0.7
               FM    10.0 ± 0.0  8.8 ± 0.2   6.5 ± 0.4   19.1 ± 0.4  10.2 ± 0.6
Soft gating    ER    4.0 ± 0.0   4.2 ± 0.1   —           4.1 ± 0.1   7.1 ± 0.4
               EWC   4.1 ± 0.1   4.0 ± 0.0   —           4.8 ± 0.2   7.4 ± 0.4
               NFT   4.1 ± 0.1   4.2 ± 0.1   —           4.1 ± 0.1   7.2 ± 0.3
               FM    5.4 ± 0.2   4.7 ± 0.2   —           4.4 ± 0.2   7.3 ± 0.4

The gap between the first and second bar for each algorithm in Figure F.2 is an indicator of the amount of catastrophic forgetting. However, it hides details of how forgetting affects each individual task. On the other hand, the decay rate of each task in Figure F.3 shows how each task is forgotten over time, but does not measure quantitatively how much forgetting occurred. Based on prior work (Lee et al., 2019), we evaluated the ratio of performance after each task was trained to after all tasks had been trained as a metric of knowledge retention. Results in Figure F.4 show that compositional methods exhibit substantially less catastrophic forgetting, particularly for the earlier tasks seen during training.

In our experiments, it was in many cases necessary to incorporate an expansion step in order for our algorithm to be sufficiently flexible to handle the stream of incoming tasks.
This expansion step enables our methods to dynamically add new components if the existing ones are insufficient to achieve good performance in the new task. Table F.3 shows the number of components learned by each dynamic algorithm using both soft ordering and soft gating, averaged across all ten trials. Notably, in the soft ordering case, in order for our methods to work on the CIFAR data set, they required learning almost one component per task. This explains why compositional algorithms without dynamic component additions were incapable of performing well on CIFAR. It is also worth noting that soft gating nets typically required adding fewer new components, which is to be expected, since the gating structure gives the learner substantially more flexibility. Recall that, as mentioned in Section 5.2.3, soft gating networks were unable to perform well on the CUB data set because of the small sample size, so the corresponding column is omitted from the table.

[Figure F.5: Accuracy of ER-based methods with varying data sizes (accuracy vs. number of data points) on MNIST, Fashion, and CUB, comparing Dyn. + Comp., Compositional, Joint, and No Comp. variants. Compositional methods performed better even with extremely little data per task. Shaded area represents standard errors.]

Table F.4: Number of tasks that reuse a component. A task reuses a component if its accuracy drops by more than 5% relative when the component is dropped. Standard errors reported after the ±.

Algorithm      Comp.  MNIST        Fashion      CUB           CIFAR         Omniglot
Compositional  0      6.40 ± 0.43  6.00 ± 0.24  13.40 ± 0.78  18.90 ± 0.30  46.40 ± 0.98
               1      4.90 ± 0.26  4.70 ± 0.28  7.90 ± 0.41   16.20 ± 0.80  30.60 ± 2.86
               2      4.10 ± 0.26  4.10 ± 0.17  5.90 ± 0.57   11.90 ± 1.14  18.80 ± 3.76
               3      3.00 ± 0.32  2.60 ± 0.32  3.20 ± 0.49   5.70 ± 0.97   10.90 ± 2.17
Dyn. + Comp.   0      4.70 ± 0.38  5.10 ± 0.26  9.80 ± 0.98   13.30 ± 1.27  21.90 ± 1.82
               1      3.60 ± 0.25  3.90 ± 0.22  6.20 ± 0.56   6.20 ± 0.61   12.30 ± 0.68
               2      2.80 ± 0.28  3.30 ± 0.28  4.40 ± 0.62   4.00 ± 0.35   9.20 ± 0.61
               3      1.90 ± 0.26  2.10 ± 0.22  2.70 ± 0.20   3.10 ± 0.26   7.40 ± 0.47
               4      —            —            —             3.00 ± 0.32   6.50 ± 0.49
               5      —            —            —             1.80 ± 0.13   5.00 ± 0.51
               6      —            —            —             1.50 ± 0.16   4.20 ± 0.46
               7      —            —            —             1.10 ± 0.09   3.40 ± 0.47
               8      —            —            —             1.00 ± 0.00   1.90 ± 0.30
               9–12   —            —            —             1.00 ± 0.00   —
               13–14  —            —            —             0.90 ± 0.09   —

One of the key aspects of lifelong learning is the ability to learn in the presence of little data for each task, using knowledge acquired from previous tasks to acquire better generalization for new tasks. To evaluate the sample efficiency of our algorithm, we varied the number of data points used for training for MNIST, Fashion, and CUB using the soft ordering structure and ER. We repeated the evaluation for 50 trials, each with a different random seed controlling the selection of classes and samples for each task, and the order over tasks. Learners were trained for 1,000 epochs, with our compositional methods alternating nine epochs of assimilation and one epoch of adaptation. We used a batch size of b = 32, and limited the replay buffer size to min(max(⌊0.1n⌋, 1), b) for each data size n.
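In code, this buffer sizing rule is just a clamp; a minimal sketch follows (the function name is ours):

```python
def replay_buffer_size(n, b=32):
    """Replay buffer size per task: min(max(floor(0.1 * n), 1), b), for n
    training points per task and batch size b, as stated above."""
    return min(max(n // 10, 1), b)

assert replay_buffer_size(16) == 1    # tiny tasks keep at least one sample
assert replay_buffer_size(512) == 32  # large tasks are capped at one batch
```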
Figure F.5 shows the learning accuracy for ER-based algorithms as a function of the number of training points, revealing that compositional algorithms work better than baselines even in the presence of very little data.

Our approach was designed to discover a set of components that are reusable across multiple tasks. To verify that this effectively occurs, we evaluated how many tasks reuse each component. Taking the models pre-trained via compositional and dynamic + compositional ER with soft layer ordering, we evaluated the accuracy of the model on each task if any individual component was removed from the model. We then considered a task to reuse a given component if removing it caused a relative drop in accuracy of more than 5%. Table F.4 shows the number of tasks that reuse each component. Since there is no fixed ordering over components across trials, we sorted each trial's components in descending order of the number of tasks that reused each component. Moreover, for dynamic + compositional ER, we only consider components that are created across all trials for a given data set to ensure all averages are statistically significant. We found that across all data sets and algorithms, all k = 4 components available from initialization were used by multiple tasks. For the Omniglot data set, we find that this behavior persists even for components that are dynamically added in the expansion step. However, this is not the case for the CIFAR data set, for which the first few dynamically added components are indeed reused by multiple tasks, but subsequent ones are used by a single task. This indicates that those components were added merely to increase performance on that individual task, but found no reusable knowledge useful for future tasks.

[Figure F.6: Effect of the assimilation and accommodation schedule. Average accuracy across tasks as a function of the number of assimilation epochs between accommodation epochs (adaptFreq, from 2 to 100), for ER, EWC, and NFT on MNIST, Fashion, and CUB, comparing Dyn. + Comp., Compositional, Joint, and No Comp. variants. Broadly, methods under our framework performed better with a schedule that favored stability, taking more assimilation steps before accommodating any new knowledge into the set of existing components.]

When designing algorithms under our framework, one needs to choose how to alternate the processes of assimilation and accommodation.
In most experiments so far, we considered the simplest case, where adaptation is entirely carried out after assimilation is finished. However, it is possible that other choices yield better results, enabling the learner to incorporate knowledge about the current task that further enables it to assimilate that task better. To study this question, we carried out additional experiments using ER variants on the MNIST, Fashion, and CUB data sets with soft layer ordering. Instead of executing the adaptation step only after completing assimilation, we alternated epochs of assimilation with epochs of adaptation with various frequencies. Results are displayed in Figure F.6. Generally, we found that it was beneficial to carry out adaptation steps infrequently, with a clear increasing trend in performance as the learner took more assimilation steps before each adaptation step. For MNIST and Fashion, we found that all choices of schedule led to improved performance over baselines, highlighting the benefits of splitting the learning process into assimilation and accommodation. For CUB, the results were more nuanced, with very fast accommodation rates achieving lower accuracy than the baselines. This is consistent with our results in Table 3, where compositional FM, equivalent to compositional ER with a schedule of infinite assimilation steps per accommodation step, performed nearly as well as compositional ER with a single adaptation epoch.

Table G.5: Average final accuracy across tasks on the Combined data set. Each column shows accuracy on the subset of tasks from each given data set, as labeled. Standard errors after ±.

Base  Algorithm      All data sets  MNIST        Fashion      CUB
ER    Dyn. + Comp.   86.5 ± 1.8%    99.5 ± 0.0%  98.0 ± 0.3%  74.2 ± 2.0%
      Compositional  82.1 ± 2.5%    99.5 ± 0.0%  97.8 ± 0.3%  65.5 ± 2.4%
      Joint          72.8 ± 4.1%    98.9 ± 0.3%  97.0 ± 0.7%  47.6 ± 6.2%
      No Comp.       47.4 ± 4.5%    91.8 ± 1.3%  83.5 ± 2.5%  7.1 ± 0.4%
EWC   Dyn. + Comp.   75.1 ± 3.2%    98.7 ± 0.5%  97.1 ± 0.7%  52.4 ± 2.9%
      Compositional  71.3 ± 4.0%    99.4 ± 0.0%  96.1 ± 0.9%  44.8 ± 3.5%
      Joint          52.2 ± 5.0%    85.1 ± 5.5%  88.6 ± 3.8%  17.5 ± 1.5%
      No Comp.       28.9 ± 2.8%    52.9 ± 1.6%  52.5 ± 1.4%  5.0 ± 0.4%
NFT   Dyn. + Comp.   75.5 ± 3.2%    99.1 ± 0.3%  96.2 ± 0.9%  53.3 ± 2.8%
      Compositional  70.6 ± 3.8%    98.5 ± 0.5%  95.6 ± 0.8%  44.2 ± 3.5%
      Joint          52.7 ± 4.9%    85.5 ± 4.9%  88.5 ± 3.7%  18.4 ± 1.7%
      No Comp.       34.6 ± 3.7%    61.3 ± 3.8%  59.8 ± 3.6%  8.7 ± 0.5%
FM    Dyn. + Comp.   83.8 ± 2.0%    99.6 ± 0.0%  98.3 ± 0.3%  68.7 ± 1.5%
      Compositional  74.6 ± 3.1%    99.5 ± 0.0%  98.1 ± 0.3%  50.3 ± 2.0%" }, { "heading": "G COMPLETE RESULTS ON SEQUENCES OF DIVERSE TASKS", "text": "We now describe in more detail the experimental setting used to obtain the results in Section 5.2.4 in the main paper, and provide the complete table of results using all our instantiations and baselines on the Combined data set.

We combined the 10 MNIST tasks, 10 Fashion tasks, and 20 CUB tasks into a single lifelong learning data set with T = 40 tasks, and trained all our methods on this new Combined data set. None of the methods were informed in any way that the tasks came from different data sets, and each learner was simply required to learn all tasks consecutively exactly as in the remaining experiments. We used the soft layer ordering structure, with the architecture used for the CUB tasks described in Appendix E. Only CUB images were processed with the pre-trained ResNet-18, whereas MNIST and Fashion images were fed directly to the task-specific input transformation E(t). The results are summarized in Table G.5.
As expected, our methods clearly outperformed all baselines, by a much wider margin than in the single-data-set settings of Section 5.2.2 in the main paper. In particular, no-components baselines (those with monolithic architectures) were completely incapable of learning to solve the CUB tasks. Even the jointly trained variants, which do have compositional structures but learn them naïvely with existing lifelong methods, failed drastically. Our methods were far better, especially when using ER as the base adaptation method.

Note that, in order to match the requirements of the CUB data set, the architecture we used gave MNIST and Fashion a higher capacity (layers of size 256 vs 64) and the ability to train the input transformation for each task individually (instead of keeping it fixed) compared to the architecture described in Appendix E. This explains the higher performance of most methods on those two data sets compared to the results in Table 3 in the main paper.

H VISUALIZATION OF THE LEARNED COMPONENTS

The primary motivation for our framework was the creation of lifelong learning algorithms capable of discovering self-contained, reusable components, useful for solving a variety of tasks. In this section, we provide additional details about the visualization experiment of Section 5.3, as well as a more comprehensive study of various components and a comparison to the joint ER baseline.

[Figure H.7: Visualization of reconstructed MNIST “4” digits on the last two tasks (Tasks 9 and 10) seen by the (a) compositional and (b) joint variants of ER with soft layer ordering, varying the intensity with which component i = 0 is selected at each depth. Compositional ER learned a component that performs a functional primitive: the more intensely the component is selected (moving from left to right on each row), the thinner the lines of the digit become. The magnitude of this effect decreases with depth (moving from top to bottom), with the digit completely disappearing as the component is more intensely selected at the earliest layers, but only becoming slightly sharper with intensity at the deepest layers. This effect is consistent across both tasks. Joint ER did not exhibit this consistent behavior, with different effects observed at different depths and for the different tasks.]

[Figure H.8: Visualization of reconstructed MNIST “4” digits, varying the intensity of component i = 1. The component learned via compositional ER consistently decreases the length of the left side of the digit and increases that of the right side. Again, we were unable to detect any consistency in the effect of the component learned via joint ER.]

We followed the visualization experiment of Meyerson & Miikkulainen (2018), where each task corresponded to a single image of the digit “4”, and each pixel in the image constituted one datapoint. The x, y coordinates of the pixel were used as features, and the pixel's intensity was the associated label. Pixel coordinates and intensities were normalized to [0, 1]. All pixels in the image were treated as training data, since we were interested in understanding the learned representations, as opposed to generalizing to unseen data.
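The pixel-task construction just described can be sketched as follows; the function name is illustrative, and 8-bit grayscale input images are assumed for the intensity normalization.

```python
import numpy as np

def image_to_pixel_task(image):
    """Sketch of the visualization task: one task per digit image, where each
    pixel is a data point with normalized (x, y) coordinates as features and
    normalized intensity as the label (all values in [0, 1])."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    features = np.stack([xs.ravel() / (w - 1), ys.ravel() / (h - 1)], axis=1)
    labels = image.ravel().astype(np.float64) / 255.0  # assumes uint8 pixels
    return features, labels  # (h*w, 2) features, (h*w,) labels
```

A soft ordering net is then trained to regress these labels with the binary cross-entropy loss, as described next.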
Our network had k = 4 components shared across all tasks, and used soft layer ordering to learn the structure s(t) for each task. We used a linear input transformation layer E(t) shared across all tasks, and a shared sigmoid output transformation layer D(t). Sharing the input and output transformations across tasks ensures that the only differences across the models of the different tasks are due to the structure of each task over the components. We trained the network to minimize the binary cross-entropy loss on T = 10 tasks for 1,000 epochs via the compositional and jointly trained versions of ER with a replay buffer and batch size of 32 pixel instances, updating the components of the compositional version every 100 epochs.

[Figure H.9: Visualization of reconstructed MNIST “4” digits, varying the intensity of component i = 2. As the intensity of the component learned via compositional ER increased, the digit changed from very sharp to very smooth. Joint ER again did not exhibit any consistent behavior.]

[Figure H.10: Visualization of reconstructed MNIST “4” digits, varying the intensity of component i = 3. This component also interpolates between sharper and smoother digits, while also rotating the digit. There was no consistency in the behavior of the component learned by Joint ER.]

To evaluate the ability of compositional ER to capture reusable functional primitives, we observed the reconstructed images output by our network as we varied the intensity ψ^(t)_{i,j} with which one specific component i is chosen at different depths j in the network. We focused our evaluation on the last two tasks seen by the learner, in order to disregard the effects of catastrophic forgetting, which rendered the visualizations of the outputs of the joint ER baseline incomprehensible for earlier tasks. Figures H.7–H.10 show these reconstructions as the intensity of each component individually varies at different depths. The components learned with compositional ER learned to produce effects on the digits consistent across tasks, with more extreme effects at the initial layers. In contrast, joint ER learned components whose effects are different for different tasks and at different depths." } ]
2021
LIFELONG LEARNING OF COMPOSITIONAL STRUCTURES
SP:0147099ac2866672f507e5e37383fa4f50addd0e
[ "This work proposes an RL approach SMiRL that is able to learn effective policies in unstable environments without the need for external reward. The idea at a high-level is almost the opposite of intrinsic motivation RL approaches, which encourage novelty-seeking behaviors. The proposed method instead aims to minimize surprise or state entropy. To train the agent, rewards come from state marginal estimates, but because this distribution is changing, the authors create an augmented MDP. Through experiments on game domains and robot control tasks, the authors show that SMiRL outperforms intrinsic motivation methods. The authors also show that SMiRL can be used to do imitation and can be combined with regular reward signals.", "The authors target the unsupervised reinforcement learning problem. An opposite idea from the existing approaches by maximizing state entropy is adopted to minimize state entropy. It is interesting that such an idea has achieved good performance in unstable environments. A state distribution is fitted during the interaction with an environment and the probability of the current state is used as a virtual reward. The parameters or sufficient statistics are also applied to the policy. The motivation is clear and verified. It is generally a good paper." ]
Every living organism struggles against disruptive environmental forces to carve out and maintain an orderly niche. We propose that such a struggle to achieve and preserve order might offer a principle for the emergence of useful behaviors in artificial agents. We formalize this idea into an unsupervised reinforcement learning method called surprise minimizing reinforcement learning (SMiRL). SMiRL alternates between learning a density model to evaluate the surprise of a stimulus, and improving the policy to seek more predictable stimuli. The policy seeks out stable and repeatable situations that counteract the environment’s prevailing sources of entropy. This might include avoiding other hostile agents, or finding a stable, balanced pose for a bipedal robot in the face of disturbance forces. We demonstrate that our surprise minimizing agents can successfully play Tetris, Doom, control a humanoid to avoid falls, and navigate to escape enemies in a maze without any task-specific reward supervision. We further show that SMiRL can be used together with standard task rewards to accelerate reward-driven learning.
[ { "affiliations": [], "name": "Glen Berseth" }, { "affiliations": [], "name": "Daniel Geng" } ]
[ { "authors": [ "Joshua Achiam", "Shankar Sastry" ], "title": "Surprise-based intrinsic motivation for deep reinforcement learning", "venue": "CoRR, abs/1703.01732,", "year": 2017 }, { "authors": [ "Joshua Achiam", "Shankar Sastry" ], "title": "Surprise-based intrinsic motivation for deep reinforcement learning", "venue": "arXiv preprint arXiv:1703.01732,", "year": 2017 }, { "authors": [ "Louis Annabi", "Alexandre Pitti", "Mathias Quoy" ], "title": "Autonomous learning and chaining of motor primitives using the free energy principle", "venue": "arXiv preprint arXiv:2005.05151,", "year": 2020 }, { "authors": [ "Yusuf Aytar", "Tobias Pfaff", "David Budden", "Thomas Paine", "Ziyu Wang", "Nando de Freitas" ], "title": "Playing hard exploration games by watching youtube", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Bowen Baker", "Ingmar Kanitscheider", "Todor Markov", "Yi Wu", "Glenn Powell", "Bob McGrew", "Igor Mordatch" ], "title": "Emergent tool use from multi-agent autocurricula", "venue": null, "year": 1909 }, { "authors": [ "Trapit Bansal", "Jakub Pachocki", "Szymon Sidor", "Ilya Sutskever", "Igor Mordatch" ], "title": "Emergent complexity via multi-agent competition", "venue": "arXiv preprint arXiv:1710.03748,", "year": 2017 }, { "authors": [ "Marc Bellemare", "Sriram Srinivasan", "Georg Ostrovski", "Tom Schaul", "David Saxton", "Remi Munos" ], "title": "Unifying count-based exploration and intrinsic motivation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "M. Biehl", "C. Guckelsberger", "C. Salge", "S. Smith", "D. Polani" ], "title": "Free energy , empowerment , and predictive information", "venue": null, "year": 2018 }, { "authors": [ "Yuri Burda", "Harri Edwards", "Deepak Pathak", "Amos Storkey", "Trevor Darrell", "Alexei A. Efros" ], "title": "Large-Scale Study of Curiosity-Driven Learning", "venue": "2018a. 
URL http://arxiv.org/abs/", "year": 2018 }, { "authors": [ "Yuri Burda", "Harrison Edwards", "Amos Storkey", "Oleg Klimov" ], "title": "Exploration by random network distillation", "venue": null, "year": 2018 }, { "authors": [ "Boyuan Chen", "Shuran Song", "Hod Lipson", "Carl Vondrick" ], "title": "Visual hide and seek", "venue": "In Artificial Life Conference Proceedings,", "year": 2020 }, { "authors": [ "Nuttapong Chentanez", "Andrew G Barto", "Satinder P Singh" ], "title": "Intrinsically motivated reinforcement learning", "venue": "In Advances in neural information processing systems,", "year": 2005 }, { "authors": [ "Maxime Chevalier-Boisvert", "Lucas Willems", "Suman Pal" ], "title": "Minimalistic gridworld environment for openai gym", "venue": "https://github.com/maximecb/gym-minigrid,", "year": 2018 }, { "authors": [ "Ashley D Edwards", "Himanshu Sahni", "Yannick Schroecker", "Charles L Isbell" ], "title": "Imitating latent policies from observation", "venue": "arXiv preprint arXiv:1805.07914,", "year": 2018 }, { "authors": [ "Mohammadjavad Faraji", "Kerstin Preuschoff", "Wulfram Gerstner" ], "title": "Balancing new against old information: the role of puzzlement surprise in learning", "venue": "Neural computation,", "year": 2018 }, { "authors": [ "Karl Friston" ], "title": "The free-energy principle: a rough guide to the brain", "venue": "Trends in cognitive sciences,", "year": 2009 }, { "authors": [ "Karl Friston", "Thomas FitzGerald", "Francesco Rigoli", "Philipp Schwartenbeck", "Giovanni Pezzulo" ], "title": "Active inference and learning", "venue": "Neuroscience & Biobehavioral Reviews,", "year": 2016 }, { "authors": [ "Karl J. Friston", "Jean Daunizeau", "Stefan J. Kiebel" ], "title": "Reinforcement learning or active inference", "venue": "PLOS ONE, 4(7):1–13,", "year": 2009 }, { "authors": [ "Elad Hazan", "Sham Kakade", "Karan Singh", "Abby Van Soest" ], "title": "Provably efficient maximum entropy exploration", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Rein Houthooft", "Xi Chen", "Yan Duan", "John Schulman", "Filip De Turck", "Pieter Abbeel" ], "title": "VIME: Variational Information Maximizing Exploration", "venue": null, "year": 2016 }, { "authors": [ "Michał Kempka", "Marek Wydmuch", "Grzegorz Runc", "Jakub Toczek", "Wojciech Jaśkowski" ], "title": "Vizdoom: A doom-based ai research platform for visual reinforcement learning", "venue": "IEEE Conference on Computational Intelligence and Games (CIG),", "year": 2016 }, { "authors": [ "Kuno Kim", "Megumi Sano", "Julian De Freitas", "Nick Haber", "Daniel Yamins" ], "title": "Active world model learning with progress curiosity", "venue": "arXiv preprint arXiv:2007.07853,", "year": 2020 }, { "authors": [ "Youngjin Kim", "Wontae Nam", "Hyunwoo Kim", "Ji-Hoon Kim", "Gunhee Kim" ], "title": "Curiosity-bottleneck: Exploration by distilling task-specific novelty", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "ICLR,", "year": 2014 }, { "authors": [ "Alexander S. Klyubin", "Daniel Polani", "Chrystopher L. 
Nehaniv" ], "title": "All else being equal be empowered", "venue": "Advances in Artificial Life,", "year": 2005 }, { "authors": [ "Lisa Lee", "Benjamin Eysenbach", "Emilio Parisotto", "Eric Xing", "Sergey Levine", "Ruslan Salakhutdinov" ], "title": "Efficient exploration via state marginal matching", "venue": null, "year": 1906 }, { "authors": [ "Joel Lehman", "Kenneth O Stanley" ], "title": "Abandoning objectives: Evolution through the search for novelty alone", "venue": "Evolutionary computation,", "year": 2011 }, { "authors": [ "YuXuan Liu", "Abhishek Gupta", "Pieter Abbeel", "Sergey Levine" ], "title": "Imitation from observation: Learning to imitate behaviors from raw video via context translation", "venue": "IEEE International Conference on Robotics and Automation (ICRA),", "year": 2018 }, { "authors": [ "Manuel Lopes", "Tobias Lang", "Marc Toussaint", "Pierre-Yves Oudeyer" ], "title": "Exploration in modelbased reinforcement learning by empirically estimating learning progress", "venue": "In Advances in neural information processing systems,", "year": 2012 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Alex Graves", "Ioannis Antonoglou", "Daan Wierstra", "Martin Riedmiller" ], "title": "Playing atari with deep reinforcement learning", "venue": "arXiv preprint arXiv:1312.5602,", "year": 2013 }, { "authors": [ "Shakir Mohamed", "Danilo Jimenez Rezende" ], "title": "Variational information maximisation for intrinsically motivated reinforcement learning", "venue": "Advances in Neural Information Processing Systems", "year": 2015 }, { "authors": [ "Pierre-Yves Oudeyer", "Frederic Kaplan" ], "title": "What is intrinsic motivation? a typology of computational approaches", "venue": "Frontiers in neurorobotics,", "year": 2009 }, { "authors": [ "Pierre-Yves Oudeyer", "Frdric Kaplan", "Verena V Hafner" ], "title": "Intrinsic motivation systems for autonomous mental development", "venue": "IEEE transactions on evolutionary computation,", "year": 2007 }, { "authors": [ "Deepak Pathak", "Pulkit Agrawal", "Alexei A Efros", "Trevor Darrell" ], "title": "Curiosity-driven Exploration by Self-supervised Prediction", "venue": null, "year": 2017 }, { "authors": [ "Deepak Pathak", "Dhiraj Gandhi", "Abhinav Gupta" ], "title": "Self-Supervised Exploration via Disagreement", "venue": null, "year": 2019 }, { "authors": [ "Jürgen Schmidhuber" ], "title": "Curious model-building control systems", "venue": "In Proc. international joint conference on neural networks,", "year": 1991 }, { "authors": [ "Eric D Schneider", "James J Kay" ], "title": "Life as a manifestation of the second law of thermodynamics", "venue": "Mathematical and computer modelling,", "year": 1994 }, { "authors": [ "Erwin Schrödinger" ], "title": "What is life? 
The physical aspect of the living cell and mind", "venue": null, "year": 1944 }, { "authors": [ "John Schulman", "Sergey Levine", "Pieter Abbeel", "Michael Jordan", "Philipp Moritz" ], "title": "Trust region policy optimization", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "Pranav Shyam", "Wojciech Jaśkowski", "Faustino Gomez" ], "title": "Model-based active exploration", "venue": "arXiv preprint arXiv:1810.12162,", "year": 2018 }, { "authors": [ "David Silver", "Thomas Hubert", "Julian Schrittwieser", "Ioannis Antonoglou", "Matthew Lai", "Arthur Guez", "Marc Lanctot", "Laurent Sifre", "Dharshan Kumaran", "Thore Graepel" ], "title": "Mastering chess and shogi by self-play with a general reinforcement learning algorithm", "venue": "arXiv preprint arXiv:1712.01815,", "year": 2017 }, { "authors": [ "Susanne Still", "Doina Precup" ], "title": "An information-theoretic approach to curiosity-driven reinforcement learning", "venue": "Theory in Biosciences,", "year": 2012 }, { "authors": [ "Sainbayar Sukhbaatar", "Zeming Lin", "Ilya Kostrikov", "Gabriel Synnaeve", "Arthur Szlam", "Rob Fergus" ], "title": "Intrinsic motivation and automatic curricula via asymmetric self-play", "venue": "arXiv preprint arXiv:1703.05407,", "year": 2017 }, { "authors": [ "Yi Sun", "Faustino Gomez", "Jürgen Schmidhuber" ], "title": "Planning to be surprised: Optimal bayesian exploration in dynamic environments", "venue": "In International Conference on Artificial General Intelligence,", "year": 2011 }, { "authors": [ "Faraz Torabi", "Garrett Warnell", "Peter Stone" ], "title": "Behavioral cloning from observation", "venue": "arXiv preprint arXiv:1805.01954,", "year": 2018 }, { "authors": [ "Faraz Torabi", "Garrett Warnell", "Peter Stone" ], "title": "Generative adversarial imitation from observation", "venue": "arXiv preprint arXiv:1807.06158,", "year": 2018 }, { "authors": [ "Alexander Tschantz", "Manuel Baltieri", "Anil K Seth", "Christopher L Buckley" ], "title": "Scaling active inference", "venue": "In 2020 International Joint Conference on Neural Networks (IJCNN),", "year": 2020 }, { "authors": [ "Alexander Tschantz", "Beren Millidge", "Anil K Seth", "Christopher L Buckley" ], "title": "Reinforcement learning through active inference", "venue": "arXiv preprint arXiv:2002.12636,", "year": 2020 }, { "authors": [ "Kai Ueltzhöffer" ], "title": "Deep active inference", "venue": "Biological Cybernetics,", "year": 2018 }, { "authors": [ "Luca Weihs", "Aniruddha Kembhavi", "Winson Han", "Alvaro Herrasti", "Eric Kolve", "Dustin Schwenk", "Roozbeh Mottaghi", "Ali Farhadi" ], "title": "Artificial agents learn flexible visual representations by playing a hiding", "venue": null, "year": 1912 }, { "authors": [ "Tschantz" ], "title": "2020) are interesting and discuss connections to active inference and RL. However, these methods and many based on active inference “encode” the task reward function as a “global prior” and minimizing a KL between the agents state distribution this “global prior”. Our work instead actively estimates a marginal over the distribution of states the agent visits", "venue": null, "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Organisms can carve out environmental niches within which they can maintain relative predictability amidst the entropy around them (Boltzmann, 1886; Schrödinger, 1944; Schneider & Kay, 1994; Friston, 2009). For example, humans go to great lengths to shield themselves from surprise — we band together to build cities with homes, supplying water, food, gas, and electricity to control the deterioration of our bodies and living spaces amidst heat, cold, wind and storm. These activities exercise sophisticated control over the environment, which makes the environment more predictable and less “surprising” (Friston, 2009; Friston et al., 2009). Could the motive of preserving order guide the automatic acquisition of useful behaviors in artificial agents?\nWe study this question in the context of unsupervised reinforcement learning, which deals with the problem of acquiring complex behaviors and skills with no supervision (labels) or incentives (external rewards). Many previously proposed unsupervised reinforcement learning methods focus on noveltyseeking behaviors (Schmidhuber, 1991; Lehman & Stanley, 2011; Still & Precup, 2012; Bellemare et al., 2016; Houthooft et al., 2016; Pathak et al., 2017). Such methods can lead to meaningful behavior in simulated environments, such as video games, where interesting and novel events mainly happen when the agent executes a specific and coherent pattern of behavior. However, we posit that in more realistic open-world environments, natural forces outside of the agent’s control already offer an excellent source of novelty: from other agents to unexpected natural forces, agents in these settings must contend with a constant stream of unexpected events. In such settings, rejecting perturbations and maintaining a steady equilibrium may pose a greater challenge than novelty seeking. Based on this observation, we devise an algorithm, surprise minimizing reinforcement learning (SMiRL), that specifically aims to reduce the entropy of the states visited by the agent.\nSMiRL maintains an estimate of the distribution of visited states, pθ(s), and a policy that seeks to reach likely future states under pθ(s). After each action, pθ(s) is updated with the new state, while the policy is conditioned on the parameters of this distribution to construct a stationary MDP. We illustrate this with a diagram in Figure 1a. We empirically evaluate SMiRL in a range of domains that\nare characterized by naturally increasing entropy, including video game environments based on Tetris and Doom, and simulated robot tasks that require controlling a humanoid robot to balance and walk. Our experiments show that, in environments that satisfy the assumptions of our method, SMiRL automatically discovers complex and coordinated behaviors without any reward signal, learning to successfully play Tetris, shoot enemies in Doom, and balance a humanoid robot at the edge of a cliff. We also show that SMiRL can provide an effective auxiliary objective when a reward signal is provided, accelerating learning in these domains substantially more effectively than pure novelty-seeking methods. Videos of our results are available online1" }, { "heading": "2 RELATED WORK", "text": "Prior work on unsupervised learning has proposed algorithms that learn without a reward function, such as empowerment (Klyubin et al., 2005; Mohamed & Jimenez Rezende, 2015) or intrinsic motivation (Chentanez et al., 2005; Oudeyer & Kaplan, 2009; Oudeyer et al., 2007). 
Intrinsic motivation has typically focused on encouraging novelty-seeking behaviors by maximizing model uncertainty (Houthooft et al., 2016; Still & Precup, 2012; Shyam et al., 2018; Pathak et al., 2019), by maximizing model prediction error or improvement (Lopes et al., 2012; Pathak et al., 2017), through state visitation counts (Bellemare et al., 2016), via surprise maximization (Achiam & Sastry, 2017b; Schmidhuber, 1991; Sun et al., 2011), and through other novelty-based reward bonuses (Lehman & Stanley, 2011; Achiam & Sastry, 2017a; Burda et al., 2018a; Kim et al., 2019). We do the opposite. Inspired by the free energy principle (Friston, 2009; Friston et al., 2009; Ueltzhöffer, 2018; Faraji et al., 2018; Friston et al., 2016), including recent methods that train policies with RL to encode a prior over desired observations (Tschantz et al., 2020a;b; Annabi et al., 2020), we instead incentivize an agent to minimize surprise over the distribution of states generated by the policy in unstable environments, and study the resulting behaviors. In such environments, which we believe are more reflective of the real world, it is non-trivial to achieve low-entropy state distributions. Learning progress methods that minimize model parameter entropy (Lopes et al., 2012; Kim et al., 2020) avoid the issues novelty-based methods have with noisy distractors. These methods are based on learning the parameters of the dynamics, whereas our method learns to control the marginal state distribution.
Several works aim to maximize state entropy to encourage exploration (Lee et al., 2019; Hazan et al., 2019). Our method aims to do the opposite, minimizing state entropy. Recent work connects the free energy principle, empowerment, and predictive information maximization under the same framework to understand their differences (Biehl et al., 2018). Existing work has also studied how competitive self-play and competitive, multi-agent environments can lead to complex behaviors with minimal reward information (Silver et al., 2017; Bansal et al., 2017; Sukhbaatar et al., 2017; Baker et al., 2019; Weihs et al., 2019; Chen et al., 2020). Like these works, we also consider how complex behaviors can emerge in resource-constrained environments, but instead of multi-agent competition, we utilize surprise minimization to drive the emergence of complex skills.
1https://sites.google.com/view/surpriseminimization" }, { "heading": "3 SURPRISE MINIMIZING AGENTS", "text": "We propose surprise minimization as a means to operationalize the idea of learning useful behaviors by seeking out low-entropy state distributions. The long-term effects of actions on surprise can be subtle, since actions change both (i) the state that the agent is in, and (ii) its beliefs, represented by a model pθ(s), about which states are likely under its current policy. SMiRL induces the agent to modify its policy π so that it encounters states s with high pθ(s), as well as to seek out states that will change the model pθ(s) so that future states are more likely. In this section, we will first describe what we mean by unstable environments and provide the surprise minimization problem statement, and then present our practical deep reinforcement learning algorithm for learning policies that minimize surprise.
Many commonly used reinforcement learning benchmark environments are stable, in the sense that the agent remains in a narrow range of starting states unless it takes coordinated and purposeful actions.
In such settings, unsupervised RL algorithms that seek out novelty can discover meaningful behaviors. However, many environments – including, as we argue, those that reflect properties commonly found in the real world – are unstable, in the sense that unexpected and disruptive events naturally lead to novelty and increased state entropy even if the agent does not carry out any particularly meaningful or purposeful behavior. In unstable environments, minimizing cumulative surprise requires taking actions to reach a stable distribution of states, and then acting continually and purposefully to stay in this distribution. An example of this is illustrated in Figure 1b: the agent’s environment is unstable due to varied weather. If the robot builds a shelter, it will initially experience unfamiliar states, but in the long term the observations inside the shelter are more stable and less surprising than those outside. Another example is the game of Tetris (Figure 2), where the environment spawns new blocks and drops them into random configurations, unless a skilled agent takes actions to control the board. The challenge of maintaining low entropy in unstable settings forces the SMiRL agent to acquire meaningful skills. We defer a more precise definition of unstable environments to Section 4, where we describe several unstable environments and contrast them with the static environments that are more commonly found in RL benchmark tasks. In static environments, novelty-seeking methods must discover complex behaviors to increase entropy, leading to interesting behavior, while SMiRL may trivially find low-entropy policies. We show that the reverse is true for unstable environments: a novelty-seeking agent is satisfied with watching the environment change around it, while a surprise-minimizing agent must develop meaningful skills to lower entropy.
Problem statement. To instantiate SMiRL, we design a reinforcement learning agent that receives larger rewards for experiencing more familiar states, based on the history of states it has experienced during the current episode. This translates to learning a policy with the lowest state entropy. We assume a fully-observed controlled Markov process (CMP), where we use st to denote the state at time t, at to denote the agent’s action, p(s0) to denote the initial state distribution, and T(st+1|st, at) to denote the transition probabilities. The agent learns a policy πφ(a|s), parameterized by φ. The goal is to minimize the entropy of its state marginal distribution under its current policy πφ at each time step of the episode. We can estimate this entropy by fitting an estimate of the state marginal dπφ(st) at each time step t, given by pθt−1(st), using the states seen so far during the episode, τt = {s1, . . . , st}. The sum of the entropies of the state distributions over an episode can then be estimated as
$$\sum_{t=0}^{T} \mathcal{H}(s_t) = -\sum_{t=0}^{T} \mathbb{E}_{s_t \sim d^{\pi_\phi}(s_t)}\big[\log d^{\pi_\phi}(s_t)\big] \leq -\sum_{t=0}^{T} \mathbb{E}_{s_t \sim d^{\pi_\phi}(s_t)}\big[\log p_{\theta_{t-1}}(s_t)\big], \tag{1}$$
where the inequality becomes an equality if pθt−1(st) accurately models dπφ(st). Minimizing the right-hand side of this equation corresponds to maximizing an RL objective with rewards:
$$r(s_t) = \log p_{\theta_{t-1}}(s_t). \tag{2}$$
However, an optimal policy for solving this problem must take changes in the distribution pθt−1(st) into account when selecting actions, since this distribution changes at each step.
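To make Eq. 2 concrete, the following is a minimal numpy sketch (ours, not code from the paper) of the SMiRL reward under the diagonal-Gaussian density model used for the continuous-state tasks in Appendix B; the `eps` floor is an assumption added for numerical stability, and the result matches the Appendix B reward up to an additive constant.

```python
import numpy as np

def gaussian_smirl_reward(state_history, new_state, eps=1e-6):
    """r_t = log p_{theta_{t-1}}(s_{t+1}) with p_theta a diagonal Gaussian
    fit to the states seen so far in the current episode."""
    states = np.asarray(state_history)            # shape (t, state_dim)
    mu = states.mean(axis=0)
    var = states.var(axis=0) + eps                # floor keeps the log finite
    return float(np.sum(-0.5 * (np.log(2 * np.pi * var)
                                + (new_state - mu) ** 2 / var)))
```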
To ensure that the underlying RL optimization corresponds to a stationary and Markovian problem, we construct an augmented MDP to instantiate SMiRL in practice, which we describe in the following section.
Algorithm 1 SMiRL
1: while not converged do
2:     β ← {}  ▷ Reset experience
3:     for episode = 0, . . . , M do
4:         s0 ∼ p(s0); τ0 ← {s0}  ▷ Initialize state
5:         s̄0 ← (s0, θ0, 0)  ▷ Initialize augmented state
6:         for t = 0, . . . , T do
7:             at ∼ πφ(at|st, θt, t)  ▷ Get action
8:             st+1 ∼ T(st+1|st, at)  ▷ Step dynamics
9:             rt ← log pθt(st+1)  ▷ SMiRL reward
10:            τt+1 ← τt ∪ {st+1}  ▷ Record state
11:            θt+1 ← U(τt+1)  ▷ Fit model
12:            s̄t+1 ← (st+1, θt+1, t + 1)
13:            β ← β ∪ {(s̄t, at, rt, s̄t+1)}
14:        end for
15:    end for
16:    φ ← RL(φ, β)  ▷ Update policy
17: end while
Training SMiRL agents. In order to instantiate SMiRL, we construct an augmented MDP out of the original CMP, where the reward in Equation (2) can be expressed entirely as a function of the state. This augmented MDP has a state space that includes the original state st, as well as the sufficient statistics of pθt(s). For example, if pθt(s) is a normal distribution with parameters θt, then (θt, t) – the parameters of the distribution and the number of states seen so far – represents a sufficient statistic. Note that it is possible to use other, more complicated, methods to summarize the statistics, including reading in the entirety of τt using a recurrent model. The policy conditioned on the augmented state is then given by πφ(at|st, θt, t). The parameters of the sufficient statistics are updated as θt = U(τt) via a maximum-likelihood state density estimation process, $\theta_t = \arg\max_\theta \sum_{n=0}^{t} \log p_\theta(s_n)$, over the experience within the episode τt. When (θt, t) is a sufficient statistic, the update may be written as θt = U(st, θt−1, t − 1). Specific update functions U(τt) used in our experiments are described in Appendix C and at the end of this section. Since the reward is given by r(st, θt−1, t − 1) = log pθt−1(st), and θt is a function of st and (θt−1, t − 1), the resulting RL problem is fully Markovian and stationary, and as a result standard RL algorithms will converge to locally optimal solutions. Appendix D includes details on the MDP dynamics. In Figure 8, we illustrate the evolution of pθt(s) during an episode of the game Tetris. The pseudocode for this algorithm is presented in Algorithm 1.
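The loop below is a compact Python sketch of Algorithm 1's inner episode (lines 4–14). Here `env`, `policy`, and `fit_density` are hypothetical interfaces standing in for the CMP, πφ, and the update U, and `theta.log_prob` is an assumed method returning log pθ(·); none of these names come from the paper.

```python
def run_smirl_episode(env, policy, fit_density, T):
    s = env.reset()
    tau = [s]                         # episode state history tau_0
    theta = fit_density(tau)          # theta_0 = U(tau_0)
    transitions = []
    for t in range(T):
        a = policy(s, theta, t)               # a_t ~ pi_phi(a | s, theta, t)
        s_next = env.step(a)                  # s_{t+1} ~ T(. | s_t, a_t)
        r = theta.log_prob(s_next)            # SMiRL reward, Eq. 2
        tau.append(s_next)
        theta_next = fit_density(tau)         # theta_{t+1} = U(tau_{t+1})
        transitions.append(((s, theta, t), a, r, (s_next, theta_next, t + 1)))
        s, theta = s_next, theta_next
    return transitions                # appended to beta for the RL update
```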
Density estimation with learned representations. SMiRL may, in principle, be used with any choice of model class for the density model pθt(s). As we show in our experiments, relatively simple distribution classes, such as products of independent marginals, suffice to run SMiRL in many environments. However, it may be desirable in more complex environments to use more sophisticated density estimators, especially when learning directly from high-dimensional observations such as images. In these cases, we can use variational autoencoders (VAEs) (Kingma & Welling, 2014) to learn a non-linear state representation. A VAE is trained using the standard ELBO objective to reconstruct states s after encoding them into a latent representation z via an encoder qω(z|s), with parameters ω. Thus, z can be viewed as a compressed representation of the state.
When using VAE representations, we train the VAE online together with the policy. This approach necessitates two changes to the procedure described in Algorithm 1. First, training a VAE requires more data than the simpler independent models, which can easily be fitted to data from individual episodes. We propose to overcome this by not resetting the VAE parameters between training episodes, and instead training the VAE across episodes. Second, instead of passing the VAE model parameters to the policy, we only update a distribution over the VAE latent state, given by pθt(z), such that pθt(z) replaces pθt(s) in the SMiRL algorithm, and is fitted to only that episode’s (encoded) state history. We represent pθt(z) as a normal distribution with a diagonal covariance, and fit it to the VAE encoder outputs. Thus, the mean and variance of pθt(z) are passed to the policy at each time step, along with t. This implements the density estimate in line 9 of Algorithm 1. The corresponding update U(τt) is:
$$z_0, \ldots, z_t = \mathbb{E}[q_\omega(z|s)] \text{ for } s \in \tau_t, \qquad \mu = \frac{1}{t+1}\sum_{j=0}^{t} z_j, \qquad \sigma = \frac{1}{t+1}\sum_{j=0}^{t} (\mu - z_j)^2, \qquad \theta_t = [\mu, \sigma].$$
Training the VAE online, over all previously seen data, deviates from the recipe in the previous section, where the density model was only updated within an episode. In this case, the model is updated after a collection of episodes. This makes the objective for RL somewhat non-stationary and could theoretically cause issues for convergence; however, we found in practice that the increased representational capacity provides a significant improvement in performance." }
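A sketch of the corresponding update U(τt) in code (ours; `encoder_means` holds E[qω(z|s)] for each state so far in the episode, and the small variance floor is an added assumption):

```python
import numpy as np

def latent_density_update(encoder_means, eps=1e-6):
    """Fit the diagonal Gaussian p_theta(z) to this episode's encoded states."""
    Z = np.asarray(encoder_means)          # shape (t + 1, latent_dim)
    mu = Z.mean(axis=0)
    var = ((Z - mu) ** 2).mean(axis=0) + eps
    return mu, var    # theta_t = [mu, sigma]; passed to the policy along with t
```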
, { "heading": "4 EVALUATION ENVIRONMENTS", "text": "We evaluate SMiRL on a range of environments, from video game domains to simulated robotic control scenarios. In these unstable environments, the world evolves automatically, without the goal-driven behavior of the agent, due to disruptive forces and adversaries. Standard RL benchmark tasks are typically static, in the sense that unexpected events do not happen unless the agent carries out a specific and coordinated sequence of actions. We therefore selected these environments specifically to be unstable, as we discuss below. This section describes each environment, with details of the corresponding MDPs in Appendix B. Illustrations of the environments are shown in Figure 2.
Tetris. The classic game offers a naturally unstable environment — the world evolves according to its own dynamics even in the absence of coordinated agent actions, piling pieces and filling the board. The agent’s task is to place randomly supplied blocks to construct and eliminate complete rows. The environment gives a reward of −1 when the agent fails or dies by stacking a column too high. Otherwise, the agent gets 0.
VizDoom. We consider two VizDoom environments from Kempka et al. (2016): TakeCover and DefendTheLine, where enemies throw fireballs at the agent, which can move around to avoid damage. TakeCover is unstable and evolving, with new enemies appearing over time and firing at the player. The agent is evaluated on how many fireballs hit it, which we term the “damage” taken by the agent.
HauntedHouse. This is a partially observed navigation task. The agent (red) starts on the left of the map, and is pursued by “enemies” (blue). To escape, the agent can navigate down the hallways and through randomly placed doors (green) to reach the safe room on the right, which the enemies cannot enter. To get to the safe room the agent must endure increased surprise early on, since the doors appear in different locations in each episode.
Simulated Humanoid robots. A simulated planar Humanoid agent must avoid falling in the face of external disturbances (Berseth et al., 2018). We evaluate four versions of this task. For Cliff, the agent is initialized sliding towards a cliff; in Treadmill, the agent is on a small platform moving backwards at 1 m/s. In Pedestal, random forces are applied to it, and objects are thrown at it. In Walk, we evaluate how the SMiRL reward stabilizes an agent that is learning to walk. In all four tasks, we evaluate the proportion of episodes in which the robot does not fall.
Training Details. For discrete action environments, the RL algorithm used is DQN (Mnih et al., 2013) with a target network. For the Humanoid domains, we use TRPO (Schulman et al., 2015). For Tetris and the Humanoid domains, the policies are parameterized by fully connected neural networks, while VizDoom uses a convolutional network. Additional details are in Appendix Section B.
Environment Stability. In Section 3, we described the connection between SMiRL and unstable environments. We can quantify how unstable an environment is by computing a relative entropy gap. We compare the entropy attained by three methods: an entropy-minimizing method (SMiRL), an entropy-maximizing method (RND), and an initial random (Random) policy (or, more generally, an uninformed policy, such as a randomly initialized neural network). In stable environments, an uninformed random policy would only attain slightly higher state entropy than one that minimizes the entropy explicitly (SMiRL − Random ∼ 0), whereas a novelty-seeking policy should attain much higher entropy (RND − Random > 0), indicating a relative entropy gap in the positive direction. In an unstable environment, we expect random policies and novelty-seeking policies to attain similar entropies, whereas entropy minimization should result in much lower entropy (SMiRL − Random < 0), indicating a negative entropy gap. To compute the entropy used in this evaluation, we used the approximation in Eq. 1 multiplied by −1 for three of our tasks as well as many Atari games studied in the RND paper (Burda et al., 2018b), with numbers shown in Table 1 and full results in Appendix E. Our environments have a large negative entropy gap, whereas most Atari games lack this clear entropy gap.2 We therefore expect SMiRL to perform well on the former tasks, which we use in the next section, but poorly on most Atari games. We show animations of the resulting policies on our anonymous project website.
2We expect that in all cases, random policies will have somewhat higher state entropy than SMiRL, so the entropy gap should be interpreted in a relative sense." }
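The entropy-gap diagnostic can be estimated directly from rollouts; below is a small sketch (our reading of the procedure, not the paper's code), where each argument is a collection of per-step log pθt−1(st) values gathered under the corresponding policy, and −1 times the Eq. 1 bound serves as the entropy estimate:

```python
import numpy as np

def entropy_gaps(smirl_logps, rnd_logps, random_logps):
    H = lambda logps: -float(np.mean(logps))       # entropy estimate via Eq. 1
    smirl_gap = H(smirl_logps) - H(random_logps)   # large negative => unstable
    rnd_gap = H(rnd_logps) - H(random_logps)       # large positive => stable
    return smirl_gap, rnd_gap
```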
, { "heading": "5 EXPERIMENTAL RESULTS", "text": "Our experiments aim to answer the following questions: (1) Can SMiRL learn meaningful and complex emergent behaviors without supervision? (2) Can we improve SMiRL by incorporating representation learning via VAEs, as described in Section 3? (3) Can SMiRL serve as a joint training objective to accelerate the acquisition of reward-guided behavior, and does it outperform prior intrinsic motivation methods in this role? We also illustrate several applications of SMiRL, showing that it can accelerate task learning, facilitate exploration, and implement a form of imitation learning. Video results of learned behaviors are available at https://sites.google.com/view/surpriseminimization" }, { "heading": "5.1 EMERGENT BEHAVIOR WITH UNSUPERVISED LEARNING", "text": "To answer (1), we evaluate SMiRL on the Tetris, VizDoom, and Humanoid tasks, studying its ability to generate purposeful coordinated behaviors without engineered task-specific rewards. We compare SMiRL to two intrinsic motivation methods, ICM (Pathak et al., 2017) and RND (Burda et al., 2018b), which seek out states that maximize surprise or novelty. For reference, we also include an Oracle baseline that directly optimizes the task reward. We find that SMiRL acquires meaningful emergent behaviors across these domains. In both the Tetris and VizDoom environments, stochastic and chaotic events force the SMiRL agent to take a coordinated course of action to avoid unusual states, such as full Tetris boards or fireball explosions. On Tetris, after training for 3000 epochs, SMiRL achieves near-perfect play, on par with the oracle baseline, with no deaths, indicating that SMiRL may provide better dense rewards than the Oracle reward, as shown in Figure 3 (top-left, top-middle). Figure 3 top-left and top-center show data from the same experiment plotted with two different metrics, where the Oracle is optimized for minimizing deaths. We include another oracle, Oracle (rows cleared), where the reward function is the number of rows cleared. ICM and RND seek novelty by creating more and more distinct patterns of blocks rather than clearing them, leading to deteriorating game scores over time. The SMiRL agent also learns emergent game-playing behavior in VizDoom, acquiring an effective policy for dodging the fireballs thrown by the enemies, illustrated in Figure 3 (top-right and bottom-left). Novelty-seeking methods once again yield deteriorating rewards over time. In Cliff, the SMiRL agent learns to brace against the ground and stabilize itself at the edge, as shown in Figure 2. In Treadmill, SMiRL learns to jump forward to increase the time it stays on the treadmill. In Pedestal, the agent must actively respond to persistent disturbances. We find that SMiRL learns a policy that can reliably keep the agent atop the pedestal, as shown in Figure 2. Figure 4 plots the reduction in falls in the Humanoid environments. Novelty-seeking methods learn irregular behaviors that cause the humanoid to jump off in the Cliff and Pedestal tasks and to roll around on the Treadmill, maximizing the variety (and quantity) of falls.
Next, we study how representation learning with a VAE improves the SMiRL algorithm (question (2)). In these experiments, we train a VAE model and estimate surprise in the VAE latent space. This leads to faster acquisition of the emergent behaviors for TakeCover (Figure 3, top-right), Cliff (Figure 4, left), and Treadmill (Figure 4, middle), where it also leads to a more successful locomotion behavior.
At first glance, the SMiRL surprise minimization objective appears to be the opposite of standard intrinsic motivation objectives (Bellemare et al., 2016; Pathak et al., 2017; Burda et al., 2018b) that seek out states with maximal surprise (i.e., novel states). However, while those approaches measure surprise with respect to all prior experience, SMiRL minimizes surprise over each episode. We demonstrate that these two approaches are in fact complementary. SMiRL can use conventional intrinsic motivation methods to aid in exploration so as to discover more effective policies for minimizing surprise. We can, therefore, combine these two methods and learn more sophisticated behaviors. While SMiRL on its own does not successfully produce a good walking gait on Treadmill, the addition of novelty-seeking intrinsic motivation allows increased exploration, which results in an improved walking gait that remains on the treadmill longer, as shown in Figure 4 (middle).
We\n2We expect that in all cases, random policies will have somewhat higher state entropy than SMiRL, so the entropy gap should be interpreted in a relative sense.\nevaluate this combined approach across environments including Pedestal and Cliff as well, where learning to avoid falls is also a challenge. For these two tasks SMiRL can already discover strong surprise minimizing policies and adding exploration bonuses does not provide additional benefit. In Figure 5 adding a bonus enables the agent to discover improved surprise minimizing strategies.\nSMiRL and long term surprise. Although the SMiRL objective by itself does not specifically encourage exploration, we observe that optimal SMiRL policies exhibit active “searching” behaviors, seeking out objects in the environment that would allow for reduced long-term surprise. For example, in HauntedHouse, the positions of the doors leading to the safe room change between episodes, and the policy trained with SMiRL learns to search for the doors to facilitate lower future surprise, even if finding the doors themselves yields higher short-term surprise. This behavior is illustrated in Figure 5, along with the “delayed gratification” plot, which shows that the SMiRL agent incurs higher surprise early in the episode, for the sake of much lower surprise later.\n5.2 APPLICATIONS OF SMIRL\nWhile the focus of this paper is on the emergent behaviors obtained by SMiRL, here we study more pragmatic applications. We show that SMiRL can be used for basic imitation and joint training to accelerate reward-driven learning.\nImitation. SMiRL can be adapted to perform imitation by initializing the prior via the buffer D0 with states from demonstrations, or individual desired outcome states. We initialize the buffer D0 in Tetris with user-specified desired board states. An illustration of the Tetris imitation task is presented in Figure 6, showing imitation of a box pattern (top) and a checkerboard pattern (bottom), with the leftmost frame showing the user-specified example, and the other\nframes showing actual states reached by the SMiRL agent. While several prior works have studied imitation without example actions (Liu et al., 2018; Torabi et al., 2018a; Aytar et al., 2018; Torabi et al., 2018b; Edwards et al., 2018; Lee et al.), this capability emerges automatically in SMiRL, without any further modification to the algorithm.\nSMiRL as an auxiliary reward. We explore how combining SMiRL with a task reward can lead to faster learning. We hypothesize that, when the task reward is aligned with avoiding unpredictable situations (e.g., falling or dying), adding SMiRL as an auxiliary reward can accelerate learning by providing a dense intermediate signal. The full reward is given by rcombined(s) = rtask(s)+αrSMiRL(s), where α is chosen to put the two reward terms at a similar magnitude. We study this application of SMiRL in the tasks: Tetris in Figure 3 (bottom-center), TakeCover in Figure 3 (bottom-right), DefendTheLine and Walk. On the easier tasks Tetris and TakeCover task (Figure 7), prior exploration methods generally lead to significantly worse performance and SMiRL improves learning speed. On the harder Walk and DefendTheLine tasks, the SMiRL reward accelerates learning substantially, and also significantly reduces the number of falls or deaths. 
We found that increasing the difficulty of TakeCover and DefendTheLine (via the environment’s difficulty setting (Kempka et al., 2016)) resulted in a clearer separation between SMiRL and other methods.
In Walk, we include a version of SMiRL with prior data, where pθ(s) is initialized with 8 walking trajectories (256 timesteps each), similar to the imitation setting. Incorporating prior data requires no modification to the SMiRL algorithm, and we can see in Figure 7 (middle and right) that this variant (“Reward + SMiRL + prior data”) further accelerates learning and reduces the number of falls. This shows that while SMiRL can learn from scratch, it is possible to encode prior knowledge in pθ(s) to improve learning." }, { "heading": "6 DISCUSSION", "text": "We presented an unsupervised reinforcement learning method based on minimizing surprise. We show that surprise minimization can be used to learn a variety of behaviors that reach “homeostasis,” putting the agent into stable state distributions in its environment. Across a range of tasks, these stable distributions correspond to useful, semantically meaningful, and complex behaviors: clearing rows in Tetris, avoiding fireballs in VizDoom, and learning to balance and hop with a bipedal robot. The key insight utilized by our method is that, in contrast to simple simulated domains, realistic environments exhibit unstable phenomena that gradually increase entropy over time. An agent that resists this growth in entropy must take effective and coordinated actions, thus learning increasingly complex behaviors. This stands in contrast to commonly proposed intrinsic exploration methods based on novelty.
Besides fully unsupervised reinforcement learning, where we show that our method can give rise to intelligent and sophisticated policies, we also illustrate several more practical applications of our approach. We show that surprise minimization can provide a general-purpose auxiliary reward that, when combined with task rewards, can improve learning in environments where avoiding catastrophic (and surprising) outcomes is desirable. We also show that SMiRL can be adapted to perform a rudimentary form of imitation.
Our investigation of surprise minimization suggests several directions for future work. The particular behavior of a surprise-minimizing agent is strongly influenced by the choice of state representation: by including or excluding particular observation modalities, the agent will be more or less surprised. Thus, tasks may be designed by choosing appropriate state or observation representations. Exploring this direction may lead to new ways of specifying behaviors for RL agents without explicit reward design. Other applications of surprise minimization may also be explored in future work, possibly for mitigating reward misspecification by disincentivizing any unusual behavior that likely deviates from what the reward designer intended. The experiments in this work make use of available or easy-to-learn state representations. Using these learned representations does not address the difficulty of estimating and minimizing surprise across episodes, or more generally over long sequences (possibly a single episode), which is a challenge for surprise minimization-based methods.
We believe that non-episodic surprise minimization is a promising direction for future research to study how surprise minimization can result in intelligent and sophisticated behavior that maintains homeostasis by acquiring increasingly complex behaviors.
Acknowledgments. The authors thank Aviral Kumar and Michael Janner for discussion. This research was supported by a DARPA Young Faculty Award #D13AP0046, Office of Naval Research, the National Science Foundation, NVIDIA, Amazon, and ARL DCIST CRA W911NF-17-2-0181." }, { "heading": "A STATE ENTROPY MINIMIZATION DERIVATION", "text": "Here we will show that the SMiRL reward function leads to a policy objective that lower-bounds the negative entropy of the state marginal distribution, $-\mathcal{H}(d^{\pi_\phi})$. In the infinite-horizon setting, the value of a trajectory $\tau = (s_0, a_0, s_1, a_1, \ldots)$ is given as the discounted cumulative reward: $R(\tau) = (1-\gamma)\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t)$. In our case, $r(s_t, a_t)$ is a function only of state: $r(s_t, a_t) = r(s_t) = \log p_\theta(s_t)$. The policy and dynamics define a trajectory distribution $p(\tau|\pi_\phi) = p(s_0)\prod_{t=0}^{\infty} p(s_{t+1}|s_t, a_t)\,\pi_\phi(a_t|s_t)$. The value of a policy is its expected cumulative reward:
$$V^{\pi_\phi} = \mathbb{E}_{\tau\sim p(\tau|\pi_\phi)} R(\tau) = (1-\gamma)\,\mathbb{E}_{\tau\sim p(\tau|\pi_\phi)} \sum_{t=0}^{\infty} \gamma^t r(s_t).$$
Using the indicator function $\mathbb{1}(a = b) \triangleq 1$ if $a = b$ and $0$ if $a \neq b$, the $t$-step state distribution and the discounted state marginal are given as:
$$d_t^{\pi_\phi}(s) = p(s_t = s|\pi_\phi) = \mathbb{E}_{\tau\sim p(\tau|\pi_\phi)} \mathbb{1}(s_t = s), \qquad d^{\pi_\phi}(s) = (1-\gamma)\sum_{t=0}^{\infty} \gamma^t d_t^{\pi_\phi}(s).$$
The expected reward under the discounted state marginal is equivalent to the policy value $V^{\pi_\phi}$:
$$\mathbb{E}_{s\sim d^{\pi_\phi}(s)}[r(s)] = \int d^{\pi_\phi}(s)\, r(s)\, ds = (1-\gamma)\,\mathbb{E}_{\tau\sim p(\tau|\pi_\phi)} \sum_{t=0}^{\infty} \gamma^t \int \mathbb{1}(s_t = s)\, r(s)\, ds = (1-\gamma)\,\mathbb{E}_{\tau\sim p(\tau|\pi_\phi)} \sum_{t=0}^{\infty} \gamma^t r(s_t) = V^{\pi_\phi}.$$
After incorporating the rewards, the policy value becomes:
$$V^{\pi_\phi} = \mathbb{E}_{s\sim d^{\pi_\phi}(s)}[r(s)] = \mathbb{E}_{s\sim d^{\pi_\phi}(s)}[\log p_\theta(s)] = J(\phi, \theta), \qquad J(\phi, \theta) = -\mathcal{H}(d^{\pi_\phi}, p_\theta) \leq -\mathcal{H}(d^{\pi_\phi}),$$
where $\mathcal{H}(d^{\pi_\phi}, p_\theta)$ denotes the cross-entropy between $d^{\pi_\phi}$ and $p_\theta$. Thus, by optimizing $\pi_\phi$ with reward function $\log p_\theta(s)$ via RL, we maximize the policy value, equivalent to the negative cross-entropy between the discounted state marginal and the model. By optimizing $p_\theta$ with maximum-likelihood density estimation (minimizing the forward cross-entropy) on states induced by $\pi_\phi$, we tighten the bound towards $-\mathcal{H}(d^{\pi_\phi}(s))$. When the model is perfect (i.e., $p_\theta = d^{\pi_\phi}$), the inequality becomes tight. As discussed in the main text, we cannot draw samples from $d^{\pi_\phi}(s)$. We can only sample trajectories of finite length $T$ by rolling out the policy $\pi_\phi$. In this case, the finite-horizon discounted state marginal can be written as:
$$\hat{d}^{\pi_\phi,T}(s) \triangleq \frac{1-\gamma}{1-\gamma^T}\sum_{t=0}^{T-1}\gamma^t\, p(s_t = s|\pi_\phi, t < T) = \frac{1-\gamma}{1-\gamma^T}\sum_{t=0}^{T-1}\gamma^t\, \mathbb{E}_{\tau\sim p(\tau|\pi_\phi)} \mathbb{1}(s_t = s, t < T).$$
Note that $\hat{d}^{\pi_\phi,T}(s) \geq 0$ for all $s$, and $\sum_s \hat{d}^{\pi_\phi,T}(s) = \frac{1-\gamma}{1-\gamma^T}\sum_{t=0}^{T-1}\gamma^t \sum_s p(s_t = s|\pi_\phi, t < T) = 1$. Moreover, $\hat{d}^{\pi_\phi,T}(s)$ converges to $d^{\pi_\phi}(s)$ as $T \to \infty$: $\lim_{T\to\infty}\hat{d}^{\pi_\phi,T} = (1-\gamma)\sum_{t=0}^{\infty}\gamma^t\,\mathbb{E}_{\tau\sim p(\tau|\pi_\phi)}\mathbb{1}(s_t = s) = d^{\pi_\phi}$. Thus, by using $\hat{d}^{\pi_\phi,T}(s)$ in place of $d^{\pi_\phi}(s)$, we obtain an objective, $-\mathcal{H}(\hat{d}^{\pi_\phi,T}(s), p_\theta(s))$, that we can approximate with a sample of finite-length trajectories and optimize with respect to $\phi$ using a policy-gradient reinforcement learning algorithm on the equivalent finite-horizon value function:
$$\bar{J}(\phi; \theta) = -\mathcal{H}(\hat{d}^{\pi_\phi,T}(s), p_\theta(s)) = V^{\pi_\phi,T} = \frac{1-\gamma}{1-\gamma^T}\,\mathbb{E}_{\tau\sim p(\tau|\pi_\phi)}\sum_{t=0}^{T-1}\gamma^t \log p_\theta(s_t).$$
The approximation to $J(\phi; \theta)$ improves as $T \to \infty$, since $\lim_{T\to\infty}\hat{d}^{\pi_\phi,T}(s) = d^{\pi_\phi}$." }
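The central inequality above, $-\mathcal{H}(d, p_\theta) \leq -\mathcal{H}(d)$, is Gibbs' inequality; a quick numerical spot check of our own on a random five-state example:

```python
import numpy as np

rng = np.random.default_rng(0)
d = rng.dirichlet(np.ones(5))    # discounted state marginal d^{pi_phi}
p = rng.dirichlet(np.ones(5))    # density model p_theta
H_d = -np.sum(d * np.log(d))     # entropy H(d)
H_dp = -np.sum(d * np.log(p))    # cross-entropy H(d, p_theta)
assert H_dp >= H_d               # hence -H(d, p_theta) <= -H(d); tight iff p = d
```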
, { "heading": "B ADDITIONAL IMPLEMENTATION DETAILS", "text": "Additional Training Details. The experiments in the paper used two different RL algorithms: one for discrete action environments (Double DQN) and one for continuous action environments (TRPO). For all environments trained with Double DQN (Tetris, VizDoom, HauntedHouse), we use a fixed episode length of 500 for training and collect 1000 samples between training rounds, each of which performs 1000 gradient steps on the network. The replay buffer size used is 50000. The same size is used for the additional data buffers for RND and ICM. For Tetris and HauntedHouse, a network with layer sizes [128, 64, 32] is used for both Q-networks. For VizDoom, the network includes 3 additional convolutional layers with [64, 32, 8] filters and strides [5, 4, 3], all using ReLU activations. A learning rate of 0.003 is used to train the Q-networks.
For the Humanoid environments, the network uses ReLU activations with hidden layer sizes [256, 128]. TRPO is used to train the policy, with the advantage estimated with Generalized Advantage Estimation. The training collects 4098 samples at a time, performs 64 gradient steps on the value function, and takes one step with TRPO. A fixed policy variance of 0.2 is used, scaled according to the action dimensions of the environment. Each episode consisted of 4 rounds of training, and it typically takes 20 hours to train one of the SMiRL policies using 8 threads. A KL constraint of 0.2 is used for TRPO, and a learning rate of 0.001 is used for training the value function. Next, we provide additional details on the state and action spaces of the environments and how θ was represented for each environment.
Tetris. We consider a 4 × 10 Tetris board with tromino shapes (composed of 3 squares). The observation is a binary image of the current board with one pixel per square, as well as an indicator integer for the shape that will appear next. A Bernoulli distribution is used to represent the sufficient statistics θ given to the policy for SMiRL. This distribution models the probability of a block being in each of the board locations. Double DQN is used to train the policy for this environment. The reward function used for this environment is based on the Tetris game, which gives more points for eliminating more rows at a single time.
VizDoom. For the VizDoom environments, the images are scaled down to 48 × 64 grayscale. A history of the latest 4 images is stacked together as separate channels. To greatly reduce the number of parameters θ that SMiRL needs to estimate in order to compute the state entropy, the image is further reduced to 20 dimensions. A Gaussian distribution is used to model the mean and variance over this state input. This same design is used for TakeCover and DefendTheLine. An episode time limit of 500 is used for each environment. Double DQN is used to train the policy for this environment.
Simulated Humanoid robots. A simulated planar Humanoid agent must avoid falling in the face of external disturbances (Berseth et al., 2018). The state space comprises the rotation of each joint and the linear velocity of each link. We evaluate four versions of this task: Cliff, Treadmill, Pedestal, and Walk. The Cliff task initializes the agent at the edge of a cliff, in a random pose and with a forward velocity of 1 m/s. Falling off the cliff leads to highly irregular and unpredictable configurations, so a surprise-minimizing agent will want to learn to stay on the cliff. In Treadmill, the agent starts on a platform that is moving backwards at 1 m/s.
In Pedestal, the agent starts on a thin pedestal; random forces are applied to the robot’s links, and boxes of random size are thrown at the agent. In Walk, we evaluate how the SMiRL reward stabilizes an agent that is learning to walk. In all four tasks, we evaluate the proportion of episodes in which the robot does not fall. A state is classified as a fall if the agent’s links, except for the feet, touch the ground, or if the agent is −5 meters or more below the platform or cliff. Since the state is continuous, we model pθ(s) as an independent Gaussian for these tasks. The full pose and link-velocity state is used to estimate θ for the Humanoid environments. The simulated robot has a control frequency of 30 Hz. TRPO is used to train the policy for this environment. Similar to VizDoom, p(s) is modeled as an independent Gaussian distribution for each dimension in the observation. Then, the SMiRL reward can be computed as:
$$r_{\text{SMiRL}}(s) = -\sum_i \left( \log \sigma_i + \frac{(s_i - \mu_i)^2}{2\sigma_i^2} \right),$$
where $s$ is a single state, $\mu_i$ and $\sigma_i$ are calculated as the sample mean and standard deviation from $D_t$, and $s_i$ is the $i$th observation feature of $s$.
HauntedHouse. This partially observed navigation environment is based on the gym_minigrid toolkit (Chevalier-Boisvert et al., 2018). The agent’s vision is changed to be centered around the agent. The experiments in the paper combine SMiRL with count-based curiosity measures (Counts) that are computed using the agent’s locations in the discrete environment. Similarly to the VizDoom and Humanoid environments, a Gaussian distribution over the agent’s observations is used to estimate θ. Double DQN is used to train the policy for this environment.
SMiRL VAE training. The encoders and decoders of the VAEs used for the VizDoom and Humanoid experiments are implemented as fully connected networks. The coefficient for the KL-divergence term in the VAE loss was 0.1 and 1.0 for the VizDoom and Humanoid experiments, respectively. We found it very helpful to train the VAE in batches. For the Humanoid experiments, where TRPO is used to train the policy, the VAE is trained every 4 TRPO data-collection phases. This helped make the learning process more stationary, improving convergence. The design of the networks used for the VAE mirrors the sizes and shapes of the policies used for training, described earlier in this section.
Fixed-length episodes. For SMiRL, it helped to use fixed-length episodes during training to keep SMiRL from terminating early. For example, in the VizDoom environments, SMiRL would otherwise result in policies that terminate as soon as possible so the agent would return to a similar initial state. In fact, for training we need to turn on god mode to prevent this behaviour. Similarly, to discourage SMiRL from terminating Tetris early by quickly stacking pieces in the same tower (resulting in low entropy), we added "soft resets" where the simulation resets when the game fails and the episode continues, forcing the SMiRL agent to learn how to eliminate rows to reduce the number of blocks in the scene." }, { "heading": "C SMIRL DISTRIBUTIONS", "text": "SMiRL on Tetris. In Tetris, since the state is a binary image, we model p(s) as a product of independent Bernoulli distributions for each board location.
The SMiRL reward $\log p_\theta(s)$ becomes:
$$r_{\text{SMiRL}}(s) = \sum_i s_i \log \theta_i + (1 - s_i) \log(1 - \theta_i),$$
where $s$ is a single state, the update procedure $\theta_i = U(D_t)$ returns the sample mean of $D_t$, indicating the proportion of datapoints where location $i$ has been occupied by a block, and $s_i$ is a binary variable indicating the presence of a block at location $i$. If the blocks stack to the top, the game board resets, but the episode continues and the dataset $D_t$ continues to accumulate states." }, { "heading": "D SMIRL MDP", "text": "Note that the RL algorithm in SMiRL is provided with a standard stationary MDP (except in the VAE setting, more on that below), where the state is augmented with the parameters of the belief over states θ and the timestep t. We emphasize that this MDP is Markovian, and therefore it is reasonable to expect any convergent reinforcement learning (RL) algorithm to converge to a near-optimal solution. Consider the augmented state transition p(st+1, θt+1, t + 1|st, at, θt, t). This transition model does not change over time because the updates to θ are deterministic when given st and t. The reward function r(st, θt, t) is also stationary, and is in fact deterministic given st and θt. Because SMiRL uses RL in an MDP, we benefit from the same convergence properties as other RL methods.
Transition dynamics of θt. Given the augmented state (st, θt, t), we show that the transition dynamics of the MDP are Markovian. The st portion of the augmented state is from the environment; therefore all convergence properties of RL hold. Here we show that (θt, t) is also Markovian given st+1. To this end, we describe the transition dynamics of (θt, t) for an incremental estimation of a Gaussian distribution, which is used in most experiments. Here we outline θt+1 = U(st, θt, t):
$$\theta_t = (\mu_t, \sigma_t^2), \qquad \mu_{t+1} = \frac{t\,\mu_t + s_t}{t+1}, \qquad \sigma_{t+1}^2 = \frac{t\,(\sigma_t^2 + \mu_t^2) + s_t^2}{t+1} - \mu_{t+1}^2, \qquad \theta_{t+1} = (\mu_{t+1}, \sigma_{t+1}^2), \qquad t_{t+1} = t_t + 1.$$
These dynamics are dependent on the current augmented state (st, θt, t) and the next state st+1 of the RL environment, and do not require an independent model fitting process.
However, the version of SMiRL that uses a representation learned from a VAE is not Markovian, because the VAE parameters are not added to the state s, and thus the reward function changes over time. We find that this does not hurt results, and note that many intrinsic reward methods such as ICM and RND also lack stationary reward functions." }
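These dynamics transcribe directly into code; a sketch of our own, where `mu` and `var` may be scalars or per-dimension arrays (broadcasting handles vector states):

```python
def update_gaussian_stats(mu, var, s, t):
    """theta_{t+1} = U(s_t, theta_t, t): the incremental Gaussian update above."""
    mu_next = (t * mu + s) / (t + 1)
    var_next = (t * (var + mu**2) + s**2) / (t + 1) - mu_next**2
    return mu_next, var_next, t + 1
```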
, { "heading": "E MORE ENVIRONMENT STABILITY DETAILS", "text": "Here we include the full data on the stability analysis in Figure 2. From this data and the additional results on the website, we can see that SMiRL can reduce the entropy of a few of the Atari environments as well. These include Assault, where SMiRL hides on the left but is good at shooting ships, and Carnival, where SMiRL also reduces the number of moving objects. RND, on the other hand, tends to induce entropy and cause many game flashes." }, { "heading": "F ADDITIONAL NOTES ON UNSUPERVISED RL RELATED WORK", "text": "The works in Tschantz et al. (2020b); Annabi et al. (2020) are interesting and discuss connections to active inference and RL. However, these methods, and many based on active inference, “encode” the task reward function as a “global prior” and minimize a KL divergence between the agent’s state distribution and this “global prior”. Our work instead actively estimates a marginal over the distribution of states the agent visits (with no prior data) and then minimizes this “online” estimate of the marginal, as is described in Section 3. Our work differs from LP-based methods (Kim et al., 2020; Lopes et al., 2012; Schmidhuber, 1991) because SMiRL is learning to control the marginal state distribution rather than identifying the system parameters." }, { "heading": "G ADDITIONAL RESULTS", "text": "To better understand the types of behaviors SMiRL produces, we conducted an experiment with fixed episode lengths on the Humanoid environments (Figure 9). This shows that SMiRL results in surprise-minimizing behaviors independent of how long the episode is." } ]
2021
SMIRL: SURPRISE MINIMIZING REINFORCEMENT LEARNING IN UNSTABLE ENVIRONMENTS
SP:eb3a644606a97c248271782c2d9c83e699a329b9
[ "Nonsymmetric determinantal point processes (NDPPs) received some attention recently because they allow modeling of both negative and positive correlations between items. This paper developed scalable learning and MAP inference algorithms with space and time complexity linear in ground set size, which is a huge improvement compared to previous approaches. Experimental results show that the algorithms scale significantly better, and can roughly match the predictive performance of prior work.", "This paper propose a decomposition for non-symmetric determinantal point process (NDPP) kernels (M*M) which reduces the requirements of storage and running to linear in cardinality (M). Additionally, they derive a NDPP maximum a posteriori inference algorithm that applies to both their proposed kernel and the previous work (NDPP). In their experiments, they show both learning kernels and the MAP inference for subset selection on real-life datasets. " ]
Determinantal point processes (DPPs) have attracted significant attention in machine learning for their ability to model subsets drawn from a large item collection. Recent work shows that nonsymmetric DPP (NDPP) kernels have significant advantages over symmetric kernels in terms of modeling power and predictive performance. However, for an item collection of size M , existing NDPP learning and inference algorithms require memory quadratic in M and runtime cubic (for learning) or quadratic (for inference) in M , making them impractical for many typical subset selection tasks. In this work, we develop a learning algorithm with space and time requirements linear in M by introducing a new NDPP kernel decomposition. We also derive a linear-complexity NDPP maximum a posteriori (MAP) inference algorithm that applies not only to our new kernel but also to that of prior work. Through evaluation on real-world datasets, we show that our algorithms scale significantly better, and can match the predictive performance of prior work.
[ { "affiliations": [], "name": "Mike Gartrell" }, { "affiliations": [], "name": "Insu Han" }, { "affiliations": [], "name": "Elvis Dohmatob" }, { "affiliations": [], "name": "Jennifer Gillenwater" } ]
[ { "authors": [ "Nima Anari", "Shayan Oveis Gharan", "Alireza Rezaei" ], "title": "Monte Carlo Markov Chain Algorithms for Sampling Strongly Rayleigh Distributions and Determinantal Point Processes", "venue": "In Conference on Learning Theory (COLT),", "year": 2016 }, { "authors": [ "Andrew An Bian", "Joachim M. Buhmann", "Andreas Krause", "Sebastian Tschiatschek" ], "title": "Guarantees for Greedy Maximization of Non-submodular Functions with Applications", "venue": "In International Conference on Machine Learning (ICML),", "year": 2017 }, { "authors": [ "Victor-Emmanuel Brunel" ], "title": "Learning Signed Determinantal Point Processes through the Principal Minor Assignment Problem", "venue": "In Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "Daqing Chen", "Sai Laing Sain", "Kun Guo" ], "title": "Data mining for the online retail industry: A case study of RFM model-based customer segmentation using data mining", "venue": "Journal of Database Marketing & Customer Strategy Management,", "year": 2012 }, { "authors": [ "Laming Chen", "Guoxin Zhang", "Eric Zhou" ], "title": "Fast greedy MAP inference for Determinantal Point Process to improve recommendation diversity", "venue": "In Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "Michał Dereziński" ], "title": "Fast determinantal point processes via distortion-free intermediate sampling", "venue": "In Conference on Learning Theory (COLT),", "year": 2019 }, { "authors": [ "Mohamed Elfeki", "Camille Couprie", "Morgane Riviere", "Mohamed Elhoseiny" ], "title": "GDPP: Learning Diverse Generations using Determinantal Point Processes", "venue": "In International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Li Fang" ], "title": "On the Spectra of P - and P0-Matrices", "venue": "In Linear Algebra and its Applications,", "year": 1989 }, { "authors": [ "Mike Gartrell", "Ulrich Paquet", "Noam Koenigstein" ], "title": "Low-Rank Factorization of Determinantal Point Processes", "venue": "In Conference on Artificial Intelligence (AAAI),", "year": 2017 }, { "authors": [ "Mike Gartrell", "Victor-Emmanuel Brunel", "Elvis Dohmatob", "Syrine Krichene" ], "title": "Learning Nonsymmetric Determinantal Point Processes", "venue": "In Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Jennifer Gillenwater", "Alex Kulesza", "Ben Taskar" ], "title": "Discovering Diverse and Salient Threads in Document Collections", "venue": "In Empirical Methods in Natural Language Processing (EMNLP),", "year": 2012 }, { "authors": [ "Jennifer Gillenwater", "Alex Kulesza", "Ben Taskar" ], "title": "Near-Optimal MAP Inference for Determinantal Point Processes", "venue": "In Neural Information Processing Systems (NIPS),", "year": 2012 }, { "authors": [ "Jennifer Gillenwater", "Alex Kulesza", "Emily Fox", "Ben Taskar" ], "title": "Expectation-Maximization for learning Determinantal Point Processes", "venue": "In Neural Information Processing Systems (NIPS),", "year": 2014 }, { "authors": [ "Jennifer Gillenwater", "Alex Kulesza", "Zelda Mariet", "Sergei Vassilvtiskii" ], "title": "A Tree-Based Method for Fast Repeated Sampling of Determinantal Point Processes", "venue": "In International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Insu Han", "Jennifer Gillenwater" ], "title": "MAP Inference for Customized Determinantal Point Processes via Maximum Inner Product Search", "venue": "In Conference on Artificial Intelligence and 
Statistics (AISTATS),", "year": 2020 }, { "authors": [ "Insu Han", "Prabhanjan Kambadur", "Kyoungsoo Park", "Jinwoo Shin" ], "title": "Faster greedy MAP inference for determinantal point processes", "venue": "In International Conference on Machine Learning (ICML),", "year": 2017 }, { "authors": [ "Yifan Hu", "Yehuda Koren", "Chris Volinsky" ], "title": "Collaborative Filtering for Implicit Feedback Datasets", "venue": "In International Conference on Data Mining (ICDM),", "year": 2008 }, { "authors": [ "Tarun Kathuria", "Amit Deshpande" ], "title": "On sampling and greedy map inference of constrained determinantal point processes", "venue": "arXiv preprint arXiv:1607.01551,", "year": 2016 }, { "authors": [ "A.K. Kelmans", "B.N. Kimelfeld" ], "title": "Multiplicative submodularity of a matrix’s principal minor as a function of the set of its rows and some combinatorial applications", "venue": "Discrete Mathematics,", "year": 1983 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Chun-Wa Ko", "Jon Lee", "Maurice Queyranne" ], "title": "An Exact Algorithm for Maximum Entropy Sampling", "venue": "Operations Research,", "year": 1995 }, { "authors": [ "Alex Kulesza", "Ben Taskar" ], "title": "Learning determinantal point processes", "venue": "In Conference on Uncertainty in Artificial Intelligence (UAI),", "year": 2011 }, { "authors": [ "Alex Kulesza", "Ben Taskar" ], "title": "Determinantal Point Processes for Machine Learning", "venue": "Foundations and Trends® in Machine Learning,", "year": 2012 }, { "authors": [ "Claire Launay", "Bruno Galerne", "Agnès Desolneux" ], "title": "Exact Sampling of Determinantal Point Processes without Eigendecomposition", "venue": "arXiv preprint arXiv:1802.08429,", "year": 2018 }, { "authors": [ "Chengtao Li", "Stefanie Jegelka", "Suvrit Sra" ], "title": "Fast DPP Sampling for Nystrom with Application to Kernel Methods", "venue": "In International Conference on Machine Learning (ICML),", "year": 2016 }, { "authors": [ "Yanen Li", "Jia Hu", "ChengXiang Zhai", "Ye Chen" ], "title": "Improving One-class Collaborative Filtering by Incorporating Rich User Information", "venue": "In Conference on Information and Knowledge Management (CIKM),", "year": 2010 }, { "authors": [ "Zelda Mariet", "Suvrit Sra" ], "title": "Fixed-point algorithms for learning Determinantal Point Processes", "venue": "In International Conference on Machine Learning (ICML),", "year": 2015 }, { "authors": [ "Zelda Mariet", "Suvrit Sra" ], "title": "Diversity Networks: Neural Network Compression Using Determinantal Point Processes", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2016 }, { "authors": [ "Zelda Mariet", "Mike Gartrell", "Suvrit Sra" ], "title": "Learning Determinantal Point Processes by Sampling Inferred Negatives", "venue": "In Conference on Artificial Intelligence and Statistics (AISTATS),", "year": 2019 }, { "authors": [ "Brian McFee", "Thierry Bertin-Mahieux", "Daniel PW Ellis", "Gert RG Lanckriet" ], "title": "The million song dataset challenge", "venue": "In Proceedings of the 21st International Conference on World Wide Web,", "year": 2012 }, { "authors": [ "G. Nemhauser", "L. Wolsey", "M. 
Fisher" ], "title": "An Analysis of Approximations for Maximizing Submodular Set Functions I", "venue": "Mathematical Programming,", "year": 1978 }, { "authors": [ "Jack Poulson" ], "title": "High-performance sampling of generic Determinantal Point Processes", "venue": "arXiv preprint arXiv:1905.00165,", "year": 2019 }, { "authors": [ "John E. Prussing" ], "title": "The Principal Minor Test for Semidefinite Matrices", "venue": "Journal of Guidance, Control, and Dynamics,", "year": 1986 }, { "authors": [ "Aidean Sharghi", "Ali Borji", "Chengtao Li", "Tianbao Yang", "Boqing Gong" ], "title": "Improving Sequential Determinantal Point Processes for Supervised Video Summarization", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "G Thompson" ], "title": "Normal forms for skew-symmetric matrices and Hamiltonian systems with first integrals linear in momenta", "venue": "In Proceedings of the American Mathematical Society,", "year": 1988 }, { "authors": [ "Robert C Thompson" ], "title": "Principal submatrices IX: Interlacing inequalities for singular values of submatrices", "venue": "Linear Algebra and its Applications,", "year": 1972 }, { "authors": [ "Michael J. Tsatsomeros" ], "title": "Generating and Detecting Matrices with Positive Principal Minors", "venue": "In Focus on Computational Neurobiology,", "year": 2004 }, { "authors": [ "Mark Wilhelm", "Ajith Ramanathan", "Alexander Bonomo", "Sagar Jain", "Ed H. Chi", "Jennifer Gillenwater" ], "title": "Practical Diversified Recommendations on YouTube with Determinantal Point Processes", "venue": "In Conference on Information and Knowledge Management (CIKM),", "year": 2018 }, { "authors": [ "Cheng Zhang", "Hedvig Kjellström", "Stephan Mandt" ], "title": "Determinantal Point Processes for Mini-Batch Diversification", "venue": "In Conference on Uncertainty in Artificial Intelligence (UAI),", "year": 2017 }, { "authors": [ "Gartrell" ], "title": "PRi,Y \\{i}. B HYPERPARAMETERS FOR EXPERIMENTS IN TABLE 2 Preventing numerical instabilities: The first term on the right side of Eq. 2 will be singular whenever |Yi| > K, where Yi is an observed subset. Therefore, to address this in practice we set K to the size of the largest subset observed in the data, K", "venue": null, "year": 2017 }, { "authors": [ "Gartrell" ], "title": "2019), we use D to denote the number of item feature dimensions for the symmetric component V , and D′ to denote the number of item feature dimensions for the nonsymmetric components, B and C", "venue": "Baseline NDPP (Gartrell et al.,", "year": 2019 }, { "authors": [], "title": "G that gives the maximum improvement of the determinant, if such i, j exist. Kathuria & Deshpande (2016) showed that running the search for such a swap O(k2 log(k/ε)) times", "venue": null, "year": 2016 }, { "authors": [ "Mirzasoleiman" ], "title": "M/k) log(1/ε) samples are enough to guarantee an (1− 1/e− ε)-approximation ratio for submodular functions (i.e., symmetric DPPs). We choose ε = 0.1 and set the number of samples to b(M/k) log(10)c. Under this setting, the time complexity of stochastic greedy is O(MKk2 log(1/ε))", "venue": null, "year": 2015 }, { "authors": [ "Li" ], "title": "2016) showed that O(Nk log(k/ε)) swaps", "venue": null, "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Determinantal point processes (DPPs) have proven useful for numerous machine learning tasks. For example, recent uses include summarization (Sharghi et al., 2018), recommender systems (Wilhelm et al., 2018), neural network compression (Mariet & Sra, 2016), kernel approximation (Li et al., 2016), multi-modal output generation (Elfeki et al., 2019), and batch selection, both for stochastic optimization (Zhang et al., 2017) and for active learning (Bıyık et al., 2019). For subset selection problems where the ground set of items to select from has cardinality M , the typical DPP is parameterized by an M ×M kernel matrix. Most prior work has been concerned with symmetric DPPs, where the kernel must equal its transpose. However, recent work has considered the more general class of nonsymmetric DPPs (NDPPs) and shown that these have additional useful modeling power (Brunel, 2018; Gartrell et al., 2019). In particular, unlike symmetric DPPs, which can only model negative correlations between items, NDPPs allow modeling of positive correlations, where the presence of item i in the selected set increases the probability that some other item j will also be selected. There are many intuitive examples of how positive correlations can be of practical importance. For example, consider a product recommendation task for a retail website, where a camera is found in a user’s shopping cart, and the goal is to display several other items that might be purchased. Relative to an empty cart, the presence of the camera probably increases the probability of buying an accessory like a tripod.\nAlthough NDPPs can theoretically model such behavior, the existing approach for NDPP learning and inference (Gartrell et al., 2019) is often impractical in terms of both storage and runtime requirements. These algorithms require memory quadratic in M and time quadratic (for inference) or cubic (for learning) in M ; for the not-unusual M of 1 million, this requires storing 8TB-size objects in memory, with runtime millions or billions of times slower than that of a linear-complexity method.\nIn this work, we make the following contributions:\nLearning: We propose a new decomposition of the NDPP kernel which reduces the storage and runtime requirements of learning and inference to linear in M . Fortuitously, the modified decomposition retains all of the previous decomposition’s modeling power, as it covers the same part of the NDPP kernel space. The algebraic manipulations we apply to get linear complexity for this decomposition cannot be applied to prior work, meaning that our new decomposition is crucial for scalability.\nInference: After learning, prior NDPP work applies a DPP conditioning algorithm to do subset expansion (Gartrell et al., 2019), with quadratic runtime in M . However, prior work does not examine the general problem of MAP inference for NDPPs, i.e., solving the problem of finding the highestprobability subset under a DPP. For symmetric DPPs, there exists a standard greedy MAP inference algorithm that is linear in M . In this work, we develop a version of this algorithm that is also linear for low-rank NDPPs. The low-rank requirement is unique to NDPPs, and highlights the fact that the transformation of the algorithm from the symmetric to the nonsymmetric space is non-trivial. 
To the best of our knowledge, this is the first MAP algorithm proposed for NDPPs.\nWe combine the above contributions through experiments that involve learning NDPP kernels and applying MAP inference to these kernels to do subset selection for several real-world datasets. These experiments demonstrate that our algorithms are much more scalable, and that the new kernel decomposition matches the predictive performance of the decomposition from prior work." }, { "heading": "2 BACKGROUND", "text": "Consider a finite set Y = {1, 2, . . . ,M} of cardinalityM , which we will also denote by [[M ]]. A DPP on [[M ]] defines a probability distribution over all of its 2M subsets. It is parameterized by a matrix L ∈ RM×M , called the kernel, such that the probability of each subset Y ⊆ [[M ]] is proportional to the determinant of its corresponding principal submatrix: Pr(Y ) ∝ det(LY ). The normalization constant for this distribution can be expressed as a single M ×M determinant: ∑ Y⊆[[M ]] det(LY ) = det(L + I) (Kulesza et al., 2012, Theorem 2.1). Hence, Pr(Y ) = det(LY )/ det(L + I). We will use PL to denote this distribution.\nFor intuition about the kernel parameters, notice that the probabilities of singletons {i} and {j} are proportional to Lii and Ljj , respectively. Hence, it is common to think of L’s diagonal as representing item qualities. The probability of a pair {i, j} is proportional to det(L{i,j}) = LiiLjj − LijLji. Thus, if −LijLji < 0, this indicates i and j interact negatively. Similarly, if −LijLji > 0, then i and j interact positively. Therefore, off-diagonal terms determine item interactions. (The vague term “interactions” can be replaced by the more precise term “correlations” if we consider the DPP’s marginal kernel instead; see Gartrell et al. (2019, Section 2.1) for an extensive discussion.)\nIn order to ensure that PL defines a probability distribution, all principal minors of L must be non-negative: det(LY ) ≥ 0. Matrices that satisfy this property are called P0-matrices (Fang, 1989, Definition 1). There is no known generative method or matrix decomposition that fully covers the space of all P0 matrices, although there are many that partially cover the space (Tsatsomeros, 2004).\nOne common partial solution is to use a decomposition that covers the space of symmetric P0 matrices. By restricting to the space of symmetric matrices, one can exploit the fact that L ∈ P0 if L is positive semidefinite (PSD)* (Prussing, 1986). Any symmetric PSD matrix can be written as the Gramian matrix of some set of vectors: L := V V >, where V ∈ RM×K . Hence, the V V > decomposition provides an easy means of generating the entire space of symmetric P0 matrices. It also has a nice intuitive interpretation: we can view the i-th row of V as a length-K feature vector describing item i.\nUnfortunately, the symmetry requirement limits the types of correlations that a DPP can capture. A symmetric model is able to capture only nonpositive interactions between items, since LijLji = L2ij ≥ 0, whereas a nonsymmetric L can also capture positive correlations. (Again, see Gartrell et al. (2019, Section 2.1) for more intuition.) To expand coverage to nonsymmetric matrices in P0, it is natural to consider nonsymmetric PSD matrices. In what follows, we denote by P+0 the set of all nonsymmetric (and symmetric) PSD matrices. Any nonsymmetric PSD matrix is in P0 (Gartrell et al., 2019, Lemma 1), so P+0 ⊆ P0. 
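To make these definitions concrete, the following toy sketch (ours, not part of any released implementation; a random M = 4 Gramian kernel is assumed) verifies that the subset probabilities normalize via det(L + I) and that a symmetric kernel can only encode nonpositive pairwise interactions:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
M, K = 4, 2
V = rng.standard_normal((M, K))
L = V @ V.T                                     # symmetric PSD kernel L = V V^T

def dpp_prob(L, Y):
    """Pr(Y) = det(L_Y) / det(L + I); the empty submatrix has det 1."""
    Ly = L[np.ix_(Y, Y)]
    return np.linalg.det(Ly) / np.linalg.det(L + np.eye(L.shape[0]))

subsets = [list(Y) for r in range(M + 1)
           for Y in itertools.combinations(range(M), r)]
print(np.isclose(sum(dpp_prob(L, Y) for Y in subsets), 1.0))   # True: all 2^M subsets sum to 1

# A symmetric kernel only encodes nonpositive interactions: -L_ij L_ji = -L_ij^2 <= 0.
print(-L[0, 1] * L[1, 0] <= 0.0)                               # True for every pair (i, j)
```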
However, unlike in the symmetric case, the set of nonsymmetric PSD\n*Recall that a matrix L ∈ RM×M is defined to be PSD if and only if x>Lx ≥ 0, for all x ∈ RM .\nmatrices does not fully cover the set of nonsymmetric P0 matrices. For example, consider\nL =\n( 1 5/3\n1/2 1\n) with det(L{1}),det(L{2}),det(L{1,2}) ≥ 0, but x>Lx < 0 for x = ( −1 1 ) .\nStill, nonsymmetric PSD matrices cover a large enough portion of the P0 space to be useful in practice, as evidenced by the experiments of Gartrell et al. (2019). This work covered the P+0 space by using the following decomposition: L := S + A, with S := V V > for V ∈ RM×K , and A := BC> −CB> for B,C ∈ RM×K . This decomposition makes use of the fact that any matrix L can be decomposed uniquely as the sum of a symmetric matrix S = (L + LT )/2 and a skew-symmetric matrix A = (L−LT )/2. All skew-symmetric matrices A are trivially PSD, since x>Ax = 0 for all x ∈ RM . Hence, the L here is guaranteed to be PSD simply because its S uses the standard Gramian decomposition V V >.\nIn this work we will also only consider P+0 , and leave to future work the problem of finding tractable ways to cover the rest of P0. We propose a new decomposition of L that also covers the P+0 space, but allows for more scalable learning. As in prior work, our decomposition has inner dimension K that could be as large as M , but is usually much smaller in practice. Our algorithms work well for modest values of K. In cases where the natural K is larger (e.g., natural language processing), random projections can often be used to significantly reduce K (Gillenwater et al., 2012a)." }, { "heading": "3 NEW KERNEL DECOMPOSITION AND SCALABLE LEARNING", "text": "Prior work on NDPPs proposed a maximum likelihood estimation (MLE) algorithm (Gartrell et al., 2019). Due to that work’s particular kernel decomposition, this algorithm had complexity cubic in the number of items M . Here, we propose a kernel decomposition that reduces this to linear in M .\nWe begin by showing that our new decomposition covers the space of P+0 matrices. Before diving in, let us define Σi := (\n0 λi −λi 0\n) as shorthand for a 2× 2 block matrix with zeros on-diagonal and\nopposite values off-diagonal. Then, our proposed decomposition is as follows:\nL := S + A, with S := V V > and A := BCB>, (1)\nwhere V ,B ∈ RM×K , and C ∈ RK×K is a block-diagonal matrix with some diagonal blocks of the form Σi, with λi > 0, and zeros elsewhere. The following lemma shows that this decomposition covers the space of P+0 matrices. Lemma 1. Let A ∈ RM×M be a skew-symmetric matrix with rank ` ≤ M . Then, there exist B ∈ RM×` and positive numbers λ1, . . . , λb`/2c, such that A = BCB>, where C ∈ R`×` is the block-diagonal matrix with b`/2c diagonal blocks of size 2 given by Σi, i = 1, . . . , b`/2c and zero elsewhere.\nThe proof of Lemma 1 and all subsequent results can be found in Appendix F. With this decomposition in hand, we now proceed to show that it can be used for linear-time MLE learning. To do so, we must show that corresponding NDPP log-likelihood objective and gradient can be computed in time linear in M . Given a collection of n observed subsets {Y1, ..., Yn} composed of items from Y = [[M ]], the full formulation of the regularized log-likelihood is:\nφ(V ,B,C) = 1\nn n∑ i=1 log det ( VYiV > Yi +BYiCB > Yi ) − log det ( V V > +BCB> + I ) −R(V ,B),\n(2)\nwhere VYi ∈ R|Yi|×K denotes a matrix composed of the rows of V that correspond to the items in Yi. 
The regularization term, R(V ,B), is defined as follows:\nR(V ,B) = α M∑ i=1 1 µi ‖vi‖22 + β M∑ i=1 1 µi ‖bi‖22, (3)\nwhere µi counts the number of occurrences of item i in the training set, vi and bi are rows of V and B, respectively, and α, β > 0 are tunable hyperparameters. This regularization is similar to that of prior works (Gartrell et al., 2017; 2019). We omit regularization for C.\nTheorem 1 shows that computing the regularized log-likelihood and its gradient both have time complexity linear in M . The complexities also depend on K, the rank of the NDPP, and K ′, the size of the largest observed subset in the data. For many real-world datasets we observe that K ′ M , and we set K = K ′. Hence, linearity in M means that we can efficiently perform learning for datasets with very large ground sets, which is impossible with the cubic-complexity L decomposition in prior work (Gartrell et al., 2019).\nTheorem 1. Given an NDPP with kernel L = V V > +BCB>, parameterized by V of rank K, B of rank K, and a K ×K matrix C, we can compute the regularized log-likelihood (Eq. 2) and its gradient in O(MK2 +K3 +nK ′3) time, where K ′ is the size of the largest of the n training subsets." }, { "heading": "4 MAP INFERENCE", "text": "After learning an NDPP, one can then use it to infer the most probable item subsets in various situations. Several inference algorithms have been well-studied for symmetric DPPs, including sampling (Kulesza & Taskar, 2011; Anari et al., 2016; Li et al., 2016; Launay et al., 2018; Gillenwater et al., 2019; Poulson, 2019; Dereziński, 2019) and MAP inference (Gillenwater et al., 2012b; Han et al., 2017; Chen et al., 2018; Han & Gillenwater, 2020). We focus on MAP inference:\nargmax Y⊆Y\ndet(LY ) such that |Y | = k, (4)\nfor cardinality budget k ≤M . MAP inference is a better fit than sampling when the end application requires the generation of a single output set, which is usually the case in practice (e.g., this is usually true for recommender systems). MAP inference for DPPs is known to be NP-hard even in the symmetric case (Ko et al., 1995; Kulesza et al., 2012). For symmetric DPPs, one usually approximates the MAP via the standard greedy algorithm for submodular maximization (Nemhauser et al., 1978). First, we describe how to efficiently implement this for NDPPs. Then, in Section 4.1 we prove a lower bound on its approximation quality. To the best of our knowledge, this is the first investigation of how to apply the greedy algorithm to NDPPs.\nGreedy begins with an empty set and repeatedly adds the item that maximizes the marginal gain until the chosen set is size k. Here, we design an efficient greedy algorithm for the case where the NDPP kernel is low-rank. For generality, in what follows we write the kernel as L = BCB>, since one can easily rewrite our matrix decomposition (Eq. 1), as well as that of Gartrell et al. (2019), to take this\nform. For example, for our decomposition: L = V V > + BCB> = (V B) ( I 0 0 C )( V > B> ) .\nUsing Schur’s determinant identity, we first observe that, for Y ⊆ [[M ]] and i ∈ [[M ]], the marginal gain of a NDPP can be written as\ndet(LY ∪{i})\ndet(LY ) = Lii −LiY (LY )−1LY i = biCb>i − biC\n( B>Y (BYCB > Y ) −1BY ) Cb>i , (5)\nwhere bi ∈ R1×K and BY ∈ R|Y |×K . A naïve computation of Eq. 5 is O(K2 + k3), since we must invert a |Y | × |Y | matrix, where |Y | ≤ k. However, one can compute Eq. 
5 more efficiently by observing that its B>Y (BYCB > Y ) −1BY component can actually be expressed without an inverse, as a rank-|Y | matrix, that can be computed in O(K2) time. Lemma 2. Given B ∈ RM×K , C ∈ RK×K , and Y = {a1, . . . , ak} ⊆ [[M ]], let bi ∈ R1×K be the i-th row in B and BY ∈ R|Y |×K be a matrix containing rows in B indexed by Y . Then, it holds that\nB>Y (BYCB > Y ) −1BY = k∑ j=1 p>j qj , (6)\nwhere row vectors pj , qj ∈ R1×K for j = 1, . . . , k satisfy p1 = ba1/(ba1Cb>a1), q1 = ba1 , and\npj+1 = baj − bajC>\n∑j i=1 q > i pi\nbajC(baj − bajC> ∑j i=1 q > i pi) > , qj+1 = baj − bajC j∑ i=1 p>i qi. (7)\nAlgorithm 1 Greedy MAP inference/conditioning for low-rank NDPPs 1: Input: B ∈ RM×K , C ∈ RK×K , the cardinality k . And {a1, . . . , ak} for conditioning 2: initialize P ← [ ], Q← [ ] and Y ← ∅ 3: ∆i ← biCb>i for i ∈ [[M ]] where bi ∈ R1×K is the i-th row in B 4: a← argmaxi ∆i and Y ← Y ∪ {a} . a← a1 for conditioning 5: while |Y | ≤ k do 6: p← ( ba − baC>Q>P ) /∆a\n7: q ← ba − baCP>Q 8: P ← [P ;p] and Q← [Q; q] 9: ∆i ← ∆i − ( biCp >) (biC>q>) for i ∈ [[M ]], i /∈ Y 10: a← argmaxi ∆i and Y ← Y ∪ {a} . a← a|Y |+1 for conditioning 11: end while 12: return Y . return {∆i}Mi=1 for conditioning\nPlugging Eq. 6 into Eq. 5, the marginal gain with respect to Y ∪ {a} can be computed by simply updating from the previous gain with respect to Y . That is,\ndet(LY ∪{a,i})\ndet(LY ∪{a}) = biCb\n> i − |Y |+1∑ j=1 ( biCp > j ) ( biC >q>j )\n(8)\n= det(LY ∪{i}) det(LY ) − ( biCp > |Y |+1 )( biC >q>|Y |+1 ) . (9)\nThe marginal gains when Y = ∅ are equal to diagonals of L and require O(MK2) operations. Then, computing the update terms in Eq. 9 for all i ∈ [[M ]] needs O(MK) operations. Since the total number of updates is k, the overall complexity becomes O(MK2 + MKk). We provide a full description of the implied greedy algorithm for low-rank NDPPs in Algorithm 1.\nTable 1 summarizes the complexitiy of our methods and those of previous work. Note that the full M ×M L + I matrix is used to compute the DPP normalization constant in Gartrell et al. (2019), which is why this approach has memory complexity of O(M2) for MLE learning." }, { "heading": "4.1 APPROXIMATION GUARANTEE FOR GREEDY NDPP MAP INFERENCE", "text": "As mentioned above, Algorithm 1 is an instantiation of the standard greedy algorithm used for submodular maximization (Nemhauser et al., 1978). This algorithm has a (1− 1/e)-approximation guarantee for the problem of maximizing nonnegative, monotone submodular functions. While the function f(Y ) = log det(LY ) is submodular for a symmetric PSD L (Kelmans & Kimelfeld, 1983), it is not monotone. Often, as in Han & Gillenwater (2020), it is assumed that the smallest eigenvalue of L is greater than 1, which guarantees montonicity. There is no particular evidence that this assumption is true for practical models, but nevertheless the greedy algorithm tends to\n†The exact memory complexity for MAP inference is 3MK, since V , B, and C used in this model are all M ×K matrices.\nperform well in practice for symmetric DPPs. Here, we prove a similar approximation guarantee that covers NDPPs as well, even though the function f(Y ) = log det(LY ) is non-submodular when L is nonsymmetric. In Section 5.5, we further observe that, as for symmetric DPPs, the greedy algorithm seems to work well in practice for NDPPs.\nWe leverage a recent result of Bian et al. (2017), who proposed an extension of greedy algorithm guarantees to non-submodular functions. 
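To make Algorithm 1 concrete, here is a minimal NumPy re-implementation of its update rules (illustrative only, not the authors' code; variable names follow the pseudocode, and positive marginal gains are assumed when dividing by Δa):

```python
import numpy as np

def greedy_ndpp_map(B, C, k):
    """Sketch of Algorithm 1 for L = B C B^T: the marginal gains Delta are
    maintained incrementally via the p/q vectors of Lemma 2 (Eq. 7, Eq. 9)."""
    M, K = B.shape
    delta = np.einsum('ik,kl,il->i', B, C, B)   # Delta_i = b_i C b_i^T (line 3)
    P = np.zeros((0, K))                        # rows p_j
    Q = np.zeros((0, K))                        # rows q_j
    Y = []
    for _ in range(k):
        a = int(np.argmax(delta))               # lines 4 / 10
        b = B[a]
        p = (b - b @ C.T @ Q.T @ P) / delta[a]  # line 6 (assumes delta[a] > 0)
        q = b - b @ C @ P.T @ Q                 # line 7
        P = np.vstack([P, p])                   # line 8
        Q = np.vstack([Q, q])
        Y.append(a)
        delta -= (B @ C @ p) * (B @ C.T @ q)    # line 9, vectorized over all i
        delta[Y] = -np.inf                      # exclude already-chosen items
    return Y
```

The initialization costs O(MK2) and each of the k iterations costs O(MK), matching the bound stated above. We now return to the approximation guarantee.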
Their result is based on the submodularity ratio and curvature of the objective function, which measure to what extent it has submodular properties. Theorem 2 extends this to provide an approximation ratio for greedy MAP inference of NDPPs. Theorem 2. Consider a nonsymmetric low-rank DPP L = V V > + BCB>, where V ,B are of rank K, and C ∈ RK×K . Given a cardinality budget k, let σmin and σmax denote the smallest and largest singular values of LY for all Y ⊆ [[M ]] and |Y | ≤ 2k. Assume that σmin > 1. Then,\nlog det(LY G) ≥ 4(1− e−1/4)\n2(log σmax/log σmin)− 1 log det(LY ∗) (10)\nwhere Y G is the output of Algorithm 1 and Y ∗ is the optimal solution of MAP inference in Eq. 4.\nThus, when the kernel has a small value of log σmax/ log σmin, the greedy algorithm finds a nearoptimal solution. In practice, we observe that the greedy algorithm finds a near-optimal solution even for large values of this ratio (see Section 5.5). As remarked above, there is no evidence that the condition σmin > 1 is usually true in practice. While this condition can be achieved by multiplying L by a constant, this leads to a (potentially large) additive term in Eq. 10. We provide Corollary 1 in Appendix D, which excludes the σmin > 1 assumption, and quantifies this additive term." }, { "heading": "4.2 GREEDY CONDITIONING FOR NEXT-ITEM PREDICTION", "text": "We briefly describe here a small modification to the greedy algorithm that is necessary if one wants to use it as a tool for next-item prediction. Given a set Y ⊆ [[M ]], Kulesza et al. (2012) showed that a DPP with L conditioned on the inclusion of the items in Y forms another DPP with kernel LY := LȲ −LȲ ,YL−1Y LȲ ,Y where Ȳ = [[M ]]\\Y . The singleton probability Pr(Y ∪{i} | Y ) ∝ LYii can be useful for doing next-item prediction. We can use the same machinery from the greedy algorithm’s marginal gain computations to effectively compute these singletons. More concretely, suppose that we are doing next-item prediction as a shopper adds items to a digital cart. We predict the item that maximizes the marginal gain, conditioned on the current cart contents (the set Y ). When the shopper adds the next item to the cart, we update Y to include this item, rather than our predicted item (line 10 in Algorithm 1). We then iterate until the shopper checks out. The comments on the righthand side of Algorithm 1 summarize this procedure. The runtime of this prediction is the same that of the greedy algorithm, O(MK2 +MK|Y |). We note that this cost is comparable to that of an approach based on the DPP dual kernel from prior work (Mariet et al., 2019), which has O(MK2 +K3 + |Y |3) complexity. However, since it is non-trivial to define the dual kernel for NDPPs, the greedy algorithm may be the simpler choice for next-item prediction for NDPPs." }, { "heading": "5 EXPERIMENTS", "text": "To further simplify learning and MAP inference, we set B = V , which results in L = V V > + V CV > = V (I + C)V >. This change also simplifies regularization, so that we only perform regularization on V , as indicated in the first term of Eq. 3, leaving us with the single regularization hyperparameter of α. While setting B = V restricts the class of nonsymmetric L kernels that can be represented, we compensate for this restriction by relaxing the block-diagonal structure imposed on C, so that we learn a full skew-symmetric K ×K matrix C. To ensure that C and thus A is skew-symmetric, we parameterize C by setting C = D −DT , were D varies over RK×K . 
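A minimal PyTorch sketch of this experimental parameterization follows (class and method names are ours, not those of the released code; the small εI jitter mirrors the numerical fix described in Appendix B, and torch.logdet is well defined here because P0 kernels have nonnegative principal minors):

```python
import torch

class ScalableNDPP(torch.nn.Module):
    """Sketch of the experimental kernel L = V (I + C) V^T with C = D - D^T."""
    def __init__(self, M, K):
        super().__init__()
        self.V = torch.nn.Parameter(torch.rand(M, K))   # uniform(0,1) init, as in Sec. 5.2
        self.D = torch.nn.Parameter(torch.randn(K, K))  # C is built from a standard-normal D

    def _C(self):
        return self.D - self.D.T                        # skew-symmetric by construction

    def logdet_subset(self, items):
        # First term of Eq. 2 for one observed subset Y; eps*I follows Appendix B.
        K = self.D.shape[0]
        Vy = self.V[items]                              # |Y| x K rows of V
        Ly = Vy @ (torch.eye(K) + self._C()) @ Vy.T
        return torch.logdet(Ly + 1e-5 * torch.eye(len(items)))

    def log_normalizer(self):
        # log det(I_M + L) = log det(I_K + V^T V (I + C)) by the matrix
        # determinant lemma: O(M K^2 + K^3) instead of O(M^3).
        K = self.D.shape[0]
        return torch.logdet(torch.eye(K) + self.V.T @ self.V @ (torch.eye(K) + self._C()))
```

Training then maximizes the average of logdet_subset over observed baskets minus log_normalizer, i.e., Eq. 2 up to the regularizer.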
Code for all experiments is available at https://github.com/cgartrel/scalable-nonsymmetric-DPPs." }, { "heading": "5.1 DATASETS", "text": "We perform experiments on several real-world public datasets composed of subsets:\n1. Amazon Baby Registries: This dataset consists of registries or \"baskets\" of baby products, and has been used in prior work on DPP learning (Gartrell et al., 2016; 2019; Gillenwater et al., 2014; Mariet & Sra, 2015). The registries contain items from 15 different categories, such as “apparel”, with a catalog of up to 100 items per category. Our evaluation mirrors that of Gartrell et al. (2019); we evaluate on the popular apparel category, which contains 14,970 registries, as well as on a dataset composed of the three most popular categories: apparel, diaper, and feeding, which contains a total of 31,218 registries. 2. UK Retail: This dataset (Chen et al., 2012) contains baskets representing transactions from an online retail company that sells unique all-occasion gifts. We omit baskets with more than 100 items, leaving us with a dataset containing 19,762 baskets drawn from a catalog of M = 3,941 products. Baskets containing more than 100 items are in the long tail of the basket-size distribution of the data, so omitting larger baskets is reasonable, and allows us to use a low-rank factorization of the DPP with K = 100. 3. Instacart: This dataset (Instacart, 2017) contains baskets purchased by Instacart users. We omit baskets with more than 100 items, resulting in 3.2 million baskets and a catalog of 49,677 products. 4. Million Song: This dataset (McFee et al., 2012) contains playlists (“baskets”) of songs played by Echo Nest users. We trim playlists with more than 150 items, leaving us with 968,674 baskets and a catalog of 371,410 songs." }, { "heading": "5.2 EXPERIMENTAL SETUP AND METRICS", "text": "We use a small held-out validation set, consisting of 300 randomly-selected baskets, for tracking convergence during training and for tuning hyperparameters. A random selection of 2000 of the remaining baskets are used for testing, and the rest are used for training. Convergence is reached during training when the relative change in validation log-likelihood is below a predetermined threshold. We use PyTorch with Adam (Kingma & Ba, 2015) for optimization. We initialize C from the standard Gaussian distribution with mean 0 and variance 1, and B (which we set equal to V ) is initialized from the uniform(0, 1) distribution.\nSubset expansion task. We use greedy conditioning to do next-item prediction, as described in Section 4.2. We compare methods using a standard recommender system metric: mean percentile rank (MPR) (Hu et al., 2008; Li et al., 2010). MPR of 50 is equivalent to random selection; MPR of 100 means that the model perfectly predicts the next item. See Appendix A for a complete description of the MPR metric.\nSubset discrimination task. We also test the ability of a model to discriminate observed subsets from randomly generated ones. For each subset in the test set, we generate a subset of the same length by drawing items uniformly at random (and we ensure that the same item is not drawn more than once for a subset). We compute the AUC for the model on these observed and random subsets, where the score for each subset is the log-likelihood that the model assigns to the subset." 
}, { "heading": "5.3 PREDICTIVE PERFORMANCE RESULTS FOR LEARNING", "text": "Since the focus of our work is on improving NDPP scalability, we use the low-rank symmetric DPP (Gartrell et al., 2017) and the low-rank NDPP of prior work (Gartrell et al., 2019) as baselines for our experiments. Table 2 compares these approaches and our scalable low-rank NDPP. We see that NDPPs generally outperform symmetric DPPs. Furthermore, we see that our scalable NDPP matches or exceeds the predictive quality of the baseline NDPP. We believe that our model sometimes improves upon this baseline NDPP due to the use of a simpler kernel decomposition with fewer parameters, likely leading to a simplified optimization landscape." }, { "heading": "5.4 TIME COMPARISON FOR LEARNING", "text": "In Fig. 1, we report the wall-clock training time of the decomposition of Gartrell et al. (2019) (NDPP) and our scalable NDPP for the Amazon: 3-category (Fig. 1(a)) and UK Retail (Fig. 1(b)) datasets. As expected, we observe that the scalable NDPP trains far faster than the NDPP for datasets with large ground sets. For the Amazon: 3-category dataset, both approaches show comparable results, with the scalable NDPP converging 1.07× faster than NDPP. But for the UK Retail dataset, which has a much larger ground set, our scalable NDPP achieves convergence about 8.31× faster. Notice\nthat our scalable NDPP also opens to the door to training on datasets with large M , such as the Instacart and Million Song dataset, which is infeasible for the baseline NDPP due to high memory and compute costs. For example, NDPP learning using Gartrell et al. (2019) for the Million Song dataset would require approximately 1.1 TB of memory, while using our scalable NDPP approach requires approximately 445.9 MB." }, { "heading": "5.5 PERFORMANCE RESULTS FOR MAP INFERENCE", "text": "We run various approximatation algorithms for MAP inference, including the greedy algorithm (Algorithm 1), stochastic greedy algorithm (Mirzasoleiman et al., 2015), MCMC-based DPP sampling (Li et al., 2016), and greedy local search (Kathuria & Deshpande, 2016). The stochastic greedy algorithm computes marginal gains of a few items chosen uniformly at random and selects the best among them. The MCMC sampling begins with a random subset Y of size k and picks i ∈ Y and j /∈ Y uniformly at random. Then, it swaps them with probability det(LY ∪{j}\\{i})/(det(LY ∪{j}\\{i}) + det(LY )) and iterates this process. The greedy local search algorithm (Kathuria & Deshpande, 2016) starts from the output from the greedy algorithm, Y G, and replaces i ∈ Y G with j /∈ Y G that gives the maximum improvement, if such i, j exist. This replacement process iterates until no improvement exists, or at\nmost k2 log(10k) steps have been completed, to guarantee a tight approximation (Kathuria & Deshpande, 2016). We use greedy local search as a baseline since it always returns a better solution than greedy. However, it is the slowest among all algorithms, as its time complexity is O(MKk4 log k). We choose k = 10, and provide more details of all algorithms in Appendix C.\nTo evaluate the performance of MAP inference, we report the relative log-determinant ratio defined as∣∣∣∣ log det(LY ∗)− log det(LY )log det(LY ∗) ∣∣∣∣\nwhere Y is the output of benchmark algorithms and Y ∗ is the greedy local search result. Results are reported in Table 3. 
We observe that the greedy (Algorithm 1) achieves performance close to that of the significantly more expensive greedy local search algorithm, with relative errors of up to 0.045. Stochastic greedy and MCMC sampling have significantly larger errors.\nFor completeness, in Appendix E we also present experiments comparing the performance of greedy and exact MAP on small synthetic NDPPs, for which the exact MAP can be feasibly computed." }, { "heading": "5.6 TIME COMPARISON FOR MAP INFERENCE", "text": "We provide the wall-clock time of the above algorithms for real-world datasets in Table 4. Observe that the greedy algorithm is the fastest method for all datasets except Million Song. For Million Song, MCMC sampling is faster than other approaches, but it has much larger relative errors in terms of log-determinant (see Table 3), which is not suitable for our purposes." }, { "heading": "6 CONCLUSION", "text": "We have presented a new decomposition for nonsymmetric DPP kernels that can be learned in time linear in the size of the ground set, which is a significant improvement over the complexity of prior work. Empirical results indicate that this decomposition matches the predictive performance of the prior decomposition. We have also derived the first MAP algorithm for nonsymmetric DPPs and proved a lower bound on the quality of its approximation. In future work we hope to develop intuition about the meaning of the parameters in the C matrix and consider kernel decompositions that cover other parts of the nonsymmetric P0 space." }, { "heading": "A MEAN PERCENTILE RANK", "text": "We begin our definition of MPR by defining percentile rank (PR). First, given a set J , let pi,J = Pr(J ∪ {i} | J). The percentile rank of an item i given a set J is defined as\nPRi,J =\n∑ i′ 6∈J 1(pi,J ≥ pi′,J)\n|Y\\J | × 100%\nwhere Y\\J indicates those elements in the ground set Y that are not found in J . For our evaluation, given a test set Y , we select a random element i ∈ Y and compute PRi,Y \\{i}. We then average over the set of all test instances T to compute the mean percentile rank (MPR):\nMPR = 1 |T | ∑ Y ∈T PRi,Y \\{i}." }, { "heading": "B HYPERPARAMETERS FOR EXPERIMENTS IN TABLE 2", "text": "Preventing numerical instabilities: The first term on the right side of Eq. 2 will be singular whenever |Yi| > K, where Yi is an observed subset. Therefore, to address this in practice we set K to the size of the largest subset observed in the data, K ′, as in Gartrell et al. (2017). However, this does not entirely address the issue, as the first term on the right side of Eq. 2 may still be singular even when |Yi| ≤ K. In this case though, we know that we are not at a maximum, since the value of the objective function is −∞. Numerically, to prevent such singularities, in our implementation we add a small I correction to each LYi when optimizing Eq. 2 (we set = 10 −5 in our experiments).\nWe perform a grid search using a held-out validation set to select the best performing hyperparameters for each model and dataset. The hyperparameter settings used for each model and dataset are described below.\nSymmetric low-rank DPP (Gartrell et al., 2016). For this model, we use K for the number of item feature dimensions for the symmetric component V , and α for the regularization hyperparameter for V . We use the following hyperparameter settings:\n• Both Amazon datasets: K = 30, α = 0. • UK Retail dataset: K = 100, α = 1. • Instacart dataset: K = 100, α = 0.001. 
• Million Song dataset: K = 150, α = 0.0001.\nBaseline NDPP (Gartrell et al., 2019). For this model, to ensure consistency with the notation used in Gartrell et al. (2019), we use D to denote the number of item feature dimensions for the symmetric component V , and D′ to denote the number of item feature dimensions for the nonsymmetric components, B and C. As described in Gartrell et al. (2019), α is the regularization hyperparameter for the V , while β and γ are the regularization hyperparameters for B and C, respectively. We use the following hyperparameter settings:\n• Both Amazon datasets: D = 30, α = 0. • Amazon apparel dataset: D′ = 30. • Amazon three-category dataset: D′ = 100. • UK Retail dataset: D = 100, D′ = 20, α = 1. • All datasets: β = γ = 0.\nScalable NDPP. As described in Section 3, we useK to denote the number of item feature dimensions for the symmetric component V and the dimensionality of the nonsymmetric component C. α is the regularization hyperparameter. We use the following hyperparameter settings:\n• Amazon apparel dataset: K = 30, α = 0. • Amazon three-category dataset: K = 100, α = 1. • UK dataset: K = 100, α = 0.01.\n• Instacart dataset: K = 100, α = 0.001.\n• Million Song dataset: K = 150, α = 0.01.\nFor all of the above model configurations we use a batch size of 200 during training, except for the scalable NDPPs trained on the Amazon apparel, Amazon three-category, Instacart, and Million Song datasets, where a batch size of 800 is used." }, { "heading": "C BENCHMARK ALGORITHMS FOR MAP INFERENCE", "text": "We test the following approximate algorithms for MAP inference:\nGreedy local search. This algorithm starts from the output of greedy, Y G, and replaces i ∈ Y G with j /∈ Y G that gives the maximum improvement of the determinant, if such i, j exist. Kathuria & Deshpande (2016) showed that running the search for such a swap O(k2 log(k/ε)) times with an accuracy parameter ε gives a tight approximation guarantee for MAP inference for symmetric DPPs. We set the number of swaps to bk2 log(10k)c for ε = 0.1 and use greedy local search as a baseline, since it is strictly an improvement on the greedy solution. The proposed greedy conditioning can be used for fast greedy local search. Specifically, for each i ∈ Y G, Algorithm 1 can compute marginal improvements conditioned by Y G \\ {i} in time O(MKk), and thus its runtime can be O(MKk4 log(k/ε)). However, it is the slowest among all of our benchmark algorithms.\nStochastic greedy. This algorithm computes marginal gains of a few items chosen uniformly at random and selects the best among them. Mirzasoleiman et al. (2015) proved that (M/k) log(1/ε) samples are enough to guarantee an (1− 1/e− ε)-approximation ratio for submodular functions (i.e., symmetric DPPs). We choose ε = 0.1 and set the number of samples to b(M/k) log(10)c. Under this setting, the time complexity of stochastic greedy is O(MKk2 log(1/ε)), which is better than the naïve exact greedy algorithm. However, we note that it is worse than that of our efficient greedy implement (Algorithm 1). This is because the stochastic greedy uses different random samples for every iteration and this does not take advantage of the amortized computations in Lemma 2. In our experiments, we simply modify line 10 in Algorithm 1 for stochastic greedy (argmax is operated on a random subset of marginal gains), hence it can run in O(MKk + (M/k) log(1/ε)) time. 
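That one-line modification can be sketched as follows (the function name is ours; delta is the gain vector maintained by Algorithm 1 and chosen is the set selected so far):

```python
import numpy as np

def stochastic_argmax(delta, chosen, M, k, rng, eps=0.1):
    """Stochastic-greedy variant of line 10 in Algorithm 1: take the argmax
    over floor((M/k) * log(1/eps)) uniformly sampled unchosen items."""
    n_samples = int((M / k) * np.log(1.0 / eps))
    pool = [i for i in range(M) if i not in chosen]
    cand = rng.choice(pool, size=min(n_samples, len(pool)), replace=False)
    return int(cand[np.argmax(delta[cand])])
```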
In practice, we observe that stochastic greedy is slightly slower than exact greedy due to the additional costs of the random sampling process.\nMCMC sampling. We also compare inference algorithms with sampling from a nonsymmetric DPP. To the best of our knowledge, exact sampling of a non-Hermitian DPP was studied in Poulson (2019), which requires the Cholesky decomposition with O(M3) complexity. This is infeasible for a large M . To resolve this, Markov Chain Monte-Carlo (MCMC) based sampling is preferred (Li et al., 2016) for symmetric DPPs. In particular, we consider a Gibbs sampling for k-DPP, which begins with a random subset Y with size k, and picks i ∈ Y and j /∈ Y uniformly at random. Then, it swaps them with probability\ndet(LY ∪{j}\\{i})\ndet(LY ∪{j}\\{i}) + det(LY ) (11)\nand repeat this process for several steps. Li et al. (2016) showed that O(Nk log(k/ε)) swaps are enough to approximate the ground-truth distribution under symmetric DPPs. However, for a fair runtime comparison to Algorithm 1, we set the number of swaps to b3N/Kc." }, { "heading": "D COROLLARY OF THEOREM 2", "text": "Theorem 2 requires the technical condition σmin > 1, but in practice there is no particular evidence that this condition holds. While this condition can be achieved by multiplying L by a constant, this leads to a (potentially large) additive term in Eq. 10. Here, we provide Corollary 1 which excludes the σmin > 1 assumption from Theorem 2, and quantifies this additive term. Corollary 1. Consider a nonsymmetric low-rank DPP L = V V > + BCB>, where V ,B are of rank K, and C ∈ RK×K . Given a cardinality budget k, let σmin and σmax denote the smallest and\nlargest singular values of LY for all Y ⊆ [[M ]] and |Y | ≤ 2k. Let κ := σmax/σmin. Then,\nlog det(LY G) ≥ 4(1− e−1/4) 2 log κ+ 1\nlog det(LY ∗)− ( 1− 4(1− e −1/4)\n2 log κ+ 1\n) k (1− log σmin) (12)\nwhere Y G is the output of Algorithm 1 and Y ∗ is the optimal solution of MAP inference in Eq. 4.\nThe proof of Corollary 1 is provided in Appendix F.5. Note that instead of log(σmax)/ log(σmin), Corollary 1 has a log(σmax/σmin) term in the denominator." }, { "heading": "E PERFORMANCE GUARANTEE FOR GREEDY MAP INFERENCE", "text": "The matrices learned on real datasets are too large to compute the exact MAP solution, but we can compute exact MAP for small matrices. In this section, we explore the performance of the greedy algorithm studied in Theorem 2 for 5 × 5 synthetic kernel matrices. More formally, we first pick K = 3 singular values s1, s2, s3 from a kernel learned for the “Amazon: 3-category” dataset (a plot of these singular values can be seen in Fig. 2(c)) and generate L = V1diag([s1, s2, s3])V >2 , where V1,V2 ∈ R5×3 are random orthonormal matrices. To ensure that L is a P0 matrix, we repeatedly sample V1,V2 until all principal minors of L are nonnegative. We also evaluate the performance of the symmetric DPP, where the kernel matrices are generated similarly to the NDPP, except we set V1 = V2. We set k = 3 and generate 10,000 random kernels for both symmetric DPPs and NDPPs.\nThe results for symmetric and nonsymmetric DPPs are shown in Fig. 2(a) and Fig. 2(b), respectively. We plot the approximation ratio of Algorithm 1, i.e., log det(LY G)/ log det(LY ∗), with respect to log(σmax/σmin), from Corollary 1. We observe that the greedy algorithm for both often shows approximation ratios close to 1. 
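The rejection sampler used to generate these small kernels can be sketched as follows (names are ours; the symmetric variant simply sets V1 = V2):

```python
import itertools
import numpy as np

def sample_small_p0_ndpp(s, M=5, rng=None, max_tries=100000):
    """Appendix E-style generator: L = V1 diag(s) V2^T with random orthonormal
    V1, V2, resampled until every principal minor of L is nonnegative."""
    rng = rng or np.random.default_rng()
    K = len(s)
    index_sets = [S for r in range(1, M + 1)
                  for S in itertools.combinations(range(M), r)]
    for _ in range(max_tries):
        V1, _ = np.linalg.qr(rng.standard_normal((M, K)))  # reduced QR: orthonormal columns
        V2, _ = np.linalg.qr(rng.standard_normal((M, K)))
        L = V1 @ np.diag(s) @ V2.T
        if all(np.linalg.det(L[np.ix_(S, S)]) >= 0 for S in index_sets):
            return L
    raise RuntimeError("no P0 kernel found within max_tries")
```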
However, the worst-case ratio for NDPPs is worse than that of symmetric DPPs; log det(LY ) for L ∈ P+0 is non-submodular, and the greedy algorithm with a nonsubmodular function does not have as tight of a worst-case bound as in the symmetric case." }, { "heading": "F PROOFS", "text": "F.1 PROOF OF LEMMA 1\nLemma 1. Let A ∈ RM×M be a skew-symmetric matrix with rank ` ≤ M . Then, there exist B ∈ RM×` and positive numbers λ1, . . . , λb`/2c, such that A = BCB>, where C ∈ R`×` is the block-diagonal matrix with b`/2c diagonal blocks of size 2 given by Σi, i = 1, . . . , b`/2c and zero elsewhere.\nProof. First, we note that rank of a nonsingular skew-symmetric matrix is always even, because all of its eigenvalues are purely imaginary and come in conjugate pairs. There exists some orthogonal matrix P ∈ RM×M and\nΣ = 0 λ1 −λ1 0 0 λ2 0 −λ2 0 . . . 0 λ`/2 0 −λ`/2 0 0\n. . . 0\n\n(13)\nsuch that A = PΣP> (see, e.g.,(Thompson, 1988, Proposition 2.1)). Let C be the `×` supmatrix of Σ obtained by keeping its first ` rows and columns and let Q = ( I` 0 ) , where I` is the ` × ` identity matrix. Then, Σ = QCQ> and one can write A = PQCQ>P>. Setting B = PQ proves the lemma.\nF.2 PROOF OF THEOREM 1\nTheorem 1. Given an NDPP with kernel L = V V > +BCB>, parameterized by V of rank K, B of rank K, and a K ×K matrix C, we can compute the regularized log-likelihood (Eq. 2) and its gradient in O(MK2 +K3 +nK ′3) time, where K ′ is the size of the largest of the n training subsets.\nProof. We first show that the log-likelihood can be computed in time linear in M . Using the matrix determinant lemma, one can easily verify that the DPP normalization term can be computed as\ndet(I + L) = det ( I + (V BC) ( V >\nB>\n)) = det ( I2K + ( V >\nB>\n) (V BC) ) (14)\nwhere I2K is the identity matrix with dimension 2K. As Eq. 14 requires a matrix-multiplication between (2K)×M matrices and the determinant of (2K)×(2K) matrices, this allows us to transform a O(M3) operation into an O(MK2 +K3) one.\nHaving established that the normalization term in the likelihood can be computed in O(MK2 +K3) time, we proceed with characterizing the complexity of the other terms in the likelihood. The first term in Eq. 2 consists of determinants of size |Yi|. Assuming that these never exceed size K ′, each can be computed in at most O(K ′3) time. The regularization term is a simple sum of norms that can be computed in O(MK) time. Therefore, the full regularized log-likelihood can be computed in O(MK2 +K3 + nK ′3) time.\nTo prove that the gradient of the log-likelihood can be computed in time linear in M , we begin by showing that the logarithm of DPP normalization term can be factorized as follows:\nZ = log det(I + L) (15)\n= log det ( I2K + ( V >\nB>\n) (V B) ( IK 0 0 C )) (16)\n= log det (( IK 0 0 C−1 ) + ( V > B> ) (V B) ) + log det ( IK 0 0 C ) (17)\n= log det\n( IK + V\n>V V >B B>V C−1 + B>B\n) + log det(C) (18)\n= log det ( IK + V >V ) + log det ( C−1 + B>(I − V (IK + V >V )−1V >)B ) + log det(C)\n(19)\nwhere Eq. 17 follows from the determinant commutativity (i.e., det(AB) = det(A) det(B)) and Eq. 18 and Eq. 19 come from the Schur’s determinant identity†. For simplicity, we write X = I − V (IK + V >V )−1V > and (C−1)> = C−>, and note that X depends only on V . 
The gradient of Z has three parts: ∇Z = (∇V Z,∇BZ,∇CZ) where each can be computed as\n∇V Z = ∇V log det(IK + V >V ) +∇V log det(C−1 + B>XB) (20) = 2V (IK + V >V )−1\n−XB((C−1 + B>XB)−1 + (C−> + B>XB)−1)B>XV (21) ∇BZ = ∇B log det(C−1 + B>XB) (22)\n= XB ( (C−1 + B>XB)−1 + (C−> + B>XB)−1 ) (23)\n∇CZ = ∇C log det(C) +∇C log det(C−1 + B>XB) (24) = C−> −C−>(C−1 + B>XB)−>C−> (25)\nObserve that X combines a M ×M identity matrix with M ×K matrices, hence multiplying it with aM×K matrix (e.g., XV or XB) can be computed inO(MK2) time. Since each of the remaining matrix inverses in Eq. 21, Eq. 23, and Eq. 25 involve a K ×K matrix inverse, with a cost of O(K3) operations, we have a net computational cost of O(MK2 +K3) for computing ∇ log det(I + L). The gradient of the first term in Eq. 2 involves computing gradients of determinants of size at most K ′, which results in size K ′ matrix inverses, since for a matrix A, ∂∂Aij (log det(A)) = (A\n−1)>ij . Each of these inverses can be computed in O(K ′3) time. The gradient of the simple sum-of-norms regularization term can be computed in O(MK) time. Therefore, combining these results with the results above for the complexity of the gradient of the normalization term, we have the following overall complexity of the gradient for the full log-likelihood: O(MK2 +K3 + nK ′3).\nF.3 PROOF OF LEMMA 2\nLemma 2. Given B ∈ RM×K , C ∈ RK×K , and Y = {a1, . . . , ak} ⊆ [[M ]], let bi ∈ R1×K be the i-th row in B and BY ∈ R|Y |×K be a matrix containing rows in B indexed by Y . Then, it holds that\nB>Y (BYCB > Y ) −1BY = k∑ j=1 p>j qj , (6)\nwhere row vectors pj , qj ∈ R1×K for j = 1, . . . , k satisfy p1 = ba1/(ba1Cb>a1), q1 = ba1 , and\npj+1 = baj − bajC>\n∑j i=1 q > i pi\nbajC(baj − bajC> ∑j i=1 q > i pi) > , qj+1 = baj − bajC j∑ i=1 p>i qi. (7)\n†det ( A B C D ) = det(A) det(D −CA−1B).\nProof. We prove by induction on k. When k = 1, the result is trivial because\nB>Y (BYCB > Y ) −1BY = b > a1(ba1Cb > a1) −1ba1 = p > 1 q1. (26)\nNow we assume that the statement holds for k − 1. Let Y := {a1, . . . , ak−1} and a := ak. From the inductive hypothesis, it holds\nB>Y (BYCB > Y ) −1BY = k−1∑ j=1 p>j qj . (27)\nNow we write\nB>Y ∪{a} ( BY ∪{a}CB > Y ∪{a} )−1 BY ∪{a} (28)\n= B>Y ∪{a} (( BY ba ) C ( B>Y b > a ))−1 BY ∪{a} (29) = ( B>Y b > a )(BYCB>Y BYCb>a baCB > Y baCb > a )−1( BY ba ) . (30)\nTo handle the inverse matrix we employ the Schur complement, which yields( X y z w )−1 = ( X−1 0 0 0 ) + 1 w − zX−1y ( X−1yzX−1 −X−1y −zX−1 1 ) (31)\nfor any non-singular square matrix X ∈ Rk×k, column vector y ∈ Rk and row vector z ∈ R1×k, unless (w − zX−1y) = 0. Applying this, we have( BYCB > Y BYCb > a\nbaCB > Y baCb > a\n)−1 = ( (BYCB > Y ) −1 0\n0 0\n) +\n1\nbaCb>a − baCB>Y (BYCB>Y )−1BYCb>a( (BYCB > Y ) −1BYCb > a baCB > Y (BYCB > Y ) −1 −(BYCB>Y )−1BYCb>a\n−baCB>Y (BYCB>Y )−1 1.\n) (32)\nSubstituting Eq. 32 into Eq. 30, we obtain\nB>Y ∪{a} ( BY ∪{a}CB > Y ∪{a} )−1 BY ∪{a} (33)\n= B>Y ( BYCB > Y )−1 BY +\n( b>a −B>Y (BYCB>Y )−1BYCb>a ) ( ba − baCB>Y (BYCB>Y )−1BY ) baC ( b>a −B>Y (BYCB>Y )−1BYCb>a\n) (34)\n= k−1∑ j=1 p>j qj +\n( b>a − ∑k−1 j=1 p > j qjCb > a )( ba − baC ∑k−1 j=1 p > j qj ) baC ( b>a − ∑k−1 j=1 p > j qjCb > a\n) (35) =\nk−1∑ j=1 p>j qj + p > k qk (36)\nwhere the third line holds from the inductive hypothesis Eq. 27 and the last line holds from the definition of pk, qk ∈ R1×K .\nF.4 PROOF OF THEOREM 2\nTheorem 2. Consider a nonsymmetric low-rank DPP L = V V > + BCB>, where V ,B are of rank K, and C ∈ RK×K . 
Given a cardinality budget k, let σmin and σmax denote the smallest and largest singular values of LY for all Y ⊆ [[M ]] and |Y | ≤ 2k. Assume that σmin > 1. Then,\nlog det(LY G) ≥ 4(1− e−1/4)\n2(log σmax/log σmin)− 1 log det(LY ∗) (10)\nwhere Y G is the output of Algorithm 1 and Y ∗ is the optimal solution of MAP inference in Eq. 4.\nProof. The proof of Theorem 2 relies on an approximation guarantee for nonsubmodular greedy maximization (Bian et al., 2017, Theorem 1). We introduce their result below.\nTheorem 3 ((Bian et al., 2017, Theorem 1)). Consider a set function f defined on all subsets of {1, . . . ,M} = [[M ]] is monotone nondecreasing and nonnegative, i.e., 0 ≤ f(Y ) ≤ f(X) for ∀Y ⊆ X ⊆ [[M ]]. Given a cardinality budget k ≥ 1, let Y ∗ be the optimal solution of max|Y |=k f(Y ) and Y 0 = ∅, Y t := {a1, . . . , at}, t = 1, . . . , k be the successive chosen by the greedy algorithm with budget k. Denote γ be the largest scalar such that∑\ni∈X\\Y t (f(Y t ∪ {i})− f(Y t)) ≥ γ(f(X ∪ Y t)− f(Y t)), (37)\nfor ∀X ⊆ [[M ]], |X| = k and t = 0, . . . , k − 1, and α be the smallest scalar such that\nf(Y t−1 ∪ {i} ∪X)− f(Y t−1 ∪X) ≥ (1− α) (f(Y t−1 ∪ {i})− f(Y t−1)). (38)\nfor ∀X ⊆ [[M ]], |X| = k and i ∈ Y k−1 \\X . Then, it holds that\nf(Y k) ≥ 1 α\n( 1− e−αγ ) f(Y ∗). (39)\nIn order to apply this result for MAP inference of NDPPs, the objective should be monotone nondecreasing and nonnegative. We first show that σmin > 1 is a sufficient condition for both monotonicity and nonnegativity.\nLemma 3. Given a P0 matrix L ∈ RM×M and the budget k ≥ 0, a set function f(Y ) = log det(LY ) defined on Y ⊆ [[M ]] is monotone nondecreasing and nonnegative for |Y | ≤ k when σmin > 1.\nThe proof of Lemma 3 is provided in Appendix F.6. Next, we aim to find proper bounds on α and γ. To resolve this, we provide the following upper and lower bounds of the marginal gain for f(Y ) = log det(LY ).\nLemma 4. Let f(Y ) = log det(LY ) and assume that σmin > 1. Then, for Y ⊆ [[M ]], |Y | < 2k and i /∈ Y , it holds that\nf(Y ∪ {i})− f(Y ) ≥ log σmin, (40) f(Y ∪ {i})− f(Y ) ≤ 2 log σmax − log σmin (41)\nwhere σmin and σmax are the smallest and largest singular values of LY for all Y ⊆ [[M ]], |Y | ≤ 2k.\nThe proof of Lemma 4 is provided in Appendix F.7. To bound γ, we consider X ⊆ [[M ]], |X| = k and denote X \\ Y t = {x1, . . . , xr} 6= ∅. Then,∑\ni∈X\\Y t (f(Y t ∪ {i})− f(Y )) = r∑ j=1 f(Y t ∪ {xr})− f(Y t) ≥ r log σmin (42)\nwhere the last inequality comes from Eq. 40. Similarly, we get\nf(X ∪ Y t)− f(Y t) = r∑ j=1 f({x1, . . . , xj} ∪ Y t)− f({x1, . . . , xj−1} ∪ Y t) (43)\n≤ r(2 log σmax − log σmin) (44) where the last inequality comes from Eq. 41. Combining Eq. 42 to Eq. 44, we obtain that∑\ni∈X\\Y t f(Y t ∪ {i})− f(Y t)\nf(X ∪ Y t)− f(Y t) ≥ log σmin 2 log σmax − log σmin (45)\nwhich allows us to choose γ = ( 2 log σmaxlog σmin − 1 )−1 .\nTo bound α, we similarly use Lemma 4 to obtain\nf(X ∪ Y t−1 ∪ {i})− f(X ∪ Y t−1) f(Y t−1 ∪ {i})− f(Y t−1) ≥ log σmin 2 log σmax − log σmin\n(46)\nand we can choose α = 1− log σmin2 log σmax−log σmin = 2(log σmax−log σmin) 2 log σmax−log σmin .\nNow let κ = log σmaxlog σmin . Then γ = 1 2κ−1 and α = 2(κ−1) 2κ−1 . Putting γ and α into Eq. 39, we have\n1 α (1− e−αγ) ≥ 2κ− 1 2(κ− 1)\n( 1− e− 2(κ−1) (2κ−1)2 ) (47)\n≥ 2κ− 1 2(κ− 1) 4 exp(−1/4) 2(κ− 1) (2κ− 1)2\n(48)\n= 4 exp(−1/4)\n2κ− 1 (49)\nwhere the second inequality holds from the fact that maxκ≥1 2(κ−1) (2κ−1)2 = 1 4 and 1 − e −x ≥ 4 exp(−1/4)x for x ∈ [0, 1/4].\nF.5 PROOF OF COROLLARY 1\nCorollary 1. 
Consider a nonsymmetric low-rank DPP L = V V > + BCB>, where V ,B are of rank K, and C ∈ RK×K . Given a cardinality budget k, let σmin and σmax denote the smallest and largest singular values of LY for all Y ⊆ [[M ]] and |Y | ≤ 2k. Let κ := σmax/σmin. Then,\nlog det(LY G) ≥ 4(1− e−1/4) 2 log κ+ 1\nlog det(LY ∗)− ( 1− 4(1− e −1/4)\n2 log κ+ 1\n) k (1− log σmin) (12)\nwhere Y G is the output of Algorithm 1 and Y ∗ is the optimal solution of MAP inference in Eq. 4.\nProof. Now consider L′ = ( eσmin )L where e is the exponential constant. Then, σ ′ min = σmin( e\nσmin ) = e and σ′max = σmax( e σmin ). Using the fact that log det(L′Y ) = log det(LY ) − |Y | log σmin, we obtain the result.\nF.6 PROOF OF LEMMA 3\nBefore stating the proof, we introduce interlacing properties of singular values.\nTheorem 4 (Interlacing Inequality for Singular Values, (Thompson, 1972, Theorem 1)). Consider a real matrix A ∈ RM×N with singular values σ1 ≥ σ2 ≥ · · · ≥ σmin(M,N) and its supmatrix B ∈ RP×Q with singular values β1 ≥ β2 ≥ · · · ≥ βmin(P,Q). Then, the singular values have the following interlacing properties:\nσi ≥ βi, for i = 1, . . . ,min(P,Q), (50) βi ≥ σi+M−P+N−Q, for i = 1, . . . ,min(P +Q−M,P +Q−N). (51)\nNote that when M = N and P = Q = N − 1, it holds that βi ≥ σi+2 for i = 1, . . . , N − 2.\nWe are now ready to prove Lemma 3.\nLemma 3. Given a P0 matrix L ∈ RM×M and the budget k ≥ 0, a set function f(Y ) = log det(LY ) defined on Y ⊆ [[M ]] is monotone nondecreasing and nonnegative for |Y | ≤ k when σmin > 1.\nProof. Since L ∈ P0, all of its principal submatrices are also in P0. By the definition of a P0 matrix, it holds that\n|det(LY )| = det(LY ) = ∏ i σi(LY ) (52)\nwhere σi(LY ) for i ∈ [[|Y |]] are singular values of LY . Then, F (Y ) = ∑ i log(σi(LY )) is nonnegative for all Y such that |Y | ≤ K due to σi(LY ) ≥ σmin > 1. Similarly, we have F (Y ∪ {a}) − F (Y ) = ∑|Y |+1 i=1 log σi(LY ∪{a}) − ∑|Y | i=1 log σi(LY ) ≥ log σmin > 0 from Eq. 50.\nF.7 PROOF OF LEMMA 4\nLemma 4. Let f(Y ) = log det(LY ) and assume that σmin > 1. Then, for Y ⊆ [[M ]], |Y | < 2k and i /∈ Y , it holds that\nf(Y ∪ {i})− f(Y ) ≥ log σmin, (40) f(Y ∪ {i})− f(Y ) ≤ 2 log σmax − log σmin (41)\nwhere σmin and σmax are the smallest and largest singular values of LY for all Y ⊆ [[M ]], |Y | ≤ 2k.\nProof. For a P0 matrix, we remark that its determinant is equivalent to the product of all singular values. For Y ⊆ [[M ]] and i /∈ Y , from the interlacing inequality of Eq. 50 we have that\nF (Y ∪ {i})− F (Y ) = |Y |+1∑ j=1 log σ′j − |Y |∑ j=1 log σj ≥ log σ′|Y |+1 ≥ log σmin (53)\nwhere σ′j and σj are the j-th largest singular value of LY ∪{i} and LY , respectively. Similarly, using Eq. 51, we get\nF (Y ∪ {i})− F (Y ) ≤ log(σ′1σ′2)− log σ|Y | ≤ 2 log σmax − log σmin. (54)" } ]
2021
NONSYMMETRIC DETERMINANTAL POINT PROCESSES
SP:86d37b08b4c0ab21d139c57bbe3b9e5535eeb3f9
[ "This paper proposes a zero-shot voice style transfer (VST) algorithms that explicitly controls the disentanglement between content information and style information. Experiments show that the proposed algorithm can achieve significant improvement over the existing state-of-the-art VST algorithms. There are two major strengths of this paper. First, it motivates the algorithm design from an information-theoretic perspective. Second, the performance improvement is significant.", "This submission proposes a training approach for voice style transfer using encoder-decoder framework and content and style representations. The approach combines multiple mutual-information (MI) based terms into a single objective function. One of the MI based terms is the MI between content and style representations. By minimising mutual information between these representations, the training approach yields models where these representations are disentangled. Experimental results show that this approach leads to improved performance in speaker verification and speech similarity tasks. Experimental results in challenging zero-shot conditions also demonstrate improved performance in speaker verification, speech naturalness and speech similarity tasks. " ]
Voice style transfer, also called voice conversion, seeks to modify one speaker’s voice to generate speech as if it came from another (target) speaker. Previous works have made progress on voice conversion with parallel training data and pre-known speakers. However, zero-shot voice style transfer, which learns from non-parallel data and generates voices for previously unseen speakers, remains a challenging problem. We propose a novel zero-shot voice transfer method via disentangled representation learning. The proposed method first encodes speaker-related style and voice content of each input voice into separated low-dimensional embedding spaces, and then transfers to a new voice by combining the source content embedding and target style embedding through a decoder. With information-theoretic guidance, the style and content embedding spaces are representative and (ideally) independent of each other. On real-world VCTK datasets, our method outperforms other baselines and obtains state-of-the-art results in terms of transfer accuracy and voice naturalness for voice style transfer experiments under both many-to-many and zero-shot setups.
[ { "affiliations": [], "name": "Siyang Yuan" }, { "affiliations": [], "name": "Pengyu Cheng" }, { "affiliations": [], "name": "Ruiyi Zhang" }, { "affiliations": [], "name": "Weituo Hao" }, { "affiliations": [], "name": "Zhe Gan" }, { "affiliations": [], "name": "Lawrence Carin" } ]
[ { "authors": [ "Alexander A Alemi", "Ian Fischer", "Joshua V Dillon", "Kevin Murphy" ], "title": "Deep variational information bottleneck", "venue": "arXiv preprint arXiv:1612.00410,", "year": 2016 }, { "authors": [ "Donald J Berndt", "James Clifford" ], "title": "Using dynamic time warping to find patterns in time series", "venue": "In KDD workshop,", "year": 1994 }, { "authors": [ "Christopher P Burgess", "Irina Higgins", "Arka Pal", "Loic Matthey", "Nick Watters", "Guillaume Desjardins", "Alexander Lerchner" ], "title": "Understanding disentangling in beta-vae", "venue": "arXiv preprint arXiv:1804.03599,", "year": 2018 }, { "authors": [ "Santosh V Chapaneri" ], "title": "Spoken digits recognition using weighted mfcc and improved features for dynamic time warping", "venue": "International Journal of Computer Applications,", "year": 2012 }, { "authors": [ "Dongdong Chen", "Jing Liao", "Lu Yuan", "Nenghai Yu", "Gang Hua" ], "title": "Coherent online video style transfer", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Tian Qi Chen", "Xuechen Li", "Roger B Grosse", "David K Duvenaud" ], "title": "Isolating sources of disentanglement in variational autoencoders", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Xi Chen", "Yan Duan", "Rein Houthooft", "John Schulman", "Ilya Sutskever", "Pieter Abbeel" ], "title": "Infogan: Interpretable representation learning by information maximizing generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Pengyu Cheng", "Weituo Hao", "Shuyang Dai", "Jiachang Liu", "Zhe Gan", "Lawrence Carin" ], "title": "Club: A contrastive log-ratio upper bound of mutual information", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Pengyu Cheng", "Renqiang Min", "Shen Dinghan", "Christopher Malon", "Yizhe Zhang", "Li Yitong", "Lawrence Carin" ], "title": "Improving disentangled text representation learning with information-theoretic guidance", "venue": "In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics,", "year": 2020 }, { "authors": [ "Yunjey Choi", "Minje Choi", "Munyoung Kim", "Jung-Woo Ha", "Sunghun Kim", "Jaegul Choo" ], "title": "Stargan: Unified generative adversarial networks for multi-domain image-to-image translation", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Ju-chieh Chou", "Hung-Yi Lee" ], "title": "One-shot voice conversion by separating speaker and content representations with instance normalization", "venue": "Proc. 
Interspeech 2019,", "year": 2019 }, { "authors": [ "Thomas M Cover", "Joy A Thomas" ], "title": "Elements of information theory", "venue": null, "year": 2012 }, { "authors": [ "Shivanker Dev Dhingra", "Geeta Nijhawan", "Poonam Pandit" ], "title": "Isolated speech recognition using mfcc and dtw", "venue": "International Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering,", "year": 2013 }, { "authors": [ "Leon A Gatys", "Alexander S Ecker", "Matthias Bethge" ], "title": "Image style transfer using convolutional neural networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Behnam Gholami", "Pritish Sahu", "Ognjen Rudovic", "Konstantinos Bousmalis", "Vladimir Pavlovic" ], "title": "Unsupervised multi-target domain adaptation: An information theoretic approach", "venue": "IEEE Transactions on Image Processing,", "year": 2020 }, { "authors": [ "Elizabeth Godoy", "Olivier Rosec", "Thierry Chonavel" ], "title": "Voice conversion using dynamic frequency warping with amplitude scaling, for parallel or nonparallel corpora", "venue": "IEEE Transactions on Audio, Speech, and Language Processing,", "year": 2011 }, { "authors": [ "Michael Gutmann", "Aapo Hyvärinen" ], "title": "Noise-contrastive estimation: A new estimation principle for unnormalized statistical models", "venue": "In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics,", "year": 2010 }, { "authors": [ "Irina Higgins", "Loic Matthey", "Arka Pal", "Christopher Burgess", "Xavier Glorot", "Matthew Botvinick", "Shakir Mohamed", "Alexander Lerchner" ], "title": "beta-vae: Learning basic visual concepts with a constrained variational framework", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Irina Higgins", "Arka Pal", "Andrei Rusu", "Loic Matthey", "Christopher Burgess", "Alexander Pritzel", "Matthew Botvinick", "Charles Blundell", "Alexander Lerchner. 
Darla" ], "title": "Improving zero-shot transfer in reinforcement learning", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "R Devon Hjelm", "Alex Fedorov", "Samuel Lavoie-Marchildon", "Karan Grewal", "Phil Bachman", "Adam Trischler", "Yoshua Bengio" ], "title": "Learning deep representations by mutual information estimation and maximization", "venue": "arXiv preprint arXiv:1808.06670,", "year": 2018 }, { "authors": [ "Chin-Cheng Hsu", "Hsin-Te Hwang", "Yi-Chiao Wu", "Yu Tsao", "Hsin-Min Wang" ], "title": "Voice conversion from non-parallel corpora using variational auto-encoder", "venue": "In 2016 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA),", "year": 2016 }, { "authors": [ "Haozhi Huang", "Hao Wang", "Wenhan Luo", "Lin Ma", "Wenhao Jiang", "Xiaolong Zhu", "Zhifeng Li", "Wei Liu" ], "title": "Real-time neural style transfer for videos", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Xun Huang", "Serge Belongie" ], "title": "Arbitrary style transfer in real-time with adaptive instance normalization", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Hirokazu Kameoka", "Takuhiro Kaneko", "Kou Tanaka", "Nobukatsu Hojo" ], "title": "Stargan-vc: Non-parallel many-to-many voice conversion using star generative adversarial networks", "venue": "IEEE Spoken Language Technology Workshop (SLT),", "year": 2018 }, { "authors": [ "Takuhiro Kaneko", "Hirokazu Kameoka" ], "title": "Cyclegan-vc: Non-parallel voice conversion using cycleconsistent adversarial networks", "venue": "In 2018 26th European Signal Processing Conference (EUSIPCO),", "year": 2018 }, { "authors": [ "Hyunjik Kim", "Andriy Mnih" ], "title": "Disentangling by factorising", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Durk P Kingma", "Prafulla Dhariwal" ], "title": "Glow: Generative flow with invertible 1x1 convolutions", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Robert Kubichek" ], "title": "Mel-cepstral distance measure for objective speech quality assessment", "venue": "In Proceedings of IEEE Pacific Rim Conference on Communications Computers and Signal Processing,", "year": 1993 }, { "authors": [ "Guillaume Lample", "Sandeep Subramanian", "Eric Smith", "Ludovic Denoyer", "Marc’Aurelio Ranzato", "Y-Lan Boureau" ], "title": "Multiple-attribute text rewriting", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Chung-Han Lee", "Chung-Hsien Wu" ], "title": "Map-based adaptation for speech conversion using adaptation data selection and non-parallel training", "venue": "In Ninth International Conference on Spoken Language Processing,", "year": 2006 }, { "authors": [ "Francesco Locatello", "Stefan Bauer", "Mario Lucic", "Gunnar Raetsch", "Sylvain Gelly", "Bernhard Schölkopf", "Olivier Bachem" ], "title": "Challenging common assumptions in the unsupervised learning of disentangled representations", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Fujun Luan", "Sylvain Paris", "Eli Shechtman", "Kavita Bala" ], "title": "Deep photo style transfer", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Laurens 
van der Maaten", "Geoffrey Hinton" ], "title": "Visualizing data using t-sne", "venue": "Journal of machine learning research,", "year": 2008 }, { "authors": [ "Seyed Hamidreza Mohammadi", "Alexander Kain" ], "title": "An overview of voice conversion systems", "venue": "Speech Communication,", "year": 2017 }, { "authors": [ "Kevin R Moon", "Alfred O Hero" ], "title": "Ensemble estimation of multivariate f-divergence", "venue": "In 2014 IEEE International Symposium on Information Theory,", "year": 2014 }, { "authors": [ "Lindasalwa Muda", "Mumtaj Begam", "Irraivan Elamvazuthi" ], "title": "Voice recognition algorithms using mel frequency cepstral coefficient (mfcc) and dynamic time warping (dtw) techniques", "venue": "arXiv preprint arXiv:1003.4083,", "year": 2010 }, { "authors": [ "Arsha Nagrani", "Joon Son Chung", "Andrew Zisserman" ], "title": "Voxceleb: a large-scale speaker identification dataset", "venue": "arXiv preprint arXiv:1706.08612,", "year": 2017 }, { "authors": [ "Keigo Nakamura", "Tomoki Toda", "Hiroshi Saruwatari", "Kiyohiro Shikano" ], "title": "Speaking aid system for total laryngectomees using voice conversion of body transmitted artificial speech", "venue": "In Ninth International Conference on Spoken Language Processing,", "year": 2006 }, { "authors": [ "XuanLong Nguyen", "Martin J Wainwright", "Michael I Jordan" ], "title": "Estimating divergence functionals and the likelihood ratio by convex risk minimization", "venue": "IEEE Transactions on Information Theory,", "year": 2010 }, { "authors": [ "Aaron van den Oord", "Sander Dieleman", "Heiga Zen", "Karen Simonyan", "Oriol Vinyals", "Alex Graves", "Nal Kalchbrenner", "Andrew Senior", "Koray Kavukcuoglu" ], "title": "Wavenet: A generative model for raw audio", "venue": "arXiv preprint arXiv:1609.03499,", "year": 2016 }, { "authors": [ "Aaron van den Oord", "Yazhe Li", "Oriol Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": "arXiv preprint arXiv:1807.03748,", "year": 2018 }, { "authors": [ "Vassil Panayotov", "Guoguo Chen", "Daniel Povey", "Sanjeev Khudanpur" ], "title": "Librispeech: an asr corpus based on public domain audio books", "venue": "In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2015 }, { "authors": [ "Kaizhi Qian", "Yang Zhang", "Shiyu Chang", "Xuesong Yang", "Mark Hasegawa-Johnson" ], "title": "Autovc: Zero-shot voice style transfer with only autoencoder loss", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Danilo Rezende", "Shakir Mohamed" ], "title": "Variational inference with normalizing flows", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Yuki Saito", "Yusuke Ijima", "Kyosuke Nishida", "Shinnosuke Takamichi" ], "title": "Non-parallel voice conversion using variational autoencoders conditioned by phonetic posteriorgrams and d-vectors", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2018 }, { "authors": [ "Joan Serrà", "Santiago Pascual", "Carlos Segura Perales" ], "title": "Blow: a single-scale hyperconditioned flow for non-parallel raw-audio voice conversion", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Tianxiao Shen", "Tao Lei", "Regina Barzilay", "Tommi Jaakkola" ], "title": "Style transfer from non-parallel text by cross-alignment", "venue": "In Advances in neural information processing systems,", 
"year": 2017 }, { "authors": [ "Berrak Sisman", "Mingyang Zhang", "Sakriani Sakti", "Haizhou Li", "Satoshi Nakamura" ], "title": "Adaptive wavenet vocoder for residual compensation in gan-based voice conversion", "venue": "IEEE Spoken Language Technology Workshop (SLT),", "year": 2018 }, { "authors": [ "Jiaming Song", "Pratyusha Kalluri", "Aditya Grover", "Shengjia Zhao", "Stefano Ermon" ], "title": "Learning controllable fair representations", "venue": "In The 22nd International Conference on Artificial Intelligence and Statistics,", "year": 2019 }, { "authors": [ "Dmitry Ulyanov", "Andrea Vedaldi", "Victor Lempitsky" ], "title": "Instance normalization: The missing ingredient for fast stylization", "venue": "arXiv preprint arXiv:1607.08022,", "year": 2016 }, { "authors": [ "Petar Veličković", "William Fedus", "William L Hamilton", "Pietro Liò", "Yoshua Bengio", "R Devon Hjelm" ], "title": "Deep graph infomax", "venue": "arXiv preprint arXiv:1809.10341,", "year": 2018 }, { "authors": [ "Fernando Villavicencio", "Jordi Bonada" ], "title": "Applying voice conversion to concatenative singing-voice synthesis", "venue": "In Eleventh annual conference of the international speech communication association,", "year": 2010 }, { "authors": [ "Li Wan", "Quan Wang", "Alan Papir", "Ignacio Lopez Moreno" ], "title": "Generalized end-to-end loss for speaker verification", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2018 }, { "authors": [ "Mirjam Wester", "Zhizheng Wu", "Junichi Yamagishi" ], "title": "Analysis of the voice conversion challenge 2016 evaluation results", "venue": "In Interspeech,", "year": 2016 }, { "authors": [ "Junichi Yamagishi", "Christophe Veaux", "Kirsten MacDonald" ], "title": "Cstr vctk corpus: English multispeaker corpus for cstr voice cloning toolkit", "venue": null, "year": 2019 }, { "authors": [ "Zichao Yang", "Zhiting Hu", "Chris Dyer", "Eric P Xing", "Taylor Berg-Kirkpatrick" ], "title": "Unsupervised text style transfer using language models as discriminators", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Style transfer, which automatically converts a data instance into a target style, while preserving its content information, has attracted considerable attention in various machine learning domains, including computer vision (Gatys et al., 2016; Luan et al., 2017; Huang & Belongie, 2017), video processing (Huang et al., 2017; Chen et al., 2017), and natural language processing (Shen et al., 2017; Yang et al., 2018; Lample et al., 2019; Cheng et al., 2020b). In speech processing, style transfer was earlier recognized as voice conversion (VC) (Muda et al., 2010), which converts one speaker’s utterance, as if it was from another speaker but with the same semantic meaning. Voice style transfer (VST) has received long-term research interest, due to its potential for applications in security (Sisman et al., 2018), medicine (Nakamura et al., 2006), entertainment (Villavicencio & Bonada, 2010) and education (Mohammadi & Kain, 2017), among others.\nAlthough widely investigated, VST remains challenging when applied to more general application scenarios. Most of the traditional VST methods require parallel training data, i.e., paired voices from two speakers uttering the same sentence. This constraint limits the application of such models in the real world, where data are often not pair-wise available. Among the few existing models that address non-parallel data (Hsu et al., 2016; Lee & Wu, 2006; Godoy et al., 2011), most methods cannot handle many-to-many transfer (Saito et al., 2018; Kaneko & Kameoka, 2018; Kameoka et al., 2018), which prevents them from converting multiple source voices to multiple target speaker styles. Even among the few non-parallel many-to-many transfer models, to the best of our knowledge, only two models (Qian et al., 2019; Chou & Lee, 2019) allow zero-shot transfer, i.e., conversion from/to newly-coming speakers (unseen during training) without re-training the model.\nThe only two zero-shot VST models (AUTOVC (Qian et al., 2019) and AdaIN-VC (Chou & Lee, 2019)) share a common weakness. Both methods construct encoder-decoder frameworks, which extract the style and the content information into style and content embeddings, and generate a voice sample by combining a style embedding and a content embedding through the decoder. With the combination of the source content embedding and the target style embedding, the models generate\n∗Equal contribution.\nthe transferred voice, based only on source and target voice samples. AUTOVC (Qian et al., 2019) uses a GE2E (Wan et al., 2018) pre-trained style encoder to ensure rich speaker-related information in style embeddings. However, AUTOVC has no regularizer to guarantee that the content encoder does not encode any style information. AdaIN-VC (Chou & Lee, 2019) applies instance normalization (Ulyanov et al., 2016) to the feature map of content representations, which helps to eliminate the style information from content embeddings. However, AdaIN-VC fails to prevent content information from being revealed in the style embeddings. Both methods cannot assure that the style and content embeddings are disentangled without information revealed from each other.\nWith information-theoretic guidance, we propose a disentangled-representation-learning method to enhance the encoder-decoder zero-shot VST framework, for both style and content information preservation. We call the proposed method Information-theoretic Disentangled Embedding for Voice Conversion (IDE-VC). 
Our model successfully induces the style and content of voices into independent representation spaces by minimizing the mutual information between style and content embeddings. We also derive two new multi-group mutual information lower bounds, to further improve the representativeness of the latent embeddings. Experiments demonstrate that our method outperforms previous works under both many-to-many and zero-shot transfer setups on two objective metrics and two subjective metrics." }, { "heading": "2 BACKGROUND", "text": "In information theory, mutual information (MI) is a crucial concept that measures the dependence between two random variables. Mathematically, the MI between two variables x and y is\nI(x;y) := Ep(x,y) [ log p(x,y)\np(x)p(y)\n] , (1)\nwhere p(x) and p(y) are marginal distributions of x and y, and p(x,y) is the joint distribution. Recently, MI has attracted considerable interest in machine learning as a criterion to minimize or maximize the dependence between different parts of a model (Chen et al., 2016; Alemi et al., 2016; Hjelm et al., 2018; Veličković et al., 2018; Song et al., 2019). However, the calculation of exact MI values is challenging in practice, since the closed form of joint distribution p(x,y) in equation (1) is generally unknown. To solve this problem, several MI estimators have been proposed. For MI maximization tasks, Nguyen, Wainwright and Jordan (NWJ) (Nguyen et al., 2010) propose a lower bound by representing (1) as an f -divergence (Moon & Hero, 2014):\nINWJ := Ep(x,y)[f(x,y)]− e−1Ep(x)p(y)[ef(x,y)], (2) with a score function f(x,y). Another widely-used sample-based MI lower bound is InfoNCE (Oord et al., 2018), which is derived with Noise Contrastive Estimation (NCE) (Gutmann & Hyvärinen, 2010). With sample pairs {(xi,yi)}Ni=1 drawn from the joint distribution p(x,y), the InfoNCE lower bound is defined as\nINCE := E [ 1 N N∑ i=1 log ef(xi,yi) 1 N ∑N j=1 e f(xi,yj) ] . (3)\nFor MI minimization tasks, Cheng et al. (2020a) proposed a contrastively learned upper bound that requires the conditional distribution p(x|y):\nI(x;y) ≤ E [ 1 N N∑ i=1 [ log p(xi|yi)− 1 N N∑ j=1 log p(xj |yi) ]] . (4)\nwhere the MI is bounded by the log-ratio of conditional distribution p(x|y) between positive and negative sample pairs. In the following, we derive our information-theoretic disentangled representation learning framework for voice style transfer based on the MI estimators described above." }, { "heading": "3 PROPOSED MODEL", "text": "We assume access to N audio (voice) recordings from M speakers, where speaker u has Nu voice samples Xu = {xui}Nui=1. The proposed approach encodes each voice input x ∈ X = ∪Mu=1Xu into a speaker-related (style) embedding s = Es(x) and a content-related embedding c = Ec(x),\nusing respectively a style encoder Es(·) and a content encoder Ec(·). To transfer a source xui from speaker u to the target style of the voice of speaker v, xvj , we combine the content embedding cui = Ec(xui) and the style embedding svj = Es(xvj) to generate the transferred voice x̂u→v,i = D(svj , cui) with a decoder D(s, c). To implement this two-step transfer process, we introduce a novel mutual information (MI)-based learning objective, that induces the style embedding s and content embedding c into independent representation spaces (i.e., ideally, s contains rich style information of xwith no content information, and vice versa). 
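To make this two-step transfer concrete, the sketch below traces the encode-combine-decode data flow in PyTorch. The tiny GRU-based modules are illustrative stand-ins only (they are not the LSTM/convolutional architectures detailed in Section 5.2); what matters here is the pattern x̂ = D(Es(x_tgt), Ec(x_src)).

```python
import torch
import torch.nn as nn

# Illustrative stand-ins for Es, Ec and D; x is a (batch, frames, n_mels)
# mel-spectrogram tensor.
class StyleEncoder(nn.Module):
    def __init__(self, n_mels=80, d_style=256):
        super().__init__()
        self.rnn = nn.GRU(n_mels, d_style, batch_first=True)

    def forward(self, x):
        h, _ = self.rnn(x)
        return h.mean(dim=1)   # one fixed-size style vector s per utterance

class ContentEncoder(nn.Module):
    def __init__(self, n_mels=80, d_content=32):
        super().__init__()
        self.rnn = nn.GRU(n_mels, d_content, batch_first=True)

    def forward(self, x):
        h, _ = self.rnn(x)
        return h               # frame-wise content embeddings c

class Decoder(nn.Module):
    def __init__(self, n_mels=80, d_style=256, d_content=32):
        super().__init__()
        self.rnn = nn.GRU(d_style + d_content, n_mels, batch_first=True)

    def forward(self, s, c):
        s_rep = s.unsqueeze(1).expand(-1, c.size(1), -1)  # broadcast style over time
        y, _ = self.rnn(torch.cat([s_rep, c], dim=-1))
        return y               # reconstructed / transferred mel-spectrogram

def transfer(E_s, E_c, D, x_src, x_tgt):
    """Combine source content with target style: x_hat = D(Es(x_tgt), Ec(x_src))."""
    return D(E_s(x_tgt), E_c(x_src))
```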
In the following, we first describe our MI-based training objective in Section 3.1, and then discuss the practical estimation of the objective in Sections 3.2 and 3.3." }, { "heading": "3.1 MI-BASED DISENTANGLING OBJECTIVE", "text": "From an information-theoretic perspective, to learn representative latent embedding (s, c), it is desirable to maximize the mutual information between the embedding pair (s, c) and the input x. Meanwhile, the style embedding s and the content c are desired to be independent, so that we can control the style transfer process with different style and content attributes. Therefore, we minimize the mutual information I(s; c) to disentangle the style embedding and content embedding spaces. Consequently, our overall disentangled-representation-learning objective seeks to minimize\nL = I(s; c)− I(x; s, c) = I(s; c)− I(x; c|s)− I(x; s). (5) As discussed in Locatello et al. (Locatello et al., 2019), without inductive bias for supervision, the learned representation can be meaningless. To address this problem, we use the speaker identity u as a variable with values {1, . . . ,M} to learn representative style embedding s for speaker-related attributes. Noting that the process from speaker u to his/her voice xui to the style embedding sui (as u → x → s) is a Markov Chain, we conclude I(s;x) ≥ I(s;u) based on the MI data-processing inequality (Cover & Thomas, 2012) (as stated in the Supplementary Material). Therefore, we replace I(s;x) in L with I(s;u) and minimize an upper bound instead:\nL̄ = I(s; c)− I(x; c|s)− I(u; s) ≥ I(s; c)− I(x; c|s)− I(x; s), (6) In practice, calculating the MI is challenging, as we typically only have access to samples, and lack the required distributions (Chen et al., 2016). To solve this problem, below we provide several MI estimates to the objective terms I(s; c), I(x; c|s) and I(u; s)." }, { "heading": "3.2 MI LOWER BOUND ESTIMATION", "text": "To maximize I(u; s), we derive the following multi-group MI lower bound (Theorem 3.1) based on the NWJ bound developed in Nguyen et al. (Nguyen et al., 2010). The detailed proof is provided in the Supplementary Material. Let µ(−ui)v = µv represent the mean of all style embeddings in group Xv , constituting the style centroid of speaker v; µ(−ui)u is the mean of all style embeddings in group Xu except data point xui, representing a leave-xui-out style centroid of speaker u. Intuitively, we minimize ‖sui − µ(−ui)u ‖ to encourage the style embedding of voice xui to be more similar to the style centroid of speaker u, while maximizing ‖sui − µ(−ui)v ‖ to enlarge the margin between sui and the other speakers’ style centroids µv . We denote the right-hand side of (7) as Î1. Theorem 3.1. Let µ(−ui)v = 1Nv ∑Nv k=1 svk if u 6= v; and µ (−ui) u = 1 Nu−1 ∑ j 6=i suj . Then,\nI(u; s) ≥ E [ 1 N M∑ u=1 Nu∑ i=1 [ − ‖sui − µ(−ui)u ‖2 − e−1 N M∑ v=1 Nv exp{−‖sui − µ(−ui)v ‖2} ]] . (7)\nTo maximize I(x; c|s), we derive a conditional mutual information lower bound below: Theorem 3.2. Assume that given s = su, samples {(xui, cui)}Nui=1 are observed. With a variational distribution qφ(x|s, c), we have I(x; c|s) ≥ E[Î], where\nÎ = 1 N M∑ u=1 Nu∑ i=1 [ log qφ(xui|cui, su)− log ( 1 Nu Nu∑ j=1 qφ(xuj |cui, su) )] . (8)\nBased on the criterion for s in equation (7), a well-learned style encoder Es pulls all style embeddings sui from speaker u together. Suppose su is representative of the style embeddings of set Xu. 
If we parameterize the distribution qφ(x|s, c) ∝ exp(−‖x−D(s, c)‖2) with decoder D(s, c), then based on Theorem 3.2, we can estimate the lower bound of I(x; c|s) with the following objective:\nÎ2 := 1\nN M∑ u=1 Nu∑ i=1 [ − ‖xui −D(cui, su)‖2 − log ( 1 Nu Nu∑ j=1 exp{−‖xuj −D(cui, su)‖2} )] .\nWhen maximizing Î2, for speaker u with his/her given voice style su, we encourage the content embedding cui to well reconstruct the original voice xui, with small ‖xui − D(cui, su)‖. Additionally, the distance ‖xuj −D(cui, su)‖ is minimized, ensuring cui does not contain information to reconstruct other voices xuj from speaker u. With Î2, the correlation between xui and cui is amplified, which improves cui in preserving the content information." }, { "heading": "3.3 MI UPPER BOUND ESTIMATION", "text": "The crucial part of our framework is disentangling the style and the content embedding spaces, which imposes (ideally) that the style embedding s excludes any content information and vice versa. Therefore, the mutual information between s and c is expected to be minimized. To estimate I(s; c), we derive a sample-based MI upper bound in Theorem 3.3 base on (4).\nTheorem 3.3. If p(s|c) provides the conditional distribution between variables s and c, then\nI(s; c) ≤ E [ 1 N M∑ u=1 Nu∑ i=1 [ log p(sui|cui)− 1 N M∑ v=1 Nv∑ j=1 log p(sui|cvj) ]] . (9)\nThe upper bound in (9) requires the ground-truth conditional distribution p(s|c), whose closed form is unknown. Therefore, we use a probabilistic neural network qθ(s|c) to approximate p(s|c) by maximizing the log-likelihood F(θ) = ∑Mu=1∑Nui=1 log qθ(sui|cui). With the learned qθ(s|c), the\nobjective for minimizing I(s; c) becomes:\nÎ3 := 1\nN M∑ u=1 Nu∑ i=1 [ log qθ(sui|cui)− 1 N M∑ v=1 Nv∑ j=1 log qθ(sui|cvj) ] . (10)\nWhen weights of encoders Ec, Es are updated, the embedding spaces s, c change, which leads to the changing of conditional distribution p(s|c). Therefore, the neural approximation qθ(s|c) must be updated again. Consequently, during training, the encoders Ec, Es and the approximation qθ(s|c) are updated iteratively. In the Supplementary Material, we further discuss that with a good approximation qθ(s|c), Î3 remains an MI upper bound." }, { "heading": "3.4 ENCODER-DECODER FRAMEWORK", "text": "With the aforementioned MI estimates Î1, Î2, and Î3, the final training loss of our method is L∗ = [Î3 − Î1 − Î2]− βF(θ), (11)\nwhere β is a positive number re-weighting the two objective terms. Term Î3−Î1−Î2 is minimized w.r.t the parameters in encoders Ec, Es and decoder D; term F(θ) as the likelihood function of qθ(s|c) is maximized w.r.t the parameter θ. In practice, the two terms are updated iteratively with gradient descent (by fixing one and updating another). The training and transfer processes of our model are shown in Figure 1. We name this MI-guided learning framework as Information-theoretic Disentangled Embedding for Voice Conversion (IDE-VC)." }, { "heading": "4 RELATED WORK", "text": "Many-to-many Voice Conversion Traditional voice style transfer methods mainly focus on one-toone and many-to-one conversion tasks, which can only transfer voices into one target speaking style. This constraint limits the applicability of the methods. Recently, several many-to-many voice conversion methods have been proposed, to convert voices in broader application scenarios. StarGANVC (Kameoka et al., 2018) uses StarGAN (Choi et al., 2018) to enable many-to-many transfer, in which voices are fed into a unique generator conditioned on the target speaker identity. 
A discriminator is also used to evaluate generation quality and transfer accuracy. Blow (Serrà et al., 2019) is a flow-based generative model (Kingma & Dhariwal, 2018), that maps voices from different speakers into the same latent space via normalizing flow (Rezende & Mohamed, 2015). The conversion is accomplished by transforming the latent representation back to the observation space with the target speaker’s identifier. Two other many-to-many conversion models, AUTOVC (Qian et al., 2019) and AdaIN-VC (Chou & Lee, 2019), extend applications into zero-shot scenarios, i.e., conversion from/to a new speaker (unseen during training), based on only a few utterances. Both AUTOVC and AdaIN-VC construct an encoder-decoder framework, which extracts the style and content of one speech sample into separate latent embeddings. Then when a new voice from an unseen speaker comes, both its style and content embeddings can be extracted directly. However, as discussed in the Introduction, both methods do not have explicit regularizers to reduce the correlation between style and content embeddings, which limits their performance.\nDisentangled Representation Learning Disentangled representation learning (DRL) aims to encode data points into separate independent embedding subspaces, where different subspaces represent different data attributes. DRL methods can be classified into unsupervised and supervised approaches. Under unsupervised setups, Burgess et al. (2018), Higgins et al. (2016) and Kim & Mnih (2018) use latent embeddings to reconstruct the original data while keeping each dimension of the embeddings independent with correlation regularizers. This has been challenged by Locatello et al. (2019), in that each part of the learned embeddings may not be mapped to a meaningful data attribute. In contrast, supervised DRL methods effectively learn meaningful disentangled embedding parts by adding different supervision to different embedding components. Between the two embedding parts, the correlation is still required to be reduced to prevent the revealing of information to each other. The correlation-reducing methods mainly focus on adversarial training between embedding parts (Hjelm et al., 2018; Kim & Mnih, 2018), and mutual information minimization (Chen et al., 2018; Cheng et al., 2020b). By applying operations such as switching and combining, one can use disentangled representations to improve empirical performance on downstream tasks, e.g. conditional generation (Burgess et al., 2018), domain adaptation (Gholami et al., 2020), and few-shot learning (Higgins et al., 2017)." }, { "heading": "5 EXPERIMENTS", "text": "We evaluate our IDE-VC on real-world voice a dataset under both many-to-many and zero-shot VST setups. The selected dataset is CSTR Voice Cloning Toolkit (VCTK) (Yamagishi et al., 2019), which includes 46 hours of audio from 109 speakers. Each speaker reads a different sets of utterances, and the training voices are provided in a non-parallel manner. The audios are downsampled at 16kHz. In the following, we first describe the evaluation metrics and the implementation details, and then analyze our model’s performance relative to other baselines under many-to-many and zero-shot VST settings." }, { "heading": "5.1 EVALUATION METRICS", "text": "Objective Metrics We consider two objective metrics: Speaker verification accuracy (Verification) and the Mel-Cepstral Distance (Distance) (Kubichek, 1993). 
The speaker verification accuracy measures whether the transferred voice belongs to the target speaker. For fair comparison, we used a third-party pre-trained speaker encoder Resemblyzer1 to classify the speaker identity from the transferred voices. Specifically, style centroids for speakers are learned with ground-truth voice samples. For a transferred voice, we encode it via the pre-trained speaker encoder and find the speaker with the closest style centroid as the identity prediction. For the Distance, the vanilla Mel-Cepstral Distance (MCD) cannot handle the time alignment issue described in Section 2. To make reasonable comparisons between the generation and ground truth, we apply the Dynamic Time Warping (DTW) algorithm (Berndt & Clifford, 1994) to automatically align the time-evolving sequences before calculating MCD. This DTW-MCD distance measures the similarity of the transferred voice and the real voice from the target speaker. Since the calculation of DTW-MCD requires parallel data, we select voices with the same content from the VCTK dataset as testing pairs. Then we transfer one voice in the pair and calculate DTW-MCD with the other voice as reference.\nSubjective Metrics Following Wester et al. (Wester et al., 2016), we use the naturalness of the speech (Naturalness), and the similarity of the transferred speech to target identity (Similarity) as subjective metrics. For Naturalness, annotators are asked to rate the score from 1-5 for each transferred speech.For the Similarity, the annotators are presented with two audios (the converted speech and the corresponding reference), and are asked to rate the score from 1 to 4. For both scores, the higher the better. Following the setting in Blow (Serrà et al., 2019), we report Similarity defined as a total percentage from the binary rating. The evaluation of both subjective metrics is conducted on Amazon Mechanical Turk (MTurk)2. More details about evaluation metrics are provided in the Supplementary Material." }, { "heading": "5.2 IMPLEMENTATION DETAILS", "text": "Following AUTOVC (Qian et al., 2019), our model inputs are represented via mel-spectrogram. The number of mel-frequency bins is set as 80. When voices are generated, we adopt the WaveNet vocoder (Oord et al., 2016) pre-trained on the VCTK corpus to invert the spectrogram signal back to a waveform. The spectrogram is first upsampled with deconvolutional layers to match the sampling rate, and then a standard 40-layer WaveNet is applied to generate speech waveforms. Our model is implememted with Pytorch and takes 1 GPU day on an Nvidia Xp to train.\nEncoder Architecture The speaker encoder consists of a 2-layer long short-term memory (LSTM) with cell size of 768, and a fully-connected layer with output dimension 256. The speaker encoder is initialized with weights from a pretrained GE2E (Wan et al., 2018) encoder. The input of the content encoder is the concatenation of the mel-spectrogram signal and the corresponding speaker embedding. The content encoder consists of three convolutional layers with 512 channels, and two layers of a bidirectional LSTM with cell dimension 32. 
Following the setup in AUTOVC (Qian et al., 2019), the forward and backward outputs of the bi-directional LSTM are downsampled by 16.\nDecoder Architecture Following AUTOVC (Qian et al., 2019), the initial decoder consists of a three-layer convolutional neural network (CNN) with 512 channels, three LSTM layers with cell dimension 1024, and another convolutional layer to project the output of the LSTM to dimension of 80. To enhance the quality of the spectrogram, following AUTOVC (Qian et al., 2019), we use a post-network consisting of five convolutional layers with 512 channels for the first four layers, and\n1https://github.com/resemble-ai/Resemblyzer 2https://www.mturk.com/\n80 channels for the last layer. The output of the post-network can be viewed as a residual signal. The final conversion signal is computed by directly adding the output of the initial decoder and the post-network. The reconstruction loss is applied to both the output of the initial decoder and the final conversion signal.\nApproximation Network Architecture As described in Section 3.3, minimizing the mutual information between style and content embeddings requires an auxiliary variational approximation qθ(s|c). For implementation, we parameterize the variational distribution in the Gaussian distribution family qθ(s|c) = N (µθ(c),σ2θ(c) · I), where mean µθ(·) and variance σ2θ(·) are two-layer fully-connected networks with tanh(·) as the activation function. With the Gaussian parameterization, the likelihoods in objective Î3 can be calculated in closed form." }, { "heading": "5.3 STYLE TRANSFER PERFORMANCE", "text": "For the many-to-many VST task, we randomly select 10% of the sentences for validation and 10% of the sentences for testing from the VCTK dataset, following the setting in Blow (Serrà et al., 2019). The rest of the data are used for training in a non-parallel scheme. For evaluation, we select voice pairs from the testing set, in which each pair of voices have the same content but come from different speakers. In each testing pair, we conduct transfer from one voice to the other voice’s speaking style, and then we compare the transferred voice and the other voice as evaluating the model performance. We test our model with four competitive baselines: Blow (Serrà et al., 2019)3, AUTOVC (Qian et al., 2019), AdaIN-VC (Chou & Lee, 2019) and StarGAN-VC (Kameoka et al., 2018). The detailed implementation of these four methods are provided in the Supplementary Material. Table 1 shows the subjective and objective evaluation for the many-to-many VST task. Both methods with the encoder-decoder framework, AdaIN-VC and AUTOVC, have competitive results. However, our IDE-VC outperforms the other baselines on all metrics, demonstrating that the style-content disentanglement in the latent space improves the performance of the encoder-decoder framework.\nFor the zero-shot VST task, we use the same train-validation dataset split as in the many-to-many setup. The testing data are selected to guarantee that no test speaker has any utterance in the training set. We compare our model with the only two baselines, AUTOVC (Qian et al., 2019) and AdaINVC (Chou & Lee, 2019), that are able to handle voice transfer for newly-coming unseen speakers. We used the same implementations of AUTOVC and AdaIN-VC as in the many-to-many VST. The evaluation results of zero-shot VST are shown in Table 2, among the two baselines AdaIN-VC performs better than AUTOVC overall.Our IDE-VC outperforms both baseline methods, on all metrics. 
All three tested models have encoder-decoder transfer frameworks, the superior performance\n3For Blow model, we use the official implementation available on Github (https://github.com/joansj/blow). We report the best result we can obtain here, under training for 100 epochs (11.75 GPU days on Nvidia V100).\nof IDE-VC indicates the effectiveness of our disentangled representation learning scheme. More evaluation details are provided in the supplementary material." }, { "heading": "5.4 DISENTANGLEMENT DISCUSSION", "text": "Besides the performance comparison with other VST baselines, we demonstrate the capability of our information-theoretic disentangled representation learning scheme. First, we conduct a tSNE (Maaten & Hinton, 2008) visualization of the latent spaces of the IDE-VC model. As shown in the left of Figure 2, style embeddings from the same speaker are well clustered, and style embeddings from different speakers separate in a clean manner. The clear pattern indicates our style encoder Es can verify the speakers’ identity from the voice samples. In contrast, the content embeddings (in the right of Figure 2) are indistinguishable for different speakers, which means our content encoder Ec successfully eliminates speaker-related information and extracts rich semantic content from the data.\nWe also empirically evaluate the disentanglement, by predicting the speakers’ identity based on only the content embeddings. A two-layer fully-connected network is trained on the testing set with a content embedding as input, and the corresponding speaker identity as output. We compare our IDE-VC with AUTOVC and AdaIN-VC, which also output content embeddings. The classification results are shown in Table 3. Our IDE-VC reaches the lowest classification accuracy, indicating that the content embeddings learned by IDE-VC contains the least speaker-related information. Therefore, our IDE-VC learns disentangled representations with high quality compared with other baselines.\n5.5 ABLATION STUDY\nMoreover, we have considered an ablation study that addresses performance effects from different learning losses in (11), with results shown in Table 4. We compare our model with two models trained by part of the loss function in (11), while keeping the other training setups unchanged, including the model structure. From the results, when the model is trained without the style encoder loss term Î1, a transferred voice still is generated, but with a large distance to the ground truth. The verification accuracy also significantly decreases with no speaker-related\ninformation utilized. When the disentangling term Î3 is removed, the model still reaches competitive performance, because the style encoder Es and decoder D are well trained by Î1 and Î2. However, when adding term Î3, we disentangle the style and content spaces, and improve the transfer quality with higher verification accuracy and less distortion. The performance without term Î2 is not reported, because the model cannot even generate fluent speech without the reconstruction loss." }, { "heading": "6 CONCLUSIONS", "text": "We have improved the encoder-decoder voice style transfer framework by disentangled latent representation learning. To effectively induce the style and content information of speech into independent embedding latent spaces, we minimize a sample-based mutual information upper bound between style and content embeddings. 
The disentanglement of the two embedding spaces ensures the voice transfer accuracy without information revealed from each other. We have also derived two new multi-group mutual information lower bounds, which are maximized during training to enhance the representativeness of the latent embeddings. On the real-world VCTK dataset, our model outperforms previous works under both many-to-many and zero-shot voice style transfer setups. Our model can be naturally extended to other style transfer tasks modeling time-evolving sequences, e.g., video and music style transfer. Moreover, our general multi-group mutual information lower bounds have broader potential applications in other representation learning tasks." }, { "heading": "ACKNOWLEDGEMENTS", "text": "This research was supported in part by the DOE, NSF and ONR." } ]
2021
IMPROVING ZERO-SHOT VOICE STYLE TRANSFER VIA DISENTANGLED REPRESENTATION LEARNING
SP:92bb35142d496d7afaa07a298a3bffabd00ec352
[ "The authors propose a learned model specialized on learning Lagrangian fluid dynamics for incompressible fluids. The model is a hybrid between a simulator with explicit advection, collision and pressure correction stages, and a learned model, trained by supervising each of those stages. The authors demonstrate improved stability/conservation of physical properties for a model, and some flexibility to the time-step being changed at test time.", "The paper deals with the prediction of 3D Lagrangian Fluid Simulations. Therefore the problem is divided into 3 subproblems, oriented on numerical simulations. An advection part, where the acceleration of the particles is calculated, a collision step, where the boundary effects are included, and a pressure prediction part, where the pressure for maintaining the volume is determined. A graph-based network is used for each part, which is either node or edge-based according to the requirements. " ]
We present a data-driven model for fluid simulation under Lagrangian representation. Our model uses graphs to describe the fluid field, where physical quantities are encoded as node and edge features. Instead of directly predicting the acceleration or position correction given the current state, we decompose the simulation scheme into separate parts advection, collision, and pressure projection. For these different reasoning tasks, we propose two kinds of graph neural network structures, node-focused networks, and edge-focused networks. By introducing physics prior knowledge, our model can be efficient in terms of training and inference. Our tests show that the learned model can produce accurate results and remain stable in scenarios with a large number of particles and different geometries. Unlike many previous works, further tests demonstrate that our model is able to retain many important physical properties of incompressible fluids, such as minor divergence and reasonable pressure distribution. Additionally, our model can adopt a range of time step sizes different from ones using in the training set, which indicates its robust generalization capability.
[]
[ { "authors": [ "B. Ummenhofer", "L. Prantl", "N. Thuerey", "V. Koltun" ], "title": "Lagrangian fluid simulation with continuous convolutions,", "venue": "in International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "J. Monaghan" ], "title": "An introduction to sph,", "venue": "Computer Physics Communications,", "year": 1988 }, { "authors": [ "M. Müller", "D. Charypar", "M. Gross" ], "title": "Particle-based fluid simulation for interactive applications,", "venue": "Proceedings of the 2003 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, ser. SCA ’03. Goslar, DEU: Eurographics Association,", "year": 2003 }, { "authors": [ "S. Koshizuka", "Y. Oka" ], "title": "Moving-particle semi-implicit method for fragmentation of incompressible fluid,", "venue": "Nuclear Science and Engineering,", "year": 1996 }, { "authors": [ "M. Becker", "M. Teschner" ], "title": "Weakly compressible sph for free surface flows,", "venue": "Proceedings of the 2007 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, ser. SCA ’07. Goslar, DEU: Eurographics Association,", "year": 2007 }, { "authors": [ "B. Solenthaler", "R. Pajarola" ], "title": "Predictive-corrective incompressible sph,", "venue": "ACM SIGGRAPH 2009 Papers, ser. SIGGRAPH ’09", "year": 2009 }, { "authors": [ "L. Ladický", "S. Jeong", "B. Solenthaler", "M. Pollefeys", "M. Gross" ], "title": "Data-driven fluid simulations using regression forests,", "venue": "ACM Trans. Graph.,", "year": 2015 }, { "authors": [ "J. Tompson", "K. Schlachter", "P. Sprechmann", "K. Perlin" ], "title": "Accelerating eulerian fluid simulation with convolutional networks,", "venue": "CoRR, vol. abs/1607.03597,", "year": 2016 }, { "authors": [ "X. Xiao", "Y. Zhou", "H. Wang", "X. Yang" ], "title": "A novel cnn-based poisson solver for fluid simulation,", "venue": "IEEE Transactions on Visualization Computer Graphics,", "year": 2020 }, { "authors": [ "S. Wiewel", "M. Becher" ], "title": "Thuerey, “Latent-space physics: Towards learning the temporal evolution of fluid flow,", "venue": "CoRR, vol. abs/1802.10123,", "year": 2018 }, { "authors": [ "J. Morton", "A. Jameson", "M.J. Kochenderfer", "F. Witherden" ], "title": "Deep dynamical modeling and control of unsteady fluid flows,", "venue": "Advances in Neural Information Processing Systems 31,", "year": 2018 }, { "authors": [ "F. de Avila Belbute-Peres", "T.D. Economon", "J.Z. Kolter" ], "title": "Combining differentiable pde solvers and graph neural networks for fluid flow prediction,", "venue": null, "year": 2020 }, { "authors": [ "T.N. Kipf", "M. Welling" ], "title": "Semi-supervised classification with graph convolutional networks,", "venue": "CoRR, vol. abs/1609.02907,", "year": 2016 }, { "authors": [ "W.L. Hamilton", "R. Ying", "J. Leskovec" ], "title": "Inductive representation learning on large graphs,", "venue": null, "year": 2017 }, { "authors": [ "S. Wang", "S. Suo", "W.-C. Ma", "A. Pokrovsky", "R. Urtasun" ], "title": "Deep parametric continuous convolutional neural networks,", "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "P.W. Battaglia", "R. Pascanu", "M. Lai", "D.J. Rezende", "K. Kavukcuoglu" ], "title": "Interaction networks for learning about objects, relations and physics,", "venue": "CoRR, vol. abs/1612.00222,", "year": 2016 }, { "authors": [ "M.B. Chang", "T. Ullman", "A. Torralba", "J.B. 
Tenenbaum" ], "title": "A compositional object-based approach to learning physical dynamics,", "venue": "CoRR, vol. abs/1612.00341,", "year": 2016 }, { "authors": [ "A. Sanchez-Gonzalez", "N. Heess", "J.T. Springenberg", "J. Merel", "M.A. Riedmiller", "R. Hadsell", "P.W. Battaglia" ], "title": "Graph networks as learnable physics engines for inference and control,", "venue": "CoRR, vol. abs/1806.01242,", "year": 2018 }, { "authors": [ "Y. Li", "J. Wu", "R. Tedrake", "J.B. Tenenbaum", "A. Torralba" ], "title": "Learning particle dynamics for manipulating rigid bodies, deformable objects, and fluids,", "venue": "CoRR, vol", "year": 2018 }, { "authors": [ "D. Mrowca", "C. Zhuang", "E. Wang", "N. Haber", "L.F. Fei-Fei", "J. Tenenbaum", "D.L. Yamins" ], "title": "Flexible neural representation for physics prediction,", "venue": "Advances in Neural Information Processing Systems 31,", "year": 2018 }, { "authors": [ "G.K. Batchelor" ], "title": "An Introduction to Fluid Dynamics, ser", "venue": null, "year": 2000 }, { "authors": [ "P.W. Battaglia", "J.B. Hamrick", "V. Bapst", "A. Sanchez-Gonzalez", "V.F. Zambaldi", "M. Malinowski", "A. Tacchetti", "D. Raposo", "A. Santoro", "R. Faulkner", "Ç. Gülçehre", "H.F. Song", "A.J. Ballard", "J. Gilmer", "G.E. Dahl", "A. Vaswani", "K.R. Allen", "C. Nash", "V. Langston", "C. Dyer", "N. Heess", "D. Wierstra", "P. Kohli", "M. Botvinick", "O. Vinyals", "Y. Li", "R. Pascanu" ], "title": "Relational inductive biases, deep learning, and graph networks,", "venue": "CoRR, vol. abs/1806.01261,", "year": 2018 }, { "authors": [ "B.-H. Lee", "J.-C. Park", "M.-H. Kim", "S.-C. Hwang" ], "title": "Step-by-step improvement of mps method in simulating violent free-surface motions and impact-loads,", "venue": "Computer Methods in Applied Mechanics and Engineering,", "year": 2011 }, { "authors": [ "D.P. Kingma", "J. Ba" ], "title": "Adam: A method for stochastic optimization,", "venue": null, "year": 2014 }, { "authors": [ "Sanchez-Gonzalez" ], "title": "2020) states \"We use GNs without global features or global updates (similar to an interaction network)\", so we implement the GN block update mechanism following the description", "venue": "For the implementation of GN block,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "For many science and engineering problems, fluids are an essential integral part. How to simulate fluid dynamics accurately has long been studied by researchers and a large class of numerical models have been developed. However, computing high-quality fluid simulation is still computationally expensive despite the advances in computing power. Also, the time of calculation usually increases drastically when the resolution of the simulating scene scales up. A common way to alleviate computing costs is using a data-driven model. Recent progress in the machine learning domain opens up the possibility of employing learning algorithms to learn and model fluid dynamics.\nIn this paper, we propose a graph-based data-driven fluid dynamics model (Fluid Graph Networks, FGN), which consists of simple multi-layer perceptron and graph inductive architectures. Our model predicts and integrates forward the movement of incompressible fluids based on observations. Compared to previous works in this domain (Ummenhofer et al., 2020; Sanchez-Gonzalez et al., 2020), our model enjoys traceability of physical properties of the system, like low velocity-divergence and constant particle density, and it can predict reasonable pressure distribution. Experiments demonstrate that our model can remain stable and accurate in long-term simulation. Although our model is entailed and customized for fluid simulation, it can be extended to simulation of other dynamics under the Lagrangian framework, as it takes universal features (positions, velocities, particle density) under the Lagrangian framework as input." }, { "heading": "2 RELATED WORKS", "text": "Our model is built upon the Lagrangian representation of fluid, where continuous fluids are discretized and approximated by a set of particles. The most prominent advantage of the Lagrangian method is that the particle boundary is the material interface, which makes boundary conditions easy to impose, especially when the material interface is large and changing violently. A well-known Lagrangian method is Smooth Particle Hydrodynamics (SPH)(Monaghan, 1988). SPH and its variants are widely used in the numerical physic simulation, especially fluid dynamics under various environments. Particle-based fluid simulation (Müller et al., 2003) introduces SPH model to simulate fluids and\ngenerate realistic visual effects. Moving particle semi-implicit method (MPS) (Koshizuka and Oka, 1996) markedly improves the accuracy and stability of incompressible fluid simulation by introducing a pressure projection procedure that emulates Eulerian grid-based methods. Weakly compressible SPH (WSPH)(Becker and Teschner, 2007) introduces equation of state to model the pressure during the simulation. Predictive-corrective incompressible SPH (Solenthaler and Pajarola, 2009) and divergence-free SPH (Bender and Koschier, 2015) use iterative method to improve the accuracy of pressure calculation in incompressible flow simulation.\nModeling fluid dynamics in a data-driven way has been explored and studied by many researchers. With advances in machine learning algorithms, many data-driven models employing machine learning algorithms have been built. Ladický et al. (2015) reformulate the Navier-Stokes equation as a regression problem and build a regressor using random forest, which significantly improves the calculation efficiency. Tompson et al. (2016), Xiao et al. 
(2020) learn the pressure projection under the Eulerian framework with a convolutional neural network, which accelerates the fluid simulation. Wiewel et al. (2018) bring significant speed-up by learning a reduced-order representation and predicting the pressure field with an LSTM-based model. Morton et al. (2018) learn the dynamics of airflow around a cylinder based on Koopman theory. de Avila Belbute-Peres et al. (2020) predict fluid flow by combining grid-based method with graph convolutional neural networks.\nLearning and reasoning particle dynamics under graph representation has the following benefits and conveniences. First, particle-based methods model physics phenomena as interactions between particles within a local area. This imposes an inductive bias for learning under the Lagrangian framework: dynamics have a strong locality. The locality of unstructured data under Lagrangian representation can be captured by aggregation operation on graphs, such as GCN and other variants (Kipf and Welling, 2016; Hamilton et al., 2017). Second, unlike Eulerian grid-based methods, Lagrangian particle-based methods do not have explicit and structured grid, which makes standard Convolutional Neural Network (CNN) cannot be directly applied to particles without feature processing (Wang et al., 2018; Ummenhofer et al., 2020). Third, many dynamics are based on pairwise relation between particles, like collision, which can be easily interpreted as edge attributes of a graph. Given these factors, recently there have been a rich class of works that use graph neural networks (Scarselli et al., 2009) to learn and reason about underlying physics of interacting objects and particles. (Battaglia et al., 2016; Chang et al., 2016; Sanchez-Gonzalez et al., 2018; Li et al., 2018; Mrowca et al., 2018)" }, { "heading": "3 MODEL", "text": "" }, { "heading": "3.1 FLUID DYNAMICS", "text": "The governing equation for incompressible fluids is the Navier-Stokes equation and the continuity equation as follows (Batchelor, 2000):\nDu Dt = −∇p ρ + ν∇2u + g, (1)\n∇ · u = 0. (2) To describe the fluid field, there are two kinds of systems, Eulerian and Lagrangian ones. In this work, we adopt a Lagrangian system. A common method to solve the Navier-Stokes equation and discretize fluids under the Lagrangian framework is Smooth Particle Hydrodynamics (SPH) method (Monaghan, 1988), where physical quantities at an arbitrary point in the space are approximated by the states of nearby particles.\nIn SPH, an arbitrary scalar (or vector) field A (r) at location r can be represented by a convolution:\nA (r) = ∫ A (r′)W (|r− r′| , h) dV (r′) , (3)\nwhere W is weighting function or smooth kernel as defined in SPH, h is the smoothing length, which defines the range of particles to be considered and V (r′) is the volume at r. Numerically, the interpolation can be approximated by replacing the integration with a summation.\nBased on this model, equation equation 1 and equation 2 can be discretized. The discrete equation system is usually solved under a predictor-corrector scheme, prediction based on advection and correction based on physical properties (such as divergence-free constraint)." }, { "heading": "3.2 MODEL", "text": "Fluids are time-dependent dynamical systems, where location of particles, r, is described by equation of form: dr/dt = f(r). 
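A discrete-time view of this ODE clarifies the modeling assumption made next. The forward-Euler step below is a minimal illustration, not the integrator of the ground-truth solver (which, as noted above, uses a predictor-corrector scheme):

```python
def euler_step(r, f, dt):
    """One explicit time step of dr/dt = f(r): the next state is a function
    of the current state alone, i.e. the discretized system is Markovian."""
    return r + f(r) * dt
```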
When building a data-driven model to learn and solve this system, we assume the system is Markovian, that is, the state of the system at time step n+ 1 depends only on the state of the previous time step n. The update mechanism in our model can be represented as:\n{xn+1,vn+1} = Gθ ({xn,vn}) . (4)\nHere {xn,vn} denotes the positional information and velocity of fluid field at time step n. Datadriven model Gθ, parameterized by θ , maps the state of time step n to time step n+ 1.\nIn order to build a robust and accurate data-driven model, the structure of our model is physicinformed, which enables the model to give interpretable output without losing many physical properties of the system. In general, our model mimics the predictor-corrector scheme and includes three parts, advection net, collision net, and pressure net. They can be divided into two types of graph networks (GN) according to the network structure (Battaglia et al., 2018). Specifically, advection net and pressure net are node-focused graph networks, while collision net is edge-focused networks. As each of these networks has a specific task and different output, they are trained on different data separately.\nNode-focused Graph Network Advection net is responsible for the prediction of advection effect and pressure net is responsible for pressure projection. Considering a particle i, the node-focused graph network first aggregates node features from neighbor particles {vj |∀j ∈ N (i)} and output node embedding fi. The embedding fi will then be passed to a processor gR. gR will predict the desirable physical quantities oi (i.e. acceleration a in advection net and pressure p in pressure net).\nThe whole message passing procedure can be defined as:\noi = gR (gA (Vi)) , Vi = {vj}∀j∈N (i). (5)\nEdge-focused Graph Network To prevent particle penetration and increase model stability, we propose a graph network model that is responsible for predicting the effect of collision. As the relative position and relative velocity will have different signs with different observation perspective (i.e. relative velocity vij = −vji), thus the graph in collision net is directed. In collision net, relative features, eij (relative positions, relative velocities between particle i and j), are passed to processor fR as edge features. The processor will output the edge embedding rij between each pair of nodes. Lastly, edge embedding rij is aggregated via aggregator fA, to gather the influence from all nearby particles and predict an overall effect oi on the center particle i. The whole process is defined as:\noi = fA (fR (Ei)) , Ei = {eij}∀j∈N (i). (6)\nThe advantage of using relative position and velocity instead of global ones as input features is that this explicitly imposes a spatial invariance to the network, given that collision between two particles is invariant to the global positions they are at." }, { "heading": "4 IMPLEMENTATION", "text": "We adopt the numerical model in SPH to evaluate the physical quantities like particle density and differential operators like gradient. To construct graph representation for particles, we establish edges between particles within the control radius." }, { "heading": "4.1 UPDATE SCHEME", "text": "In general, given the state (position xn and velocity vn) of the current time step n, we derive the state of next time step n + 1 by passing the state information through advection net, collision net, and pressure net sequentially. 
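The equations below make each stage precise; as a preview, the full step can be written as the following sketch, where G_adv, G_col, G_pres stand for the three trained networks and density / grad_p for the SPH operators of the appendix. All names here are placeholders for illustration, not the actual implementation.

```python
def fgn_step(x, v, g, nu, rho_c, dt, G_adv, G_col, G_pres, density, grad_p):
    # advection: predicted acceleration and intermediate state (cf. Eqs. 7-9)
    v_star = v + G_adv(x, v, g, nu) * dt
    x_star = x + v_star * dt
    # collision: velocity correction from pairwise relative states (cf. Eqs. 10-11)
    v_star = v_star + G_col(x_star, v_star)
    # pressure projection: counteract compression of the fluid body (cf. Eqs. 12-14)
    p = G_pres(x_star, v_star, density(x_star))
    v_next = v_star - grad_p(p, x_star) / rho_c * dt
    x_next = x_star + v_next * dt
    return x_next, v_next
```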
The input features to advection net are positions and velocities of particles, [xn,vn], along with g, which indicates the external body force per mass of fluid, and viscosity parameter ν, which denotes the magnitude of fluid viscosity. The advection net predicts acceleration of particles:\naadv = Gadv (x n,vn,g, ν) , (7)\nand updates the state of fluid particles to an intermediate state [x∗,v∗].\nv∗ = vn + aadv∆t, (8) x∗ = xn + v∗∆t. (9)\nWhere aadv = [aadv1 , ...,a adv N ], x n = [xn1 , ...,v n N ], v n = [vn1 , ...,v n N ] for particles i ∈ {1, .., N}. We will use the same notation throughout illustration.\nThe collision net takes relative positions and velocities between particles, [x∗i −x∗j ,v∗i −v∗j ] as input, and predicts correction to the velocity,\n∆v = Gcol(x ∗ r ,v ∗ r), (10)\nwhere [x∗r ,v ∗ r ] denotes the relative position and velocity in intermediate state. The velocity is then updated with predicted correction: v∗∗ = v∗ + ∆v. (11)\nThe updated intermediate position and velocity are taken as input by the pressure net, along with particle number density ρ.\np̂ = Gpres(x ∗,v∗∗,ρ). (12)\nThe state of fluid field is then updated to next time step n+ 1,\nvn+1 = v∗∗ − ∇p̂ ρc ∆t, (13) xn+1 = x∗ + vn+1∆t. (14)\nWhere p̂ = [p̂1, ..., p̂N ], ρ = [ρ1, ..., ρN ], ρc is the density parameter of fluid. Predicting pressure of fluid field using particle density and velocity is based on the observation that advection will incur a temporary compression on fluid body, which means fluid density has changed. Therefore the goal of pressure net is to impose a pressure projection to mitigate these deviations.\nDuring the above calculation, the global positional information is only used to construct graph on fluid particles and will not be passed into aggregator and processor as features. The relative position and particle density are normalized before input." }, { "heading": "4.2 NETWORK ARCHITECTURES", "text": "For node-focused graph network, to derive a smooth response of the field with respect to spatial location, the aggregation from layer l − 1 to l is defined as:\na (l) i =\n∑ f\n(l−1) j W (|ri − rj | , h)∑ W (|ri − rj | , h) + f (l−1) i ,∀j ∈ N (i), (15)\nf (l) i = σ ( W · a(l)i ) , (16)\nwhere the aggregator sums up the features f (l−1)j from neighbor vertices {vj |j ∈ N (i)} using smooth kernel as weight function, and here self-connection is added to every vertex. Linear transformation W and non-linear transformation σ are then applied to the aggregated features. In practice we found that two layer of aggregations is enough for the model to produce reasonably accurate output (Adding more aggregation layers does not bring in significant improvements).\nAs for the edge-focused network, the aggregation is simply defined as: ai = ∑\nj∈N (i)\nW · rji, (17)\nwhere rji is edge feature processed by processor, W is a linear transformation matrix. The aggregation in the edge-focused network is at the last layer, so no non-linearity is included here.\nThe processors in both networks are implemented as shared MLP, where they are shared among nodes or edges depending on network types (e.g. in the node-focused network, the processor is an MLP shared among each node). In a node-focused network, the processor has three hidden layers, with the input node embedding f of size 128. In an edge-focused network, the processor has four hidden layers, with input edge attributes [xr,vr] of size 6." 
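As a concrete reading of the aggregation in equations (15)-(16), a minimal PyTorch sketch of one node-focused layer follows. The kernel is the MPS-style kernel of appendix equation (25), the control radius defaults to three particle diameters (3 x 0.050 m, per appendix A.1), and the sparse neighbor search is simplified to dense pairwise distances purely for clarity.

```python
import torch
import torch.nn as nn

def kernel(dist, h):
    """MPS-style smooth kernel (appendix Eq. 25): W = h/d - 1 inside radius h."""
    w = h / dist.clamp(min=1e-9) - 1.0
    return torch.where(dist < h, w, torch.zeros_like(w))

class NodeFocusedLayer(nn.Module):
    """One aggregation layer of Eqs. (15)-(16), dense pairwise version."""
    def __init__(self, d_in, d_out, h=0.15):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)
        self.h = h

    def forward(self, pos, f):
        d = torch.cdist(pos, pos)                 # (N, N) pairwise distances
        w = kernel(d, self.h)
        w.fill_diagonal_(0.0)                     # sum runs over neighbors j != i
        num = w @ f                               # sum_j W_ij * f_j
        den = w.sum(dim=1, keepdim=True).clamp(min=1e-9)
        a = num / den + f                         # kernel-weighted mean + self-connection
        return torch.relu(self.lin(a))            # Eq. (16): sigma(W a)
```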
}, { "heading": "5 EXPERIMENTS", "text": "" }, { "heading": "5.1 TRAINING", "text": "Dataset We use the Moving Particle Semi-implicit method (MPS) (Koshizuka and Oka, 1996) with an improved pressure solver (Lee et al., 2011) to generate high-fidelity simulation data of incompressible flow. MPS is a numerical method based on SPH which prioritizes accuracy over calculation speed. It enforces the incompressibility of the fluid field by solving the pressure Poisson equation. We created 20 scenes by randomly placing fluid blocks, solid obstacles, initializing fluid particles with random velocity (See A.2 for full detail of training dataset settings and training strategy).\nLoss Function and Optimization We train three networks, advection net, collision net, and pressure net separately. Each network is trained in a supervised way by optimizing the mean squared error between prediction ŷ and ground truth y.\nL = 1\nN N∑ i=1 ||yi − ŷi||22 (18)\nWe normalize particle density before inputting into the pressure net, which accelerates and stabilizes training. In the processor, we add LayerNorm (Ba et al., 2016) after activation to each layer (except\nfor the output layer). The parameters of the model are optimized with Adam (Kingma and Ba, 2014) optimizer. We implement the model in PyTorch. All the training and experiments are mainly carried out on NVIDIA GTX 1080Ti GPU." }, { "heading": "5.2 EVALUATION", "text": "Baselines Besides the comparison against ground truth data, we also compare our model to two recent works that use data-driven approaches to simulate fluids under the Lagrangian framework. Ummenhofer et al. (2020) use the continuous convolutional kernel to learn fluid dynamics and they reported that their model has outperformed other works in this domain. Sanchez-Gonzalez et al. (2020) propose a graph network-based simulator (GNS) as a general-purpose physic simulator under Lagrangian representation. Our model and GNS both transform and pass messages of fluid field via graph structures, but GNS consists of a far larger and deeper network with multiple sub-blocks and thus contains much more parameters than ours. For all baseline models, we adopt the same training strategy from original papers but train them on our dataset (See A.5 for full detail).\nMetrics To conduct quantitative analysis, we evaluate model performance based on several metrics. We report the asymmetric version of Chamfer Distance between the simulated results of different models and ground truth sequence. The asymmetric Chamfer distance for two collections of particles X,Y (from X to Y ) is defined as:\nL (X,Y ) = 1\nN ∑ x∈X min y∈Y d (x,y) , (19)\nwhere N is the total particle number of point cloud collection X , and distance function d(x,y) is evaluated using L2-norm ‖x−y‖2. We investigate two essential physical quantities for incompressible fluid simulation - velocity divergence and particle density deviation of fluid field. In addition, we use normalized mean absolute error (MAE) and relative tolerance to evaluate the error of advection net and pressure net on single frame inference respectively. The normalized MAE from prediction ŷ to ground truth y is defined as:\nLMAE = 1\nN ∑ |ŷ − y| |y| . (20)\nThe relative tolerance of the numerical solution x̂ to a linear system Ax = b can be defined as:\ntol = ||Ax̂− b||2 ||b||2 . (21)" }, { "heading": "6 RESULTS", "text": "Performance In order to measure the performance of our FGN model as a physic simulator, we performed simulations on several different test cases. 
In the first two cases, dam collapse and waterfall, we qualitatively compare the results of different models via visualization¹ of the fluids in Figure 4. We report the physical properties of the simulation results and their Chamfer distance to the ground truth data. Quantitative results over the whole simulation sequence are listed in Table 1 (see Figure 9 in A.3 for error trend figures). The results show that our model FGN gives the best accuracy in retaining physical properties and in position prediction. In addition, we study the performance of each sub-network in our model as a stand-alone solver for its sub-dynamical system and report the relative errors in Table 2. For the advection net, we challenge it by applying a different set of material parameters (i.e., different gravity and viscosity parameters). We report the normalized mean absolute error (MAE) between prediction and ground truth. For the pressure net, we evaluate the relative tolerance of its predicted solution $\hat{p}$ to the discretized pressure Poisson equation (i.e., $Ap = b$). The relatively low error demonstrates the capability of our sub-networks in learning and predicting physics.\n¹Video link: https://sites.google.com/view/fluid-graph-network-video/home\nGeneralization To test the model's capability of generalization, we apply our model to test cases with conditions that are beyond the training distribution. In the first case, we study how our model predicts the pressure distribution of circular flow around a cylinder. This scene contains an inflow on the left side, which keeps emitting particles during the simulation, and an outflow on the right side. We challenge our model's robustness by applying different time step sizes to it. Figure 4 shows a qualitative comparison of model output under different time step sizes. In addition, we simulate two scenes that contain much more complex geometries. Visualizations of these two test scenarios are shown in Figure 5. Although our model is trained with only a fairly small dataset, it remains accurate under several different conditions, which demonstrates its capability of generalization and robustness. (More details on the quantitative analysis are in A.3. Figure 10 shows the position error trend under different time step sizes. Figure 11 shows the position error trend in the complex scenes.)\nAblation Study We test the performance of different types of aggregators used in the pressure net, as the pressure net has the largest impact on overall prediction accuracy.² We compare our aggregator against graph convolutional networks (GCN) from Kipf and Welling (2016), Hamilton et al. (2017)'s GraphSAGE with a mean aggregator, and an MLP without any graph aggregation operation. In general, all aggregators give a similar performance on overall position prediction (similar Chamfer distance error), yet our model significantly improves the quality of predictions in terms of maintaining a constant density (see Table 4 in A.4 for full details).\nIn addition, we report the model size (parameter count) and a runtime benchmark in Table 3. Our model is very efficient in terms of training and inference, as it has far fewer trainable parameters than the others.
It can preserve many essential physical properties of the fluid field, such as low volume compression, and predict a reasonable pressure distribution. Our model also has generalization capability: it remains stable when extrapolating to a wide range of different geometries and when adopting different time step sizes. In general, our work is an advance in learning on unstructured data with graph neural networks, and it enriches the paradigm of combining learning-based methods with physical models as well.\n²During preliminary experiments, we found that aggregators in the advection net do not have a significant impact on overall model performance when the viscosity parameter of the fluid is small; therefore, ablation results on it are not listed here. The aggregator in the collision net simply sums up all the predicted edge attributes and applies a linear transformation; using a more advanced structure does not bring further improvement." }, { "heading": "APPENDIX A", "text": "A.1 IMPLEMENTATION DETAILS\nGraph construction We build the graph by establishing edges between particles within a limit radius. We perform the neighborhood search on GPU using a cell-sort algorithm. All the edge attributes are stored in sparse matrices. Although the limit radius does not have a significant impact on the training loss, and a larger limit radius increases the computing cost drastically, we found that a small limit radius can harm the long-term stability of the model. For the advection net and the pressure projection net, we select the limit radius to be three times the particle diameter $D$ ($D = 0.050$ m), and $0.9D$ for the collision net.\nNumerical model In general, we calculate all the gradient operators and the smooth kernel based on the numerical model of SPH.\nIn the SPH model, the particle density is defined as:\n$n_i = \sum_{j \in \mathcal{N}(i), j \neq i} W(|r_i - r_j|, h)$. (22)\nFor a scalar quantity $\phi_i$ at location $r_i$, we approximate its gradient by:\n$\nabla\phi_i = \frac{d}{n^*}\sum_j (\phi_j - \phi_i)\nabla W_{ij}$, (23)\nwhere $n^*$ is the constant particle number density, derived by calculating the maximum particle number density at the initial frame, $d$ is the dimension of the problem, and $W_{ij}$ denotes the smooth kernel function value between particles $i$ and $j$. Similarly, the velocity divergence is defined as:\n$\nabla \cdot v_i = \frac{d}{n^*}\sum_j (v_j - v_i) \cdot \nabla W_{ij}$. (24)\nWe adopt the same smooth kernel function as Koshizuka and Oka (1996), which is very simple to evaluate:\n$W_{ij} = \begin{cases} \frac{h}{\lVert r_i - r_j \rVert_2} - 1 & \text{if } \lVert r_i - r_j \rVert_2 < h \\ 0 & \text{if } \lVert r_i - r_j \rVert_2 \geq h \end{cases}$ (25)\nA.2 TRAINING\nDataset Generation We place one or two of the basic obstacles (shown in Figure 6) in each training scene. In addition to the obstacles, each scene is a cubic box (80x80x40) containing a fluid block (25x25x10) (as shown in Figure 7). We place the fluid block at a random position in the box and initialize its velocity along one direction by uniformly sampling from $U(0, 0.1)$. We generated 20 scenes adopting the above settings and simulate each training scene for 1000 time steps with step size $dt = 0.002$.\nStrategy In each time step of the ground truth simulator, there are mainly three steps. First, advect the fluid with body force and viscosity:\n$v^* = v^n + a^{adv}\Delta t$, (26)\n$x^* = x^n + v^*\Delta t$, (27)\nthen solve the pressure Poisson equation,\n$\nabla^2 p = \frac{1}{\Delta t}\nabla \cdot v^*$, (28)\nand lastly,\n$v^{n+1} = v^* - \frac{\nabla p}{\rho_c}\Delta t$, (29)\n$x^{n+1} = x^* + v^{n+1}\Delta t$. (30)\nHere, we use $[x^*, v^*, g, \nu]$ as the training features and $a^*$ (i.e., $(v^* - v^n)/\Delta t$) as the training label for the advection net. $[x^*, v^*]$ and the particle density $\rho$ (evaluated based on $x^*$) are the training features for the pressure net, with the pressure $p$ as the label.
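The SPH quantities of Eqs. (22)-(25) can be sketched as follows. This is a brute-force O(N²) illustration of the formulas (the paper uses GPU cell sort for neighbor search); `grad_W` is assumed to be a precomputed (N, N, 3) array of kernel gradients, zero outside the kernel support.

```python
import numpy as np

def kernel(r, h):
    # Smooth kernel of Koshizuka and Oka (1996), Eq. (25): W = h/||r|| - 1 inside support.
    return np.where((r > 0) & (r < h), h / np.maximum(r, 1e-12) - 1.0, 0.0)

def particle_density(pos, h):
    # Particle number density, Eq. (22): n_i = sum over neighbors of W(|r_i - r_j|, h).
    dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)  # (N, N)
    return kernel(dist, h).sum(axis=1)        # r > 0 in kernel() excludes self

def divergence(vel, grad_W, n_star, d=3):
    # Velocity divergence, Eq. (24): (d / n*) * sum_j (v_j - v_i) . grad(W_ij).
    dv = vel[None, :, :] - vel[:, None, :]    # (N, N, 3), entry [i, j] = v_j - v_i
    return d / n_star * np.einsum('ijk,ijk->i', dv, grad_W)
```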
To train the collision net, we simulate another particle system that is updated only according to elastic collision rules, with no other dynamics applied. We use the relative velocities and positions before collision as inputs, and the velocity difference (i.e., $\Delta v$) as the output target.\nWe train each network for 100,000 iterations of gradient updates and decay the learning rate from 0.001 to 0.0000625. For the training of the three sub-networks (advection net, collision net, pressure net), the batch sizes are 16, 4, and 32, respectively. To allow the mini-batching of different graphs, we mini-batch the adjacency matrices of different scenes by creating one large sparse matrix and stacking the adjacency matrices on its diagonal, as sketched below.\nA.3 METRICS EVALUATION DETAILS\nWe show the error trends of the dam collapse and waterfall scenes under different evaluation metrics in Figure 9. The position loss analysis for different time step sizes is shown in Figure 10. The position error for the complex scenarios is shown in Figure 11.\nA.4 ABLATION STUDY DETAILS\nThe performance of different graph message-aggregation structures is listed in Table 4.\nA.5 BASELINES IMPLEMENTATION\nContinuous Convolution We use the open-source implementation from Ummenhofer et al. (2020)³. To give a fair benchmark result, we train their network on our dataset. Note that our training dataset is much smaller than theirs and does not include complex geometries. As in our work, we model solid obstacles and walls as virtual particles, so we transform these virtual particles into surfaces and corresponding normals before inputting them into the CConv network. The original CConv uses a time step size of 0.02s, but given such a large time step size, the qualitative results can be distinct from the ground truth and from other models with smaller time step sizes. Hence, during training and comparison we adopt a time step size of 0.002s for the CConv model.\n³https://github.com/intel-isl/DeepLagrangianFluids\nGraph Network-based Simulator We implemented GNS following the description in Sanchez-Gonzalez et al. (2020). We build GNS with 10 unshared GN blocks, conditioned on 5 previous velocities, with relative positions as input edge features. We chose the connectivity radius to be $2.1D$, so that the number of neighbors is around 20. Sanchez-Gonzalez et al. (2020) use finite differences to calculate acceleration and velocity, but in our implementation we explicitly maintain an array to store the velocities of all particles. Additionally, we do not use a learned embedding but a simple zero/one indicator for the particle material type, as in our testing domain there are only two kinds of particles - solid and fluid. The loss function and training procedure are implemented as described in Sanchez-Gonzalez et al. (2020), including noise injection and similar normalization techniques.
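The block-diagonal mini-batching of adjacency matrices described in A.2 can be sketched as follows (our reconstruction, assuming PyTorch sparse COO tensors; not the authors' code):

```python
import torch

def batch_graphs(adj_list, feat_list):
    """Stack per-scene sparse adjacency matrices on the diagonal of one large
    sparse matrix and concatenate node features, so a single forward pass
    processes all scenes with no cross-scene edges."""
    indices, values, offset = [], [], 0
    for adj in adj_list:                        # each adj: sparse (N_s, N_s) tensor
        adj = adj.coalesce()
        indices.append(adj.indices() + offset)  # shift node ids into batched space
        values.append(adj.values())
        offset += adj.shape[0]
    big_adj = torch.sparse_coo_tensor(
        torch.cat(indices, dim=1), torch.cat(values), (offset, offset))
    return big_adj, torch.cat(feat_list, dim=0)
```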
For the implementation of the GN block, since Sanchez-Gonzalez et al. (2020) state "We use GNs without global features or global updates (similar to an interaction network)", we implement the GN block update mechanism following the description in Battaglia et al. (2018):\n$e'_k = \phi^e(e_k, v_{r_k}, v_{s_k}, u)$, (31)\n$v'_i = \phi^v(\hat{e}'_i, v_i, u)$, (32)\n$\hat{e}'_i = \rho^{e \to v}(E'_i)$, (33)\nwhere $\phi$ is an update function, implemented as an MLP here, $\rho$ is the aggregation function that aggregates all edge attributes to their center vertex, $u$ is the global feature, which we append to the nodal features as an input feature, and $r_k$ and $s_k$ denote the receiver and sender vertices, respectively.\nIn the testing stage, since the authors did not state how to set the initial velocities, we warm-start the simulation by computing the first 5 frames using the MPS method and apply GNS to the remaining frames." } ]
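A minimal PyTorch sketch of one such GN block per Eqs. (31)-(33) is given below; the layer sizes and two-layer MLPs are illustrative assumptions, not the exact GNS configuration, and the global feature u is assumed to have been appended to the node features beforehand, as stated above.

```python
import torch
import torch.nn as nn

class InteractionBlock(nn.Module):
    """One interaction-network-style GN block without global updates."""
    def __init__(self, node_dim=128, edge_dim=128, hidden=128):
        super().__init__()
        self.phi_e = nn.Sequential(           # edge update function, Eq. (31)
            nn.Linear(edge_dim + 2 * node_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, edge_dim))
        self.phi_v = nn.Sequential(           # node update function, Eq. (32)
            nn.Linear(node_dim + edge_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, node_dim))

    def forward(self, v, e, senders, receivers):
        # Eq. (31): update each edge from its attributes and endpoint features.
        e_new = self.phi_e(torch.cat([e, v[receivers], v[senders]], dim=-1))
        # Eq. (33): aggregate incoming edge attributes at each receiver vertex.
        agg = torch.zeros(v.shape[0], e_new.shape[-1], device=v.device)
        agg.index_add_(0, receivers, e_new)
        # Eq. (32): update each node from its own features and aggregated edges.
        v_new = self.phi_v(torch.cat([v, agg], dim=-1))
        return v_new, e_new
```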
2020
null
SP:821ad1017b8aa20f5b6bc3fcc56844ae87d983e2
[ "This work proposes a model to generate scene layouts by treating the scene as a composition of primitives, such as instance class, coordinates or scales. The model is a Transformer architecture, that attends on all previously predicted or given instance primitives. The probability of a scene layout is defined with a joint distribution, modeled as the product of conditional distributions using the chain rule. The model predicts an end of sequence token, that allows the generated layouts to have variable size. Moreover, the model allows to either complete an existing incomplete layout or to generate one from scratch. The paper presents experiments in four datasets, spanning different data domains, including 2D and 3D data.", "This paper presents an auto-regressive method for generating layouts by sequentially synthesizing new elements. The architecture is not dramatically new, but it is well-justified and analyzed, and there are some interesting tweaks. The results are strongest in that they show good performance of essentially the same architecture and hyperpameters across quite different domains: to my knowledge such variety has not really been demonstrated for any of the assembly-based generative models I'm familiar with." ]
We address the problem of scene layout generation for diverse domains such as images, mobile applications, documents and 3D objects. Most complex scenes, natural or human-designed, can be expressed as a meaningful arrangement of simpler compositional graphical primitives. Generating a new layout or extending an existing layout requires understanding the relationships between these primitives. To do this, we propose a multimodal attention framework, MMA, that leverages self-attention to learn contextual relationships between layout elements and generate novel layouts in a given domain. Our framework allows us to generate a new layout either from an empty set or from an initial seed set of primitives, and can easily scale to support an arbitrary number of primitives per layout. Further, our analyses show that the model is able to automatically capture the semantic properties of the primitives. We propose simple improvements in both the representation of layout primitives and the training methods, and demonstrate competitive performance in very diverse data domains such as object bounding boxes in natural images (COCO bounding boxes), documents (PubLayNet), mobile applications (RICO dataset) as well as 3D shapes (PartNet).
[]
[ { "authors": [ "Oron Ashual", "Lior Wolf" ], "title": "Specifying object attributes and relations in interactive scene generation", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Irving Biederman" ], "title": "On the semantics of a glance at a scene", "venue": "In Perceptual organization,", "year": 2017 }, { "authors": [ "Andrew Brock", "Jeff Donahue", "Karen Simonyan" ], "title": "Large scale gan training for high fidelity natural image synthesis", "venue": "arXiv preprint arXiv:1809.11096,", "year": 2018 }, { "authors": [ "Samuele Capobianco", "Simone Marinai" ], "title": "Docemul: a toolkit to generate structured historical documents", "venue": "CoRR, abs/1710.03474,", "year": 2017 }, { "authors": [ "Angel X. Chang", "Will Monroe", "Manolis Savva", "Christopher Potts", "Christopher D. Manning" ], "title": "Text to 3d scene generation with rich lexical grounding", "venue": "CoRR, abs/1505.06289,", "year": 2015 }, { "authors": [ "Mark Chen", "Alec Radford", "Rewon Child", "Jeff Wu", "Heewoo Jun", "Prafulla Dhariwal", "David Luan", "Ilya Sutskever" ], "title": "Generative pretraining from pixels", "venue": "In Proceedings of the 37th International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Zhiqin Chen", "Hao Zhang" ], "title": "Learning implicit fields for generative shape modeling", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Biplab Deka", "Zifeng Huang", "Chad Franzen", "Joshua Hibschman", "Daniel Afergan", "Yang Li", "Jeffrey Nichols", "Ranjitha Kumar" ], "title": "Rico: A mobile app dataset for building data-driven design applications", "venue": "In Proceedings of the 30th Annual Symposium on User Interface Software and Technology,", "year": 2017 }, { "authors": [ "Hao Dong", "Simiao Yu", "Chao Wu", "Yike Guo" ], "title": "Semantic image synthesis via adversarial learning", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Karol Gregor", "Ivo Danihelka", "Alex Graves", "Danilo Jimenez Rezende", "Daan Wierstra" ], "title": "Draw: A recurrent neural network for image generation", "venue": "arXiv preprint arXiv:1502.04623,", "year": 2015 }, { "authors": [ "Kamal Gupta", "Susmija Jabbireddy", "Ketul Shah", "Abhinav Shrivastava", "Matthias Zwicker" ], "title": "Improved modeling of 3d shapes with multi-view depth maps", "venue": "arXiv preprint arXiv:2009.03298,", "year": 2020 }, { "authors": [ "Kamal Gupta", "Saurabh Singh", "Abhinav Shrivastava" ], "title": "PatchVAE: Learning Local Latent Codes for Recognition", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Tanmay Gupta", "Alexander Schwing", "Derek Hoiem" ], "title": "Vico: Word embeddings from visual cooccurrences", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", 
"year": 2016 }, { "authors": [ "Tobias Hinz", "Stefan Heinrich", "Stefan Wermter" ], "title": "Generating multiple objects at spatially distinct locations", "venue": "CoRR, abs/1901.00686,", "year": 2019 }, { "authors": [ "Ari Holtzman", "Jan Buys", "Maxwell Forbes", "Yejin Choi" ], "title": "The curious case of neural text degeneration", "venue": "arXiv preprint arXiv:1904.09751,", "year": 2019 }, { "authors": [ "Seunghoon Hong", "Dingdong Yang", "Jongwook Choi", "Honglak Lee" ], "title": "Inferring semantic layout for hierarchical text-to-image synthesis", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Phillip Isola", "Jun-Yan Zhu", "Tinghui Zhou", "Alexei" ], "title": "A Efros. Image-to-image translation with conditional adversarial networks", "venue": "arxiv,", "year": 2016 }, { "authors": [ "Justin Johnson", "Agrim Gupta", "Li Fei-Fei" ], "title": "Image generation from scene graphs", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Akash Abdu Jyothi", "Thibaut Durand", "Jiawei He", "Leonid Sigal", "Greg Mori" ], "title": "Layoutvae: Stochastic scene layout generation from a label set", "venue": null, "year": 1907 }, { "authors": [ "Tero Karras", "Timo Aila", "Samuli Laine", "Jaakko Lehtinen" ], "title": "Progressive growing of gans for improved quality, stability, and variation", "venue": "arXiv preprint arXiv:1710.10196,", "year": 2017 }, { "authors": [ "Tero Karras", "Samuli Laine", "Timo Aila" ], "title": "A style-based generator architecture for generative adversarial networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": null, "year": 2013 }, { "authors": [ "Donghoon Lee", "Sifei Liu", "Jinwei Gu", "Ming-Yu Liu", "Ming-Hsuan Yang", "Jan Kautz" ], "title": "Contextaware synthesis and placement of object instances", "venue": "CoRR, abs/1812.02350,", "year": 2018 }, { "authors": [ "Boren Li", "Boyu Zhuang", "Mingyang Li", "Jian Gu" ], "title": "Seq-sg2sl: Inferring semantic layout from scene graph through sequence to sequence learning", "venue": "In Proceedings of the IEEE International Conference on Computer Vision, pp. 
7435–7443,", "year": 2019 }, { "authors": [ "Jianan Li", "Jimei Yang", "Aaron Hertzmann", "Jianming Zhang", "Tingfa Xu" ], "title": "Layoutgan: Generating graphic layouts with wireframe discriminators", "venue": "arXiv preprint arXiv:1901.06767,", "year": 2019 }, { "authors": [ "Manyi Li", "Akshay Gadi Patil", "Kai Xu", "Siddhartha Chaudhuri", "Owais Khan", "Ariel Shamir", "Changhe Tu", "Baoquan Chen", "Daniel Cohen-Or", "Hao Zhang" ], "title": "Grains: Generative recursive autoencoders for indoor scenes", "venue": "ACM Transactions on Graphics (TOG),", "year": 2019 }, { "authors": [ "Wenbo Li", "Pengchuan Zhang", "Lei Zhang", "Qiuyuan Huang", "Xiaodong He", "Siwei Lyu", "Jianfeng Gao" ], "title": "Object-driven text-to-image synthesis via adversarial training", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Tsung-Yi Lin", "Michael Maire", "Serge Belongie", "James Hays", "Pietro Perona", "Deva Ramanan", "Piotr Dollár", "C Lawrence Zitnick" ], "title": "Microsoft coco: Common objects in context", "venue": "In European conference on computer vision,", "year": 2014 }, { "authors": [ "Thomas F. Liu", "Mark Craft", "Jason Situ", "Ersin Yumer", "Radomir Mech", "Ranjitha Kumar" ], "title": "Learning design semantics for mobile apps", "venue": "In The 31st Annual ACM Symposium on User Interface Software and Technology,", "year": 2018 }, { "authors": [ "Dipu Manandhar", "Dan Ruta", "John Collomosse" ], "title": "Learning structural similarity of user interface layouts using graph networks", "venue": "In ECCV,", "year": 2020 }, { "authors": [ "Tomas Mikolov", "Kai Chen", "Greg Corrado", "Jeffrey Dean" ], "title": "Efficient estimation of word representations in vector space", "venue": "arXiv preprint arXiv:1301.3781,", "year": 2013 }, { "authors": [ "Kaichun Mo", "Paul Guerrero", "Li Yi", "Hao Su", "Peter Wonka", "Niloy Mitra", "Leonidas J Guibas" ], "title": "Structurenet: Hierarchical graph networks for 3d shape generation", "venue": null, "year": 1908 }, { "authors": [ "Rafael Müller", "Simon Kornblith", "Geoffrey E. 
Hinton" ], "title": "When does label smoothing help", "venue": "CoRR, abs/1906.02629,", "year": 2019 }, { "authors": [ "Charlie Nash", "Yaroslav Ganin", "SM Eslami", "Peter W Battaglia" ], "title": "Polygen: An autoregressive generative model of 3d meshes", "venue": "arXiv preprint arXiv:2002.10880,", "year": 2020 }, { "authors": [ "Aaron van den Oord", "Nal Kalchbrenner", "Koray Kavukcuoglu" ], "title": "Pixel recurrent neural networks", "venue": "arXiv preprint arXiv:1601.06759,", "year": 2016 }, { "authors": [ "Jeong Joon Park", "Peter Florence", "Julian Straub", "Richard Newcombe", "Steven Lovegrove" ], "title": "DeepSDF: Learning continuous signed distance functions for shape representation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Taesung Park", "Ming-Yu Liu", "Ting-Chun Wang", "Jun-Yan Zhu" ], "title": "Gaugan: semantic image synthesis with spatially adaptive normalization", "venue": "In ACM SIGGRAPH 2019 Real-Time Live!,", "year": 2019 }, { "authors": [ "Akshay Gadi Patil", "Omri Ben-Eliezer", "Or Perel", "Hadar Averbuch-Elor", "Cornell Tech" ], "title": "Read: Recursive autoencoders for document layout generation", "venue": null, "year": 1909 }, { "authors": [ "Scott Reed", "Zeynep Akata", "Xinchen Yan", "Lajanugen Logeswaran", "Bernt Schiele", "Honglak Lee" ], "title": "Generative adversarial text to image synthesis", "venue": "arXiv preprint arXiv:1605.05396,", "year": 2016 }, { "authors": [ "Daniel Ritchie", "Kai Wang", "Yu-an Lin" ], "title": "Fast and flexible indoor scene synthesis via deep convolutional generative models", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Tim Salimans", "Andrej Karpathy", "Xi Chen", "Diederik P Kingma" ], "title": "Pixelcnn++: Improving the pixelcnn with discretized logistic mixture likelihood and other modifications, 2017", "venue": "arXiv preprint arXiv:1701.05517,", "year": 2017 }, { "authors": [ "Abhinav Shrivastava", "Abhinav Gupta" ], "title": "Contextual priming and feedback for faster r-cnn", "venue": "In European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "Volker Steinbiss", "Bach-Hiep Tran", "Hermann Ney" ], "title": "Improvements in beam search", "venue": "In Third International Conference on Spoken Language Processing,", "year": 1994 }, { "authors": [ "Minhyuk Sung", "Hao Su", "Vladimir G Kim", "Siddhartha Chaudhuri", "Leonidas Guibas" ], "title": "Complementme: weakly-supervised component suggestions for 3d modeling", "venue": "ACM Transactions on Graphics (TOG),", "year": 2017 }, { "authors": [ "Christian Szegedy", "Vincent Vanhoucke", "Sergey Ioffe", "Jon Shlens", "Zbigniew Wojna" ], "title": "Rethinking the inception architecture for computer vision", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Antonio Torralba", "Pawan Sinha" ], "title": "Statistical context priming for object detection", "venue": "In Proceedings Eighth IEEE International Conference on Computer Vision. 
ICCV 2001,", "year": 2001 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Oriol Vinyals", "Samy Bengio", "Manjunath Kudlur" ], "title": "Order matters: Sequence to sequence for sets", "venue": "arXiv preprint arXiv:1511.06391,", "year": 2015 }, { "authors": [ "Carl Vondrick", "Hamed Pirsiavash", "Antonio Torralba" ], "title": "Generating videos with scene dynamics", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Kai Wang", "Manolis Savva", "Angel X Chang", "Daniel Ritchie" ], "title": "Deep convolutional priors for indoor scene synthesis", "venue": "ACM Transactions on Graphics (TOG),", "year": 2018 }, { "authors": [ "Kai Wang", "Yu-An Lin", "Ben Weissmann", "Manolis Savva", "Angel X Chang", "Daniel Ritchie" ], "title": "Planit: Planning and instantiating indoor scenes with relation graph and spatial prior networks", "venue": "ACM Transactions on Graphics (TOG),", "year": 2019 }, { "authors": [ "Ting-Chun Wang", "Ming-Yu Liu", "Jun-Yan Zhu", "Andrew Tao", "Jan Kautz", "Bryan Catanzaro" ], "title": "Highresolution image synthesis and semantic manipulation with conditional gans", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Jiajun Wu", "Chengkai Zhang", "Tianfan Xue", "Bill Freeman", "Josh Tenenbaum" ], "title": "Learning a probabilistic latent space of object shapes via 3d generative-adversarial modeling", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Jiajun Wu", "Erika Lu", "Pushmeet Kohli", "William T Freeman", "Joshua B Tenenbaum" ], "title": "Learning to see physics via visual de-animation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Jiajun Wu", "Joshua B Tenenbaum", "Pushmeet Kohli" ], "title": "Neural scene de-rendering", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Rundi Wu", "Yixin Zhuang", "Kai Xu", "Hao Zhang", "Baoquan Chen" ], "title": "Pq-net: A generative part seq2seq network for 3d shapes", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Guandao Yang", "Xun Huang", "Zekun Hao", "Ming-Yu Liu", "Serge Belongie", "Bharath Hariharan" ], "title": "PointFlow: 3D point cloud generation with continuous normalizing flows", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Xiao Yang", "Mehmet Ersin Yümer", "Paul Asente", "Mike Kraley", "Daniel Kifer", "C. 
Lee Giles" ], "title": "Learning to extract semantic structure from documents using multimodal fully convolutional neural network", "venue": "CoRR, abs/1706.02337,", "year": 2017 }, { "authors": [ "Fenggen Yu", "Kun Liu", "Yan Zhang", "Chenyang Zhu", "Kai Xu" ], "title": "Partnet: A recursive part decomposition network for fine-grained and hierarchical shape segmentation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Han Zhang", "Tao Xu", "Hongsheng Li", "Shaoting Zhang", "Xiaogang Wang", "Xiaolei Huang", "Dimitris N Metaxas" ], "title": "Stackgan: Text to photo-realistic image synthesis with stacked generative adversarial networks", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Han Zhang", "Ian Goodfellow", "Dimitris Metaxas", "Augustus Odena" ], "title": "Self-attention generative adversarial networks", "venue": "arXiv preprint arXiv:1805.08318,", "year": 2018 }, { "authors": [ "Bo Zhao", "Lili Meng", "Weidong Yin", "Leonid Sigal" ], "title": "Image generation from layout", "venue": null, "year": 2019 }, { "authors": [ "Xinru Zheng", "Xiaotian Qiao", "Ying Cao", "Rynson WH Lau" ], "title": "Content-aware generative modeling of graphic design layouts", "venue": "ACM Transactions on Graphics (TOG),", "year": 2019 }, { "authors": [ "Xu Zhong", "Jianbin Tang", "Antonio Jimeno Yepes" ], "title": "Publaynet: largest dataset ever for document layout analysis", "venue": "In 2019 International Conference on Document Analysis and Recognition (ICDAR),", "year": 2019 }, { "authors": [ "Chuhang Zou", "Ersin Yumer", "Jimei Yang", "Duygu Ceylan", "Derek Hoiem" ], "title": "3d-prnn: Generating shape primitives with recurrent neural networks", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 } ]
[ { "heading": null, "text": "We address the problem of scene layout generation for diverse domains such as images, mobile applications, documents and 3D objects. Most complex scenes, natural or human-designed, can be expressed as a meaningful arrangement of simpler compositional graphical primitives. Generating a new layout or extending an existing layout requires understanding the relationships between these primitives. To do this, we propose a multimodal attention framework, MMA, that leverages self-attention to learn contextual relationships between layout elements and generate novel layouts in a given domain. Our framework allows us to generate a new layout either from an empty set or from an initial seed set of primitives, and can easily scale to support an arbitrary of primitives per layout. Further, our analyses show that the model is able to automatically capture the semantic properties of the primitives. We propose simple improvements in both representation of layout primitives, as well as training methods to demonstrate competitive performance in very diverse data domains such as object bounding boxes in natural images (COCO bounding boxes), documents (PubLayNet), mobile applications (RICO dataset) as well as 3D shapes (PartNet)." }, { "heading": "1 INTRODUCTION", "text": "In the real world, there exists a strong relationship between different objects that are found in the same environment (Torralba & Sinha, 2001; Shrivastava & Gupta, 2016). For example, a dining table usually has chairs around it, a surfboard is found near the sea, horses do not ride cars, etc.. Biederman (2017) provided strong evidence in cognitive neuroscience that perceiving and understanding a scene involves two related processes: perception and comprehension. Perception deals with processing the visual signal or the appearance of a scene. Comprehension deals with understanding the schema of a scene, where this schema (or layout) can be characterized by contextual relationships (e.g., support, occlusion, and relative likelihood, position, and size) between objects. For generative models that synthesize scenes, this evidence underpins the importance of two factors that contribute to the realism or plausibility of a generated scene: layout, i.e., the arrangement of different objects, and their appearance (in terms of pixels). Therefore, generating a realistic scene necessitates both these factors to be plausible.\nThe advancements in the generative models for image synthesis have primarily targeted plausibility of the appearance signal by generating incredibly realistic images often with a single entity such as faces (Karras et al., 2019; 2017), or animals (Brock et al., 2018; Zhang et al., 2018). In the case of large and complex scenes, with a lot of strong non-local relationships between different elements, most methods require proxy representations for layouts to be provided as inputs (e.g., scene graph, segmentation mask, sentence). We argue that to plausibly generate large and complex scenes without such proxies, it is necessary to understand and generate the layout of a scene, in terms of contextual relationships between various objects present in the scene.\nThe layout of a scene, capturing what primitives occupy what parts of the scene, is an incredibly rich representation. Learning to generate layouts itself is a challenging problem due to the variability of real-world or human-designed layouts. 
Each layout is composed of a small fraction of the possible objects, objects can be present in a wide range of locations, the number of objects varies for each scene, and so do the contextual relationships between objects.\nFormally, a scene layout can be represented as an unordered set of graphical primitives. The primitive itself can be discrete or continuous depending on the data domain. For example, in the case of layouts of documents, primitives can be bounding boxes from discrete classes such as 'text', 'image', or 'caption', and in the case of 3D objects, primitives can be 3D occupancy grids of parts of the object, such as 'arm', 'leg', or 'back' in the case of chairs. Additionally, in order to make the primitives compositional, we represent each primitive by a location vector with respect to the origin and a scale vector that defines the bounding box enclosing the primitive. Again, based on the domain, these location and scale vectors can be 2D or 3D. A generative model for layouts should be able to look at all existing primitives and propose the placement and attributes of a new one. We propose a novel Multimodal Attention framework (MMA) that first maps the different parameters of the primitive independently to a fixed-length continuous latent vector, followed by a masked Transformer decoder that looks at the representations of existing primitives in the layout and predicts the next parameter. Our generative framework can start from an empty set, or a set of primitives, and can iteratively generate a new primitive one parameter at a time. Moreover, by predicting either to stop or to generate the next primitive, our sequential approach can generate variable-length layouts.\nOur approach can be readily plugged into scene generation frameworks (e.g., Layout2Image (Zhao et al., 2019), GauGAN (Park et al., 2019b)) or stand-alone applications that require generating layouts or templates with/without user interaction. For instance, in the UI design of mobile apps and websites, an automated model for generating plausible layouts can significantly decrease the manual effort and cost of building such apps and websites. Finally, a model to create layouts can potentially help generate synthetic data for various tasks (Yang et al., 2017; Capobianco & Marinai, 2017; Chang et al., 2015; Wu et al., 2017b;a).\nTo the best of our knowledge, MMA is the first framework to perform competitively with the state-of-the-art approaches in 4 diverse data domains. We evaluate our model using existing metrics proposed for the different domains, such as Jensen-Shannon Divergence, Minimum Matching Distance, and Coverage in the case of 3D objects, Inception Score and Fréchet Inception Distance for COCO, and negative log-likelihood of the test set in the case of app wireframes and documents. Qualitative analysis of the framework also demonstrates that our model captures the semantic relationships between objects automatically (without explicitly using semantic embeddings like word2vec (Mikolov et al., 2013))." }, { "heading": "2 RELATED WORK", "text": "Generative models. Deep generative models based on CNNs, such as variational auto-encoders (VAEs) (Kingma & Welling, 2013) and generative adversarial networks (GANs) (Goodfellow et al., 2014), have recently shown great promise in terms of faithfully learning a given data distribution and sampling from it. There has also been research on generating data sequentially (Oord et al., 2016; Chen et al., 2020), even when the data has no natural order (Vinyals et al., 2015).
Many of these approaches rely on low-level information (Gupta et al., 2020b) such as pixels while generating images (Brock et al., 2018; Karras et al., 2019), videos (Vondrick et al., 2016), or 3D objects (Wu et al., 2016; Yang et al., 2019; Park et al., 2019a; Gupta et al., 2020a), and not on the semantic and geometric structure in the data.\nScene generation. Generating 2D or 3D scenes conditioned on a sentence (Li et al., 2019d; Zhang et al., 2017; Reed et al., 2016), a scene graph (Johnson et al., 2018; Li et al., 2019a; Ashual & Wolf, 2019), a layout (Dong et al., 2017; Hinz et al., 2019; Isola et al., 2016; Wang et al., 2018b), or an existing image (Lee et al., 2018) has drawn great interest in the vision community. Given the input, some works generate a fixed layout and diverse scenes (Zhao et al., 2019), while other works generate diverse layouts and scenes (Johnson et al., 2018; Li et al., 2019d). These methods involve pipelines often trained and evaluated end-to-end, and surprisingly little work has been done to evaluate the layout generation component itself. Layout generation serves as a complementary task to these works and can be combined with these methods. In this work, we evaluate the layout modeling capabilities of two of the recent works (Johnson et al., 2018; Li et al., 2019d) that have layout generation as an intermediate step. We also demonstrate the results of our model with Layout2Im (Zhao et al., 2019) for image generation.\nLayout generation. The automatic generation of layouts is an important problem in graphic design. Many of the recent data-driven approaches use data-specific constraints in order to model the layouts. For example, Wang et al. (2018a; 2019); Li et al. (2019c); Ritchie et al. (2019) generate top-down indoor room layouts, but they make several assumptions regarding the presence of walls, roofs, etc., and cannot be easily extended to other datasets. In this paper, we focus on approaches that have fewer domain-specific constraints. LayoutGAN (Li et al., 2019b) uses a GAN framework to generate the semantic and geometric properties of a fixed number of scene elements. LayoutVAE (Jyothi et al., 2019) starts with a label set, i.e., the categories of all the elements present in the layout, and then generates a feasible layout of the scene. Zheng et al. (2019) attempt to generate document layouts given the images, keywords, and category of the document. Patil et al. (2019) propose a method to construct hierarchies of document layouts using a recursive variational autoencoder and sample new hierarchies to generate new document layouts. Manandhar et al. (2020) develop an auto-encoding framework for layouts using graph networks. 3D-PRNN (Zou et al., 2017), PQ-Net (Wu et al., 2020), and ComplementMe (Sung et al., 2017) generate 3D shapes via sequential part assembly. While 3D-PRNN generates only bounding boxes, PQ-Net and ComplementMe can synthesize complete 3D shapes starting with a partial or no input shape.\nOur approach offers several advantages over current layout generation approaches without sacrificing their benefits. By factorizing primitives into structural parameters and compositional geometric parameters, we can generate high-resolution primitives using distributed representations and, consequently, complete scenes. The autoregressive nature of the model allows us to generate layouts of arbitrary lengths as well as start with partial layouts.
Further, modeling the position and size of primitives as discrete values (as discussed in §3.1) helps us realize better performance on datasets, such as documents and app wireframes, where the bounding boxes of layouts are typically axis-aligned. We evaluate our method both quantitatively and qualitatively with state-of-the-art methods specific to each dataset and show competitive results in very diverse domains." }, { "heading": "3 OUR APPROACH", "text": "In this section, we introduce our attention network in the context of the layout generation problem. We first discuss our representation of layouts for primitives belonging to different domains. Next, we discuss the Multimodal Attention (MMA) framework and show how we can leverage previous advances such as Transformers (Vaswani et al., 2017) to model the probability distribution of layouts. MMA allows us to learn non-local semantic relationships between layout primitives and also gives us the flexibility to work with variable-length layouts." }, { "heading": "3.1 LAYOUT REPRESENTATION", "text": "Given a dataset of layouts, a single layout instance can be defined as a graph $G$ with $n$ nodes, where each node $i \in \{1, \ldots, n\}$ is a graphical primitive. We assume that the graph is fully connected and let the attention network learn the relationships between nodes. The nodes can have structural or semantic information associated with them. For each node, we project the information associated with it to a $d_{model}$-dimensional space, represented by a feature vector $s_i$. Note that the information itself can be discrete (e.g., part category), continuous (e.g., color), or a multidimensional vector (e.g., the signed distance function of the part) on some manifold. Specifically, in our ShapeNet experiments, we use an MLP to project the part embedding to the $d_{model}$-dimensional space, while in the 2D layout experiments, we use a learned $d_{model}$-dimensional category embedding, which is equivalent to using an MLP with zero bias to project one-hot encoded category vectors to the latent space.\nEach primitive also carries geometric information $g_i$, which we factorize into a position vector and a scale vector. For layouts in $\mathbb{R}^2$ such as images or documents, $g_i = [x_i, y_i, h_i, w_i]$, where $(x, y)$ are the coordinates of the centroid of the primitive and $(h, w)$ are the height and width of the bounding box containing the primitive, normalized with respect to the dimensions of the entire layout.\nRepresenting geometry with discrete variables. We apply an 8-bit uniform quantization to each of the geometric fields and model them using a categorical distribution. Discretizing continuous signals is a practice adopted in previous works for image generation, such as PixelCNN++ (Salimans et al., 2017); however, to the best of our knowledge, it has been unexplored in the layout modeling task. We observe that even though discretizing coordinates introduces approximation errors, it allows us to express arbitrary distributions, which we find particularly important for layouts with strong symmetries such as documents and app wireframes. We project each geometric field of the primitive independently to the same $d_{model}$ dimension, such that the $i$-th primitive in $\mathbb{R}^2$ can be represented as $(s_i, x_i, y_i, h_i, w_i)$. We concatenate all the elements into a flattened sequence of their parameters. We also append embeddings of two additional parameters, $s_{\langle bos \rangle}$ and $s_{\langle eos \rangle}$, to denote the start and end of the sequence. Our layout in $\mathbb{R}^2$ can now be represented by a sequence of $5n + 2$ latent vectors:\n$G = (s_{\langle bos \rangle}; s_1; x_1; y_1; h_1; w_1; \ldots; s_n; x_n; y_n; h_n; w_n; s_{\langle eos \rangle})$
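As an illustration of this construction, the sketch below flattens a layout into a discrete sequence with 8-bit quantization. In the model, each field type is embedded independently; here we map everything into a single token vocabulary with assumed special-token ids, purely for illustration.

```python
import numpy as np

def layout_to_sequence(categories, boxes, num_bins=256, bos=0, eos=1, offset=2):
    """categories: (n,) ints; boxes: (n, 4) normalized [x, y, h, w] in [0, 1].
    Returns a flattened sequence of length 5n + 2 (category, x, y, h, w per box)."""
    q = np.clip((boxes * num_bins).astype(int), 0, num_bins - 1)  # 8-bit quantization
    seq = [bos]
    # Raster order of centroids: sort primarily by y, then by x (our assumption).
    for i in np.lexsort((q[:, 0], q[:, 1])):
        seq.extend([offset + categories[i],
                    offset + q[i, 0], offset + q[i, 1],   # x_i, y_i
                    offset + q[i, 2], offset + q[i, 3]])  # h_i, w_i
    seq.append(eos)
    return np.array(seq)
```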
For brevity, we use $\theta_j$, $j \in \{1, \ldots, 5n + 2\}$, to represent any element in the above sequence. We can now pose the problem of modeling this joint distribution as a product of a series of conditional distributions using the chain rule:\n$p(\theta_{1:5n+2}) = \prod_{j=1}^{5n+2} p(\theta_j \mid \theta_{1:j-1})$ (1)" }, { "heading": "3.2 MODEL ARCHITECTURE AND TRAINING", "text": "Our overall architecture is surprisingly simple and is shown in Fig. 2. Given an initial set of $K$ visible primitives (where $K$ can be 0 when generating from scratch), our attention-based model takes as input a random permutation of the visible nodes, $\pi = (\pi_1, \ldots, \pi_K)$, and consequently a sequence of $d_{model}$-dimensional vectors $(\theta_1, \ldots, \theta_{5K})$. We find this to be an important step, since by factorizing the primitive representation into geometry and structure fields, our attention module can explicitly assign weights to individual coordinate dimensions. The attention module is similar to a Transformer decoder (Vaswani et al., 2017) and consists of $L$ attention layers, each of which consists of (a) a masked multi-head attention layer ($h^{attn}$) and (b) a fully connected feed-forward layer ($h^{fc}$). Each sublayer also adds residual connections (He et al., 2016) and LayerNorm (Ba et al., 2016):\n$\hat{\theta}_j = \text{LayerNorm}(\theta_j^{l-1} + h^{attn}(\theta_1^{l-1}, \ldots, \theta_{5n+2}^{l-1}))$ (2)\n$\theta_j^l = \text{LayerNorm}(\hat{\theta}_j + h^{fc}(\hat{\theta}_j))$ (3)\nwhere $l$ denotes the layer index. Masking is performed such that $\theta_j$ only attends to the input latent vectors and previously predicted latent vectors. The output at the last layer corresponds to the next parameter.
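A single layer of this decoder, Eqs. (2)-(3), can be sketched in PyTorch as follows (post-LN residual form as written; hyperparameters follow the base model of §3.2):

```python
import torch
import torch.nn as nn

class AttentionLayer(nn.Module):
    """Masked multi-head attention + feed-forward, each with a residual
    connection and LayerNorm, per Eqs. (2)-(3)."""
    def __init__(self, d_model=512, n_head=8, d_ff=2048):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_head, batch_first=True)
        self.fc = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                nn.Linear(d_ff, d_model))
        self.ln1 = nn.LayerNorm(d_model)
        self.ln2 = nn.LayerNorm(d_model)

    def forward(self, theta):                 # theta: (B, T, d_model)
        T = theta.shape[1]
        # Causal mask: position j attends only to positions <= j.
        mask = torch.triu(torch.ones(T, T, dtype=torch.bool,
                                     device=theta.device), diagonal=1)
        a, _ = self.attn(theta, theta, theta, attn_mask=mask)
        theta_hat = self.ln1(theta + a)                   # Eq. (2)
        return self.ln2(theta_hat + self.fc(theta_hat))   # Eq. (3)
```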
Note that similar limitations are faced by contemporary works in layout generation (Jyothi et al., 2019; Li et al., 2019d; Hong et al., 2018; Wang et al., 2018a), image generation (Salimans et al., 2017; Gregor et al., 2015) and 3D shape generation (Wu et al., 2020; Zou et al., 2017). Generating a distribution over an order-invariant set of an arbitrary number of primitives is an exciting problem and we will explore it in future research.\nOther details. In our base model, we use dmodel = 512, L = 6, and nhead = 8 (number of multiattention heads). Label smoothing uses an = 0.1, and λ = 1. We use Adam optimizer (Kingma & Ba, 2014) with β1 = 0.9, β2 = 0.99 and learning rate 10−4 (10−5 for PartNet). We use early stopping based on validation loss. In the ablation studies provided in § B, we show that our model is quite robust to these choices, as well as other hyperparameters (layout resolution, ordering of elements, ordering of fields). To sample a new layout, we can start off with just a start of sequence embedding or an initial set of primitives. Several decoding strategies are possible to recursively generate primitives from the initial set. In samples generated for this work, unless otherwise specified, we have used nucleus sampling (Holtzman et al., 2019), with top-p = 0.9 which has been shown to perform better as compared to greedy sampling and beam search (Steinbiss et al., 1994)." }, { "heading": "4 EXPERIMENTS", "text": "In this section, we discuss the qualitative and quantitative performance of our model on different datasets. Evaluation of generative models is hard, and most quantitative measures fail in providing a good measure of novelty and realism of data sampled from a generative model. We will use datasetspecific quantitative metrics used by various baseline approaches and discuss their limitations wherever applicable. We will provide the code and pretrained models to reproduce the experiments." }, { "heading": "4.1 3D SHAPE SYNTHESIS (ON PARTNET DATASET)", "text": "PartNet is a large-scale dataset of common 3D shapes that are segmented into semantically meaningful parts. We use two of the largest categories of PartNet - Chairs and Lamp. We voxelize the shapes into 643 and train an autoencoder to learn part embeddings similar to the procedure followed by PQ-Net (Wu et al., 2020). Overall, we had 6305 chairs and 1188 lamps in our datasets. We use the official train, validation, and test split from PartNet in our experiments. Although it is fairly trivial to extend our method to train for the class-conditional generation of shapes, in order to compare with baselines fairly, we train separate models for each of the categories.\nGenerated Samples. Fig. 3 shows examples of shape completion from the PartNet dataset. Given a random primitive, we use our model to iteratively predict the latent shape encoding of the next part,\nas well its position and scale in 3D. We then use the part decoder to sample points on the surface of the object. For visualization, we use the marching cubes algorithm to generate a mesh and render the mesh using a fixed camera viewpoint.\nQuantitative Evaluation. The output of our model is point clouds sampled on the surface of the 3D shapes. We use Chamfer Distance (CD) and Earth Mover’s Distance (EMD) to compare two point clouds. 
Following prior work, we use 4 different metrics to compare the distribution of shapes generated by the model and the shapes in the test dataset: (i) Jensen-Shannon Divergence (JSD) computes the KL divergence between the marginal distributions of point clouds in the generated set and the test set; (ii) Coverage (Cov) compares the distance between each point cloud in the generated set and its nearest neighbor in the test set; (iii) Minimum Matching Distance (MMD) computes the average distance of each point cloud in the test set to its nearest neighbor in the generated set; and (iv) 1-nearest neighbor accuracy (1-NNA) uses a 1-NN classifier to see whether the nearest neighbor of a generated sample comes from the generated set or the test set. Our model performs competitively with existing approaches to generating point clouds." }, { "heading": "4.2 LAYOUTS FOR NATURAL SCENES (COCO BOUNDING BOXES)", "text": "The COCO bounding boxes dataset is obtained using the bounding box annotations in the COCO Panoptic 2017 dataset (Lin et al., 2014). We ignore the images where the isCrowd flag is true, following the LayoutVAE (Jyothi et al., 2019) approach. The bounding boxes come from all 80 thing and 91 stuff categories. Our final dataset has 118280 layouts from the COCO train split, with a median length of 42 elements, and 5000 layouts from the COCO valid split, with a median length of 33. We use the official validation split from COCO as the test set in our experiments, and use 5% of the training data as validation.\nBaseline Approaches. We compare our work with 4 previous methods. LayoutGAN (Li et al., 2019b) is a GAN-based layout generation framework, starting with a noise vector sampled from a Gaussian distribution to generate bounding box layouts. Since the method always generates a fixed number of bounding boxes, it uses non-maximum suppression (NMS) to remove duplicates.\nLayoutVAE (Jyothi et al., 2019) consists of two separate autoregressive VAE models. The method assumes the categories of the elements present in a generated layout to be known. First, CountVAE generates the counts of each of the elements of the label set, and then BoundingBoxVAE generates the location and size of each occurrence of the bounding box. ObjGAN (Li et al., 2019d) is a GAN framework for text-to-image synthesis. An intermediate step in their image synthesis approach is to generate a bounding box layout given a sentence using a BiLSTM (trained independently). We adopt this step of the ObjGAN framework to our problem setup by providing the categories of all layout elements as input to the ObjGAN and synthesizing all the elements' bounding boxes. sg2im (Johnson et al., 2018) attempts to generate images given the scene graph of an image by first generating a layout of the scene using graph convolutions and then using the layout to generate the complete scene using GANs. Since sg2im requires a scene graph input, following the approach of Jyothi et al. (2019), we create a scene graph from the input and reproduce the input layout using the scene graph.\nFigure 5: Downstream task. Image generation with layouts (Zhao et al., 2019). Rows: Real + L2Im, Ours + L2Im, LayoutVAE + L2Im.\nSince LayoutVAE and LayoutGAN are not open source, we implemented our own versions of these baselines. Note that, like many GAN models, LayoutGAN was notoriously hard to train, and our implementation (and hence results) might differ from the authors' implementation despite our best efforts.
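For reference, the Coverage and MMD metrics described in §4.1 reduce to simple reductions over a pairwise distance matrix; a sketch following their standard definitions (dist[i, j] here is assumed to hold the CD or EMD between generated sample i and test sample j):

```python
import torch

def coverage_and_mmd(dist):
    """dist: (n_gen, n_test) pairwise distances between generated and test sets."""
    # Coverage: fraction of test samples that are the nearest neighbor
    # of at least one generated sample.
    nn_of_gen = dist.argmin(dim=1)
    cov = nn_of_gen.unique().numel() / dist.shape[1]
    # MMD: mean distance from each test sample to its closest generated sample.
    mmd = dist.min(dim=0).values.mean().item()
    return cov, mmd
```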
We were able to reproduce LayoutVAE's results on the COCO dataset as proposed in the original paper and train our own models for the different datasets using the same hyperparameters. We also re-purpose ObjGAN and sg2im using the guidelines mentioned in LayoutVAE. Although evaluating generative models is challenging, we attempt to do a fair comparison to the best of our abilities. For our model (and the others), we keep the architecture hyperparameters the same across the datasets. We also train the different baselines for the same number of epochs on the corresponding datasets. Some of the ablation studies are provided in the appendix.\nGenerated Samples. Fig. 4 shows the layout completion task using our model on the COCO dataset. Although the model is trained with all 171 categories, in the figure we only show 'thing' categories for clarity. We also use the generated layouts for a downstream application of scene generation (Zhao et al., 2019).\nSemantics Emerge via Layout. We posited earlier that capturing layout should capture contextual relationships between various elements. We provide further evidence for our argument in Fig. 6. We visualize the 2D t-SNE plot of the learned embeddings for categories. We observe that super-categories from COCO are clustered together in the embedding space of the model. Certain categories, such as window-blind and curtain (which belong to different super-categories), also appear close to each other. These observations are in line with observations made by Gupta et al. (2019), who use visual co-occurrence to learn category embeddings.\nQuantitative evaluation. Following the approach of LayoutVAE, we compute the negative log-likelihood (NLL) of all the layouts in the validation data using importance sampling. The NLL approach is good for evaluating validation samples, but fails for generated samples. Ideally, we would like to evaluate the performance of a generative model on a downstream task. To this end, we employ Layout2Im (Zhao et al., 2019) to generate an image from the layouts generated by each of the methods. We compute the Inception Score (IS) and Fréchet Inception Distance (FID) to compare the quality and diversity of the generated images. Our method is competitive with existing approaches in both these metrics, and outperforms existing approaches in terms of NLL.\nNote that ObjGAN and LayoutVAE are conditioned on the label set, so we provide the labels of the objects present in each validation layout as input. The task for the model is then to predict the number and position of these objects. Hence, these methods have an unfair advantage over our method, and ObjGAN indeed performs better than our method and LayoutGAN, which are unconditional. We clearly outperform LayoutGAN on the IS and FID metrics.\nFigure 7: Quantitative Evaluations on COCO. Negative log-likelihood (NLL) of all the layouts in the validation set (lower is better). We use the importance sampling approach described in Jyothi et al. (2019) to compute it. We also generated images from the layouts using Zhao et al. (2019) and compute IS and FID. Following Johnson et al. (2018), we randomly split the test set samples into 5 groups and report the standard deviation across the splits. The mean is reported using the combined test set.\nModel | NLL↓ | IS↑ | FID↓\nLayoutGAN (Li et al., 2019b) | - | 3.2 (0.22) | 89.6 (1.6)\nLayoutVAE (Jyothi et al., 2019) | 3.29 | 7.1 (0.41) | 64.1 (3.8)\nObjGAN (Li et al., 2019d) | 5.24 | 7.5 (0.44) | 62.3 (4.6)\nsg2im (Johnson et al., 2018) | 3.4 | 3.3 (0.15) | 85.8 (1.6)\nOurs | 2.28 | 7.1 (0.30) | 57.0 (3.5)\nFigure 9: Document Layouts.
}, { "heading": "4.3 MOBILE APP WIREFRAMES (RICO) AND DOCUMENT LAYOUTS (PUBLAYNET)", "text": "Rico Mobile App Wireframes. The Rico mobile app dataset (Deka et al., 2017; Liu et al., 2018) consists of layout information for more than 66000 unique UI screens from over 9300 Android apps. Each layout consists of one or more of the 25 categories of graphical elements such as text, image, icon, etc. A complete list of these elements is provided in the supplementary material. Overall, we get 62951 layouts in Rico with a median length of 36. Since the dataset doesn't have official splits, we use 5% of randomly selected layouts for validation and 15% for testing.\nPubLayNet. PubLayNet (Zhong et al., 2019) is a large-scale document dataset consisting of over 1.1 million articles collected from PubMed Central. The layouts are annotated with 5 element categories - text, title, list, label, and figure. We filter out the document layouts with over 128 elements. Our final dataset has 335703 layouts from the PubLayNet train split with a median length of 33 elements and 11245 layouts from the PubLayNet dev split with a median length of 36. We use the provided dev split as our test set and 5% of the training data for validation.\nGenerated layout samples. Figs. 8 and 9 show some of the generated samples of our model for the Rico mobile app wireframes and the PubLayNet documents. Note that both datasets share similarities in terms of the distribution of elements, such as high coverage in terms of space, very little collision of elements, and, most importantly, alignment of the elements along both the x- and y-axes. Our method is able to preserve most of these properties, as we discuss in the next section. Fig. 10 shows multiple completions done by our model for the same initial element (panels: Initial Layout, Completion 1, Completion 2, Completion 3).\nComparison with baselines. We use the same baselines for evaluation as discussed previously in §4.2. Fig. 9 shows that our method is able to preserve alignment between bounding boxes better than competing methods. Note that we haven't used any post-processing in order to generate these layouts. Our hypothesis is that (1) discretization of size/position, and (2) decoupling geometric fields in the attention module, are particularly useful in datasets with aligned boxes.\nTo measure this performance quantitatively, we introduce 2 important statistics. Overlap represents the intersection over union (IoU) of the various layout elements. Generally in these datasets, elements do not overlap with each other, and Overlap is small. Coverage indicates the percentage of the canvas covered by the layout elements. Table 2 shows that layouts generated by our method resemble the real data statistics better than LayoutGAN and LayoutVAE."
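A minimal sketch of the two statistics reported in Table 2, under the assumption that each layout is a list of (x0, y0, x1, y1) boxes on a unit canvas (illustrative, not the authors' exact code):

```python
# Overlap (mean pairwise IoU) and Coverage (fraction of canvas occupied).
import numpy as np

def overlap(boxes):
    # Average pairwise IoU between distinct elements; near zero for clean layouts.
    def iou(a, b):
        ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
        iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
        inter = ix * iy
        area = lambda r: max(0.0, r[2] - r[0]) * max(0.0, r[3] - r[1])
        union = area(a) + area(b) - inter
        return inter / union if union > 0 else 0.0
    pairs = [(i, j) for i in range(len(boxes)) for j in range(i + 1, len(boxes))]
    return float(np.mean([iou(boxes[i], boxes[j]) for i, j in pairs])) if pairs else 0.0

def coverage(boxes, grid=256):
    # Rasterized estimate: fraction of the canvas covered by at least one element.
    canvas = np.zeros((grid, grid), dtype=bool)
    for x0, y0, x1, y1 in boxes:
        canvas[int(y0 * grid):int(y1 * grid), int(x0 * grid):int(x1 * grid)] = True
    return float(canvas.mean())
```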
}, { "heading": "5 CONCLUSION.", "text": "We propose MMA, a multimodal attention framework to generate layouts of graphical elements. Our model uses a self-attention model to capture contextual relationships between different layout elements and to generate novel layouts, or complete partial layouts. We show that our model performs competitively with state-of-the-art approaches on very diverse datasets such as Rico Mobile App Wireframes, COCO bounding boxes, PubLayNet documents, and 3D shapes. There are a few limitations of our approach. First, our model requires a layout or a scene to be decomposed into compositional primitives. In many cases, such primitives might not even be defined. Second, like most data-driven approaches, generated layouts are dominated by the high-frequency objects or shapes in the dataset. We can control the diversity to some extent using improved sampling techniques; however, generating diverse layouts that learn not only from data, but also from human priors or pre-defined rules, is an important direction of research which we will continue to explore." }, { "heading": "A ARCHITECTURE AND TRAINING DETAILS", "text": "In all our R2 experiments, our base model consists of dmodel = 512, L = 6, nhead = 8, precision = 8 and dff = 2048. We also use a dropout of 0.1 at the end of each feedforward layer for regularization. We fix the maximum number of elements in each of the datasets to 128, which covers over 99.9% of the layouts in each of the COCO, Rico, and PubLayNet datasets. We also used the Adam optimizer (Kingma & Ba, 2014) with an initial learning rate of 10−4. We train our model for 300 epochs for each dataset, with early stopping based on maximum log-likelihood on validation layouts. Our COCO Bounding Boxes model takes about 1 day to train on a single NVIDIA GTX1080 GPU. Batching matters a lot for improving the training speed. We want evenly divided batches with absolutely minimal padding, so we sort the layouts by the number of elements and search over this sorted list to find tight batches for training.\nIn all our R3 experiments, we change dmodel to 128, and the learning rate to 10−5." }, { "heading": "B ABLATION STUDIES", "text": "We evaluate the importance of the different model components with negative log-likelihood on COCO layouts. The ablation studies show the following:\nSmall, medium and large elements: The NLL of our model for COCO large, medium, and small boxes is 2.4, 2.5, and 1.8 respectively. We observe that even though discretizing box coordinates introduces approximation errors, it later allows our model to be agnostic to large vs. small objects.\nVarying precision: Increasing it allows us to generate finer layouts, but at the expense of a model with more parameters. Also, as we increase the precision, the NLL increases, suggesting that we might need to train the model with more data to get similar performance (Table 3).\nSize of embedding: Increasing the size of the embedding dmodel improves the NLL, but at the cost of an increased number of parameters (Table 4).\nModel depth: Increasing the depth of the model L does not significantly improve the results (Table 5). We fix L = 6 in all our experiments.\nOrdering of the elements: Adding position encoding makes the self-attention layer dependent on the ordering of the elements. In order to make the model depend less on the ordering of the input elements, we randomly permute the input sequence. This also enables our model to complete any partial layout. Since the output is predicted sequentially, our model is not invariant to the order of the output sequence either. In our experiments, we observed that predicting the elements in a simple raster-scan order of their position improves the model performance both visually and in terms of negative log-likelihood. This is intuitive, as filling in the elements in a pre-defined order is an easier problem. We leave the task of finding an optimal ordering of layout elements for layout generation to future research (Table 6).\nDiscretization strategy: Instead of factorizing the location into x-coordinates and y-coordinates, we tried predicting them at once (refer to the Split-xy column of Table 6). This increases the vocabulary size of the model (since we use H × H possible locations instead of H alone) and in turn the number of parameters, with a decline in model performance. An upside of this approach is that generating new layouts takes less time, as we have to make half as many predictions for each element of the layout (Table 6).
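To make the factorized discretization concrete, here is a small sketch of the tokenization it implies; the exact token layout used in the model is an assumption on our part:

```python
# Quantize continuous box attributes to `precision` bits, emitting separate
# x/y tokens rather than a single joint (x, y) location token.
def quantize(v, precision=8):
    # Map v in [0, 1] to an integer token in {0, ..., 2^precision - 1}.
    h = (1 << precision) - 1
    return min(h, max(0, int(round(v * h))))

def box_to_tokens(x, y, w, h, precision=8):
    # Factorized scheme: 4 tokens per box, vocabulary of size O(2^precision).
    # A joint "split-xy" scheme would need O(2^(2*precision)) location tokens.
    return [quantize(v, precision) for v in (x, y, w, h)]

def tokens_to_box(tokens, precision=8):
    h = (1 << precision) - 1
    return [t / h for t in tokens]
```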
Loss: We tried two different losses, label smoothing (Müller et al., 2019) and NLL. Although optimizing using NLL gives better validation performance in terms of NLL (as is expected), we do not find much difference in the qualitative performance when using either loss function (Table 6).\nTable 3: Effect of nanchors on NLL\nnanchors | # params | COCO | Rico | PubLayNet\n32×32 | 19.2 | 2.28 | 1.07 | 1.10\n8×8 | 19.1 | 1.69 | 0.98 | 0.88\n16×16 | 19.2 | 1.97 | 1.03 | 0.95\n64×64 | 19.3 | 2.67 | 1.26 | 1.28\n128×128 | 19.6 | 3.12 | 1.44 | 1.46\nTable 4: Effect of d on NLL\nd | # params | COCO | Rico | PubLayNet\n512 | 19.2 | 2.28 | 1.07 | 1.10\n32 | 0.8 | 2.51 | 1.56 | 1.26\n64 | 1.7 | 2.43 | 1.40 | 1.19\n128 | 3.6 | 2.37 | 1.29 | 1.57\n256 | 8.1 | 2.32 | 1.20 | 1.56\nTable 5: Effect of L on NLL\nTable 6: Effect of other hyperparameters on NLL\nC VISUALIZING ATTENTION\nThe self-attention-based approach we propose enables us to visualize which existing elements are being attended to while the model is generating a new element. This is demonstrated in Figure 11." }, { "heading": "D LAYOUT VERIFICATION", "text": "Since it is straightforward to compute the likelihood of a layout with our method, we can use our method to test whether a given layout is likely or not. Figure 12 shows the NLL given by our model for left-right and top-down inversions of layouts in COCO (following Li et al. (2019b)). In the case of COCO, if we flip a layout left-right, we observe that the layout remains likely; however, flipping the layout upside down decreases the likelihood (or increases the NLL of the layout). This is intuitive, since it is unlikely to see fog at the bottom of an image, or skis above a person." }, { "heading": "E MORE SEMANTICS IN LEARNED CATEGORY EMBEDDINGS", "text": "Table 7 captures the most frequent bigrams and trigrams (categories that co-occur) in real and synthesized layouts. Table 8 shows word2vec-style (Mikolov et al., 2013) analogies being captured by the embeddings learned by our model. Note that the model was trained to generate layouts, and we did not specify any additional objective function for the analogical reasoning task." }, { "heading": "F DATASET STATISTICS", "text": "In this section, we share statistics of the different elements and their categories in our datasets. In particular, we share the total number of occurrences of each element in the training dataset (in descending order) and the total number of distinct layouts an element was present in throughout the training data. Table 9 shows the statistics for the Rico wireframes, and Table 10 shows the statistics for the PubLayNet documents." }, { "heading": "G COORDINATE EMBEDDING", "text": "Just like in Fig. 6, we project the embeddings learned by our model on COCO into a 2-D space using t-SNE. In the absence of explicit constraints on the learned embedding, the model learns to cluster together all the coordinate embeddings in a distinct space, in a ring-like manner."
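The embedding visualizations in Fig. 6 and Appendix G can be reproduced with a few lines of standard tooling; the sketch below assumes access to the model's learned embedding table (one row per category or coordinate token), and is our own illustration:

```python
# Project learned embeddings to 2-D with t-SNE and label each point.
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_embeddings(embedding_matrix, labels):
    # embedding_matrix: (num_tokens, d) array; labels: list of token names.
    pts = TSNE(n_components=2, init="pca", random_state=0).fit_transform(embedding_matrix)
    plt.figure(figsize=(8, 8))
    plt.scatter(pts[:, 0], pts[:, 1], s=8)
    for (x, y), name in zip(pts, labels):
        plt.annotate(name, (x, y), fontsize=7)
    plt.show()
```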
}, { "heading": "H COMPARISON WITH POLYGEN", "text": "We would like to highlight some similarities between our framework and PolyGen (Nash et al., 2020), a recently proposed autoregressive generative model for 3D meshes. While both works adopt a Transformer decoder for autoregressive modeling, the usage differs in the following aspects:\n• PolyGen models mesh vertices as nodes. The advantage of this method is that it allows modeling high-resolution 3D objects. The challenge, however, is that sequence lengths for high-resolution meshes can be very long, and it can be very difficult to model them using self-attention (whose memory requirements grow proportionally to the square of the sequence length).\n• We, on the other hand, separate out the attributes (not just coordinates but also height, width, category and (or) SDF encoding) of the parts of 3D objects, which are typically fewer in number. Deep-network-based SDF encodings are an active area of research, and the current SOTA methods do not provide results at as high a resolution as mesh-based methods.\n• Our model predicts future elements in order, but we randomize the order of the input elements. This allows us to do partial layout completion." }, { "heading": "I NEAREST NEIGHBORS", "text": "To see if our model is memorizing the training dataset, we compute the nearest neighbors of generated layouts using the chamfer distance on the top-left and bottom-right bounding box coordinates of the layout elements. Figure 15 shows the nearest neighbors from the training dataset for some of the generated layouts. We note that nearest neighbor search for layouts is an active area of research." }, { "heading": "J MORE EXAMPLES FOR LAYOUT TO IMAGE", "text": "Layouts for natural scenes are cluttered and hard to qualitatively evaluate, even for a trained user. Here we share some more sample layouts generated by the two methods used in the paper. Figure 16 shows some extra sample layouts and the corresponding images generated using the Layout2Im tool. Existing layout-to-image methods don't work as well as free-form image generation methods, but are arguably more beneficial in downstream applications. We hope that improving layout generation will aid the research community in developing better scene generation tools, both in terms of diversity and quality." } ]
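A sketch of the chamfer-distance nearest-neighbor search described in Appendix I, assuming each layout is stored as an (n, 4) array of top-left/bottom-right corner coordinates (our reading of the setup, not the released code):

```python
# Symmetric chamfer distance between two layouts, plus a brute-force NN search.
import numpy as np

def chamfer(layout_a, layout_b):
    # Each layout: array of shape (n, 4) holding (x0, y0, x1, y1) per element.
    a, b = np.asarray(layout_a, float), np.asarray(layout_b, float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def nearest_neighbor(query, train_layouts):
    # Returns the training layout closest to the generated query layout.
    return min(train_layouts, key=lambda t: chamfer(query, t))
```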
2,020
null
SP:f26a952abe712256ad3046d86187c08c6eb2e395
[ "This work studies the learning dynamics of neural networks in terms of robust and non-robust features. In particular, the authors argue that depending on various factors (e.g. learning rate, data augmentation), neural networks will have learning dynamics according to 1 of 2 pathways. Neural networks will either (1) first learn predictive robust features and weakly predictive non-robust features, followed by predictive non-robust features; or (2) only learn robust features, then overfit the training set, thereby not learning non-robust features. The paper has a good discussion expanding upon the robust/non-robust features model of Ilyas et. al. 2019, interesting experiments measuring what features the models is learning, and a digression that presents further results on non-robust features in different datasets.", "The paper posits some phenomena on neural network training: 1. With some proper regularizing effect, NN training tends to learn predictive robust features (and weakly predictive non-robust features) first and non-robust features next. 2. Without regularization, NN training does a similar thing as case 1 first but does not learn predictive non-robust features and overfits the training examples. " ]
Neural networks are known to be vulnerable to adversarial attacks: small, imperceptible perturbations that cause the network to misclassify an input. A recent line of work attempts to explain this behavior by positing the existence of non-robust features: well-generalizing but brittle features present in the data distribution that are learned by the network and can be perturbed to cause misclassification. In this paper, we look at the dynamics of neural network training through the perspective of robust and non-robust features. We find that there are two very distinct "pathways" that neural network training can follow, depending on the hyperparameters used. In the first pathway, the network initially learns only predictive, robust features and weakly predictive non-robust features, and subsequently learns predictive, non-robust features. On the other hand, a network trained via the second pathway eschews predictive non-robust features altogether, and rapidly overfits the training data. We provide strong empirical evidence to corroborate this hypothesis, as well as theoretical analysis in a simplified setting. Key to our analysis is a better understanding of the relationship between predictive non-robust features and adversarial transferability. We present our findings in light of other recent results on the evolution of inductive biases learned by neural networks over the course of training. Finally, we digress to show that rather than being "quirks" of the data distribution, predictive non-robust features might actually occur across datasets with different distributions drawn from independent sources, indicating that they perhaps possess some meaning in terms of human semantics.
[]
[ { "authors": [ "Anish Athalye", "Nicholas Carlini", "David Wagner" ], "title": "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Alon Brutzkus", "Amir Globerson", "Eran Malach", "Shai Shalev-Shwartz" ], "title": "Sgd learns overparameterized networks that provably generalize on linearly separable data", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": "In 2017 ieee symposium on security and privacy (sp),", "year": 2017 }, { "authors": [ "Luke N. Darlow", "Elliot J. Crowley", "Antreas Antoniou", "Amos J. Storkey" ], "title": "Cinic-10 is not imagenet or cifar-10, 2018", "venue": null, "year": 2018 }, { "authors": [ "Gabriel Goh" ], "title": "A discussion of ’adversarial examples are not bugs, they are features’: Two examples of useful, non-robust features", "venue": "Distill, 2019a. doi: 10.23915/distill.00019.3", "year": 2019 }, { "authors": [ "Gabriel Goh" ], "title": "A discussion of ’adversarial examples are not bugs, they are features’: Robust feature leakage", "venue": "Distill, 2019b. doi: 10.23915/distill.00019.2", "year": 2019 }, { "authors": [ "Alex Graves", "Abdel-rahman Mohamed", "Geoffrey Hinton" ], "title": "Speech recognition with deep recurrent neural networks", "venue": "IEEE international conference on acoustics, speech and signal processing,", "year": 2013 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Andrew Ilyas", "Shibani Santurkar", "Dimitris Tsipras", "Logan Engstrom", "Brandon Tran", "Aleksander Madry" ], "title": "Adversarial examples are not bugs, they are features", "venue": "Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Ziwei Ji", "Matus Telgarsky" ], "title": "The implicit bias of gradient descent on nonseparable data", "venue": "In Conference on Learning Theory,", "year": 2019 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks. 
In Advances in neural information processing", "venue": null, "year": 2012 }, { "authors": [ "Yuanzhi Li", "Yingyu Liang" ], "title": "Learning overparameterized neural networks via stochastic gradient descent on structured data", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Yuanzhi Li", "Colin Wei", "Tengyu Ma" ], "title": "Towards explaining the regularization effect of initial large learning rate in training neural networks", "venue": "Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Karttikeya Mangalam", "Vinay Uday Prabhu" ], "title": "Do deep neural networks learn shallow learnable examples first? 2019", "venue": null, "year": 2019 }, { "authors": [ "Preetum Nakkiran" ], "title": "A discussion of ’adversarial examples are not bugs, they are features’: Adversarial examples are just bugs, too. Distill, 2019", "venue": "doi: 10.23915/distill.00019.5", "year": 2019 }, { "authors": [ "Preetum Nakkiran", "Gal Kaplun", "Dimitris Kalimeris", "Tristan Yang", "Benjamin L Edelman", "Fred Zhang", "Boaz Barak" ], "title": "Sgd on neural networks learns functions of increasing complexity", "venue": null, "year": 1905 }, { "authors": [ "Nicolas Papernot", "Patrick McDaniel", "Ian Goodfellow" ], "title": "Transferability in machine learning: from phenomena to black-box attacks using adversarial samples", "venue": "arXiv preprint arXiv:1605.07277,", "year": 2016 }, { "authors": [ "Nicolas Papernot", "Patrick McDaniel", "Ian Goodfellow", "Somesh Jha", "Z Berkay Celik", "Ananthram Swami" ], "title": "Practical black-box attacks against machine learning", "venue": "In Proceedings of the 2017 ACM on Asia conference on computer and communications security,", "year": 2017 }, { "authors": [ "Ludwig Schmidt", "Shibani Santurkar", "Dimitris Tsipras", "Kunal Talwar", "Aleksander Madry" ], "title": "Adversarially robust generalization requires more data", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "David Silver", "Aja Huang", "Chris J Maddison", "Arthur Guez", "Laurent Sifre", "George Van Den Driessche", "Julian Schrittwieser", "Ioannis Antonoglou", "Veda Panneershelvam", "Marc Lanctot" ], "title": "Mastering the game of go with deep neural networks and tree", "venue": "search. 
nature,", "year": 2016 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "arXiv preprint arXiv:1312.6199,", "year": 2013 }, { "authors": [ "Antonio Torralba", "Rob Fergus", "William T Freeman" ], "title": "80 million tiny images: A large data set for nonparametric object and scene recognition", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 1958 }, { "authors": [ "Florian Tramèr", "Nicolas Papernot", "Ian Goodfellow", "Dan Boneh", "Patrick McDaniel" ], "title": "The space of transferable adversarial examples", "venue": "arXiv preprint arXiv:1704.03453,", "year": 2017 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Neural networks have achieved state of the art performance on tasks spanning an array of domains like computer vision, translation, speech recognition, robotics, and playing board games (Krizhevsky et al. (2012); Vaswani et al. (2017); Graves et al. (2013); Silver et al. (2016)). However in recent years, their vulnerability to adversarial attacks - small, targeted input perturbations, has come under sharp focus (Szegedy et al. (2013); Papernot et al. (2017); Carlini & Wagner (2017); Athalye et al. (2018); Schmidt et al. (2018)).\nIlyas et al. (2019) propose that neural network vulnerability is at least partly due to neural networks learning well-generalizing but brittle features that are properties of the data distribution. From this point of view, an adversarial example would be constructed by modifying an input of one class slightly such that it takes on the non-robust features of another class.\nThey provide empirical evidence for their theory by training a model on adversarially perturbed examples labeled as the target class, and showing that this model generalizes well to the original, unperturbed distribution.\nAnother unrelated line of work (Brutzkus et al. (2018); Ji & Telgarsky (2019); Li & Liang (2018)) aims to study the properties of the functions learned by gradient descent over the course of training. Nakkiran et al. (2019) and Mangalam & Prabhu (2019) independently showed that Stochastic Gradient Descent (SGD) learns simple, almost linear functions to start out, but then learns more complex functions as training progresses. Li et al. (2019) showed that models trained with a low learning rate learn easy-to-generalize but hard-to-fit features first, and thus perform poorly on easy-to-fit patterns.\nIn this paper, we study gradient descent on neural networks from the perspective of robust and nonrobust features. Our main thesis is that based on choices of hyperparameters, neural network training follows one of two pathways, :\n• Pathway 1 (Informal) : The neural network first learns predictive robust features and weakly predictive non-robust features. As training progresses, it learns predictive nonrobust features, and having learned both robust and non-robust predictive features, achieves good performance on held-out data. This is the pathway that Ilyas et al. (2019) used to prove their theory.\n• Pathway 2 (Informal) : The neural network learns predictive robust features and weakly predictive non-robust features (as in Pathway 1). But thereafter, it begins to fit the noise in the training set, and quickly achieves zero training error. In this scenario, the network learns only the robust predictive features and shows modest generalization on held-out data.\nThrough a series of experiments, we validate our two-pathway hypothesis, investigate the specific circumstances under which Pathway 1 and Pathway 2 occur, and analyze some properties of the two pathways. We will also develop a closer understanding of the relationship between adversarial transfer and predictive non-robust features, which will aid our analysis of the two pathways.\nThe rest of this paper is organized as follows. Section 2 sets up the notation and definitions we use. In Section 3, we investigate the link between adversarial features and transferability. In Section 4 we provide empirical evidence for the two-pathway hypothesis and analyze some characteristics of each pathway. Section 5 presents a theoretical analysis of gradient descent on a toy linear model. 
We show that for different choices of initial parameters, the linear model exhibits properties of the first and second pathways. We digress to explore whether non-robust features can occur across datasets in Section 6, and discuss future research directions in Section 7." }, { "heading": "2 DEFINITIONS AND PRELIMINARIES", "text": "Consider the binary classification setting, where D is a joint distribution over the input space X and the labels {−1, 1}¹. In this setting, Ilyas et al. (2019) define a feature as any function f : X → R, scaled such that E_{(x,y)∼D}[f(x)] = 0 and E_{(x,y)∼D}[f(x)²] = 1. A feature is said to be ρ-useful if\nE_{(x,y)∼D}[y · f(x)] > ρ (1)\nfor some ρ > 0, and γ-robust if\nE_{(x,y)∼D}[ inf_{δ∈∆(x)} y · f(x + δ) ] > γ (2)\nfor some γ > 0 and some family of perturbations ∆. For brevity, we sometimes refer to a ρ-useful, γ-robust feature (ρ, γ > 0) simply as a robust feature. Let ρ_D(f) be the largest ρ for which f is ρ-useful. A feature f is said to be (highly) predictive or weakly predictive if ρ_D(f) is high or low, respectively.\n¹This framework can easily be adapted to the multi-class setting.\nA useful, non-robust feature as defined by Ilyas et al. (2019) is one that is ρ-useful for some ρ > 0, but is not γ-robust for any γ > 0. They propose the following experiment to demonstrate the existence of these features.\nLet C be a classifier trained with Empirical Risk Minimization (ERM) on an empirical distribution D̂. We operate under the following assumption.\nAssumption 1. If a distribution D contains a useful feature, then a classifier C trained with ERM on an empirical distribution D̂ drawn from D will learn this feature, provided that we avoid finite-sample overfitting through appropriate measures such as regularization and cross-validation.\nLet L_C(x, t) denote the loss of C on input x, for a target label t. Construct adversarial examples by solving the following optimization problem:\nx_adv = argmin_{‖x − x′‖ ≤ ε} L_C(x′, t) (3)\nIn particular, construct a distribution called D̂det comprised of (x_adv, t) pairs by using Equation 3 with t chosen deterministically according to y for each (x, y) ∈ D̂. In the binary classification setting, t must be −y, so\nE_{(x_adv,t)∼D̂det}[t · f(x_adv)] > 0, if f is non-robustly useful under D (4)\nE_{(x_adv,t)∼D̂det}[−t · f(x_adv)] > 0, if f is robustly useful under D (5)\nIt is observed that a neural network trained on D̂det achieves non-trivial generalization to the original test set, that is, D. From this, we can conclude that non-robust features exist and are useful for classification in the normal setting.\nRemark: Goh (2019b) showed that the D̂rand dataset, constructed by choosing t randomly in the above procedure, suffers from a sort of "robust feature leakage": PGD introduces faint robust cues in the generated adversarial example that can be learned by the model. But in the D̂det dataset, the robust features are correlated with a deterministic label which is different from t. Hence we use the D̂det dataset in preference to D̂rand for all our experiments.\nTwo kinds of non-robust features: Goh (2019a) points out a subtle flaw with the above definition of a non-robust feature - highly predictive non-robust features can arise from "contamination" of a robust feature with a non-robust feature, instead of something meaningful. To see how this can happen, consider a highly predictive robust feature fR and a weakly predictive non-robust feature fNR. Let fC be a "contaminated" feature that is a simple sum of fR and fNR (appropriately normalized).
Then it is possible to construct a scenario in which\nE[y · fR(x)] > 0,   E[ inf_{δ∈∆(x)} y · fR(x + δ) ] > 0 (6)\nE[y · fNR(x)] ≳ 0,   E[ inf_{δ∈∆(x)} y · fNR(x + δ) ] ≪ 0 (7)\nE[y · fC(x)] > 0,   E[ inf_{δ∈∆(x)} y · fC(x + δ) ] < 0 (8)\nfC is thus a highly predictive non-robust feature. Now, when you train a model on (x + δ, −y) pairs, fC = fR + fNR is still correlated with −y. But fC′ = −fR + fNR is more correlated, so the model will learn this combination in preference to fC and will not generalize on the original distribution. In fact, thanks to learning −fR, it will generalize to the distribution with flipped labels, i.e., y → −y. In our analysis and experiments, when we refer to non-robust features, we will exclude such contaminated features.\nIllustrative Example: Consider a dataset of dog and cat images, where most dog images have snouts and most cats do not have snouts. Most cats have slightly lighter eyes than dogs, and making the eyes slightly darker or lighter is part of the set of valid adversarial perturbations. Suppose that a very small majority of the dog images start with a pixel that has an odd-numbered value. Then the different types of features in this dataset are enumerated in Table 1.\nFor f2, (x + δ, −y) pairs would be dogs with lighter eyes, labeled as cats. The network trained on these examples will learn Snout =⇒ Cat, Light Eyes =⇒ Cat. Since the eye color is predictive of the true label, the second feature will ensure that the neural network has non-trivial performance on the original distribution. This is what Ilyas et al. (2019) observed in their experiments. f2 is thus a true non-robust feature.\nFor f4, (x + δ, −y) pairs would be dog images with the first pixel value converted to an even number, labeled as cats. The network trained on these examples will learn Snout =⇒ Cat, Dark Eyes =⇒ Cat, and Even Pixel =⇒ Cat. None of these will be particularly helpful on the true distribution, but the first two will be useful on the flipped distribution, i.e., where dogs are relabeled as cats. f4 is thus a contaminated robust feature, and not a non-robust feature.\nRemark: A network that learns only robust features but with contaminants can still be very vulnerable to adversarial attacks, as the above example shows. The weakly predictive non-robust feature f3 can be manipulated to consistently cause misclassification on out-of-distribution inputs." }, { "heading": "3 NON-ROBUST FEATURES AND TRANSFERABILITY", "text": "The phenomenon of adversarial transferability (Papernot et al., 2016), where a non-trivial fraction of the adversarial examples generated for one neural network are still adversarial to other neural networks trained independently on the same data, can be readily explained in terms of non-robust features.\nBy Assumption 1, different neural networks trained using ERM on a distribution will learn the predictive non-robust features (like Dark Eyes =⇒ Dog) present in the distribution. One would then construct an adversarial example by modifying an input such that the predictive non-robust features flip (e.g., modify all dog images to have lighter eyes). This adversarial example would then transfer to all the different networks that have learned to rely on the non-robust features.\nA natural question to ask is: does all adversarial transferability arise from predictive non-robust features?
Nakkiran (2019) showed that by explicitly penalizing transferability during PGD, one can construct adversarial examples that do not transfer, and from which it is not possible to learn a generalizing model. This establishes that adversarial examples that do not transfer do not contain predictive non-robust features.\nHere we provide a simpler experiment that constructs non-transferable adversarial examples without explicitly penalizing transferability. This experiment also establishes a stronger claim: that adversarial examples transfer if and only if they exploit predictive non-robust features.\nLet the CIFAR-10 dataset form the data distribution D. Train two Resnet50 models (He et al., 2016) on D̂ and ensure by Assumption 1 that both networks have learned the predictive non-robust features of the distribution, by using regularization and cross-validation across a grid of hyperparameters.\nConstruct a D̂det dataset for the first network using Equation 3, where t is chosen deterministically according to y using the transformation t = (y+1)%10. We use Projected Gradient Descent (PGD) (Madry et al., 2018) to solve the optimization problem in Equation 3. Split the adversarial examples into two categories - those that transfer to the second network with their target labels, and those that do not. Relabel all adversarial examples x_adv with their target label t, and train a Resnet50 model on (x_adv, t) pairs from each category.\nAs Equations 4 and 5 suggest, for (x_adv, t) ∼ D̂det, the non-robust features of D are predictive of t, but the robust features of D are predictive of (t − 1)%10. So if a neural network trained on a subset of D̂det learns predictive non-robust features, it will generalize to D, and if it learns predictive robust features, it will generalize to the shifted distribution Dshift:\nDshift = {(x, (y + 1)%10) : (x, y) ∼ D} (9)\nFigure 2 shows the performance of these two networks on D and Dshift. We can see that the network trained on the examples that transfer generalizes well to D, but the network trained on the examples that do not transfer generalizes to Dshift. The configuration in the figure is the result of a thorough grid search over hyperparameters, with the metric for success being performance on D. Along with Assumption 1, our experiment establishes that the examples that transfer contain predictive non-robust features, and the examples that don't transfer don't contain predictive non-robust features. In particular, we claim the following:\nClaim 1. Train two networks N1 and N2 on a common dataset such that both networks learn the predictive non-robust features present in the dataset. Then an adversarial example generated for N1 transfers to the second network if and only if this example contains predictive non-robust features.\nFurther, if a neural network C has learned predictive non-robust features, then PGD will construct some adversarial examples with predictive non-robust features (see Equation 4), and vice-versa. This allows us to infer the following property, which we will use in our analysis in the next section:\nClaim 2. If a neural network N2 has learned the predictive non-robust features in a dataset, then adversarial examples generated for another network N1 using PGD will transfer to N2 if and only if N1 has also learned predictive non-robust features."
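A sketch of the construction used in this section: targeted PGD toward t = (y+1)%10 (Equation 3), followed by the transfer split. The step size, iteration count, and L-infinity budget below are placeholder assumptions on our part, not values from the paper:

```python
# Targeted PGD followed by splitting examples by transfer to a second model.
import torch
import torch.nn.functional as F

def pgd_targeted(model, x, t, eps=0.03, alpha=0.007, steps=20):
    # x: detached input batch; t: long tensor of target labels (y+1)%10.
    x = x.detach()
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), t)       # minimize loss on the target
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv - alpha * grad.sign()).detach()
        x_adv = x + (x_adv - x).clamp(-eps, eps)      # project to the L-inf ball
        # (pixel-range clamping omitted for brevity)
    return x_adv

def split_by_transfer(model2, x_adv, t):
    # An example "transfers" if the independently trained model2 also predicts t.
    with torch.no_grad():
        transfers = model2(x_adv).argmax(dim=1) == t
    return x_adv[transfers], x_adv[~transfers]
```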
}, { "heading": "4 THE TWO PATHWAY HYPOTHESIS", "text": "" }, { "heading": "4.1 EXPERIMENTAL SETUP", "text": "We use the CIFAR-10 training set as our empirical distribution D̂, and train a neural network N1 using ERM on D̂ with cross-validation and regularization such that it learns non-robust features by Assumption 1. Construct the D̂det dataset according to the procedure described in Section 2, where the reference model C is N1 and the adversarial target t is chosen deterministically as t = (y + 1)%10. Split D̂det into training and validation sets and train a new neural network N2 on the training set.\nAs we discussed in Section 3, if N2 is able to generalize to D, then N2 must have learned the predictive non-robust features of D, and if N2 is able to generalize to Dshift, then N2 must have learned the predictive robust features of D. This is depicted in Figure 3 in the context of our illustrative example from Section 2.\nWe use the accuracy on D (respectively, Dshift) as a proxy for how much of the model's performance can be attributed to its learning predictive non-robust (respectively, robust) features. We refer to these as "non-robust feature accuracy" and "robust feature accuracy".\nFinally, the accuracies on the training and validation splits of D̂det tell us how well the model has fit the training data, and whether the model is overfitting. We train the network N2 using SGD for
Instead we utilize Claim 2 and use adversarial transferability as a proxy for whether or not the model has learned non-robust features.\nTrain two models M(1)1 and M (1) 2 with different random initializations on the unaltered CIFAR10 dataset, with both data augmentation and some weight decay. Train two more models M(2)1 andM(2)2 with neither weight augmentation nor weight decay. We plot the training and validation accuracies over the course of training forM(1)1 andM (2) 1 in Figure 4a and Figure 4b.\nSimultaneously, we also plot the targeted adversarial attack success, as well as the transfer accuracy to M(1)2 and M (2) 2 . We observe that targeted adversarial attack success is high for both models. However, while adversarial examples generated forM(1)1 transfer toM (1) 2 with reasonable success, adversarial examples generated forM(2)1 fail to transfer to eitherM (1) 2 orM (2) 2 .\nWe conclude thatM(1)1 learns predictive non-robust features as training progresses, andM (1) 2 does not. Thus, M(1)1 follows Pathway 1 and M (2) 1 follows Pathway 2. We observe that learning predictive non-robust features seems to be essential for good generalization on the validation set, and robust features alone do not suffice.\nRemark : Although Figure 1a suggests that the model eventually generalizes better to the nonrobust feature mapping than the robust, this is not a generally applicable rule over different datasets, architectures and combinations of hyperparameters. Table 5 in the Appendix illustrates this point." }, { "heading": "4.3 RELATION TO OTHER RESULTS ABOUT NEURAL NETWORK TRAINING", "text": "Low and high learning rates : The regularizing effect of a high initial learning rate has been studied in detail by Li et al. (2019). They construct a dataset with two types of patterns - those that are hard-to-fit but easy-to-generalize (i.e., low in variation), and those that are easy-to-fit but hard-to-generalize (i.e., noisy).\nThey show that a neural network trained with small learning rate first focuses on the hard-to-fit patterns, and because of the low noise in them, quickly overfits to the training set. As a result, it is not able to learn easy-to-fit pattern effectively later on. In contrast, a model that starts out with a high learning rate learns easy-to-fit pattern first, and since these are noisy, doesn’t overfit the training set. Later on, once the learning rate is annealed, the model is able to effectively learn the harder-to-fit patterns.\nThese two cases can be crudely mapped onto our two pathways. The model in Pathway 2, trained with a low LR, learns only robust features to start out, indicating that these features are hard-to-fit. It overfits the training set and thereafter is unable to learn the non-robust features, which are easy-to-fit.\nThe model in Pathway 1, trained with a high LR, quickly begins to learn the non-robust features which are easy-to-fit. However, it learns the robust features too alongside, indicating that this mapping from the low and high LR scenarios to our two pathway theory is not perfect.\nComplexity of learned functions : Another perspective on the training of neural networks is given by Nakkiran et al. (2019). They define a performance correlation metric between two models that captures how much of the performance of one model can be explained by the other, and show that as training progresses, the peformance correlation between the current model and the best simple model decreases. 
Although their metric is defined for a binary classification setting, we adapt it for the multi-class setting, and use a multi-class logistic regression classifier as the "best simple classifier". We measure the performance correlation between the model trained on D̂det and the simple classifier as training progresses. The performance correlation, scaled by 100 and smoothed, is shown in Figure 1 along with the training curves.\nWe observe in Figure 1a that the point at which the performance correlation plateaus corresponds to when the robust accuracy decreases sharply. Similarly, in Figure 1b, the correlation levels off along with the robust accuracy. We conjecture that the initial robust features learned by the model are simple, linear functions, and the non-robust features are more complex and non-linear. This is in line with the findings of Tramèr et al. (2017) that transferable adversarial examples occur in a space of high dimensionality." }, { "heading": "5 THEORETICAL ANALYSIS", "text": "In this section, we present some results for gradient descent on a toy linear model on a distribution with robust and non-robust predictive features. We get results that mirror our two pathways, for different choices of initial conditions. The setting we use is an adaptation of the one used by Nakkiran et al. (2019). For proofs of these theorems, refer to Section A of the Appendix.\nDefine a data distribution P over R^d × {−1, 1} as follows:\ny ∼_{u.a.r.} {−1, 1}, q ∼_{u.a.r.} {3, ..., d}\nλ1 ∼ Bernoulli(p) over {±1}, λ2 ∼ Bernoulli(p/k) over {±1}, p ∈ (0, 1/2), k > 1\nx = yλ1 e1 + εyλ2 e2 + e_q\nwhere ε < 1 is some small positive constant, and e_i denotes the i-th natural basis vector. Now sample a "training set" (X1, Y1), ..., (Xn, Yn) from this distribution. Let A = [X1^T; ...; Xn^T] ∈ R^{n×d} and B = [Y1, ..., Yn]^T ∈ R^n. Consider training a linear classifier using gradient descent with a learning rate of η to find w ∈ R^d that minimizes the squared loss:\nL(w) = (1/2n) ‖B − Aw‖_2², w ∈ R^d\nWe operate in the overparameterized setting, where n ≪ d. So with high probability, the coordinates {3, ..., d} of the training data are orthogonal for all the training points.²\nThe idea is that the data consists of a "robust" feature given by the first coordinate, a "non-robust" feature (one that can become anti-correlated with a perturbation of 2ε) given by the second coordinate, and a noisy component that comprises the rest of the coordinates, making it possible for a model to fit the data exactly. The robust component is predictive of the true label with probability 1 − p, and the non-robust component is predictive of the true label with probability 1 − (p/k). For simplicity, assume that the initial weight vector w_0 = 0, and that n is sufficiently large.\nTheorem 1 (Robust before Non-robust). If ε ≤ √((1 − 2p)/(1 − 2p/k)), then at the end of the first step, with high probability, the model will rely on the robust feature for classification, i.e., w_1^(1) ≥ εw_1^(2), and will have a population accuracy of 1 − p.\nTheorem 2 (Two Pathways).
Define\nk_t = [2p(1 + nε²) − 2p(1 − ε²)] / [2p(1 + nε²) − (1 − ε²)]\nThen if η ≤ 2/(1 + ε² + (1/n)), as the number of gradient steps goes to infinity,\n• if k ≥ k_t, sample accuracy approaches 1 and population accuracy approaches 1 − (p/k) with high probability.\n• if k < k_t, sample accuracy approaches 1 and population accuracy approaches 1 − p with high probability.\nDiscussion: The two cases of Theorem 2 very roughly correspond to Pathways 1 and 2. Since this is a strongly convex problem, gradient descent with a small enough learning rate will converge to a fixed solution, so we cannot mimic the setting where different training hyperparameters lead to Pathway 1 or 2. But we can see that if the non-robust feature is predictive enough, the model learns the non-robust feature; otherwise, it learns the robust feature.\n²Caveat: here, d is both the input dimensionality and the number of parameters. Although deep learning models are overparameterized, it is uncommon for datasets to have more dimensions than data points." }, { "heading": "6 DIGRESSION : CROSS-DATASET TRANSFER", "text": "One view of non-robust features is that they are peculiarities or quirks in the data distribution. We provide evidence that allows us to tentatively refute this assumption by showing that one can construct two datasets from completely independent sources, such that a model that learns only the predictive non-robust features of one dataset can achieve non-trivial generalization on the other dataset.\nThe CINIC-10 dataset (Darlow et al., 2018) is a distribution-shifted version of CIFAR-10 constructed by sampling from the ImageNet synsets corresponding to each of the CIFAR-10 classes. Although it may seem like CIFAR-10 and CINIC-10 could be candidates for two datasets drawn from independent sources, ImageNet is constructed by querying Flickr, and Flickr is also one of the seven sources for the 80 million TinyImages dataset (Torralba et al. (2008)) that was used to construct CIFAR-10 (Krizhevsky et al. (2009)). So roughly one in seven CIFAR-10 images is from Flickr.\nTo be even more certain that there are no spurious correlations creeping in because of a common source, we construct the CIFAR-minus-Flickr dataset, which consists of those CIFAR-10 images that haven't been sourced from Flickr. This comprises 52,172 out of the 60,000 CIFAR-10 images.\nWe construct D̂det datasets as described in Section 4 for CIFAR-minus-Flickr and CINIC-10, and train Resnet50 models on them. These models can only learn non-robust features to help them generalize to the original unperturbed datasets, because the robust features are correlated with the shifted labels.\nThe results are shown in Table 2. Both D̂det-trained models achieve an accuracy of close to 20% on the other dataset, which is a long way from the expected 10% accuracy of a random model." }, { "heading": "7 CONCLUSION AND FUTURE DIRECTIONS", "text": "In this paper, we've shown that from the perspective of predictive robust and non-robust features, neural network training follows two very different pathways, corresponding to the training and overfitting regimes. In both regimes, the model starts out by learning predictive robust features first.\nThis decomposition into two distinct pathways has several interesting implications. For instance, adversarial transferability means that even an adversary with no access to a model can mount a successful attack by constructing adversarial examples for a proxy model.
But a model trained via Pathway 2 learns no predictive non-robust features, and adversarial examples generated for another model will in general not transfer to this model. Thus, an adversary cannot perform a successful attack on this model without at least the ability to query the model and observe its outputs for a large number of inputs.\nA line of enquiry that arises naturally from our work is understanding precisely why this behavior occurs in neural networks. What characteristics do predictive non-robust features have that ensure that they are learned only subsequent to predictive robust features? We pose finding a more precise definition of non-robust features that will allow us to theoretically analyze and explain these properties as an important direction for future work.\nFinally, as we show in Section 6, predictive non-robust features can occur across datasets sampled from independent sources. Although this needs to be investigated more thoroughly, our results challenge the view that non-robust features are peculiarities of the data distribution. We speculate that some of these features could have a meaning in terms of human semantics, like our illustrative example where the eye color was a predictive non-robust feature." }, { "heading": "A PROOFS OF THEOREMS IN SECTION 5", "text": "Consider using gradient descent with a learning rate of η to minimize the squared loss as described in Section 5. Then,\nw_{t+1} := w_t + (η/n) A^T (B − Aw_t)\nLetting α = η/n, it can be proved by induction that\nw_t = w_0 + αA^T [ Σ_{k=0}^{t−1} (I − αAA^T)^k ] (B − Aw_0)\nLet the largest eigenvalue of AA^T be λ_max. If |1 − αλ_max| ≤ 1, then\nw_t = w_0 + A^T (AA^T)^{−1} [ I − (I − αAA^T)^t ] (B − Aw_0)\n=⇒ w_T = A^T (AA^T)^{−1} (B − Aw_0) + w_0 (10)\nfor some very large number of steps T. This achieves zero empirical training error, as we can verify that Aw_T = B.\nLet the first and second columns of A be s and r respectively. Since (with high probability) coordinates 3 to d are orthogonal for all training points,\nAA^T = I + ss^T + rr^T\nOther than 1, the eigenvalues of this matrix are\n1 + [ (s^T s + r^T r) ± √((s^T s − r^T r)² + 4(s^T r)²) ] / 2\nIt's easy to see that s^T s = n and r^T r = nε². Let s^T r = r^T s = nεβ. Then, since ε is small,\n√((s^T s − r^T r)² + 4(s^T r)²) = n √((1 − ε²)² + 4(εβ)²) ≈ n(1 + ε²(2β² − 1))\nThen the eigenvalues are\nλ_1 = 1 + n(1 + ε²β²), λ_2 = 1 + nε²(1 − β²)\nTheorem 1: If ε ≤ √((1 − 2p)/(1 − 2p/k)), then at the end of the first step, with high probability, the model will rely on the robust feature for classification, i.e., w_1^(1) ≥ εw_1^(2), and will have a population accuracy of 1 − p.\nProof.\nw_1^(1) = w_0^(1) + αs^T (B − Aw_0)\nw_1^(2) = w_0^(2) + αr^T (B − Aw_0)\nw_1^(1) ≥ εw_1^(2) =⇒ αs^T B ≥ εαr^T B\nIt is easy to see that E[(1/n) s^T B] = (1 − 2p) and E[(1/n) r^T B] = ε(1 − 2p/k). Since n is sufficiently large, these random variables are close to their means with high probability. So the condition becomes\n(1 − 2p) ≥ ε²(1 − 2p/k)\nThis is true by the assumed bound on ε.\nTheorem 2: Define\nk_t = [2p(1 + nε²) − 2p(1 − ε²)] / [2p(1 + nε²) − (1 − ε²)]\nThen if η ≤ 2/(1 + ε² + (1/n)), as the number of gradient steps goes to infinity,\n• if k ≥ k_t, sample accuracy approaches 1 and population accuracy approaches 1 − (p/k) with high probability.\n• if k < k_t, sample accuracy approaches 1 and population accuracy approaches 1 − p with high probability.\nProof. Gradient descent will converge if\nαλ_1 ≤ 2 =⇒ η ≤ 2/(1 + ε²β² + (1/n))\nUsing the fact that β ≤ 1 gives us the bound on the learning rate in the theorem statement.
Next, using Equation 10,\nw_T = A^T (I + ss^T + rr^T)^{−1} (B − Aw_0) + w_0\n(I + ss^T + rr^T)^{−1} = I − [ (1 + s^T s)(rr^T) + (1 + r^T r)(ss^T) − (s^T r)(sr^T) − (r^T s)(rs^T) ] / [ (1 + s^T s)(1 + r^T r) − (s^T r)(r^T s) ]\nWriting D = (1 + n)(1 + nε²) − n²ε²β² for the denominator and using w_0 = 0, this gives\nw_T^(1) = s^T (I + ss^T + rr^T)^{−1} B = [ (1 + nε²)s^T − (nεβ)r^T ] B / D\nw_T^(2) = r^T (I + ss^T + rr^T)^{−1} B = [ (1 + n)r^T − (nεβ)s^T ] B / D\nNow suppose we sample a new point (X, Y) from the data distribution. Let q_i denote the index of the noise coordinate of X_i and let q denote the index of the noise coordinate of X. With high probability, q ≠ q_i ∀i. So,\nX^T w_T = X^(1) w_T^(1) + X^(2) w_T^(2) + X^(q) w_T^(q) = X^(1) w_T^(1) + X^(2) w_T^(2) + w_0^(q) = X^(1) w_T^(1) + X^(2) w_T^(2)\nWe want to analyze the case when the first and second coordinates disagree. Let X^(1) = −1 and X^(2) = ε. In this scenario, if the model always predicts X^T w_T ≥ 0, it will match the prediction of the second coordinate and achieve a population accuracy of 1 − p/k. On the other hand, if it always predicts X^T w_T < 0, it will match the prediction of the first coordinate and achieve a population accuracy of 1 − p.\nX^T w_T > 0 =⇒ εw_T^(2) > w_T^(1)\n=⇒ ε[ (1 + n)r^T − (nεβ)s^T ] B ≥ [ (1 + nε²)s^T − (nεβ)r^T ] B\nWith high probability, s^T B = n(1 − 2p), r^T B = nε(1 − 2p/k), and β = 1 − 2p(k + 1)/k + 4p²/k.\n=⇒ [ nε²(1 + n)(1 − 2p/k) − (n²ε²β)(1 − 2p) ] ≥ [ n(1 + nε²)(1 − 2p) − (n²ε²β)(1 − 2p/k) ]\n=⇒ n(ε² − 1) + 2pn(1 − ε²/k) + 2pn²ε²(1 − 1/k) ≥ 0\n=⇒ k ≥ [2p(1 + nε²) − 2p(1 − ε²)] / [2p(1 + nε²) − (1 − ε²)]\nwhich is precisely k ≥ k_t." }, { "heading": "B OTHER DATASETS AND ARCHITECTURES", "text": "In this section, we provide training graphs illustrating Pathways 1 and 2 for Resnet18 and Resnet50 models trained on the D̂det versions of the CIFAR-10 and CINIC-10 (Darlow et al. (2018)) datasets. Along with each graph, we mention the hyperparameters used." }, { "heading": "C HYPERPARAMETERS", "text": "In this section, we list the hyperparameters used in our experiments. We also take the case of a Resnet18 trained on D̂det CIFAR-10 and look at which hyperparameters lead to Pathway 1 and which lead to Pathway 2. As we note in Section 4, there is a sharp transition between the two pathways in the space of hyperparameters.\nCommon Parameters:\nTraining on Non-transferable examples (Section 3): We trained the model using 120 epochs of SGD, and did a grid search over the following combinations of hyperparameters.\nCIFAR D̂det (Figure 1):\n• Pathway 1: LR 0.1, No data augmentation, L2 5e-4\n• Pathway 2: LR 0.01, No data augmentation, L2 0\nCIFAR (Figure 4):\n• Pathway 1: LR 0.01, Data augmentation, L2 5e-4\n• Pathway 2: LR 0.01, No data augmentation, L2 0\nWhich hyperparameters lead to Pathway 1 and Pathway 2? We pick one model (Resnet18) and one dataset (CIFAR-10), and explore which hyperparameters lead to Pathway 1 and which lead to Pathway 2. To re-iterate, Pathway 2 is characterized by the model learning almost exclusively only the robust features.
All models were trained with SGD for 120 epochs, and the reported accuracies are after the final epoch. The configurations which follow Pathway 2 have been highlighted in green.\nRemark: As we can see in the case with LR 0.1 and no data augmentation, the network exhibits a sharp transition from Pathway 2 to Pathway 1 in the space of hyperparameters. There is a narrow "middle ground" around L2 = 2.5e-4.\nCross-Dataset Transfer (Table 2):\n• CIFAR-minus-Flickr (clean): Adam optimizer, LR 1e-3, L2 1e-5, with data augmentation.\n• CINIC-10 (clean): LR 0.01, data augmentation, L2 5e-4.\n• CIFAR-minus-Flickr D̂det: LR 0.1, No data augmentation, L2 5e-4.\n• CINIC-10 D̂det: LR 0.1, No data augmentation, L2 5e-4." } ]
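To complement the analysis in Section 5 and Appendix A, the toy model is simple enough to simulate directly. The following sketch (our own, with arbitrary example constants) samples the distribution P, runs full-batch gradient descent on the squared loss, and reports which feature the converged w follows on a disagreement point:

```python
# Simulation of the Section 5 toy model: robust feature on e1, non-robust on e2.
import numpy as np

def simulate(n=200, d=5000, p=0.1, k=4.0, eps=0.2, eta=0.5, steps=20000, seed=0):
    rng = np.random.default_rng(seed)
    y = rng.choice([-1.0, 1.0], size=n)
    lam1 = np.where(rng.random(n) < p, -1.0, 1.0)      # robust sign, flips w.p. p
    lam2 = np.where(rng.random(n) < p / k, -1.0, 1.0)  # non-robust sign, flips w.p. p/k
    q = rng.integers(2, d, size=n)                     # noise coordinate (0-indexed {2..d-1})
    A = np.zeros((n, d))
    A[:, 0] = y * lam1                                 # x = y*lam1*e1 + eps*y*lam2*e2 + e_q
    A[:, 1] = eps * y * lam2
    A[np.arange(n), q] = 1.0
    w = np.zeros(d)
    for _ in range(steps):
        w += (eta / n) * A.T @ (y - A @ w)             # step on L(w) = (1/2n)||B - Aw||^2
    # Disagreement point X^(1) = -1, X^(2) = eps: sign of X.w tells which feature wins.
    follows_nonrobust = (-w[0] + eps * w[1]) > 0
    return w[0], w[1], follows_nonrobust
```

Varying k around the threshold k_t of Theorem 2 (with p, n, ε fixed) lets one check the two regimes empirically.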
2,020
null
SP:5b4768c8d71e9b044c50d77fb68d545370ca8329
[ "The paper introduces a new function $L(x)$ so that, when optimised under certain objectives defined over continuous observation $x$ and discrete latent $z$, learns the correct clustering probability $p(z|x)$. The loss functions considered are the Jensen-Fisher divergence and muture information. The authors introduces modifications to the principled objectives in practice and demonstrate performance on toy and real image datasets.", "This work introduces a parameterization called Neural Bayes that facilitates learning representations from unlabeled data by categorizing them, where each data point x is mapped to a latent discrete variable z such that the distribution p(x) is segmented into a finite number of conditional distributions. Imposing different constraints on the latent discrete space will result in learning representations manifesting various properties. Two use cases of the proposed parameterization are studied: disjoint manifold separation and mutual information maximization." ]
We introduce a parameterization method called Neural Bayes which allows computing statistical quantities that are in general difficult to compute, and which opens avenues for formulating new objectives for unsupervised representation learning. Specifically, given an observed random variable x and a latent discrete variable z, we can express p(x|z), p(z|x) and p(z) in closed form in terms of a sufficiently expressive function (e.g., a neural network) using our parameterization, without restricting the class of these distributions. To demonstrate its usefulness, we develop two independent use cases for this parameterization: 1. Disjoint Manifold Separation: Neural Bayes allows us to formulate an objective which can optimally label samples from disjoint manifolds present in the support of a continuous distribution. This can be seen as a specific form of clustering where each disjoint manifold in the support is a separate cluster. We design clustering tasks that obey this formulation and empirically show that the model optimally labels the disjoint manifolds. 2. Mutual Information Maximization (MIM): MIM has become a popular means for self-supervised representation learning. Neural Bayes allows us to compute the mutual information between the observed random variable x and the latent discrete random variable z in closed form. We use this for learning image representations and show its usefulness on downstream classification tasks.
[]
[ { "authors": [ "Philip Bachman", "R Devon Hjelm", "William Buchwalter" ], "title": "Learning representations by maximizing mutual information across views", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Mohamed Ishmael Belghazi", "Aristide Baratin", "Sai Rajeswar", "Sherjil Ozair", "Yoshua Bengio", "Aaron Courville", "R Devon Hjelm" ], "title": "Mine: mutual information neural estimation", "venue": "arXiv preprint arXiv:1801.04062,", "year": 2018 }, { "authors": [ "Anthony J Bell", "Terrence J Sejnowski" ], "title": "An information-maximization approach to blind separation and blind deconvolution", "venue": "Neural computation,", "year": 1995 }, { "authors": [ "Mathilde Caron", "Piotr Bojanowski", "Armand Joulin", "Matthijs Douze" ], "title": "Deep clustering for unsupervised learning of visual features", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "arXiv preprint arXiv:2002.05709,", "year": 2020 }, { "authors": [ "Adam Coates", "Andrew Ng", "Honglak Lee" ], "title": "An analysis of single-layer networks in unsupervised feature learning", "venue": "In Proceedings of the fourteenth international conference on artificial intelligence and statistics,", "year": 2011 }, { "authors": [ "Jean-Bastien Grill", "Florian Strub", "Florent Altché", "Corentin Tallec", "Pierre H Richemond", "Elena Buchatskaya", "Carl Doersch", "Bernardo Avila Pires", "Zhaohan Daniel Guo", "Mohammad Gheshlaghi Azar" ], "title": "Bootstrap your own latent: A new approach to self-supervised learning", "venue": "arXiv preprint arXiv:2006.07733,", "year": 2020 }, { "authors": [ "Michael Gutmann", "Aapo Hyvärinen" ], "title": "Noise-contrastive estimation: A new estimation principle for unnormalized statistical models", "venue": "In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics,", "year": 2010 }, { "authors": [ "Philip Haeusser", "Johannes Plapp", "Vladimir Golkov", "Elie Aljalbout", "Daniel Cremers" ], "title": "Associative deep clustering: Training a classification network with no labels", "venue": "In German Conference on Pattern Recognition,", "year": 2018 }, { "authors": [ "Kaiming He", "Haoqi Fan", "Yuxin Wu", "Saining Xie", "Ross Girshick" ], "title": "Momentum contrast for unsupervised visual representation learning", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "R Devon Hjelm", "Alex Fedorov", "Samuel Lavoie-Marchildon", "Karan Grewal", "Phil Bachman", "Adam Trischler", "Yoshua Bengio" ], "title": "Learning deep representations by mutual information estimation and maximization", "venue": "International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Weihua Hu", "Takeru Miyato", "Seiya Tokui", "Eiichi Matsumoto", "Masashi Sugiyama" ], "title": "Learning discrete representations via information maximizing self-augmented training", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "arXiv preprint arXiv:1502.03167,", "year": 2015 }, { "authors": [ "Xu Ji", 
"João F Henriques", "Andrea Vedaldi" ], "title": "Invariant information clustering for unsupervised image classification and segmentation", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report,", "year": 2009 }, { "authors": [ "Ralph Linsker" ], "title": "Self-organization in a perceptual network", "venue": null, "year": 1988 }, { "authors": [ "Guangcan Liu", "Zhouchen Lin", "Yong Yu" ], "title": "Robust subspace segmentation by low-rank representation", "venue": "In Proceedings of the 27th international conference on machine learning", "year": 2010 }, { "authors": [ "Yi Ma", "Allen Y Yang", "Harm Derksen", "Robert Fossum" ], "title": "Estimation of subspace arrangements with applications in modeling and segmenting mixed data", "venue": "SIAM review,", "year": 2008 }, { "authors": [ "Aaron van den Oord", "Yazhe Li", "Oriol Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": "arXiv preprint arXiv:1807.03748,", "year": 2018 }, { "authors": [ "Andrew M Saxe", "James L McClelland", "Surya Ganguli" ], "title": "Exact solutions to the nonlinear dynamics of learning in deep linear neural networks", "venue": "arXiv preprint arXiv:1312.6120,", "year": 2013 }, { "authors": [ "Uri Shaham", "Kelly Stanton", "Henry Li", "Boaz Nadler", "Ronen Basri", "Yuval Kluger" ], "title": "Spectralnet: Spectral clustering using deep neural networks", "venue": "arXiv preprint arXiv:1801.01587,", "year": 2018 }, { "authors": [ "Pascal Vincent", "Hugo Larochelle", "Isabelle Lajoie", "Yoshua Bengio", "Pierre-Antoine Manzagol" ], "title": "Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion", "venue": "Journal of machine learning research,", "year": 2010 } ]
[ { "heading": "1 INTRODUCTION", "text": "We introduce a generic parameterization called Neural Bayes that facilitates unsupervised learning from unlabeled data by categorizing them. Specifically, our parameterization implicitly maps samples from an observed random variable x to a latent discrete space z where the distribution p(x) gets segmented into a finite number of arbitrary conditional distributions. Imposing different conditions on the latent space z through different objective functions will result in learning qualitatively different representations.\nOur parameterization may be used to compute statistical quantities involving observed variables and latent variables that are in general difficult to compute (thanks to the discrete latent space), thus providing a flexible framework for unsupervised learning. To illustrate this aspect, we develop two independent use cases for this parameterization– disjoint manifold separation (DMS) and mutual information maximization (Linsker, 1988), as described in the abstract. For the manifold separation task, we show experiments on 2D datasets and their high-dimensional counter-parts designed as per the problem formulation, and show that the proposed objective can optimally label disjoint manifolds. For the MIM task, we experiment with benchmark image datasets and show that the unsupervised representation learned by the network achieves performance on downstream classification tasks comparable with a closely related MIM method Deep InfoMax (DIM, (Hjelm et al., 2019)). For both objectives we design regularizations necessary to achieve the desired behavior in practice. All the proofs can be found in the appendix." }, { "heading": "2 RELATED WORK", "text": "Neural Bayes-DMS: Numerous recent papers have proposed clustering algorithm for unsupervised representation learning such as Deep Clustering (Caron et al., 2018), information based clustering (Ji et al., 2019), Spectral Clustering (Shaham et al., 2018), Assosiative Deep Clustering (Haeusser\net al., 2018) etc. Our goal in regards to clustering in Neural Bayes-DMS is in general different from such methods. Our objective is aimed at finding disjoint manifolds in the support of a distribution. It is therefore a generalization of traditional subspace clustering methods (where the goal is to find disjoint affine subspaces) (Ma et al., 2008; Liu et al., 2010), to arbitrary manifolds.\nAnother class of clustering algorithms include mixture models (such as kNNs). Our clustering proposal (DMS) is novel compared to this class in two ways: 1. we formulate the clustering problem as that of identifying disjoint manifold in the support of a distribution. This is different from assuming K ground truth clusters, where the notion of cluster is ill-defined; 2. the DMS objective in proposition 1 is itself novel and we prove its optimality towards labeling disjoint manifolds in theorem 1.\nNeural Bayes-MIM: Self-supervised representation learning has attracted a lot of attention in recent years. Currently contrastive learning methods and similar variants (such as MoCo (He et al., 2020), SimCLR (Chen et al., 2020), BYOL Grill et al. (2020)) produce state-of-the-art (SOTA) performance on downstream classification tasks. These methods make use of handcrafted image augmentation methods that exploit priors such as class information is typically associated with object shape and is location invariant. 
However, since we specifically develop an easier alternative to DIM (which also maximizes the mutual information between the input and the latent representations), for a fair comparison we compare the performance of our Neural Bayes-MIM algorithm for representation learning only with DIM. We leave the extension of the Neural Bayes-MIM algorithm with data augmentation techniques and other advanced regularizations similar to Bachman et al. (2019) as future work. Our experiments show that our proposal performs comparably to or slightly better than DIM. The main advantage of our proposal over DIM is that it offers a closed form estimate of MI thanks to the discrete latent variables.\nWe note that the principle of mutual information maximization for representation learning was introduced in Linsker (1988) and Bell & Sejnowski (1995), and since then, a number of self-supervised methods involving MIM have been proposed. Vincent et al. (2010) showed that auto-encoder based methods achieve this goal implicitly by minimizing the reconstruction error of the input samples under an isotropic Gaussian assumption. Deep InfoMax (DIM; Hjelm et al., 2019) uses estimators like MINE (Belghazi et al., 2018) and noise-contrastive estimation (NCE; Gutmann & Hyvärinen, 2010) to estimate MI and maximize it for both local and global features in convolutional networks. Contrastive Predictive Coding (Oord et al., 2018) is another approach that maximizes MI by predicting the activations of a layer from the layer above using NCE.\nWe also point out that the estimate of mutual information due to the Neural Bayes parameterization in the Neural Bayes-MIM-v1 objective (Eq 8) turns out to be identical to the one proposed in IMSAT (Hu et al., 2017). However, there are important differences: 1. we introduce regularizations which significantly improve the performance of representations on downstream tasks compared to IMSAT (cf. table 3 in Hu et al. (2017)); 2. we provide theoretical justification for the parameterization used (lemma 1) and show in theorem 2 why it is feasible to compute high fidelity gradients using this objective in the mini-batch setting even though it contains the term E_x[L_k(x)]; the justification used in IMSAT, on the other hand, is that optimizing using mini-batches is equivalent to optimizing an upper bound of the original objective; 3. we perform extensive ablation studies exposing the importance of the introduced regularizations." }, { "heading": "3 NEURAL BAYES", "text": "Consider a data distribution p(x) from which we have access to i.i.d. samples x ∈ R^n. We assume that this marginal distribution is a union of K conditionals, where the k-th conditional's density is denoted by p(x|z = k) and the corresponding probability mass by p(z = k) ∈ R_+. Here z is a discrete random variable with K states. We now introduce the parameterization that allows us to implicitly factorize any marginal distribution into conditionals as described above. Aside from the technical details, the key idea behind this parameterization is the Bayes' rule.\nLemma 1. Let p(x|z = k) and p(z) be any conditional and marginal distribution defined for a continuous random variable x and a discrete random variable z.
If E_{x∼p(x)}[L_k(x)] ≠ 0 ∀k ∈ [K], then there exists a non-parametric function L(x) : R^n → R_+^K for any given input x ∈ R^n with the property Σ_{k=1}^K L_k(x) = 1 ∀x such that,\np(x|z = k) = L_k(x) · p(x) / E_{x∼p(x)}[L_k(x)], p(z = k) = E_x[L_k(x)], p(z = k|x) = L_k(x) (1)\nThus the function L can be seen as a form of soft categorization of input samples. In practice, we use a neural network with sufficient capacity and a softmax output to realize this function L. We name our parameterization method Neural Bayes and replace L with L_θ to denote the parameters of the network. By imposing different conditions on the structure of z through meaningful objectives, we will get qualitatively different kinds of factorization of the marginal p(x), and therefore the function L_θ will encode the posterior for that factorization. In summary, if one formulates any objective that involves the terms p(x|z), p(z) or p(z|x), where x is an observed random variable and z is a discrete latent random variable, then these terms can be substituted with L_k(x) · p(x) / E_x[L_k(x)], E_x[L_k(x)] and L_k(x) respectively; a sketch of these substitutions is shown below.\nOn an important note, the Neural Bayes parameterization requires using the term E_x[L_k(x)], through which computing gradients is infeasible in general. A general discussion around this can be found in appendix D. Nonetheless, we show that mini-batch gradients can have good fidelity for one of the objectives we propose using our parameterization. In the next two sections, we explore two different ways of factorizing p(x), resulting in qualitatively different goals of unsupervised representation learning.
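As an illustration of these substitutions (our own sketch, not code from the paper), the snippet below realizes L with a small softmax network: the network output plays the role of p(z|x), its batch mean estimates p(z), and the ratio L_k(x)/E_x[L_k(x)] gives the density ratio p(x|z = k)/p(x).

```python
import torch
import torch.nn as nn

class NeuralBayes(nn.Module):
    """Softmax network realizing the Neural Bayes parameterization."""
    def __init__(self, in_dim, k):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 400), nn.ReLU(),
                                 nn.Linear(400, k))

    def forward(self, x):
        return torch.softmax(self.net(x), dim=1)  # rows sum to 1: p(z | x)

x = torch.randn(512, 2)            # a batch of samples approximating p(x)
model = NeuralBayes(in_dim=2, k=3)
L = model(x)                       # L[i, k] = p(z = k | x_i)
p_z = L.mean(dim=0)                # Monte-Carlo estimate of p(z = k) = E_x[L_k(x)]
ratio = L / p_z                    # ratio[i, k] = p(x_i | z = k) / p(x_i)
```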
" }, { "heading": "4 DISJOINT MANIFOLD SEPARATION (DMS)", "text": "In many cases, the support of a distribution may be a set of disjoint manifolds. In this task, our goal is to label samples from each disjoint manifold with a distinct value. This formulation can be seen as a generalization of subspace clustering (Ma et al., 2008), where the goal is to identify disjoint affine subspaces. To make the problem concrete, we first formalize the definition of a disjoint manifold.\nDefinition 1. (Connected Set) We say that a set S ⊂ R^n is a connected set (disjoint manifold) if for any x, y ∈ S, there exists a continuous path between x and y such that all the points on the path also belong to S.\nTo identify such disjoint manifolds in a distribution, we exploit the observation that only partitions that separate one disjoint manifold from the others have a high divergence between the respective conditional distributions, while partitions that cut through a disjoint manifold result in conditional distributions with a low divergence between them. Therefore, the objective we propose for this task is to partition the unlabeled data distribution p(x) into conditional distributions q_i(x)'s such that a divergence between them is maximized. By doing so we recover the conditional distributions defined over the disjoint manifolds (we prove the optimality of this objective in theorem 1). We begin with two disjoint manifolds and extend this idea to multiple disjoint manifolds in appendix B.\nLet J be a symmetric divergence (e.g., Jensen-Shannon divergence, Wasserstein divergence, etc.), and let q_0 and q_1 be the disjoint conditional distributions that we want to learn. Then the aforementioned objective can be written formally as follows:\nmax_{q_0, q_1, π∈(0,1)} J(q_0(x) || q_1(x)) (2)\ns.t. ∫_x q_0(x) = 1, ∫_x q_1(x) = 1, q_1(x) · π + q_0(x) · (1 − π) = p(x).\nSince our goal is simply to assign labels to data samples x according to which manifold they belong to, instead of learning conditional distributions as achieved by Eq. (2), we would like to learn a function L(x) which maps samples from disjoint manifolds to distinct labels. To do so, below we derive an objective equivalent to Eq. (2) that learns such a function L(x).\nProposition 1. (Neural Bayes-DMS) Let L(x) : R^n → [0, 1] be a non-parametric function for any given input x ∈ R^n, and let J be the Jensen-Shannon divergence. Define scalars f_1(x) := L(x) / E_x[L(x)] and f_0(x) := (1 − L(x)) / (1 − E_x[L(x)]). Then the objective in Eq. (2) is equivalent to,\nmax_L (1/2) · E_x[ f_1(x) · log( f_1(x) / (f_1(x) + f_0(x)) ) ] + (1/2) · E_x[ f_0(x) · log( f_0(x) / (f_1(x) + f_0(x)) ) ] + log 2\ns.t. E_x[L(x)] ∉ {0, 1}. (3)\nOptimality: We now prove the optimality of the proposed objective towards discovering disjoint manifolds present in the support of a probability density function p(x).\nTheorem 1. (optimality) Let p(x) be a probability density function over R^n whose support is the union of two non-empty connected sets (definition 1) S_1 and S_2 that are disjoint, i.e., S_1 ∩ S_2 = ∅. Let L(x) ∈ [0, 1] belong to the class of continuous functions which is learned by solving the objective in Eq. (3). Then the objective in Eq. (3) is maximized if and only if one of the following is true:\nL(x) = 0 ∀x ∈ S_1 and L(x) = 1 ∀x ∈ S_2, or L(x) = 1 ∀x ∈ S_1 and L(x) = 0 ∀x ∈ S_2.\nThe above theorem proves that optimizing the derived objective over the space of functions L implicitly partitions the data distribution into maximally separated conditionals by assigning a distinct label to points in each manifold. Most importantly, the theorem shows that the continuity condition on the function L(x) plays an important role. Without this condition, the network cannot identify disjoint manifolds. The extension to the multiple disjoint manifold case can be found in section B in the appendix." }, { "heading": "4.1 IMPLEMENTATION DETAILS", "text": "Prior Collapse: The constraint in proposition 1 is a boundary condition required for technical reasons in lemma 1. In practice we do not worry about it because optimization itself avoids situations where E_x[L(x)] ∈ {0, 1}. To see the reason behind this, note that except when initialized in a way such that E_x[L(x)] ∈ {0, 1}, the log terms are negative by definition. Since the denominators of f_1 and f_0 are E_x[L(x)] and 1 − E_x[L(x)] respectively, the objective is maximized when E_x[L(x)] moves away from 0 and 1. Thus, for any reasonable initialization, optimization itself pushes E_x[L(x)] away from 0 and 1.\nSmoothness of L_θ(·): As shown in theorem 1, the proposed objectives can optimally recover disjoint manifolds only when the function L_θ(·) is continuous. In practice we found that enforcing the function to be smooth (and thus also continuous) helps significantly. Therefore, after experimenting with a handful of heuristics for regularizing L_θ, we found the following finite difference Jacobian regularization to be effective (L(·) can be scalar or vector valued),\nR_c = (1/B) Σ_{i=1}^B ‖L_θ(x_i) − L_θ(x_i + ζ · δ̂_i)‖² / ζ² (4)\nwhere δ̂_i := δ_i / ‖δ_i‖_2 is a normalized noise vector computed independently for each sample x_i in a batch of size B as δ_i := X v_i. Here X ∈ R^{n×B} is the matrix containing the batch of samples, and each dimension of v_i ∈ R^B is sampled i.i.d. from a standard Gaussian. This computation ensures that the perturbation lies in the span of the data, which we found to be important; a sketch of this regularizer follows.
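The sketch below is our own PyTorch rendering of Eq. (4), not the authors' released code; `L` is the label network, `X` holds one batch as rows, and `zeta` is the noise scale ζ whose sampling is described next.

```python
import torch

def jacobian_reg(L, X, zeta):
    """Finite-difference smoothness penalty R_c of Eq. (4).

    L    : callable mapping a (B, n) batch to network outputs
    X    : (B, n) batch of samples
    zeta : scalar noise magnitude
    """
    B = X.shape[0]
    v = torch.randn(B, B, device=X.device)            # v_i ~ N(0, I), one per sample
    delta = v @ X                                     # each row lies in the span of the data
    delta = delta / delta.norm(dim=1, keepdim=True)   # normalized directions
    diff = (L(X) - L(X + zeta * delta)).reshape(B, -1)
    return diff.pow(2).sum(dim=1).mean() / zeta ** 2
```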
Finally, ζ is the scale of the normalized noise added to all samples in a batch. In our experiments, since we always normalize the datasets to have zero mean and unit variance across all dimensions, we sample ζ ∼ N(0, 0.1²).\nImplementation: We implement the binary-partition Neural Bayes-DMS using the Monte-Carlo sampling approximation of the following objective,\nmin_θ (1/2) · E_x[ f_1(x) · log( 1 + f_0(x)/f_1(x) ) ] + (1/2) · E_x[ f_0(x) · log( 1 + f_1(x)/f_0(x) ) ] + β · R_c (5)\nwhere f_1(x) := L_θ(x) / E_x[L_θ(x)] + ε and f_0(x) := (1 − L_θ(x)) / (1 − E_x[L_θ(x)]) + ε. Here ε = 10^{−7} is a small scalar used to prevent numerical instability, and β is a hyper-parameter that controls the continuity of L. The multi-partition case can be implemented in a similar way. Due to the need for computing E_x[L_θ(x)] in the objective, optimizing it using gradient descent methods with small batch sizes is not possible. Therefore, we experiment with this method on datasets where gradients can be computed with the very large batch size needed to approximate the gradient through E_x[L_θ(x)] sufficiently well.\n4.2 EXPERIMENTS\nClustering in general is an ill-posed problem. However, in our problem setup the definition is precise, i.e., our goal is to optimally label all the disjoint manifolds present in the support of a distribution. Since this is a unique goal that is not generally considered in the literature, as empirical verification we show qualitative results on 2D synthetic datasets in figure 1; the top two sub-figures have 2 clusters and the bottom two have 3 clusters. For all experiments we use a 4 layer MLP with 400 hidden units each, batch norm, ReLU activations, and a Softmax activation in the last layer. In all cases we train using the Adam optimizer with a learning rate of 0.001, a batch size of 400 and no weight decay, and train until convergence. The regularization coefficient β was chosen from [0.5, 6] so as to result in optimal clustering. For generality, in these experiments the 2D datasets were projected to high dimensions (512) by appending 510 dimensions of 0 entries to each sample and then randomly rotated before performing clustering. The datasets were then projected back to the original 2D space for visualizing predictions. Additional experiments can be found in appendix C." }, { "heading": "5 MUTUAL INFORMATION MAXIMIZATION (MIM)", "text": "Suppose we want to find a discrete latent representation z (with K states) for the distribution p(x) such that the mutual information MI(x, z) is maximized (Linsker, 1988). Such an encoding z must be very efficient, since it has to capture the maximum possible information about the continuous distribution p(x) in just K discrete states. Assuming we can learn such an encoding, we are interested in computing p(z|x), since it tells us the likelihood of x belonging to each discrete state of z, thereby performing a soft categorization which may be useful for downstream tasks. In the proposition below, we show an objective for computing p(z|x) for a discrete latent representation z that maximizes MI(x, z).\nProposition 2. (Neural Bayes-MIM-v1) Let L(x) : R^n → R_+^K be a non-parametric function for any given input x ∈ R^n with the property Σ_{k=1}^K L_k(x) = 1 ∀x. Consider the following objective,\nL* = arg max_L E_x[ Σ_{k=1}^K L_k(x) log( L_k(x) / E_x[L_k(x)] ) ] (6)\nThen L*_k(x) = p(z* = k|x), where z* ∈ arg max_z MI(x, z).\nThe proof essentially involves expressing MI in terms of p(z|x) and p(z), which can be substituted using the Neural Bayes parameterization.
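For concreteness, the closed-form estimate in Eq. (6) reduces to a few lines; the following is a hedged sketch of ours, where `L` is a (batch × K) matrix of softmax outputs and the small `eps` guarding the logarithms is a numerical choice of ours, analogous to the ε used later in Eq. (10).

```python
import torch

def mutual_info(L, eps=1e-7):
    """Monte-Carlo estimate of MI(x, z) as in Eq. (6).

    L : (B, K) softmax outputs, L[i, k] approximating p(z = k | x_i).
    """
    p_z = L.mean(dim=0)  # p(z = k) = E_x[L_k(x)]
    return (L * (torch.log(L + eps) - torch.log(p_z + eps))).sum(dim=1).mean()

# Training maximizes this quantity, i.e., minimizes -mutual_info(L).
```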
However, the objective proposed in the above proposition poses a challenge: it contains the term E_x[L_k(x)], for which computing a high fidelity gradient in a batch setting is problematic (see appendix D). We can nevertheless overcome this problem for the MIM objective, because it turns out that the gradients through certain terms are 0, as shown by the following theorem.\nTheorem 2. (Gradient Simplification) Denote,\nJ(θ) = −E_x[ Σ_{k=1}^K L_{θk}(x) log( L_{θk}(x) / E_x[L_{θk}(x)] ) ], Ĵ(θ) = −E_x[ Σ_{k=1}^K L_{θk}(x) log〈 L_{θk}(x) / E_x[L_{θk}(x)] 〉 ] (7)\nwhere 〈·〉 indicates that gradients are not computed through the argument. Then ∂J(θ)/∂θ = ∂Ĵ(θ)/∂θ.\nThe above theorem implies that as long as we plug in a decent estimate of E_x[L_{θk}(x)], unbiased gradients can be computed using randomly sampled mini-batches. Note that the objective can be re-written as,\nmin_θ −E_x[ Σ_{k=1}^K L_{θk}(x) log〈L_{θk}(x)〉 ] + Σ_{k=1}^K E_x[L_k(x)] log〈E_x[L_k(x)]〉 (8)\nThe second term is the negative entropy of the discrete latent representation p(z = k) := E_x[L_k(x)], which acts as a uniform prior. In other words, this term encourages learning a latent code z such that all states of z activate uniformly over the marginal input distribution x. This is an attribute of distributed representations, which is a fundamental goal in deep learning. We can therefore further encourage this behavior by treating the coefficient of this term as a hyper-parameter. In our experiments we confirm both the distributed representation behavior of this term as well as the benefit of using a tunable coefficient." }, { "heading": "5.1 IMPLEMENTATION DETAILS", "text": "Alternative Formulation of Uniform Prior: In practice we found that an alternative formulation of the second term in Eq 8 results in better performance and more interpretable filters. Specifically, we replace it with the following cross-entropy formulation,\nR_p(θ) := −Σ_{k=1}^K [ (1/K) log(E_x[L_k(x)]) + ((K − 1)/K) log(1 − E_x[L_k(x)]) ] (9)\nWhile both the second term in Eq 8 and R_p(θ) are minimized when E_x[L_k(x)] = 1/K, the latter formulation provides much stronger gradients during optimization when E_x[L_k(x)] approaches 1 (see appendix F.1 for details); E_x[L_k(x)] = 1 is undesirable since it discourages distributed representations. Unbiased gradients can be computed through Eq 9 as long as a good estimate of E_x[L_k(x)] is plugged in. Also note that the condition E_x[L_k(x)] ∉ {0, 1} in lemma 1 is met by the Neural Bayes-MIM objective implicitly during optimization, as discussed above in regards to distributed representations.\nImplementation: The final Neural Bayes-MIM-v2 objective is,\nmin_θ −E_x[ Σ_{k=1}^K L_{θk}(x) log〈L_{θk}(x) + ε〉 ] + (1 + α) · R_p(θ) + β · R_c (10)\nwhere α and β are hyper-parameters, R_c is the smoothness regularization introduced in Eq. 4, and ε = 10^{−7} is a small scalar used to prevent numerical instability. Qualitatively, we find that the regularization R_c prevents filters from memorizing the input samples. Finally, we apply the first two terms in Eq 10 to all hidden layers of a deep network at different scales (computed by spatial average pooling followed by a Softmax). These two regularizations gave a significant performance boost. Thorough implementation details are provided in appendix E. For brevity, we refer to our final objective as Neural Bayes-MIM in the rest of the paper; a sketch of the resulting loss follows.
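Below is a minimal sketch of Eq. (10) for a single softmax state (our own reading; the multi-scale sum over hidden layers of appendix E is omitted here). The `detach()` call implements the stop-gradient 〈·〉 of theorem 2, and `rc` is the smoothness penalty of Eq. (4) sketched earlier.

```python
import torch

def neural_bayes_mim_loss(L, alpha, beta, rc, eps=1e-7):
    """Neural Bayes-MIM-v2 objective (Eq. 10) for one softmax state L of shape (B, K)."""
    K = L.shape[1]
    p_z = L.mean(dim=0)                                   # estimate of E_x[L_k(x)]
    # First term of Eq. 10; the log argument carries no gradient (theorem 2).
    cond = -(L * torch.log(L.detach() + eps)).sum(dim=1).mean()
    # Cross-entropy uniform prior R_p of Eq. 9.
    rp = -(torch.log(p_z + eps) / K
           + (K - 1) / K * torch.log(1 - p_z + eps)).sum()
    return cond + (1 + alpha) * rp + beta * rc
```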
To compute a good estimate of the gradients in practice, we use the following trick: during optimization, we compute gradients using a sufficiently large mini-batch of size MBS (e.g., 500) that fits in memory (so that the estimate of E_x[L_k(x)] is reasonable), accumulate these gradients until BS samples are seen (e.g., 2000), and average them before updating the parameters to further reduce the estimation error." }, { "heading": "5.2 EXPERIMENTS", "text": "Instead of aiming for state-of-the-art results on self-supervised learning tasks, our goal in this section is to conduct experiments using Neural Bayes-MIM to understand the behavior of the algorithm and the hyper-parameters involved, and to compare performance with DIM, a closely related self-supervised learning method. The use of data augmentation, additional architectures and other regularizations similar to Bachman et al. (2019) is left as future work. Therefore, we use the following simple CNN encoder architecture Enc in our experiments: C(200, 3, 1, 0) − P(2, 2, 0, max) − C(500, 3, 1, 0) − C(700, 3, 1, 0) − P(2, 2, 0, max) − C(1000, 3, 1, 0), where we use the shorthand a) C(number of filters, filter size, stride, padding) for a conv layer and b) P(kernel size, stride, padding, pool mode) for pooling. For an input image x of size 32 × 32 × 3, the output of this encoder Enc(x) has size 2 × 2 × 1000. The encoder is initialized using orthogonal initialization (Saxe et al., 2013), batch normalization (Ioffe & Szegedy, 2015) is used after each convolution layer, and ReLU non-linearities are used. All datasets are normalized to have dimension-wise zero mean and unit variance. Early stopping in all experiments is done using the test set (following previous work). We broadly follow the experimental setup of Hjelm et al. (2019). We do not use any data augmentation in our experiments. After training the encoder, we freeze its features and train a 1 hidden layer (200 units) classifier to get the final test accuracy." }, { "heading": "5.2.1 ABLATION STUDIES", "text": "Behavior of Neural Bayes-MIM-v1 (Eq 8) vs Neural Bayes-MIM (v2, Eq 9): The experiments and details are discussed in appendix F.2. The main differences are: 1. the majority of the filters learned by the v1 objective are dead, as opposed to the v2 objective which encourages distributed representations; 2. the performance of v2 is better than that of the v1 objective.\nPerformance due to Regularizations and State Scaling: We now evaluate the effects of the various components involved in the Neural Bayes-MIM objective: the coefficients α and β, and applying the objective at different scales of the hidden states. We use the CIFAR-10 dataset for these experiments.\nIn the first experiment, for each value of the number of different scales considered, we vary α and β and record the final performance, thus capturing the variation in performance due to all three components. We consider two scaling configurations: 1. no pooling is applied to the hidden layers; 2. for each hidden layer, we spatially average pool the state using a 2 × 2 pooling filter with a stride of 2. For the encoder used in our experiments (which has 4 internal hidden layers post ReLU), this gives us 4 and 8 states respectively (including the original un-scaled hidden layers) to which the Neural Bayes-MIM objective is applied. After getting all the states, we apply the Softmax activation to each state along the channel dimension so that the Neural Bayes parameterization holds. Thus for states with a height and width dimension, the objective is applied to each spatial (x, y) location separately and averaged, as the sketch below illustrates.
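A possible implementation of this state-construction step (an illustrative sketch consistent with appendix E, not the released code) is:

```python
import torch
import torch.nn.functional as F

def multi_scale_states(activations, pool=True):
    """Build softmax states from post-ReLU feature maps for Neural Bayes-MIM.

    activations : list of (B, C, H, W) tensors, one per hidden layer.
    Returns a list of (B*H'*W', C) matrices; each row is a channel softmax
    at one spatial location, so the loss can be averaged over locations.
    """
    states = list(activations)
    if pool:  # add 2x2 average-pooled copies, shrinking the kernel on tiny maps
        for h in activations:
            k = min(2, h.shape[2], h.shape[3])
            states.append(F.avg_pool2d(h, kernel_size=k, stride=2))
    outs = []
    for h in states:
        B, C, H, W = h.shape
        flat = h.permute(0, 2, 3, 1).reshape(-1, C)  # one row per (x, y) location
        outs.append(torch.softmax(flat, dim=1))
    return outs

# Total loss: average neural_bayes_mim_loss over all returned states.
```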
Also, for states with height (or width) less than the pooling size, we use the height (or width) as the pooling size.\nWe train Neural Bayes-MIM on the full training set for 100 epochs using Adam with learning rate 0.001 (other Adam hyper-parameters are standard), mini-batch size 500 and batch size 2000, and 0 weight decay. In the first 32 experiments, α and β are sampled uniformly from [0, 10] and [0, 5] respectively. In the next 5 experiments, α is set to 0 while β is sampled uniformly. In the next 5 experiments, β is set to 0 while α is sampled uniformly. Thus in total we run 42 experiments for each scaling configuration considered.\nOnce we get a trained Enc(x), we train a 1 hidden layer (with 200 units) MLP classifier on the frozen features from Enc(x) using the labels in the training set. This training is done for 100 epochs using Adam with learning rate 0.001 (other Adam hyper-parameters are standard), batch size 128 and weight decay 0.\nAs a baseline for these experiments, we use a randomly initialized encoder Enc(x). Since there are no tunable hyper-parameters in this case, we perform a grid search on the classifier hyper-parameters. Specifically, we choose weight decay from {0, 1e−5, 5e−5, 1e−4}, batch size from {128, 256}, and learning rate from {0.0001, 0.001}. This yields a total of 16 configurations. The test accuracy from these runs varied between 58.59% and 67.97%. We consider 67.97% as our baseline.\nThe performance of encoders under the aforementioned configurations is shown in figure 2. It is clear that both the hyper-parameters α and especially β play an important role in the quality of the representations learned. Also, applying Neural Bayes-MIM at different scales of the network states significantly improves the average and best performance.\nFilter visualizations, convergence experiments and the effect of batch size are shown in appendix F.3." }, { "heading": "5.2.2 FINAL CLASSIFICATION PERFORMANCE", "text": "We compare the final test accuracy of Neural Bayes-MIM with two baselines, a random encoder (described in the ablation studies) and Deep InfoMax (Hjelm et al., 2019), on the benchmark image datasets CIFAR-10 and CIFAR-100 (Krizhevsky, 2009) and STL-10 (Coates et al., 2011). Random Network refers to the use of a randomly initialized network. The experimental details for it are identical to those in the ablation above, involving a hyper-parameter search over 16 configurations done for each dataset separately.\nDIM results are reported from Hjelm et al. (2019). We omit the STL-10 number for DIM because we resize images to a much smaller size of 32 × 32 in our runs instead of the 64 × 64 used in DIM. The following describes the experimental details for Neural Bayes-MIM. We use α = 2 and β = 4 (chosen roughly by examining figure 2), and MBS=500, BS=4000 in all the experiments. Note that these values are not tuned for STL-10 and CIFAR-100. For CIFAR-10 and STL-10 each, we run 4 configurations of Neural Bayes-MIM over the hyper-parameters learning rate ∈ {0.0001, 0.001} and weight decay ∈ {0, 0.00001}. For each run, we then train a 1 hidden layer (200 units) classifier on top of the frozen features with learning rate ∈ {0.0001, 0.001}. We report the best performance over all runs. For CIFAR-100, we take the encoder that produces the best performance on CIFAR-10 and train a classifier with the 2 learning rates, reporting the best of the 2 runs.\nTable 1 reports the classification performance of all the methods. We note that all experiments were done with a CNN architecture without any data augmentation.
Neural Bayes-MIM outperforms the baseline methods in general." }, { "heading": "6 CONCLUSION", "text": "We proposed a parameterization method that can be used to express an arbitrary set of distributions p(x|z), p(z|x) and p(z) in closed form using a neural network with sufficient capacity, which can in turn be used to formulate new objective functions. We formulated two different objectives that use this parameterization, aimed at different goals of unsupervised learning: identification of disjoint manifolds in the support of continuous distributions, and learning deep network features using the infomax principle. We focused on theoretical analysis for both objectives and presented preliminary experiments supporting the theoretical claims." }, { "heading": "A PROOFS FOR NEURAL BAYES-DMS (BINARY CASE)", "text": "Proposition 1. (Neural Bayes-DMS, proposition 1 in main text) Let L(x) : R^n → [0, 1] be a non-parametric function for any given input x ∈ R^n, and let J be the Jensen-Shannon divergence. Define scalars f_1(x) := L(x) / E_x[L(x)] and f_0(x) := (1 − L(x)) / (1 − E_x[L(x)]). Then the objective in Eq. (2) is equivalent to,\nmax_L (1/2) · E_x[ f_1(x) · log( f_1(x) / (f_1(x) + f_0(x)) ) ] + (1/2) · E_x[ f_0(x) · log( f_0(x) / (f_1(x) + f_0(x)) ) ] + log 2 (11)\ns.t. E_x[L(x)] ∉ {0, 1} (12)\nProof: Using the Neural Bayes parameterization from lemma 1 for the binary case, we set,\nq_1(x) := L(x) · p(x) / E_{x∼p(x)}[L(x)], q_0(x) := (1 − L(x)) · p(x) / (1 − E_{x∼p(x)}[L(x)]) (13)\nThese parameterizations therefore automatically satisfy the constraints in Eq. (2). Finally, using the definition of the JS divergence, the maximization problem in Eq. (2) can be written as,\nmax_{q_0, q_1} (1/2) · ∫_x [ q_1(x) log( q_1(x) / (0.5 · (q_0(x) + q_1(x))) ) + q_0(x) log( q_0(x) / (0.5 · (q_0(x) + q_1(x))) ) ] (14)\nSubstituting q_0 and q_1 with their respective parameterizations and using the definitions of f_0(x) and f_1(x) completes the proof.\nTheorem 1. (optimality, theorem 1 in main text) Let p(x) be a probability density function over R^n whose support is the union of two non-empty connected sets (definition 1) S_1 and S_2 that are disjoint, i.e., S_1 ∩ S_2 = ∅. Let L(x) ∈ [0, 1] belong to the class of continuous functions which is learned by solving the objective in Eq. (3). Then the objective in Eq. (3) is maximized if and only if one of the following is true:\nL(x) = 0 ∀x ∈ S_1 and L(x) = 1 ∀x ∈ S_2, or L(x) = 1 ∀x ∈ S_1 and L(x) = 0 ∀x ∈ S_2 (15)\nProof: The two cases exist in the theorem due to symmetry. Recall the definitions of f_0(x) and f_1(x) in Eq. (3),\nf_1(x) := L(x) / E_x[L(x)], f_0(x) := (1 − L(x)) / (1 − E_x[L(x)]) (16)\nwhere L(x) ∈ [0, 1], and for a feasible L(x), π := E_x[L(x)] ∈ (0, 1) due to the conditions specified in this theorem. Thus f_1(x) ∈ [0, 1/π] and f_0(x) ∈ [0, 1/(1 − π)]. By design, the terms log( f_1(x) / (f_1(x) + f_0(x)) ) and log( f_0(x) / (f_1(x) + f_0(x)) ) are non-positive. Thus, for any x ∈ S_1 ∪ S_2,\nF(x) = f_1(x) · log( f_1(x) / (f_1(x) + f_0(x)) ) + f_0(x) · log( f_0(x) / (f_1(x) + f_0(x)) ) (17)\nis maximized only when L(x) = 0 or L(x) = 1, leading to F(x) = 0. Therefore, the objective in Eq. (3) is maximized by setting L(x) = 0 or L(x) = 1 ∀x ∈ S_1 ∪ S_2. Finally, since L(x) is a continuous function, there exist no x_1, x_2 ∈ S_1 such that L(x_1) = 0 and L(x_2) = 1. We prove this by contradiction. Suppose there exists a pair (x_1, x_2) of this kind. Then along any path connecting x_1 and x_2 within S_1, there must exist a point where L(x) is not continuous, since L(x) = 0 or L(x) = 1 ∀x ∈ S_1 ∪ S_2 to satisfy the maximization condition. This is a contradiction.
By symmetry, the same argument can be proved for x_1, x_2 ∈ S_2. Therefore one of the two cases mentioned in the theorem must be the optimal solution for L(x) in Eq. (3). Thus we have proved the claim." }, { "heading": "B NEURAL BAYES-DMS: EXTENSION TO MULTIPLE PARTITIONS", "text": "In order to extend our proposal to multiple partitions (say K), the idea is to find a conditional distribution q_k (k ∈ [K]) corresponding to each of the K partitions such that the divergence between the conditional distribution of every partition and the conditional distribution of the combined remaining partitions is maximized. Specifically, we propose the following primary objective,\nmax_{q_k, π_k ≠ 0 ∀k∈[K]} (1/K) Σ_{k=1}^K J(q_k(x) || q̄_k(x)) (18)\ns.t. ∫_x q_k(x) = 1 ∀k ∈ [K] (19)\nΣ_{k=1}^K q_k(x) · π_k = p(x) (20)\nΣ_{k=1}^K π_k = 1 (21)\nwhere q̄_k(x) is the conditional distribution corresponding to the full data distribution excluding the partition defined by q_k(x). Formally,\nq̄_k(x) := (p(x) − q_k(x) · π_k) / (1 − π_k) (22)\nThen the theorem below shows an equivalent way of solving the above objective.\nTheorem 2. Let L(x) : R^n → R_+^K be a non-parametric function for any given input x ∈ R^n with the property Σ_{k=1}^K L_k(x) = 1 ∀x, and let J be the Jensen-Shannon divergence. Define scalars f_k(x) := L_k(x) / E_x[L_k(x)] and f̄_k(x) := (1 − L_k(x)) / (1 − E_x[L_k(x)]). Then the objective in Eq. (18) is equivalent to,\nmax_{L_k ∀k∈[K]} (1/2) · E_x[ Σ_{k=1}^K f_k(x) · log( f_k(x) / (f_k(x) + f̄_k(x)) ) + f̄_k(x) · log( f̄_k(x) / (f̄_k(x) + f_k(x)) ) ] + log 2 (23)\ns.t. E_x[L_k(x)] = π_k\nHere L_k(x) denotes the k-th unit of L(x).\nProof: Similar to theorem 1, the main idea is to parameterize q_k and q̄_k as follows,\nq_k(x) := L_k(x) · p(x) / E_{x∼p(x)}[L_k(x)], q̄_k(x) := (1 − L_k(x)) · p(x) / (1 − E_{x∼p(x)}[L_k(x)]) (24)\nTo verify that these parameterizations are valid, note that\n∫_x q_k(x) = ∫_x L_k(x) · p(x) / E_{x∼p(x)}[L_k(x)] = 1 (25)\nSimilarly, ∫_x q̄_k(x) = 1. To verify that the second constraint is satisfied, we use the above parameterization, substitute E_x[L_k(x)] = π_k, and get,\nΣ_{k=1}^K ( L_k(x) · p(x) / π_k ) · π_k = p(x) · ( Σ_{k=1}^K L_k(x) ) (26)\n= p(x) (27)\nwhere the last equality uses the definition of L(x). Also notice that each π_k ∈ [0, 1], and thus E_x[L_k(x)] = π_k is feasible for any arbitrary distribution q_k(x) when L_k(x) ≥ 0.\nFinally, using the proposed parameterization we have,\nq̄_k(x) = (p(x) − q_k(x) · π_k) / (1 − π_k) (28)\n= p(x) · (1 − L_k(x) · π_k / E_x[L_k(x)]) / (1 − π_k) (29)\n= p(x) · (1 − L_k(x)) / (1 − E_x[L_k(x)]) (30)\n= f̄_k(x) · p(x) (31)\nwhere we have used the fact that E_x[L_k(x)] = π_k. Using the definition of the JS divergence, the max problem in Eq. (18) can be written as,\nmax_{L_k ∀k∈[K]} (1/2) · Σ_{k=1}^K ∫_x [ q_k(x) · log( q_k(x) / (0.5 · (q_k(x) + q̄_k(x))) ) + q̄_k(x) · log( q̄_k(x) / (0.5 · (q̄_k(x) + q_k(x))) ) ] (32)\nSubstituting q_k and q̄_k with their respective parameterizations and using the definitions of f_k(x) and f̄_k(x) completes the proof.\nIn terms of implementation, we propose to simply have K output units in the label generating network L_θ while sharing the rest of the network. Also, we use a Softmax activation at the output layer to satisfy the properties of L specified in the above theorem." }, { "heading": "C ADDITIONAL NEURAL BAYES-DMS EXPERIMENTS", "text": "We run an experiment on MNIST. We randomly split the training set into a 90%–10% training-validation split. In this experiment, we train a CNN with the following architecture: C(100, 3, 1, 0) − P(2, 2, 0, max) − C(100, 3, 1, 0) − C(200, 3, 1, 0) − P(2, 2, 0, max) − C(500, 3, 1, 0) − P(·, ·, ·, avg) − FC(10).
Here P(·, ·, ·, avg) denotes that the entire spatial field is average pooled to result in a 1 × 1 height-width, and FC(10) denotes a fully connected layer with output dimension 10. Finally, Softmax is applied at the output and the network is trained using the Neural Bayes-DMS objective. We optimize the objective using Adam with learning rate 0.001, batch size 5000, 0 weight decay for 100 epochs (other Adam hyper-parameters are kept standard). We use β = 1 for the smoothness regularization coefficient. Once this network is trained, we train a linear classifier on top of its 10 dimensional output using Adam with identical configurations, except that a batch size of 128 is used. We early stop on the validation set of MNIST and report the test accuracy using that model. The classifier reaches 99.22% test accuracy. This experiment shows that the MNIST classes lie on nearly disjoint manifolds and that Neural Bayes-DMS can correctly label them. As a baseline, a linear classifier trained on features from a randomly initialized identical CNN architecture reaches 42.97%.\nD GRADIENT COMPUTATION PROBLEM FOR THE E_x[L_θ(x)] TERM\nThe Neural Bayes parameterization contains the term E_x[L_θ(x)]. Computing an unbiased gradient through this term is in general difficult without the use of very large batch sizes, even though the quantity E_x[L_θ(x)] itself may have a good estimate using very few samples. For instance, consider the scalar function ψ(t) = 1 + 0.01 sin(ωt). Consider the scenario when ω → ∞. The quantity E[ψ(t)] can be estimated very accurately using even one example. Further, E[ψ(t)] = 1, hence ∂E[ψ(t)]/∂t = 0. However, when using a finite number of samples, the approximation of ∂E[ψ(t)]/∂t can be a very high variance estimate due to improper cancelling of the gradient terms from individual samples.\nIn the case of Neural Bayes-MIM we found that the gradients through terms involving E_x[L_θ(x)] were 0. This allows us to estimate gradients for this objective reliably in the mini-batch setting. But in general it may be challenging to do so, and solving objectives using the Neural Bayes parameterization may require a customized work-around for each objective; the toy computation below illustrates the issue numerically.
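As a concrete (and entirely our own) instantiation of the ψ(t) example, suppose each sample carries an independent random phase, ψ_i(t) = 1 + 0.01 sin(ωt + φ_i), so that E[ψ(t)] = 1 exactly; a few samples estimate this expectation almost perfectly, yet the mini-batch gradient with respect to t is enormous even though the true gradient is 0.

```python
import math
import torch

omega, B = 1e6, 32                           # very high frequency, small batch
t = torch.tensor(0.3, requires_grad=True)
phi = 2 * math.pi * torch.rand(B)            # per-sample randomness (our choice)
psi = 1 + 0.01 * torch.sin(omega * t + phi)  # psi_i(t) = 1 + 0.01 sin(wt + phi_i)

est = psi.mean()                             # close to E[psi(t)] = 1 even for tiny B
est.backward()
print(float(est), float(t.grad))             # value near 1; gradient magnitude is
                                             # roughly 0.01*omega/sqrt(B), not 0
```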
E IMPLEMENTATION DETAILS OF THE NEURAL BAYES-MIM OBJECTIVE\nWe apply the Neural Bayes-MIM objective (Eq 10) to all the hidden layers at different scales (using average pooling). We now discuss its implementation details. Consider the CNN architecture used in our experiments: C(200, 3, 1, 0) − P(2, 2, 0, max) − C(500, 3, 1, 0) − C(700, 3, 1, 0) − P(2, 2, 0, max) − C(1000, 3, 1, 0). Let h_i (i ∈ {0, 1, 2, 3}) be the 4 hidden layer ReLU outputs after the 4 convolution layers. For an input of size 32 × 32 × 3, all these hidden states have a height and width dimension in addition to the channel dimension. For a mini-batch B, these hidden states are therefore 4 dimensional tensors. Let these 4 dimensions for the i-th state be denoted by |B| × C_i × H_i × W_i, where the dimensions denote batch size, number of channels, height and width. Denote by S the Softmax function applied along the channel dimension, and by P the pooling P(2, 2, 0, avg). Further, denote h_i := P(h_{i−4}) (i ∈ {4, 5, 6, 7}) as the scaled versions of the original states computed by average pooling, and define the numbers C_i, H_i, W_i accordingly. Then the total Neural Bayes-MIM objective for this architecture is given by,\nmin_θ −(1/|B|) Σ_{x∈B} (1/8) Σ_{i=0}^{7} (1/(H_i W_i)) Σ_{h,w=1}^{H_i, W_i} Σ_{k=1}^{C_i} S(h_{ik,h,w}(x)) log〈S(h_{ik,h,w}(x)) + ε〉 + (1 + α) · R_p(θ) + β · R_c (33)\nwhere,\nR_p(θ) := −(1/8) Σ_{i=0}^{7} (1/(H_i W_i)) Σ_{h,w=1}^{H_i, W_i} Σ_{k=1}^{C_i} [ (1/C_i) log( (1/|B|) Σ_{x∈B} S(h_{ik,h,w}(x)) ) + ((C_i − 1)/C_i) log( 1 − (1/|B|) Σ_{x∈B} S(h_{ik,h,w}(x)) ) ] (34)\nand,\nR_c = (1/|B|) Σ_{x∈B} ‖P(h_{3k}(x)) − P(h_{3k}(x + ζ · δ̂))‖² / ζ² (35)\nwhere δ̂ := δ / ‖δ‖_2 is a normalized noise vector computed independently for each sample x in the batch B as,\nδ := Xv. (36)\nHere X ∈ R^{n×B} is the matrix containing the batch of samples, and each dimension of v ∈ R^B is sampled i.i.d. from a standard Gaussian. This computation ensures that the perturbation lies in the span of the data. Finally, ζ is the scale of the normalized noise added to all samples in a batch. In our experiments, since we always normalize the datasets to have zero mean and unit variance across all dimensions, we sample ζ ∼ N(0, 0.1²). Note that for the architecture used, P(h_{3k}(x)) results in an output with height and width equal to 1, hence the output is effectively a 2D matrix of size |B| × C_3. Finally, the gradient from this mini-batch is accumulated and averaged over multiple batches before updating the parameters for a more accurate estimate of gradients." }, { "heading": "F ADDITIONAL ANALYSIS OF NEURAL BAYES-MIM", "text": "F.1 GRADIENT STRENGTH OF UNIFORM PRIOR IN NEURAL BAYES-MIM-V1 (EQ 8) VS NEURAL BAYES-MIM-V2 (EQ 9)\nAs discussed in the main text, the term\nR_p^{v1}(θ) := Σ_{k=1}^K E_x[L_k(x)] log〈E_x[L_k(x)]〉 (37)\nacts as a uniform prior encouraging the representations to be distributed. However, gradients are much stronger when E_x[L_k(x)] approaches 1 for the alternative cross-entropy formulation,\nR_p^{v2}(θ) := −Σ_{k=1}^K [ (1/K) log E_x[L_k(x)] + ((K − 1)/K) log(1 − E_x[L_k(x)]) ] (38)\nTo see this, note that the gradient of R_p^{v1}(θ) is given by,\n∂R_p^{v1}(θ)/∂θ = Σ_{k=1}^K (∂E_x[L_{θk}(x)]/∂θ) log E_x[L_{θk}(x)] − Σ_{k=1}^K ∂E_x[L_{θk}(x)]/∂θ (39)\n= Σ_{k=1}^K (∂E_x[L_{θk}(x)]/∂θ) log E_x[L_{θk}(x)] − ∂E_x[Σ_{k=1}^K L_{θk}(x)]/∂θ (40)\n= Σ_{k=1}^K (∂E_x[L_{θk}(x)]/∂θ) log E_x[L_{θk}(x)] (41)\nwhere the last equality holds due to the linearity of expectation and because Σ_{k=1}^K L_{θk}(x) = 1 by design. On the other hand, the gradient of R_p^{v2}(θ) is given by,\n∂R_p^{v2}(θ)/∂θ = −Σ_{k=1}^K (1/K) ( 1/E_x[L_k(x)] − (K − 1)/(1 − E_x[L_k(x)]) ) ∂E_x[L_{θk}(x)]/∂θ (42)\nWhen the representation being learned is such that the marginal p(z) peaks along a single state k, i.e., E_x[L_k(x)] → 1 (making the representation degenerate), the gradient of the k-th term for v1 is given by,\n(∂E_x[L_{θk}(x)]/∂θ) log E_x[L_{θk}(x)] ≈ 0 (43)\nwhile that for v2 is given by,\n−(1/K) ( 1/E_x[L_k(x)] − (K − 1)/(1 − E_x[L_k(x)]) ) ∂E_x[L_{θk}(x)]/∂θ ≈ lim_{c→0} (1/c) · ∂E_x[L_{θk}(x)]/∂θ (44)\nwhose magnitude approaches infinity as E_x[L_k(x)] → 1. Thus R_p^{v2}(θ) is beneficial in terms of gradient strength.\nF.2 EMPIRICAL COMPARISON BETWEEN NEURAL BAYES-MIM-V1 (EQ 8) AND NEURAL BAYES-MIM-V2 (EQ 9)\nTo empirically understand the difference in behavior of the Neural Bayes-MIM objective v1 vs v2, we first plot the filters learned by the v1 objective and compare them with those learned by the v2 objective. The filters learned by the v1 objective are shown in figure 3 using the configuration α = 4, β = 4. It can be seen that most filters are dead. We tried other configurations as well without any change in the outcome.
Since the v1 and v2 objectives differ only in the formulation of the uniform prior regularization, as explained in the previous section, we believe that v1 leads to dead filters because of the weak gradients from its regularization term.\nIn the second set of experiments, we train many models using the Neural Bayes-MIM-v1 and Neural Bayes-MIM-v2 objectives separately, with different hyper-parameter configurations similar to the setting of figure 2. The performance scatter plot is shown in figure 6. We find that Neural Bayes-MIM-v2 has better average and best performance compared with Neural Bayes-MIM-v1.\nF.3 ADDITIONAL EXPERIMENTS\nVisualization of Filters: We visualize the filters learned by the Neural Bayes-MIM objective on MNIST digits and qualitatively study the effects of the regularizations used. For this we train a deep fully connected network with 3 hidden layers, each of width 500, using Adam with learning rate 0.001, batch size 500, 0 weight decay for 50 epochs (other Adam hyper-parameters are kept standard). We train three configurations: 1. α = 0, β = 4; 2. α = 4, β = 0; 3. α = 4, β = 4. The learned filters are shown in figure 3. We find that the uniform prior regularization (α > 0) prevents dead filters, while the smoothness regularization (β > 0) prevents input memorization.\nAccuracy vs Epochs: Finally, we plot the evolution of accuracy over epochs for all the models learned in the experiments of figure 2. For Neural Bayes-MIM we use the models with scaling (42 in total), and all 16 models for the random encoder. The convergence plot (figure 5) shows that models pre-trained with Neural Bayes-MIM quickly converge to a higher test accuracy compared to the baseline.\nEffect of Mini-batch size (MBS) and Batch size (BS): During implementation, we proposed to compute gradients using a reasonably large mini-batch of size MBS and accumulate gradients until BS samples are seen. This is done to overcome the gradient estimation problem due to the E_x[L_k(x)] term in Neural Bayes-MIM. Here we evaluate the effect of these two hyper-parameters on the final test performance. We choose MBS from {50, 100, 250, 500} and BS from {50, 250, 500, 2000, 3000}.\nFor each combination of MBS and BS, we train the CNN encoder using Neural Bayes-MIM with α = 2 and β = 4 (chosen by examining figure 2); the rest of the training settings are kept identical to those used for the figure 2 experiment. Table 2 shows the final test accuracy on CIFAR-10 for each combination of the hyper-parameters MBS and BS. We make two observations: 1. using a very small MBS (e.g., 50 or 100) typically results in poor performance (even worse than that of a random encoder (67.97%)), while a larger MBS significantly improves performance; 2. using a larger BS further improves performance in most cases (even when MBS is small)." }, { "heading": "G PROOF OF LEMMA 1", "text": "Lemma 1. Let p(x|z = k) and p(z) be any conditional and marginal distribution defined for a continuous random variable x and a discrete random variable z. If E_{x∼p(x)}[L_k(x)] ≠ 0 ∀k ∈ [K], then there exists a non-parametric function L(x) : R^n → R_+^K for any given input x ∈ R^n with the property Σ_{k=1}^K L_k(x) = 1 ∀x such that,\np(x|z = k) = L_k(x) · p(x) / E_{x∼p(x)}[L_k(x)], p(z = k) = E_x[L_k(x)], p(z = k|x) = L_k(x) (45)\nand this parameterization is consistent.\nProof: First we show the existence proof. Notice that there exists a non-parametric function g_k(x) := p(x|z = k) / p(x) ∀x ∈ supp(p(x)). Denote G_k(x) = p(z = k) g_k(x).
Then,\nE_x[G_k(x)] = E_x[p(z = k) g_k(x)] = p(z = k) (46)\nand,\nG_k(x) / E_x[G_k(x)] = p(z = k) g_k(x) / p(z = k) = p(x|z = k) / p(x) (47)\nThus L_k := G_k works. To verify that this parameterization is consistent, note that for any k,\n∫_x p(x|z = k) = ∫_x L_k(x) · p(x) / E_{x∼p(x)}[L_k(x)] = 1 (48)\nwhere we use the condition E_{x∼p(x)}[L_k(x)] ≠ 0 ∀k ∈ [K]. Secondly, we note that,\nΣ_{k=1}^K p(x|z = k) · p(z = k) = Σ_{k=1}^K ( L_k(x) · p(x) / E_x[L_k(x)] ) · E_x[L_k(x)] (49)\n= Σ_{k=1}^K L_k(x) · p(x) (50)\n= p(x) (51)\nwhere the last equality is due to the condition Σ_{k=1}^K L_k(x) = 1 ∀x. Thirdly,\nΣ_{k=1}^K p(z = k) = Σ_{k=1}^K E_x[L_k(x)] (52)\n= E_x[Σ_{k=1}^K L_k(x)] (53)\n= 1\nFinally, we have from Bayes' rule:\np(z = k|x) = p(x|z = k) · p(z = k) / p(x) (54)\n= ( L_k(x) · p(x) / E_{x∼p(x)}[L_k(x)] ) · ( E_{x∼p(x)}[L_k(x)] / p(x) ) (55)\n= L_k(x) (56)\nwhere the second equality holds because of the existence and consistency proofs of p(x|z = k) := L_k(x) · p(x) / E_{x∼p(x)}[L_k(x)] and p(z = k) := E_x[L_k(x)] shown above." }, { "heading": "H PROOFS FOR NEURAL BAYES-MIM", "text": "Proposition 2. (Neural Bayes-MIM-v1, proposition 2 in main text) Let L(x) : R^n → R_+^K be a non-parametric function for any given input x ∈ R^n with the property Σ_{k=1}^K L_k(x) = 1 ∀x. Consider the following objective,\nL* = arg max_L E_x[ Σ_{k=1}^K L_k(x) log( L_k(x) / E_x[L_k(x)] ) ] (57)\nThen L*_k(x) = p(z* = k|x), where z* ∈ arg max_z MI(x, z).\nProof: Using the Neural Bayes parameterization in lemma 1, we have,\nMI(x, z) = ∫_x Σ_{k=1}^K p(x, z = k) log( p(x, z = k) / (p(x) p(z = k)) ) (58)\n= ∫_x Σ_{k=1}^K p(z = k|x) p(x) log( p(z = k|x) / p(z = k) ) (59)\n= ∫_x Σ_{k=1}^K L_k(x) · p(x) · log( L_k(x) / E_x[L_k(x)] ) (60)\n= E_{x∼p(x)}[ Σ_{k=1}^K L_k(x) log( L_k(x) / E_x[L_k(x)] ) ] (61)\nTherefore the two objectives are equivalent and we have a closed form estimate of the mutual information. Given that z* is a maximizer of MI(x, z), since L is a non-parametric function, there exists L* such that p(z* = k|x) = L*_k(x) due to lemma 1.\nTheorem 3. (theorem 2 in main text) Denote,\nJ(θ) = −E_x[ Σ_{k=1}^K L_{θk}(x) log( L_{θk}(x) / E_x[L_{θk}(x)] ) ] (62)\nĴ(θ) = −E_x[ Σ_{k=1}^K L_{θk}(x) log〈 L_{θk}(x) / E_x[L_{θk}(x)] 〉 ] (63)\nwhere 〈·〉 denotes that gradients are not computed through the argument. Then ∂J(θ)/∂θ = ∂Ĵ(θ)/∂θ.\nProof: We note that,\nJ(θ) = −E_x[ Σ_{k=1}^K L_{θk}(x) log( L_{θk}(x) / E_x[L_{θk}(x)] ) ] (64)\n= −E_x[ Σ_{k=1}^K L_{θk}(x) log L_{θk}(x) ] + E_x[ Σ_{k=1}^K L_{θk}(x) log E_x[L_{θk}(x)] ] (65)\nDenote the first term by T_1. Then due to the chain rule,\n−∂T_1/∂θ = E_x[ Σ_{k=1}^K (∂L_{θk}(x)/∂θ) log L_{θk}(x) ] − E_x[ Σ_{k=1}^K (L_{θk}(x) / L_{θk}(x)) · ∂L_{θk}(x)/∂θ ] (66)\n= E_x[ Σ_{k=1}^K (∂L_{θk}(x)/∂θ) log L_{θk}(x) ] − E_x[ Σ_{k=1}^K ∂L_{θk}(x)/∂θ ] (67)\n= E_x[ Σ_{k=1}^K (∂L_{θk}(x)/∂θ) log L_{θk}(x) ] − E_x[ ∂(Σ_{k=1}^K L_{θk}(x))/∂θ ] (68)\n= E_x[ Σ_{k=1}^K (∂L_{θk}(x)/∂θ) log L_{θk}(x) ] (69)\nwhere the last equality holds due to the linearity of expectation and because Σ_{k=1}^K L_{θk}(x) = 1 by design. Now denote the second term by T_2. Then due to the chain rule,\n−∂T_2/∂θ = −E_x[ Σ_{k=1}^K (∂L_{θk}(x)/∂θ) log E_x[L_{θk}(x)] ] − E_x[ Σ_{k=1}^K (L_{θk}(x) / E_x[L_{θk}(x)]) · ∂E_x[L_{θk}(x)]/∂θ ] (70)\n= −E_x[ Σ_{k=1}^K (∂L_{θk}(x)/∂θ) log E_x[L_{θk}(x)] ] − Σ_{k=1}^K (E_x[L_{θk}(x)] / E_x[L_{θk}(x)]) · ∂E_x[L_{θk}(x)]/∂θ (71)\n= −E_x[ Σ_{k=1}^K (∂L_{θk}(x)/∂θ) log E_x[L_{θk}(x)] ] − Σ_{k=1}^K E_x[∂L_{θk}(x)/∂θ] (72)\n= −E_x[ Σ_{k=1}^K (∂L_{θk}(x)/∂θ) log E_x[L_{θk}(x)] ] − E_x[ ∂(Σ_{k=1}^K L_{θk}(x))/∂θ ] (73)\n= −E_x[ Σ_{k=1}^K (∂L_{θk}(x)/∂θ) log E_x[L_{θk}(x)] ] (74)\nwhere once again the last equality holds due to the linearity of expectation and because Σ_{k=1}^K L_{θk}(x) = 1 by design. Thus the gradient of J is given by,\n∂J(θ)/∂θ = −E_x[ Σ_{k=1}^K (∂L_{θk}(x)/∂θ) log( L_{θk}(x) / E_x[L_{θk}(x)] ) ] (75)\nwhich is the same as ∂Ĵ(θ)/∂θ. This concludes the proof." } ]
2020
null
SP:9d58dff3946cc3ebd5f5272deab9c5ccddd48efc
[ "This paper utilizes Randomly Wired architectures to boost deep GNNs. Theoretical analyses verify that randomly wired architectures behave like path ensemble and it enables adaptive receptive field. Experimental results on three non-popular datasets demonstrate the strength of the proposed model. Overall, the idea is interesting. Yet this paper can be made better through the following aspects:", "The paper proposes a new method for building graph convolutional neural networks. It shows, that during the building of the network, instead of stacking many layers and adding the residual connection between them, one could employ a randomly-wired architecture, that can be a more effective way to increase the capacity of the network and thus it could obtain richer representations. The proposed method is an interesting direction in the field of graph convolutional neural networks. The new method could be seen asa generalization of the residual networks and the jumping knowledge networks." ]
Graph neural networks have become a staple in problems addressing learning and analysis of data defined over graphs. However, several results suggest an inherent difficulty in extracting better performance by increasing the number of layers. Besides the classic vanishing gradient issues, recent works attribute this to a phenomenon peculiar to the extraction of node features in graph-based tasks, i.e., the need to consider multiple neighborhood sizes at the same time and adaptively tune them. In this paper, we investigate the recently proposed randomly wired architectures in the context of graph neural networks. We prove that, instead of building deeper networks by stacking many layers, employing a randomly wired architecture can be a more effective way to increase the capacity of the network and obtain richer representations. We show that such architectures behave like an ensemble of paths, which are able to merge contributions from receptive fields of varied size. Moreover, these receptive fields can also be modulated to be wider or narrower through the trainable weights over the paths. We also provide extensive experimental evidence of the superior performance of randomly wired architectures over three tasks and five graph convolution definitions, using a recent benchmarking framework that addresses the reliability of previous testing methodologies.
[]
[ { "authors": [ "Babak Alipanahi", "Andrew Delong", "Matthew T Weirauch", "Brendan J Frey" ], "title": "Predicting the sequence specificities of dna-and rna-binding proteins by deep learning", "venue": "Nature biotechnology,", "year": 2015 }, { "authors": [ "Uri Alon", "Eran Yahav" ], "title": "On the bottleneck of graph neural networks and its practical implications", "venue": "arXiv preprint arXiv:2006.05205,", "year": 2020 }, { "authors": [ "Xavier Bresson", "Thomas Laurent" ], "title": "Residual gated graph convnets", "venue": "arXiv preprint arXiv:1711.07553,", "year": 2017 }, { "authors": [ "Michael M Bronstein", "Joan Bruna", "Yann LeCun", "Arthur Szlam", "Pierre Vandergheynst" ], "title": "Geometric deep learning: going beyond Euclidean data", "venue": "IEEE Signal Processing Magazine,", "year": 2017 }, { "authors": [ "Nima Dehmamy", "Albert-László Barabási", "Rose Yu" ], "title": "Understanding the representation power of graph neural networks in learning graph topology", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "David K Duvenaud", "Dougal Maclaurin", "Jorge Iparraguirre", "Rafael Bombarell", "Timothy Hirzel", "Alán Aspuru-Guzik", "Ryan P Adams" ], "title": "Convolutional networks on graphs for learning molecular fingerprints", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Vijay Prakash Dwivedi", "Chaitanya K Joshi", "Thomas Laurent", "Yoshua Bengio", "Xavier Bresson" ], "title": "Benchmarking graph neural networks (v1)", "venue": "arXiv preprint arXiv:2003.00982v1,", "year": 2020 }, { "authors": [ "Thomas Elsken", "Jan Hendrik Metzen", "Frank Hutter" ], "title": "Neural architecture search: A survey", "venue": "Journal of Machine Learning Research,", "year": 2019 }, { "authors": [ "Federico Errica", "Marco Podda", "Davide Bacciu", "Alessio Micheli" ], "title": "A fair comparison of graph neural networks for graph classification", "venue": null, "year": 1912 }, { "authors": [ "Yarin Gal", "Zoubin Ghahramani" ], "title": "Dropout as a bayesian approximation: Representing model uncertainty in deep learning", "venue": "arXiv preprint arXiv:1506.02142,", "year": 2015 }, { "authors": [ "Shunwang Gong", "Mehdi Bahri", "Michael M Bronstein", "Stefanos Zafeiriou" ], "title": "Geometrically principled connections in graph neural networks", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Will Hamilton", "Zhitao Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Thomas N Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Johannes Klicpera", "Aleksandar Bojchevski", "Stephan Günnemann" ], "title": "Predict then propagate: Graph neural networks meet personalized pagerank", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Guohao Li", "Matthias Müller", "Guocheng Qian", "Itzel C Delgadillo", "Abdulellah Abualshour", "Ali Thabet", "Bernard Ghanem" ], "title": "Deepgcns: Making GCNs go as deep as CNNs", "venue": "arXiv preprint arXiv:1910.06849,", "year": 2019 }, { "authors": [ "Guohao Li", "Matthias Muller", "Ali Thabet", "Bernard Ghanem" ], "title": "DeepGCNs: Can GCNs 
go as deep as CNNs", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Hoang NT", "Takanori Maehara" ], "title": "Revisiting graph neural networks: All we have is low-pass filters", "venue": "arXiv preprint arXiv:1905.09550,", "year": 2019 }, { "authors": [ "Kenta Oono", "Taiji Suzuki" ], "title": "Graph neural networks exponentially lose expressive power for node classification", "venue": "arXiv preprint arXiv:1905.10947,", "year": 2019 }, { "authors": [ "Yu Rong", "Wenbing Huang", "Tingyang Xu", "Junzhou Huang" ], "title": "Dropedge: Towards deep graph convolutional networks on node classification", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Emanuele Rossi", "Fabrizio Frasca", "Ben Chamberlain", "Davide Eynard", "Michael Bronstein", "Federico Monti" ], "title": "Sign: Scalable inception graph neural networks", "venue": "arXiv preprint arXiv:2004.11198,", "year": 2020 }, { "authors": [ "Prithviraj Sen", "Galileo Namata", "Mustafa Bilgic", "Lise Getoor", "Brian Galligher", "Tina Eliassi-Rad" ], "title": "Collective classification in network data", "venue": "AI magazine,", "year": 2008 }, { "authors": [ "Martin Simonovsky", "Nikos Komodakis" ], "title": "Dynamic edge-conditioned filters in convolutional neural networks on graphs", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning", "venue": null, "year": 1929 }, { "authors": [ "Diego Valsesia", "Giulia Fracastoro", "Enrico Magli" ], "title": "Learning Localized Generative Models for 3D Point Clouds via Graph Convolution", "venue": "In International Conference on Learning Representations (ICLR)", "year": 2019 }, { "authors": [ "Andreas Veit", "Michael J Wilber", "Serge Belongie" ], "title": "Residual networks behave like ensembles of relatively shallow networks", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Petar Veličković", "Guillem Cucurull", "Arantxa Casanova", "Adriana Romero", "Pietro Lio", "Yoshua Bengio" ], "title": "Graph attention networks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Clément Vignac", "Guillermo Ortiz-Jiménez", "Pascal Frossard" ], "title": "On the choice of graph neural network architectures", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2020 }, { "authors": [ "Li Wan", "Matthew Zeiler", "Sixin Zhang", "Yann Le Cun", "Rob Fergus" ], "title": "Regularization of neural networks using dropconnect", "venue": "In International conference on machine learning,", "year": 2013 }, { "authors": [ "Yue Wang", "Yongbin Sun", "Ziwei Liu", "Sanjay E Sarma", "Michael M Bronstein", "Justin M Solomon" ], "title": "Dynamic graph cnn for learning on point clouds", "venue": "ACM Transactions on Graphics (TOG),", "year": 2019 }, { "authors": [ "B. 
Weisfeiler", "A Lehman" ], "title": "A reduction of a graph to a canonical form and an algebra arising during this reduction", "venue": "Nauchno–Technicheskaya Informatsia,", "year": 1968 }, { "authors": [ "Zonghan Wu", "Shirui Pan", "Fengwen Chen", "Guodong Long", "Chengqi Zhang", "S Yu Philip" ], "title": "A comprehensive survey on graph neural networks", "venue": "IEEE Transactions on Neural Networks and Learning Systems,", "year": 2020 }, { "authors": [ "Saining Xie", "Alexander Kirillov", "Ross Girshick", "Kaiming He" ], "title": "Exploring randomly wired neural networks for image recognition", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Keyulu Xu", "Chengtao Li", "Yonglong Tian", "Tomohiro Sonobe", "Ken-ichi Kawarabayashi", "Stefanie Jegelka" ], "title": "Representation learning on graphs with jumping knowledge networks", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Keyulu Xu", "Weihua Hu", "Jure Leskovec", "Stefanie Jegelka" ], "title": "How powerful are graph neural networks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Lingxiao Zhao", "Leman Akoglu" ], "title": "Pairnorm: Tackling oversmoothing in gnns", "venue": "arXiv preprint arXiv:1909.12223,", "year": 2019 }, { "authors": [ "Barret Zoph", "Vijay Vasudevan", "Jonathon Shlens", "Quoc V Le" ], "title": "Learning transferable architectures for scalable image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Data defined over the nodes of graphs are ubiquitous. Social network profiles (Hamilton et al., 2017), molecular interactions (Duvenaud et al., 2015), citation networks (Sen et al., 2008), 3D point clouds (Simonovsky & Komodakis, 2017) are just examples of a wide variety of data types where describing the domain as a graph allows to encode constraints and patterns among the data points. Exploiting the graph structure is crucial in order to extract powerful representations of the data. However, this is not a trivial task and only recently graph neural networks (GNNs) have started showing promising approaches to the problem. GNNs (Wu et al., 2020) extend the deep learning toolbox to deal with the irregularity of the graph domain. Much of the work has been focused on defining a graph convolution operation (Bronstein et al., 2017), i.e., a layer that is well-defined over the graph domain but also retains some of the key properties of convolution such as weight reuse and locality. A wide variety of such graph convolution operators has been defined over the years, mostly based on neighborhood aggregation schemes where the features of a node are transformed by processing the features of its neighbors. Such schemes have been shown to be as powerful as the Weisfeiler-Lehman graph isomorphism test (Weisfeiler & Lehman, 1968; Xu et al., 2019), enabling them to simultaneuosly learn data features and graph topology. However, contrary to classic literature on CNNs, few works (Li et al., 2019a; Dehmamy et al., 2019; Xu et al., 2018; Dwivedi et al., 2020) addressed GNNs architectures and their role in extracting powerful representations. Several works, starting with the early GCN (Kipf & Welling, 2017), noticed an inability to build deep GNNs, often resulting in worse performance than that of methods that disregard the graph domain, when trying to build anything but very shallow networks. This calls for exploring whether advances on CNN architectures can be translated to the GNN space, while understanding the potentially different needs of graph representation learning. Li et al. (2019b) suggest that GCNs suffer from oversmoothing as several layers are stacked, resulting in the extraction of mostly low-frequency features. This is related to the lack of self-loop information in this specific graph convolution. It is suggested that ResNet-like architectures mitigate the problem as the skip connections supply high frequency contributions. Xu et al. (2018) point out that the size of the receptive field of a node, i.e., which nodes contribute to the features of the node under\nconsideration, plays a crucial role, but it can vary widely depending on the graph and too large receptive fields may actually harm performance. They conclude that for graph-based problems it would be optimal to learn how to adaptively merge contributions from receptive fields of multiple size. For this reason they propose an architecture where each layer has a skip connection to the output so that contributions at multiple depths (hence sizes of receptive fields) can be merged. Nonetheless, the problem of finding methods for effectively increasing the capacity of graph neural networks is still standing, since stacking many layers has been proven to provide limited improvements (Li et al., 2019b; Oono & Suzuki, 2019; Alon & Yahav, 2020; NT & Maehara, 2019).\nIn this paper, we argue that the recently proposed randomly wired architectures (Xie et al., 2019) are ideal for GNNs. 
In a randomly wired architecture, “layers” are arranged according to a random directed acyclic graph and data are propagated through the paths towards the output. Such an architecture is ideal for GNNs because it realizes the intuition of Xu et al. (2018) of merging receptive fields of varied size. Indeed, the randomly wired network can be seen as an extreme generalization of their jumping network approach, where layer outputs can not only jump to the network output but to other layers as well, continuously merging receptive fields. Hence, randomly wired architectures provide a way of effectively scaling up GNNs, mitigating the depth problem and creating richer representations. Fig. 1 shows a graphical representation of this concept by highlighting the six layers directly contributing to the output, having different receptive fields induced by the distribution of paths from the input.\nOur novel contributions can be summarized as follows: i) we are the first to analyze randomly wired architectures and show that they are generalizations of ResNets when viewed as ensembles of paths (Veit et al., 2016); ii) we show that path ensembling allows merging receptive fields of varied size and that it can do so adaptively, i.e., trainable weights on the architecture edges can tune the desired size of the receptive fields to be merged to achieve an optimal configuration for the problem; iii) we introduce improvements to the basic design of randomly wired architectures by optionally embedding a path that sequentially goes through all layers in order to promote larger receptive fields when needed, and by presenting MonteCarlo DropPath, which decorrelates path contributions by randomly dropping architecture edges; iv) we provide extensive experimental evidence, using a recently introduced benchmarking framework (Dwivedi et al., 2020) to ensure significance and reproducibility, that randomly wired architectures consistently outperform ResNets, often by large margins, for five of the most popular graph convolution definitions on three different tasks." }, { "heading": "2 BACKGROUND", "text": "" }, { "heading": "2.1 GRAPH NEURAL NETWORKS", "text": "A major shortcoming of CNNs is that they are unable to process data defined on irregular domains. In particular, one case that is drawing attention is when the data structure can be described by a graph and the data are defined as vectors on the graph nodes. This setting can be found in many applications, including 3D point clouds (Wang et al., 2019; Valsesia et al., 2019), computational biology (Alipanahi et al., 2015; Duvenaud et al., 2015), and social networks (Kipf & Welling, 2017). However, extending CNNs from data with a regular structure, such as images and video, to graph-structured data is not straightforward if one wants to preserve useful properties such as locality and weight reuse.\nGNNs redefine the convolution operation so that the new layer definition can be used on domains described by graphs. The most widely adopted graph convolutions in the literature rely on message passing, where a weighted aggregation of the feature vectors in a neighborhood is computed. The GCN (Kipf & Welling, 2017) is arguably the simplest definition, applying the same linear transformation to all the node features, followed by neighborhood aggregation and non-linear activation:\n$h_i^{(l+1)} = \sigma\left(\frac{1}{|N_i|}\sum_{j\in N_i} W h_j^{(l)}\right)$.
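To make the aggregation concrete, here is a minimal NumPy sketch of such a layer (a didactic illustration, not a reference implementation; the choice of ReLU for the nonlinearity σ and the dense weight shapes are our own assumptions):

```python
import numpy as np

def gcn_layer(H, neighbors, W):
    # Mean-aggregation GCN layer: h_i' = relu((1/|N_i|) * sum_{j in N_i} W @ h_j).
    # H: (num_nodes, d_in) node features; neighbors[i]: list of neighbors of node i.
    out = np.zeros((H.shape[0], W.shape[0]))
    for i, N_i in enumerate(neighbors):
        if len(N_i) > 0:
            out[i] = (W @ H[N_i].T).mean(axis=1)  # average the transformed neighbors
    return np.maximum(out, 0.0)  # ReLU chosen as the nonlinearity (an assumption)

# Toy usage: a 3-node path graph with 4-dimensional features and 8 output channels
rng = np.random.default_rng(0)
H = rng.normal(size=(3, 4))
W = rng.normal(size=(8, 4))
print(gcn_layer(H, [[1], [0, 2], [1]], W).shape)  # -> (3, 8)
```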
Variants of this definition have been developed, e.g., GraphSage (Hamilton et al., 2017) concatenates the feature vector of node i to the feature vectors of its neighbors, so that self-information can also be exploited; GIN (Xu et al., 2019) uses a multilayer perceptron instead of a linear transform, replaces the average with a sum to ensure injectivity, and proposes a different way of computing the output by using all the feature vectors produced by the intermediate layers. These definitions are all isotropic because they treat every edge in the same way. It has been observed that better representation capacity can be achieved using anisotropic definitions, where every edge can have a different transformation, at the cost of increased computational complexity. The Gated GCN (Bresson & Laurent, 2017) and GAT (Veličković et al., 2017) definitions fall in this category." }, { "heading": "2.2 RANDOMLY WIRED ARCHITECTURES", "text": "In recent work, Xie et al. (2019) explore whether it is possible to avoid handcrafted design of neural network architectures and, at the same time, avoid expensive neural architecture search methods (Elsken et al., 2019), by designing random architecture generators. They show that “layers” performing convolution, normalization and non-linear activation can be connected in a random architecture graph. Strong performance is observed on the traditional image classification task, outperforming state-of-the-art architectures. The authors conjecture that random architectures generalize ResNets and similar constructions, but the underlying principles of their excellent performance are unclear, as is whether the performance translates to tasks other than image recognition or to operations other than convolution on grids." }, { "heading": "3 RANDOMLY WIRED GNNS", "text": "In this section, we first introduce randomly wired architectures and the notation we are going to use. We then analyze their behavior when viewed as ensembles of paths.\n[Figure 2: An architecture node is equivalent to a GNN layer.]\nA randomly wired architecture consists of a directed acyclic graph (DAG) connecting a source architecture node, which is fed with the input data, to a sink architecture node. One should not confuse the architecture DAG with the graph representing the GNN domain: to avoid any source of confusion, we will use the terms architecture nodes (edges) and domain nodes (edges), respectively. A domain node is a node of the graph that is fed as input to the GNN. An architecture node is effectively a GNN layer performing the following operations (Fig. 2): i) aggregation of the inputs from other architecture nodes via a weighted sum as in (Xie et al., 2019):\n$h^{(i)} = \sum_{j\in A_i} \omega_{ij} h^{(j)} = \sum_{j\in A_i} \sigma(w_{ij}) h^{(j)}, \quad i = 1, \ldots, L-1$ (1)\nwhere $\sigma$ is a sigmoid function, $A_i$ is the set of direct predecessors of architecture node $i$, and $w_{ij}$ is a scalar trainable weight; ii) a non-linear activation; iii) a graph-convolution operation (without output activation); iv) batch normalization.\nThe architecture DAG is generated using a random graph generator. In this paper, we will focus on the Erdős-Rényi model, where the adjacency matrix of the DAG is a strictly upper triangular matrix with entries being realizations of a Bernoulli random variable with probability p. If multiple input architecture nodes are randomly generated, they are all wired to a single global input. Multiple output architecture nodes are averaged to obtain a global output (a compact sketch of the generator and of Eq. (1) is given below).
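The following is a didactic NumPy version of the generator and of the aggregation of Eq. (1), under our own indexing conventions (weights stored as w[source, target]); it is a sketch, not the authors' code:

```python
import numpy as np

def sample_architecture_dag(L, p, rng):
    # Erdos-Renyi DAG: strictly upper-triangular Bernoulli(p) adjacency over L nodes.
    adj = np.triu(rng.random((L, L)) < p, k=1)
    # Node 0 is the global input; wire any node left without predecessors to it,
    # mirroring the construction described above.
    for i in range(1, L):
        if not adj[:i, i].any():
            adj[0, i] = True
    return adj

def aggregate_inputs(h, adj, w, i):
    # Eq. (1): h^(i) = sum over predecessors j of sigma(w) * h^(j).
    preds = np.nonzero(adj[:, i])[0]
    gates = 1.0 / (1.0 + np.exp(-w[preds, i]))  # sigmoid-gated scalar weights
    return gates @ h[preds]                      # (|A_i|,) @ (|A_i|, d) -> (d,)
```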
Other random generators may be used, e.g., small-world and scale-free random networks have been studied in (Xie et al., 2019). However, a different generator will display a different behavior concerning the properties we study in Sec. 3.1." }, { "heading": "3.1 RANDOMLY WIRED ARCHITECTURES BEHAVE LIKE PATH ENSEMBLES", "text": "It has already been shown that ResNets behave like ensembles of relatively shallow networks, where one can see the ResNet architecture as a collection of paths of varied lengths (Veit et al., 2016). More specifically, in a ResNet with $L$ layers, where all layers have a skip connection except the first one and the last one, there are exactly $2^{L-2}$ paths, whose lengths follow a Binomial distribution (i.e., the number of paths of length $l$ from layer $k$ to the last layer is $\binom{L-k-1}{l-2}$), and the average path length is $\frac{L}{2}+1$ (Veit et al., 2016). In this section, we show that a randomly wired neural network can also be considered as an ensemble of networks with varied depth. However, in this case, the distribution of the path lengths is different from the one obtained with the ResNet, as shown in the following lemma (proof in the supplementary material).\nLemma 3.1. Let us consider a randomly wired network with $L$ architecture nodes, where the architecture DAG is generated according to an Erdős-Rényi graph generator with probability $p$. The average number of paths of length $l$ from node $k$ to the sink, where $k < L$, is $E[N_l^{(k)}] = \binom{L-k-1}{l-2} p^{l-1}$, and the average total number of paths from node $k$ to the sink is $E[N^{(k)}] = p(1+p)^{L-k-1}$.\nWe can observe that if $p = 1$, the randomly wired network converges to the ResNet architecture. This allows thinking of randomly wired architectures as generalizations of ResNets, as they enable increased flexibility in the number and distribution of paths instead of enforcing the use of all $2^{L-2}$." }, { "heading": "3.2 RECEPTIVE FIELD ANALYSIS", "text": "In the case of GNNs, we define the receptive field of a domain node as the neighborhood that affects the output features of that node. As discussed in Sec. 1, the work in (Xu et al., 2018) highlights that one of the possible causes of the depth problem in GNNs is that the size of the receptive field is not adaptive and may rapidly become excessively large. Inspired by this observation, in this section we analyze the receptive field of a randomly wired neural network. We show that the receptive field of the output is a combination of the receptive fields of shallower networks, induced by each of the paths. This allows effectively merging the contributions from receptive fields of varied size. Moreover, we show that the trainable parameters along the path edges modulate the contributions of various path lengths and enable adaptive receptive fields that can be tuned by the training procedure.\nWe first introduce a definition of the receptive field of a feedforward graph neural network¹.\nDefinition 3.1. Given a feedforward graph neural network with $L$ layers, the receptive field of radius $L$ of a domain node is its $L$-hop neighborhood.\nIn a randomly wired architecture, each path induces a corresponding receptive field whose radius depends on the length of the path. Then, the receptive field at the output of the network is obtained by combining the receptive fields of all the paths. In order to analyze the contribution of paths of different lengths to the receptive field of the network, we introduce the concept of the distribution of the receptive field radius over the paths (a numerical sketch follows below).
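Since, with all aggregation weights equal to one, the radius distribution coincides with the path-length distribution (as noted next), one can estimate it by enumerating paths in sampled DAGs; the following sketch, which also serves as a Monte Carlo sanity check of Lemma 3.1, is our own illustration and uses 0-indexed nodes (so the source corresponds to k = 1 in the lemma's 1-indexed notation):

```python
import numpy as np

def paths_by_length(adj):
    # Count source->sink paths in a topologically ordered DAG, keyed by path
    # length, here defined as the number of nodes on the path (as in Lemma 3.1).
    L = adj.shape[0]
    f = [dict() for _ in range(L)]
    f[L - 1] = {1: 1}  # the sink alone
    for i in range(L - 2, -1, -1):
        for j in np.nonzero(adj[i])[0]:
            for l, c in f[j].items():
                f[i][l + 1] = f[i].get(l + 1, 0) + c
    return f[0]  # length -> number of source->sink paths

rng = np.random.default_rng(0)
L, p, trials = 10, 0.6, 5000
sample = lambda: np.triu(rng.random((L, L)) < p, k=1)  # pure Erdos-Renyi, as in the lemma
avg_paths = np.mean([sum(paths_by_length(sample()).values()) for _ in range(trials)])
# With 0-indexed nodes the source is k = 1 in the lemma's notation,
# so E[N^(k)] = p(1+p)^(L-k-1) specializes to p(1+p)^(L-2).
print(avg_paths, p * (1 + p) ** (L - 2))  # the two numbers should roughly agree
```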
Notice that if we consider a feedforward network with $L$ layers, the distribution of the receptive field radius is a delta centered at $L$.\nThe following lemma allows analyzing the distribution of the receptive field radius in a randomly wired architecture.\n¹We use the term “feedforward neural network” to indicate an architecture made of a simple line graph, without skip connections: this is a representation of one path.\nLemma 3.2. The derivative $\frac{\partial y}{\partial x_0}$ of the output $y$ of a randomly wired architecture with respect to the input $x_0$ is\n$\frac{\partial y}{\partial x_0} = \sum_{p\in P} \frac{\partial y_p}{\partial x_0} = \sum_{p\in P} \prod_{\{i,j\}\in E_p} \omega_{ij} \frac{\partial \bar{y}_p}{\partial x_0} = \sum_{l=2}^{L} \sum_{p\in P^l} \lambda_p \frac{\partial \bar{y}_p}{\partial x_0}$,\nwhere $y_p$ is the output of path $p$, $\bar{y}_p$ is the output of path $p$ when we consider all the aggregation weights equal to 1, $\lambda_p = \frac{\partial y_p}{\partial x_0} / \frac{\partial \bar{y}_p}{\partial x_0}$, $P$ is the set of all paths from source to sink, $L$ is the number of architecture nodes, $P^l$ is the set of paths from source to sink of length $l$, and $E_p$ is the set of edges of the path $p$.\nProof. Direct computation.\nFrom Lemma 3.2, we can observe that the contribution of each path to the gradient is weighted by its corresponding architecture edge weights. Thus, we can define the following distribution $\rho$ of the receptive field radius:\n$\rho_l = \sum_{p\in P^l} \lambda_p = \sum_{p\in P^l} \prod_{\{i,j\}\in E_p} \omega_{ij}$ for $l = 2, \ldots, L$, (2)\nwhere we have assumed that the gradient $\frac{\partial \bar{y}_p}{\partial x_0}$ depends only on the path length, as done in (Veit et al., 2016). This is a reasonable assumption if all the architecture nodes perform the same operation. The distribution of the receptive field radius is therefore influenced by the architecture edge weights. Figure 3 shows an example of how such weights can modify the radius distribution. If we consider $\omega_{ij} = 1$ for all $i$ and $j$, we obtain that the radius distribution is equal to the path length distribution. In order to provide some insight into the role of the parameter $p$ in the distribution of the receptive field radius, we focus on this special case and analyze the distribution of the path lengths in a randomly wired architecture by introducing the following lemma (proof in the supplementary material).\nLemma 3.3. Let us consider a randomly wired network with $L$ architecture nodes, where the architecture DAG is generated according to an Erdős-Rényi graph generator with probability $p$. The average length of the paths from node $k$ to the sink is $E[l^{(k)}] \approx \frac{p}{1+p}(L-k-1) + 2$.\nTherefore, if $p = 1$ and $\omega_{ij} = 1$ for all $i$ and $j$, the radius distribution is a Binomial distribution centered at $\frac{L}{2}+1$ (as in ResNets), whereas when $p < 1$ the mean of the distribution is lower. The path length distribution for different $p$ values is shown in Fig. 4. This shows that, differently from feedforward networks, the receptive field of ResNets and randomly wired architectures is a combination of receptive fields of varied sizes, where most of the contribution is given by shallow paths, i.e., smaller receptive fields. The parameter $p$ of the randomly wired neural network influences the distribution of the receptive field radius: a lower $p$ value skews the distribution towards shallower paths, whereas a higher $p$ value skews the distribution towards longer paths.\nAfter having considered the special case where $\omega_{ij} = 1$ for all $i$ and $j$, we now focus on the general case. Since the edge architecture weights are trainable parameters, they can be adapted to optimize the distribution of the receptive field radius. This is one of the strongest advantages provided by randomly wired architectures with respect to ResNets.
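This modulation can be made explicit: the same dynamic program used above for counting paths, but accumulating products of the gated edge weights, yields the ρ_l of Eq. (2) directly. A minimal sketch (ours, for illustration; weights stored as w[source, target]):

```python
import numpy as np

def radius_distribution(adj, w):
    # rho_l of Eq. (2): total product of sigmoid-gated edge weights over all
    # source->sink paths of each length, accumulated by dynamic programming
    # instead of explicit path enumeration.
    L = adj.shape[0]
    gate = 1.0 / (1.0 + np.exp(-w))  # sigma(w_ij)
    f = [dict() for _ in range(L)]
    f[L - 1] = {1: 1.0}
    for i in range(L - 2, -1, -1):
        for j in np.nonzero(adj[i])[0]:
            for l, v in f[j].items():
                f[i][l + 1] = f[i].get(l + 1, 0.0) + gate[i, j] * v
    return f[0]  # e.g. {2: ..., 3: ...}; training w reshapes this distribution
```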
Such adaptivity is particularly relevant in the context of GNNs, where we may have a non-uniform growth of the receptive field caused by the irregularity of the graph structure (Xu et al., 2018). Notice that the randomly wired architecture can be seen as a generalization of the jumping knowledge networks proposed in (Xu et al., 2018), where all the architecture nodes, not only the last one, merge contributions from previous nodes. We also remark that, even if we modify the ResNet architecture by adding trainable weights to each branch of the residual module, we cannot retrieve the behaviour of the randomly wired architecture. In fact, the latter has intrinsically more granularity than a ResNet: the expected number of architecture edge weights of a randomly wired network is $\frac{pL(L+1)}{2}$, whereas a weighted ResNet has only $2(L-2)$ weights. Ideally, we would like to weight each path independently (i.e., directly optimizing the value of $\lambda_p$ in Lemma 3.2). However, this is unfeasible because the number of parameters would become excessively high, and the randomly wired architecture provides an effective tradeoff. Given an architecture node, weighting each input edge differently is important because each edge corresponds to a different length distribution of the paths going through it, as shown by the following lemma (proof in the supplementary material).\nLemma 3.4. Let us consider a randomly wired network with $L$ architecture nodes, where the architecture DAG is generated according to an Erdős-Rényi graph generator with probability $p$. Given an edge $\{i,j\}$ between the architecture nodes $i$ and $j$ where $i < j$, the average length of the paths from the source to the sink going through that edge is $E[l_{ij}] \approx \frac{p}{1+p}(L-(j-i)-3) + 4$." }, { "heading": "3.3 SEQUENTIAL PATH", "text": "In the previous sections we have shown that a randomly wired architecture behaves like an ensemble of paths merging contributions from receptive fields of varied size, where most of the contribution is provided by shallow paths. As discussed previously, this provides numerous advantages with respect to feedforward networks and ResNets. However, some graph-based tasks may actually benefit from a larger receptive field (Li et al., 2019b), so it is interesting to provide randomly wired architectures with mechanisms to directly promote longer paths. Differently from ResNets, in a randomly wired neural network with $L$ architecture nodes the longest path may be shorter than $L$, leading to a smaller receptive field. In order to overcome this issue, we propose to modify the generation process of the random architecture by imposing that it should also include the sequential path, i.e., the path traversing all architecture nodes. This design of the architecture skews the initial path length distribution towards longer paths, which has the effect of promoting their usage. Nevertheless, the trainable architecture edge weights will ultimately define the importance of such a contribution. Fig. 3 shows an example of how including the sequential path changes the distribution of the receptive field radius." }, { "heading": "3.4 MONTECARLO DROPPATH REGULARIZATION", "text": "The randomly wired architecture offers new degrees of freedom to introduce regularization techniques. In particular, one could delete a few architecture edges during training with probability $p_{drop}$ as a way to avoid co-adaptation of architecture nodes; a minimal sketch of this mechanism and of its test-time variant follows below.
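The sketch below covers both the edge dropping and the MonteCarlo test-time variant introduced in the next paragraph; it is our own illustration, and `forward` is a hypothetical callable standing in for the full randomly wired GNN:

```python
import numpy as np

def drop_path(adj, p_drop, rng):
    # Delete each architecture edge independently with probability p_drop.
    return adj & (rng.random(adj.shape) >= p_drop)

def mc_droppath_predict(forward, x, adj, p_drop, rng, n_samples=16):
    # MonteCarlo DropPath: edges are also dropped at test time, and the
    # predictions obtained under different sampled wirings are averaged.
    # forward(x, adj) runs the network on input x with the given architecture DAG.
    return np.mean([forward(x, drop_path(adj, p_drop, rng))
                    for _ in range(n_samples)], axis=0)
```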
This edge-dropping mechanism is reminiscent of DropOut (Srivastava et al., 2014) and DropConnect (Wan et al., 2013), although it is carried out at a higher level of abstraction, i.e., connections between “layers” instead of neurons. It is also reminiscent of techniques used in Neural Architecture Search (Zoph et al., 2018) and of the approach used in the ImageNet experiments in (Xie et al., 2019), although implementation details are unclear for the latter.\nWe propose to use a MonteCarlo approach where paths are also dropped in testing. Inference is performed multiple times for different realizations of dropped architecture edges, and the results are averaged. This allows sampling from the full predictive distribution induced by DropPath, as in MonteCarlo DropOut (Gal & Ghahramani, 2015). It is worth noting that MonteCarlo DropPath decorrelates the contributions of paths in Lemma 3.2 even if they share architecture edges (proof in the supplementary material), thus allowing finer control over the modulation of the receptive field radius." }, { "heading": "4 EXPERIMENTAL RESULTS", "text": "Experimental evaluation of GNNs is a topic that has recently received great attention. The emerging consensus is that benchmarking methods routinely used in past literature are inadequate and lack reproducibility. In particular, Vignac et al. (2020) showed that commonly used citation network datasets like CORA, CITESEER, and PUBMED are too simple and skew results towards simpler architectures or even promote ignoring the underlying graph. TU datasets are also recognized to be too small (Errica et al., 2019), and the high variability across splits does not allow for sound comparisons across methods. In order to evaluate the gains offered by randomly wired architectures across a wide variety of graph convolutions and tasks, we adopt a recently proposed GNN benchmarking framework (Dwivedi et al., 2020), which has introduced new datasets and allows for reproducible experiments.\nTable 1: ZINC Mean Absolute Error.\nModel | L = 8 | L = 16 | L = 32\nGCN | 0.465±0.012 | 0.445±0.022 | 0.426±0.011\nRAN-GCN | 0.447±0.019 (1.5σ) | 0.398±0.015 (2.1σ) | 0.385±0.015 (3.7σ)\nGIN | 0.444±0.017 | 0.461±0.022 | 0.633±0.089\nRAN-GIN | 0.398±0.004 (2.7σ) | 0.426±0.020 (1.6σ) | 0.540±0.155 (1.0σ)\nGatedGCN | 0.339±0.027 | 0.284±0.014 | 0.277±0.025\nRAN-GatedGCN | 0.310±0.010 (1.1σ) | 0.218±0.017 (4.7σ) | 0.215±0.025 (2.5σ)\nGraphSage | 0.363±0.005 | 0.355±0.003 | 0.351±0.009\nRAN-GraphSage | 0.368±0.015 (1.0σ) | 0.340±0.009 (5.0σ) | 0.333±0.008 (2.0σ)\nGAT | 0.416±0.016 | 0.384±0.011 | 0.357±0.011\nRAN-GAT | 0.430±0.020 (0.9σ) | 0.392±0.012 (0.7σ) | 0.368±0.011 (1.0σ)\nTable 2: CLUSTER Accuracy.\nModel | L = 8 | L = 16 | L = 32\nGCN | 48.71±3.04 | 48.57±7.85 | 55.62±3.12\nRAN-GCN | 58.61±3.15 (3.3σ) | 62.24±1.64 (1.7σ) | 63.32±0.99 (2.5σ)\nGIN | 49.93±1.79 | 49.04±2.51 | 44.96±5.56\nRAN-GIN | 54.38±2.52 (2.5σ) | 56.58±6.26 (3.0σ) | 56.19±2.91 (2.0σ)\nGatedGCN | 63.10±2.54 | 70.09±1.89 | 71.94±1.51\nRAN-GatedGCN | 63.85±2.45 (0.3σ) | 72.13±1.68 (1.1σ) | 74.32±0.89 (1.6σ)\nGraphSage | 66.22±0.73 | 71.50±1.03 | 70.23±1.77\nRAN-GraphSage | 67.21±3.23 (1.4σ) | 71.90±2.09 (0.4σ) | 72.56±2.08 (1.3σ)\nGAT | 54.35±4.39 | 60.68±6.10 | 55.41±4.31\nRAN-GAT | 63.38±2.49 (2.1σ) | 69.68±1.58 (1.5σ) | 70.93±1.18 (3.6σ)\nWe focus on testing five of the most commonly used graph convolution definitions: GCN (Kipf & Welling, 2017), GIN (Xu et al., 2019)², Gated GCN (Bresson & Laurent, 2017), GraphSage (Hamilton et al., 2017), and GAT (Veličković et al., 2017). We select three representative tasks introduced by (Dwivedi et al., 2020): graph regression on the ZINC dataset, node classification on the CLUSTER dataset, and graph classification with superpixels on CIFAR10.
To ensure reproducibility, we use exactly the same setting as (Dwivedi et al., 2020). We are interested in the performance differences between the baseline ResNet architecture, i.e., a feedforward architecture with skip connections after every layer, and the randomly wired architecture. It was already shown in (Dwivedi et al., 2020) that ResNet GNNs significantly outperform architectures without residual connections. We remark that other works proposed methods to build deeper GNNs (Rong et al., 2019; Zhao & Akoglu, 2019; Gong et al., 2020), but such techniques can be regarded as complementary to our work. We do not attempt to optimize a specific method, nor are we interested in comparing one graph convolution to another. A fair comparison is ensured by running both methods with the same number of trainable parameters and with the same hyperparameters. In particular, the learning rate of both methods is adaptively decayed between $10^{-3}$ and $10^{-5}$ by halving according to the value of the validation loss, with a patience of 5 epochs. The stopping criterion is the validation loss not improving for 5 epochs after reaching the minimum learning rate. We average the results of all experiments over 4 runs with different weight initializations and different random architecture graphs, drawn with $p = 0.6$. We also evaluate results for multiple values of the total number of layers (architecture nodes) $L$, in order to show that randomly wired GNNs allow a more effective increase in capacity. The random architectures use sequential paths (Sec. 3.3) in the ZINC experiment, sequential paths and DropPath in the CLUSTER experiment, and only DropPath in CIFAR10³. The reason for these choices is that the regression task in ZINC and the node classification task in CLUSTER are particularly sensitive to the size of the receptive field, as observed by analyzing the experimental receptive radius (supplementary material). On the other hand, CIFAR10 is bottlenecked by overfitting, and it greatly benefits from the regularizing effect of DropPath, as also observed on CLUSTER. The number of DropPath iterations in testing was fixed to 16." }, { "heading": "4.1 RANDOM GNN BENCHMARKING", "text": "The results presented in this section show that randomly wired GNNs have compelling performance in many regards. First of all, they typically provide higher accuracy or lower error than their ResNet counterparts for the same number of parameters. Moreover, they are more effective at increasing capacity than stacking layers: while they are essentially equivalent to ResNets for very short networks, they enable larger gains when additional layers are introduced.\nTable 1 shows the results obtained on the ZINC dataset. The metric is mean absolute error (MAE), so lower is better.
The superscript reports the standard deviation among runs, and the subscript reports the level of significance, measured as the number of baseline standard deviations by which the average value of the random architecture deviates from the average value of the baseline. Results are in bold if they are at least 1σ significant. The results show that the randomly wired GNNs typically outperform the ResNet baseline by significant margins. Table 2 reports the node classification accuracy on the CLUSTER dataset. It can be seen that the random architectures achieve very significant improvements on this dataset, especially for RAN-GCN, RAN-GIN and RAN-GAT. Table 3 reports the classification accuracy on CIFAR10 when the images are converted to graphs using superpixels. Also in this case, the randomly wired architecture greatly outperforms the baseline, in some cases achieving gains higher than 5σ. Finally, Table 4 shows the relative performance gain (relative improvement in accuracy or mean absolute error), averaged over all the graph convolution definitions, with respect to a short 4-layer network, where random wiring and ResNets are almost equivalent (results in the supplementary material). We can notice that deeper ResNets always provide lower gains over their shallow counterpart than the randomly wired GNNs. Moreover, we observe monotonically increasing gains for random GNNs, while deeper ResNets are either unable to significantly extract more performance beyond L = 16 or even provide worse results than the L = 4 network. This supports our claim that the randomly wired architecture is a superior way to increase GNN capacity.\n²GIN and RAN-GIN compute the output as in (Xu et al., 2018), using the contributions of all architecture nodes.\n³We do not use DropPath for RAN-GIN on any experiment as we observed unstable behavior.\nTable 3: CIFAR10 Accuracy.\nModel | L = 8 | L = 16 | L = 32\nGCN | 54.85±0.20 | 54.74±0.52 | 54.76±0.53\nRAN-GCN | 57.81±0.08 (14.8σ) | 57.29±0.44 (4.9σ) | 58.49±0.21 (7.0σ)\nGIN | 48.59±1.60 | 47.14±1.75 | 36.90±4.71\nRAN-GIN | 52.52±0.66 (2.5σ) | 52.07±1.78 (2.8σ) | 42.73±7.93 (1.2σ)\nGatedGCN | 68.27±0.80 | 69.16±0.66 | 69.46±0.47\nRAN-GatedGCN | 68.86±1.64 (0.7σ) | 72.00±0.44 (4.3σ) | 73.50±0.68 (8.6σ)\nGraphSage | 65.58±0.46 | 66.12±0.11 | 65.33±0.34\nRAN-GraphSage | 65.31±0.38 (0.6σ) | 66.10±1.11 (0.2σ) | 67.68±0.37 (6.9σ)\nGAT | 64.43±0.33 | 63.61±0.66 | 64.62±0.65\nRAN-GAT | 66.18±0.65 (5.3σ) | 66.27±0.16 (4.0σ) | 66.01±0.38 (2.1σ)\nTable 4: Median relative gain over L = 4.\nDataset | Method | L = 8 | L = 16 | L = 32\nZINC | ResNet | +7.88% | +17.06% | +17.99%\nZINC | Random | +14.22% | +21.81% | +24.36%\nCLUSTER | ResNet | +17.90% | +15.80% | +14.26%\nCLUSTER | Random | +20.75% | +30.07% | +32.41%\nCIFAR10 | ResNet | −0.84% | −0.14% | −1.22%\nCIFAR10 | Random | +1.31% | +3.58% | +4.10%\nFinally, we compare the proposed method against two other frameworks for GNNs, namely PPNP (Klicpera et al., 2018) and SIGN (Rossi et al., 2020), which propose different approaches for solving the oversmoothing problem. Due to the significant differences in the approaches, providing a fair comparison is challenging. We decided to equalize the number of parameters across the methods, since notions such as the number of layers or features cannot be translated across all the frameworks (180k, 360k, and 720k parameters correspond to the L = 8, 16, 32 settings in the previous tables). Table 5 shows the obtained results. We can observe that both PPNP on node classification and SIGN on all tasks outperform the standard GCN architecture without skip connections, but they cannot outperform GCN with residual connections and the randomly wired GCN."
}, { "heading": "4.2 ABLATION STUDY", "text": "" }, { "heading": "4.2.1 EDGE PROBABILITY", "text": "We first investigate the impact of the probability p of drawing an edge in the random architecture. Table 6 shows the results for a basic random architecture without DropPath nor embedded sequential path. It appears that an optimal value of p exists that maximizes performance. This could be explained by a tradeoff between size of receptive field and the ability to modulate it." }, { "heading": "4.2.2 DROPPATH", "text": "The impact of DropPath on CIFAR10 is shown in Table 7. We found the improvement due to DropPath to be increasingly significant for a higher number of architecture nodes, as expected due to the increased number of edges. The value of the drop probability pdrop = 0.01 was not extensively cross-validated. However, Table 8 shows that higher drop rates lowered performance." }, { "heading": "4.2.3 EMBEDDED SEQUENTIAL PATH", "text": "The impact of embedding a sequential path as explained in Sec. 3.2 is shown in Table 9. It can be observed that its effect of promoting receptive fields with larger radius is useful on this task. We remark that, while we do not report results due to space constraints, this is not always the case and some tasks (e.g., CIFAR10) do not benefit from promoting larger receptive fields." }, { "heading": "5 CONCLUSIONS", "text": "We showed how randomly wired architectures can boost the performance of GNNs by merging receptive fields of multiple size. Consistent and statistically significant improvements over a wide range of tasks and graph convolutions suggest considering them as the go-to choice for new models." } ]
2020
null
SP:ca83623b552cb6bd000d5a67fd81e41a6d7b1e7a
[ "The paper studies the behaviour of disentanglement methods and metrics on data where a couple of factors of variation (FoV) are correlated, a more realistic setup compared to the usual independent FoV setting in the literature. The paper shows how the correlation in the FoV is reflected in the representations learned by the models, and claims that the widely used disentanglement scores fail to capture these correlations. A couple of solutions that use weak supervision are suggested.", "This paper systematically presents a large-scale empirical study on the disentangled representation learning when the underlying factors are possibly entangled. From the results of purely unsupervised settings, the authors have discovered the shortcomings of the existing metrics of disentanglement as well as the poor learned representations (in terms of disentanglement). However, with the help of small amount of factor labels or other weak supervision signals, recent approaches could learn fairly perfect representation." ]
Despite impressive progress in the last decade, it still remains an open challenge to build models that generalize well across multiple tasks and datasets. One path to achieve this is to learn meaningful and compact representations, in which different semantic aspects of data are structurally disentangled. The focus of disentanglement approaches has been on separating independent factors of variation despite the fact that real-world observations are often not structured into meaningful independent causal variables. In this work, we bridge the gap to real-world scenarios by analyzing the behavior of most prominent methods and disentanglement scores on correlated data in a large scale empirical study (including 4260 models). We show that systematically induced correlations in the dataset are being learned and reflected in the latent representations, while widely used disentanglement scores fall short of capturing these latent correlations. Finally, we demonstrate how to disentangle these latent correlations using weak supervision, even if we constrain this supervision to be causally plausible. Our results thus support the argument to learn independent mechanisms rather than independent factors of variations.
[]
[ { "authors": [ "Tameem Adel", "Zoubin Ghahramani", "Adrian Weller" ], "title": "Discovering interpretable representations for both deep generative and discriminative models", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Francis Bach", "Michael Jordan" ], "title": "Kernel independent component analysis", "venue": "Journal of Machine Learning Research,", "year": 2002 }, { "authors": [ "Yoshua Bengio", "Aaron Courville", "Pascal Vincent" ], "title": "Representation learning: A review and new perspectives", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2013 }, { "authors": [ "Yoshua Bengio", "Tristan Deleu", "Nasim Rahaman", "Rosemary Ke", "Sébastien Lachapelle", "Olexa Bilaniuk", "Anirudh Goyal", "Christopher Pal" ], "title": "A meta-transfer objective for learning to disentangle causal mechanisms", "venue": null, "year": 1901 }, { "authors": [ "Diane Bouchacourt", "Ryota Tomioka", "Sebastian Nowozin" ], "title": "Multi-level variational autoencoder: Learning disentangled representations from grouped observations", "venue": "In AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Rob Brekelmans", "Daniel Moyer", "Aram Galstyan", "Greg Ver Steeg" ], "title": "Exact rate-distortion in autoencoders via echo noise", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Christopher P Burgess", "Irina Higgins", "Arka Pal", "Loic Matthey", "Nick Watters", "Guillaume Desjardins", "Alexander Lerchner" ], "title": "Understanding disentangling in beta-VAE", "venue": "arXiv preprint arXiv:1804.03599,", "year": 2018 }, { "authors": [ "Agisilaos Chartsias", "Thomas Joyce", "Giorgos Papanastasiou", "Scott Semple", "Michelle Williams", "David Newby", "Rohan Dharmakumar", "Sotirios A Tsaftaris" ], "title": "Factorised spatial representation learning: Application in semi-supervised myocardial segmentation", "venue": "In International Conference on Medical Image Computing and Computer-Assisted Intervention,", "year": 2018 }, { "authors": [ "Tian Qi Chen", "Xuechen Li", "Roger Grosse", "David Duvenaud" ], "title": "Isolating sources of disentanglement in variational autoencoders", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Pierre Comon" ], "title": "Independent component analysis, a new concept", "venue": "Signal Processing,", "year": 1994 }, { "authors": [ "Elliot Creager", "David Madras", "Jörn-Henrik Jacobsen", "Marissa A Weis", "Kevin Swersky", "Toniann Pitassi", "Richard Zemel" ], "title": "Flexibly fair representation learning by disentanglement", "venue": null, "year": 1906 }, { "authors": [ "Cynthia Dwork", "Moritz Hardt", "Toniann Pitassi", "Omer Reingold", "Richard Zemel" ], "title": "Fairness through awareness", "venue": "In Proceedings of the 3rd innovations in theoretical computer science conference,", "year": 2012 }, { "authors": [ "Cian Eastwood", "Christopher KI Williams" ], "title": "A framework for the quantitative evaluation of disentangled representations", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Peter Földiák" ], "title": "Learning invariance from transformation sequences", "venue": "Neural Computation,", "year": 1991 }, { "authors": [ "Robert Geirhos", "Jörn-Henrik Jacobsen", "Claudio Michaelis", "Richard Zemel", "Wieland Brendel", "Matthias Bethge", "Felix A Wichmann" ], "title": "Shortcut learning in deep neural 
networks", "venue": "arXiv preprint arXiv:2004.07780,", "year": 2020 }, { "authors": [ "Anirudh Goyal", "Alex Lamb", "Jordan Hoffmann", "Shagun Sodhani", "Sergey Levine", "Yoshua Bengio", "Bernhard Schölkopf" ], "title": "Recurrent independent mechanisms", "venue": null, "year": 1909 }, { "authors": [ "Luigi Gresele", "Paul K. Rubenstein", "Arash Mehrjou", "Francesco Locatello", "Bernhard Schölkopf" ], "title": "The incomplete rosetta stone problem: Identifiability results for multi-view nonlinear ica", "venue": "In Conference on Uncertainty in Artificial Intelligence (UAI),", "year": 2019 }, { "authors": [ "Irina Higgins", "Loic Matthey", "Arka Pal", "Christopher Burgess", "Xavier Glorot", "Matthew Botvinick", "Shakir Mohamed", "Alexander Lerchner" ], "title": "beta-VAE: Learning basic visual concepts with a constrained variational framework", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Irina Higgins", "Arka Pal", "Andrei Rusu", "Loic Matthey", "Christopher Burgess", "Alexander Pritzel", "Matthew Botvinick", "Charles Blundell", "Alexander Lerchner. Darla" ], "title": "Improving zero-shot transfer in reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Irina Higgins", "Nicolas Sonnerat", "Loic Matthey", "Arka Pal", "Christopher P Burgess", "Matko Bošnjak", "Murray Shanahan", "Matthew Botvinick", "Demis Hassabis", "Alexander Lerchner" ], "title": "Scan: Learning hierarchical compositional visual concepts", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Aapo Hyvarinen", "Hiroshi Morioka" ], "title": "Unsupervised feature extraction by time-contrastive learning and nonlinear ica", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Aapo Hyvärinen", "Petteri Pajunen" ], "title": "Nonlinear independent component analysis: Existence and uniqueness results", "venue": "Neural Networks,", "year": 1999 }, { "authors": [ "Aapo Hyvarinen", "Hiroaki Sasaki", "Richard E Turner" ], "title": "Nonlinear ica using auxiliary variables and generalized contrastive learning", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2019 }, { "authors": [ "Christian Jutten", "Juha Karhunen" ], "title": "Advances in nonlinear blind source separation", "venue": "In International Symposium on Independent Component Analysis and Blind Signal Separation,", "year": 2003 }, { "authors": [ "Ilyes Khemakhem", "Diederik Kingma", "Ricardo Monti", "Aapo Hyvarinen" ], "title": "Variational autoencoders and nonlinear ica: A unifying framework", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2020 }, { "authors": [ "Hyunjik Kim", "Andriy Mnih" ], "title": "Disentangling by factorising", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational Bayes", "venue": "In International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "David Klindt", "Lukas Schott", "Yash Sharma", "Ivan Ustyuzhaninov", "Wieland Brendel", "Matthias Bethge", "Dylan Paiton" ], "title": "Towards nonlinear disentanglement in natural data with temporal sparse coding", "venue": null, "year": 2007 }, { "authors": [ "Abhishek Kumar", "Prasanna Sattigeri", "Avinash Balakrishnan" ], "title": "Variational inference of disentangled latent 
concepts from unlabeled observations", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Zejian Li", "Yongchuan Tang", "Wei Li", "Yongxing He" ], "title": "Learning disentangled representation with pairwise independence", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Francesco Locatello", "Gabriele Abbati", "Thomas Rainforth", "Stefan Bauer", "Bernhard Schölkopf", "Olivier Bachem" ], "title": "On the fairness of disentangled representations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Francesco Locatello", "Stefan Bauer", "Mario Lucic", "Sylvain Gelly", "Bernhard Schölkopf", "Olivier Bachem" ], "title": "Challenging common assumptions in the unsupervised learning of disentangled representations", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Francesco Locatello", "Michael Tschannen", "Stefan Bauer", "Gunnar Rätsch", "Bernhard Schölkopf", "Olivier Bachem" ], "title": "Disentangling factors of variation using few labels. The 2nd Learning from Limited Labeled Data (LLD) Workshop at the International Conference on Learning Representations, 2019c", "venue": null, "year": 2019 }, { "authors": [ "Francesco Locatello", "Ben Poole", "Gunnar Rätsch", "Bernhard Schölkopf", "Olivier Bachem", "Michael Tschannen" ], "title": "Weakly-supervised disentanglement without compromises", "venue": "arXiv preprint arXiv:2002.02886,", "year": 2020 }, { "authors": [ "David Madras", "Elliot Creager", "Toniann Pitassi", "Richard Zemel" ], "title": "Learning adversarially fair and transferable representations", "venue": "arXiv preprint arXiv:1802.06309,", "year": 2018 }, { "authors": [ "David Madras", "Elliot Creager", "Toniann Pitassi", "Richard Zemel" ], "title": "Fairness through causal awareness: Learning causal latent-variable models for biased data", "venue": "In Proceedings of the Conference on Fairness, Accountability, and Transparency,", "year": 2019 }, { "authors": [ "Emile Mathieu", "Tom Rainforth", "N. Siddharth", "Yee Whye Teh" ], "title": "Disentangling disentanglement in variational auto-encoders", "venue": "arXiv preprint arXiv:1812.02833,", "year": 2018 }, { "authors": [ "G. Parascandolo", "N. Kilbertus", "M. Rojas-Carulla", "B. Schölkopf" ], "title": "Learning independent causal mechanisms", "venue": "In Proceedings of the 35th International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Pascal Paysan", "Reinhard Knothe", "Brian Amberg", "Sami Romdhani", "Thomas Vetter" ], "title": "A 3d face model for pose and illumination invariant face recognition", "venue": "In 2009 Sixth IEEE International Conference on Advanced Video and Signal Based Surveillance,", "year": 2009 }, { "authors": [ "J. Peters", "D. Janzing", "B. 
Schölkopf" ], "title": "Elements of Causal Inference - Foundations and Learning Algorithms", "venue": null, "year": 2017 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed", "Daan Wierstra" ], "title": "Stochastic backpropagation and approximate inference in deep generative models", "venue": "arXiv preprint arXiv:1401.4082,", "year": 2014 }, { "authors": [ "Mark Schmidt", "Alexandru Niculescu-Mizil", "Kevin Murphy" ], "title": "Learning graphical model structure using l1-regularization paths", "venue": "In AAAI,", "year": 2007 }, { "authors": [ "Bernhard Schölkopf" ], "title": "Causality for machine learning", "venue": "arXiv preprint arXiv:1911.10500,", "year": 2019 }, { "authors": [ "Rui Shu", "Yining Chen", "Abhishek Kumar", "Stefano Ermon", "Ben Poole" ], "title": "Weakly supervised disentanglement with guarantees", "venue": "arXiv preprint arXiv:1910.09772,", "year": 2019 }, { "authors": [ "Peter Sorrenson", "Carsten Rother", "Ullrich Köthe" ], "title": "Disentanglement by nonlinear ica with general incompressible-flow networks (gin)", "venue": "arXiv preprint arXiv:2001.04872,", "year": 2020 }, { "authors": [ "Raphael Suter", "Djordje Miladinović", "Stefan Bauer", "Bernhard Schölkopf" ], "title": "Interventional robustness of deep latent variable models", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Sjoerd van Steenkiste", "Francesco Locatello", "Jürgen Schmidhuber", "Olivier Bachem" ], "title": "Are disentangled representations helpful for abstract visual reasoning", "venue": null, "year": 1905 }, { "authors": [ "Mengyue Yang", "Furui Liu", "Zhitang Chen", "Xinwei Shen", "Jianye Hao", "Jun Wang" ], "title": "Causalvae: Structured causal disentanglement in variational autoencoder", "venue": "arXiv preprint arXiv:2004.08697,", "year": 2020 } ]
[ { "heading": null, "text": "Despite impressive progress in the last decade, it still remains an open challenge to build models that generalize well across multiple tasks and datasets. One path to achieve this is to learn meaningful and compact representations, in which different semantic aspects of data are structurally disentangled. The focus of disentanglement approaches has been on separating independent factors of variation despite the fact that real-world observations are often not structured into meaningful independent causal variables. In this work, we bridge the gap to real-world scenarios by analyzing the behavior of most prominent methods and disentanglement scores on correlated data in a large scale empirical study (including 4260 models). We show that systematically induced correlations in the dataset are being learned and reflected in the latent representations, while widely used disentanglement scores fall short of capturing these latent correlations. Finally, we demonstrate how to disentangle these latent correlations using weak supervision, even if we constrain this supervision to be causally plausible. Our results thus support the argument to learn independent mechanisms rather than independent factors of variations.\n1 INTRODUCTION\nDue to the induced structure, disentangled representations promise generalization to unseen scenarios (Higgins et al., 2017b), increased interpretability (Adel et al., 2018; Higgins et al., 2018) and faster learning on downstream tasks (van Steenkiste et al., 2019; Locatello et al., 2019a). While the advantages of disentangled representations have been well established, they generally assume the existence of natural factors that vary independently within the given dataset, which is rarely the case in real-world settings. As an example, consider a scene with a table and some chairs (see Fig. 1). The higher-level factors of this representation are in fact correlated and what we actually want to infer are independent (causal) mechanisms (Peters et al., 2017; Parascandolo et al., 2018; Suter et al., 2019; Goyal et al., 2019).\nA complex generative model can be thought of as the composition of independent mechanisms or “causal” modules, which generate highdimensional observations (such as images or videos). In the causality community, this is often considered a prerequisite to achieve representations which are robust to interventions upon variables determined by\nsuch models (Peters et al., 2017). One particular instantiation of this idea in the machine learning community is the notion of disentangled representations (Bengio et al., 2013). The goal of disentanglement learning is to find a representation of the data which captures all the ground-truth factors of variation (FoV) independently.\nDespite the recent growth of the field, the performance of state-of-the-art disentanglement learners remains unknown for more realistic settings where FoV are correlated during training. Given the potential societal impact in the medical domain (Chartsias et al., 2018) or fair decision making (Locatello et al., 2019a; Madras et al., 2018; Creager et al., 2019), the evaluation of the usefulness of disentangled representations trained on correlated data is of high importance.\nTo go beyond the highly idealized settings considered thus far, we conducted a large scale empirical study to systematically assess the effect of induced correlations between pairs of factors of variation in training data on the learned representations. 
To provide a qualitative and quantitative evaluation, we investigate multiple datasets with access to ground-truth labels. Moreover, we study the generalization abilities of the representations learned on correlated data, as well as their performance on the downstream task of fair decision making.
Contributions. Our main contributions can be summarized as follows:
• We present the first large-scale empirical study (4260 models; each model was trained for 300,000 iterations on Tesla V100 GPUs, so reproducing these experiments requires approximately 0.79 GPU years) that examines how modern disentanglement learners perform when ground truth factors of the observational data are correlated.
• We find that factorization-based inductive biases are insufficient to learn disentangled representations from observational data. Existing methods fail to disentangle correlated factors of variation, resulting in correlated latent space dimensions. Moreover, standard disentanglement metrics do not reveal these persisting correlations.
• We investigate the usefulness of semi-supervised and weakly-supervised approaches to resolve latent entanglement. For the latter setting, we focus on multiple observational and interventional distributions." }, { "heading": "2 BACKGROUND AND RELATED WORK", "text": "Disentanglement. Current state-of-the-art disentanglement approaches use the framework of variational auto-encoders (VAEs) (Kingma & Welling, 2014; Rezende et al., 2014). The (high-dimensional) observations x are modelled as being generated from some latent features z with chosen prior p(z) according to the probabilistic model pθ(x|z)p(z). The generative model pθ(x|z) as well as the proxy posterior qφ(z|x) can be parameterized by neural networks, which are optimized by maximizing the variational lower bound (ELBO) of $\log p(x_1, \ldots, x_N)$:
$$\mathcal{L}_{VAE} = \sum_{i=1}^{N} \mathbb{E}_{q_\phi(z|x^{(i)})}\left[\log p_\theta(x^{(i)}|z)\right] - D_{KL}\left(q_\phi(z|x^{(i)}) \,\|\, p(z)\right) \qquad (1)$$
The above objective does not enforce any structure on the latent space, except for similarity (in KL-divergence) to the prior p(z) (typically chosen as an isotropic Gaussian). However, the structure and semantic meaning of latent representations can be relevant for studying generation properties. Consequently, various proposals have been made for structure-imposing regularization, together with commonly used evaluation metrics measuring different notions of disentanglement of the learned representations (Higgins et al., 2017a; Kim & Mnih, 2018; Burgess et al., 2018; Kumar et al., 2018; Chen et al., 2018; Eastwood & Williams, 2018; Mathieu et al., 2018). Recently, it has been shown that unsupervised disentangling by optimising marginal likelihood in a generative model is impossible without further inductive bias (Locatello et al., 2019b). To address this theoretical limitation, methods have been proposed that do not require explicitly labelled data but only some weak labeling information (Locatello et al., 2020; Shu et al., 2019). Ideas related to disentangling the factors of variation date back to the non-linear ICA literature (Bach & Jordan, 2002; Comon, 1994; Jutten & Karhunen, 2003; Hyvärinen & Pajunen, 1999; Hyvarinen et al., 2019; Hyvarinen & Morioka, 2016; Gresele et al., 2019). Recent work combines non-linear ICA with disentanglement (Khemakhem et al., 2020; Sorrenson et al., 2020; Klindt et al., 2020).
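To make Eq. (1) concrete, the following is a minimal sketch of a per-sample ELBO for a diagonal-Gaussian encoder and a Bernoulli decoder; the particular likelihood choices and variable names are illustrative assumptions on our part, not the exact architecture used in the study.

```python
import numpy as np

def elbo(x, mu, log_var, recon_logits):
    """Single-sample ELBO of Eq. (1): reconstruction term minus KL to the prior.
    `mu`, `log_var` are encoder outputs for q(z|x) = N(mu, diag(exp(log_var))),
    `recon_logits` are decoder logits for a Bernoulli p(x|z)."""
    # E_q[log p(x|z)], approximated with a single decoded sample.
    log_px = -np.sum(x * np.logaddexp(0.0, -recon_logits)
                     + (1.0 - x) * np.logaddexp(0.0, recon_logits))
    # Closed-form KL(q(z|x) || N(0, I)) for diagonal Gaussians.
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)
    return log_px - kl
```

The regularized variants mentioned above (e.g. β-VAE or β-TC-VAE) modify this objective by reweighting or decomposing the KL term.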
Correlations. A set of random variables $X_1, \ldots, X_n$ is not independent if and only if their joint distribution does not factorize:
$$P(X_1, X_2, \ldots, X_n) \neq \prod_{i=1}^{n} P(X_i). \qquad (2)$$
In this case, we speak of dependence between the random variables, also commonly referred to as correlation (we use the term correlation here in a broad sense of any statistical association, not just linear dependencies). Correlation between two variables can stem from a direct causal relationship (one variable affects the other), but can also be due to unobserved circumstances (confounders) affecting both. Real-world datasets display many of these (“spurious” and often a priori unknown) correlations (Geirhos et al., 2020). However, most work on learning disentangled representations assumes that there is an underlying set of independent ground truth variables that govern the generative process of observable data. These methods are hence predominantly evaluated on data that obey independence in the true factors of variation, which we then consider to be the correct factorization. In the real world, the observation generating process is likely not always as clearly “disentangled”, and we do expect correlations in the collected datasets. It is thus an open question to what degree existing inductive biases from the encoder/decoder architecture, but more importantly the dataset biases, affect the learned representation. In our experiments, we introduce dataset correlations in a controlled manner to understand to what degree state-of-the-art approaches can cope with such correlations. We believe these correlations reflect a major feature of more realistic environments.
Other Related Work. Most popular datasets in the disentanglement literature exhibit perfect independence in their FoV. At some level this is sensible, as it reflects the underlying assumption in the inductive bias being studied. However, this assumption is unlikely to hold in practice, as shown by Li et al. (2019), who propose methods based on a pairwise independence assumption instead. The literature so far has not thoroughly measured how popular inductive biases such as factorized priors behave when learning from correlated datasets, although several smaller experiments along these lines can be acknowledged. Chen et al. (2018) studied correlated 3DFaces (Paysan et al., 2009) by fixing all except three factors, and conclude that the β-TC-VAE regularizer can help to disentangle imposed correlations. Brekelmans et al. (2019) show that Echo noise results in superior disentanglement compared to standard betaVAE in a small experiment on a downsampled dSprites variant where randomly selected factor pairs are excluded. However, the latent structure was not studied in detail; our findings suggest that global disentanglement metrics are insufficient to diagnose issues when models learn from correlated data. Creager et al. (2019) based some of the evaluations of a proposed new autoencoder architecture in the fairness context on a biased dSprites variant, and Yang et al. (2020) study a linear SCM in a VAE architecture on datasets with dependent variables. However, their studies focused on representation learners that require strong supervision via FoV labels at train time.
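Before turning to the empirical study, a small sketch of how the dependence in Eq. (2) can be probed empirically for discrete factors; using mutual information as the dependence measure is our choice here, consistent with the broad sense of correlation used above.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def pairwise_dependence(factors):
    """Mutual information between every pair of (discrete) ground-truth factors.
    Values near zero indicate an approximately factorized joint (independence);
    larger values indicate dependence in the sense of Eq. (2)."""
    n = factors.shape[1]
    mi = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            mi[i, j] = mi[j, i] = mutual_info_score(factors[:, i], factors[:, j])
    return mi
```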
" }, { "heading": "3 THE EFFECT OF CORRELATED DATA ON DISENTANGLEMENT LEARNERS", "text": "In this section, we present the key findings from our empirical study of unsupervised disentanglement learning on a particular variant of correlated data. We start by outlining the experimental design of our study in Section 3.1. Based on this, we analyze the latent spaces in Section 3.2 and find that factorization-based inductive biases are insufficient to learn disentangled representations from observational data. Persisting pairwise correlations in the latent space are not sufficiently revealed by standard disentanglement metrics; this might be particularly relevant and problematic for fairness applications. Finally, in Section 3.3, we show extrapolation and generalization capabilities of the learned representations towards unseen factor combinations due to the induced correlations." }, { "heading": "3.1 EXPERIMENTAL DESIGN", "text": "For our first experiments we introduce correlations between single pairs of factors of variation on the following three datasets: Shapes3D with object size and azimuth (denoted “A”), dSprites with orientation and X-position (“B”), and finally the real-world observations dataset MPI-3D with first and second degree of freedom (“C”). We focus on linear correlations with Gaussian noise between the two variables, which we denote by $c_1, c_2$. The ground truth factors for $c_1, c_2$ take values $z_{c_1} \in \{0, \ldots, z_{c_1}^{\max}\}$ and $z_{c_2} \in \{0, \ldots, z_{c_2}^{\max}\}$ respectively. We then parameterize correlations by sampling the training dataset from the joint $P(z_{c_1}, z_{c_2}) \propto \mathcal{N}(z_{c_2} - \alpha z_{c_1}, \sigma)$, where $\alpha = z_{c_2}^{\max} / z_{c_1}^{\max}$. The strength of the correlations can be tuned by $\sigma$, for which we choose 0.2, 0.4, 0.7 in normalized units with respect to the range of values in $z_{c_1}, z_{c_2}$. Lower $\sigma$ indicates stronger correlation. See Fig. 5 for an example of $P(z_{c_1}, z_{c_2})$ for correlating azimuth and object size in Shapes3D with $\sigma = 0.2$. Additionally, we study the uncorrelated limit ($\sigma = \infty$), which amounts to the case typically studied in the literature. We train the same six VAE methods as discussed in Locatello et al. (2019b), including β-VAE, FactorVAE, AnnealedVAE, DIP-VAE-I, DIP-VAE-II and β-TC-VAE, each with six hyperparameter settings. Each method has been trained using five different random seeds. All remaining factors of variation are sampled uniformly at random. This first study amounts to a total of 2160 trained models, or 180 models per dataset and correlation strength (code for all experiments will be released after publication). Appendix A describes additional implementation details.
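The following sketch shows how such a correlated training distribution can be sampled for discrete factor grids; the grid sizes and function names are illustrative, while the normalization of σ by the factor ranges follows the description above.

```python
import numpy as np

def correlated_joint(n1, n2, sigma):
    """Discrete joint P(z_c1, z_c2) ∝ N(z_c2 - alpha * z_c1, sigma), with sigma
    expressed in units normalized by the factor ranges, as in Section 3.1."""
    z1, z2 = np.arange(n1), np.arange(n2)
    alpha = (n2 - 1) / (n1 - 1)  # alpha = z_c2^max / z_c1^max
    resid = (z2[None, :] - alpha * z1[:, None]) / (n2 - 1)  # normalized residual
    weights = np.exp(-0.5 * (resid / sigma) ** 2)
    return weights / weights.sum()

rng = np.random.default_rng(0)
joint = correlated_joint(n1=10, n2=15, sigma=0.2)  # sigma = 0.2: strongest setting
flat = rng.choice(joint.size, size=10000, p=joint.ravel())
z_c1, z_c2 = np.unravel_index(flat, joint.shape)
print(np.corrcoef(z_c1, z_c2)[0, 1])  # close to 1 for strong correlation
```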
" }, { "heading": "3.2 CAN UNSUPERVISED METHODS ACHIEVE DISENTANGLEMENT OF CORRELATED DATA?", "text": "Shortcomings of existing metrics. Following recent studies, we evaluate the trained models with the help of a broad range of disentanglement metrics that aim at quantifying overall success by a single scalar measure. Perhaps surprisingly, Fig. 2 shows no clear trend among all implemented disentanglement scores w.r.t. correlation strength (see Fig. 9 in the Appendix for the full result across all datasets and metrics). The metrics have been evaluated by sampling from the correlated data distribution, although they do not differ substantially when evaluated on the uncorrelated distribution. However, as we will demonstrate in the following analysis, the latent spaces in this setting show some characteristic differences when trained on a strongly correlated pair of FoVs. We thus argue that common disentanglement metrics are limited when correlations are introduced into the training data. To conduct a more careful analysis of the inductive data bias applied on the learned representations, we will instead evaluate pairwise metrics. Note that for BetaVAE and FactorVAE this observed trend is to some degree expected, as they would yield perfect disentanglement scores even if we took the correlated ground truth factors (or, in the case of BetaVAE, a linear transformation thereof) as the representation.
Latent structure and pairwise entanglement. We start by analysing latent traversals of some trained models on Shapes3D (A). For strong correlations ($\sigma = 0.2$ and $\sigma = 0.4$), we typically observe trained models with two latent codes encoding the two correlated variables simultaneously. In these cases, one of the latent codes corresponds to data along the major axis of the correlation line, whereas the other latent code dimension manifests in an orthogonal change of the two variables along the minor axis. Still, a full traversal of the code corresponding to the minor axis often seems to cover only observations within the correlation line. Fig. 3 (left) shows this effect for the latent space of a model trained on Shapes3D (A) with the strongest correlation ($\sigma = 0.2$).
To quantify this observation, we analyze the importance of individual latent codes in predicting the value of a given ground truth FoV. An importance weight for each pair of {FoV, latent dimension} is computed by training a gradient boosting trees (GBT) classifier to predict the ground truth labels from the latents (10,000 samples). In the right panel of Fig. 3, we compute these importance weights for the model used to generate traversals in the left panel. The corresponding evaluation for a model trained on the same dataset with weak correlation does not reveal this feature visually (see Fig. 10 in the Appendix).
To support this claim for a larger set of models, we calculate a pairwise entanglement score that allows us to measure how difficult it is to separate two factors of variation from their latent codes. This computation involves grouping FoV into pairs based on an ordering of their pairwise mutual information or GBT feature importance between latents and FoV; we defer to Appendix A for a detailed description of this procedure. Figure 4 (left) shows that across all datasets the pair of correlated FoV has a higher score than the median of all other pairs, indicating that they are harder to disentangle. This gap shrinks with weaker correlation, and the pair becomes easier to disentangle for weaker correlations ($\sigma \geq 0.7$). These findings suggest that the models still manage to disentangle correlated factors if the correlation is not too strong.
Finally, correlations between variables are of crucial importance in fairness applications, motivating an additional investigation into the ramifications of these entangled latent spaces. In this setting we are interested in the unfairness of predicting the second correlated variable while the first correlated variable is treated as a protected or sensitive attribute. In the following, we use a variant of demographic parity (Dwork et al., 2012) that computes pairwise mutual information between latents and FoV (Locatello et al., 2019a). In Fig. 4 (right) we evaluate this score when correlations are present within the data in the case of Shapes3D (A). Unfairness tracks correlation strength in this scenario. These results suggest that we cannot expect disentangled representations learned unsupervisedly to help reduce unfairness beyond the benefits discussed in Locatello et al. (2019a).
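A sketch of the importance-weight computation described above; the GBT hyperparameters below are illustrative rather than the exact settings of the study.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def gbt_importance_matrix(latents, factors):
    """One GBT classifier per ground-truth FoV, predicting it from all latent
    dimensions; rows of the returned matrix are FoV, columns are latent dims.
    A correlated FoV pair sharing the same two important latent dimensions is
    the signature visible in the right panel of Fig. 3."""
    n_factors = factors.shape[1]
    importances = np.zeros((n_factors, latents.shape[1]))
    for k in range(n_factors):
        clf = GradientBoostingClassifier(n_estimators=10)
        clf.fit(latents, factors[:, k])
        importances[k] = clf.feature_importances_
    return importances
```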
More comprehensive results that support the finding that the correlated pair is statistically more entangled in the latent representations, across all unsupervised experiments and datasets, are provided in Appendix B.1. Summary. We find that existing methods fail to learn disentangled representations of correlated factors of variation, and moreover that standard disentanglement metrics are insufficient to reveal these troublesome pairwise entanglements in the latent space." }, { "heading": "3.3 GENERALIZATION PROPERTIES", "text": "In this section, we aim to understand how the trained models perform on unseen data far away from the correlation line, i.e., out-of-distribution (OOD) w.r.t. the training distribution. We analyse this capability for the model from Fig. 3. In this model, the remaining factors seem to be disentangled well enough that we can focus only on the two latent dimensions encoding the entangled variables. As a first test, we sample observations by drawing the FoV at random and then setting object size and azimuth to six distinct configurations of zero probability; see Fig. 5 (left) and the sketch at the end of this section. The trained model is capable of reconstructing these observations despite never having encountered these configurations or neighbors thereof during training. This suggests that the encoder maps such observations to a meaningful point in the latent space, from which the decoder is equally capable of generating the expected observations. To test this hypothesis further, we analysed latent traversals originating from these OOD points and observe that changes in the remaining factors reliably yield the expected reconstructions. Traversals with respect to the two entangled latent codes continue to encode object size and azimuth. To fully understand this model's generalization properties, we visualise the occupied latent space spanned by the two identified dimensions encoding both correlated factors in Fig. 5 (right). We are particularly interested in where these points are located with respect to the ground truth value of each correlated variable, depicted via color. The two sets of depicted points are (1) latent codes sampled from the correlated training data and (2) latent codes sampled with an (object size, azimuth) configuration that has zero probability under the correlated training distribution. We observe that contours of equal color (ground truth) are not aligned with the latent axes. This indicates that the two latent dimensions encode both FoV at the same time. Likewise, we can understand the generalization capabilities of this model far away from the training data. Extreme configurations such as small azimuth and large object size are encoded to regions corresponding to the intersections of the manifolds with constant value for each correlated variable. This shows that all out-of-distribution points are encoded in this representation space in a way that obeys the natural ordering of each respective factor. This behaviour remains even in cases where the trained latent space does not mirror the default value ordering as stored in our ground truth table. To study this, we additionally trained 360 models on two additional Shapes3D variants, where we strongly correlated object color - object size (“D”) and object color - azimuth (“E”) respectively ($\sigma = 0.2$ and $\sigma = 0.4$). As the color values do not allow for a unique natural ordering, the trained models often encode a different color manifold ordering into the latent space. In Appendix B.1, we show some of their characteristic latent space visualizations with similar extrapolation and generalization capabilities. We conclude from these results that disentanglement methods can generalize towards unseen FoV configurations as long as each factor value is contained in the training data within some other configuration.
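A minimal sketch of the OOD reconstruction test referenced above; `encode`, `decode`, `sample_fov` and `render` are placeholders for the trained encoder/decoder and the dataset's ground-truth simulator, which we do not specify here.

```python
import numpy as np

def ood_reconstruction_test(encode, decode, sample_fov, render, ood_configs, c1, c2):
    """Reconstruct observations whose (c1, c2) configuration has zero probability
    under the correlated training distribution; low errors indicate that the
    model generalizes off the correlation line (Section 3.3)."""
    errors = []
    for v1, v2 in ood_configs:  # e.g. (smallest object size, largest azimuth)
        fov = sample_fov()       # random values for the remaining factors
        fov[c1], fov[c2] = v1, v2
        x = render(fov)          # ground-truth image for the OOD configuration
        x_hat = decode(encode(x))
        errors.append(np.mean((x - x_hat) ** 2))
    return np.array(errors)
```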
" }, { "heading": "4 FINDING THE RIGHT FACTORIZATION", "text": "The results from Section 3 illustrate the limitations of state-of-the-art unsupervised methods on correlated data (and thus real-world observational data). We now investigate the usefulness of several approaches for mitigating pairwise correlations in the latent code. We begin with a post-hoc procedure in Section 4.1 that uses limited label information on the ground truth factors, and show that it achieves a substantial correction of the pairwise latent correlation. We then consider a recently proposed approach leveraging recent advances in weakly supervised disentanglement learning that applies directly at train time. As will be seen in Section 4.2, this method results in substantially more disentangled representations, even when applied on correlated data from different sampling scenarios." }, { "heading": "4.1 POST-HOC ALIGNMENT CORRECTION WITH FEW LABELS", "text": "When a limited number of FoV labels $\{y_i\}$ can be accessed, a reasonable option for resolving entangled dimensions of the latent code is fast adaptation. To identify the two entangled dimensions $(z_i, z_j)$, we look at the maximum feature importance for a given FoV from a GBT trained using these labels only. We then train a substitution function using supervised learning to replace these two dimensions with the predicted ground truth label. Crucially, both steps of this procedure rely on the same FoV labels, which should be as few as possible (see the sketch at the end of this section). In Fig. 6 we show the pairwise entanglement score of the correlated FoVs under this fast adaptation with a linear regression as the substitution function, which succeeds with as few as 100 labels, corresponding to less than 0.02% of all data points in Shapes3D. However, fast adaptation with linear regression substitution fails in some settings: when no two latent dimensions encode the applied correlation in isolation from the other latent codes, or when the correlated variables do not have a unique natural ordering (e.g. color or categorical variables). To address this, a nonlinear substitution function such as an MLP can reduce this pairwise entanglement to a certain degree (see additional results in Appendix B.2).
We find that the efficacy of fast adaptation depends on the level of disentanglement of the representations with respect to all the other factors. This implies that if the representation is well disentangled at the start of the fast adaptation procedure, it is possible to achieve a perfectly disentangled model (according to our previous visual and quantitative evaluations). However, if all FoV are entangled at the beginning, the fast adaptation method will have little effect. Finally, we note that model selection is impossible in a purely unsupervised manner based on any of the used disentanglement metrics, as they all require labeled ground truth data. These shortcomings are resolved by the method presented next, which is capable of disentangling the correlated factors of variation much more reliably.
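A sketch of this fast-adaptation procedure; it assumes that the two most important latent dimensions for the correlated pair are distinct, and the estimator choices are ours.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LinearRegression

def fast_adaptation(latents, labels, factor_pair, n_labeled=100, seed=0):
    """Identify the two latent dimensions most entangled with the correlated FoV
    pair via GBT feature importance, then substitute them with a linear
    regression prediction of the ground-truth labels (Section 4.1). Both steps
    use the same small set of labeled points."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(latents), n_labeled, replace=False)
    z_lab, y_lab = latents[idx], labels[idx]
    dims = []
    for k in factor_pair:
        clf = GradientBoostingClassifier(n_estimators=10).fit(z_lab, y_lab[:, k])
        dims.append(int(np.argmax(clf.feature_importances_)))
    sub = LinearRegression().fit(z_lab, y_lab[:, list(factor_pair)])
    corrected = latents.copy()
    corrected[:, dims] = sub.predict(latents)  # replace the entangled dimensions
    return corrected, dims
```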
}, { "heading": "4.2 ALIGNMENT DURING TRAINING USING WEAK SUPERVISION", "text": "Since the unsupervised disentangling by optimising marginal likelihood in a generative model is impossible (Locatello et al., 2019b, Theorem 1), inductive biases like grouping information (Bouchacourt et al., 2018) or access to labels (Locatello et al., 2019c) is required. Changes in natural environments, which typically correspond to changes of only a few underlying factors of variation, provide a weak supervision signal for representation learning algorithms (Goyal et al., 2019; Földiák, 1991; Schmidt et al., 2007; Bengio et al., 2019). Without correlations it has been shown that this weak supervision helps in learning much more disentangled representations (Locatello et al., 2020; Shu et al., 2019). Locatello et al. (2020) showed access to observations which display differences in a known number of factors of variation (without knowing which ones specifically) is sufficient to learn disentangled representation. These additional weak assumptions render the generative model identifiable in contrast to unsupervised disentanglement. This kind of extra knowledge might be available in certain settings 4, e.g., in temporarily close frames from a video of a moving robot arm where some factors remain unchanged. Hence, we want to investigate the usefulness of such a weakly-supervised method applied in various scenarios when training data is correlated. Specifically we implement the Ada-GVAE variant of Locatello et al. (2020) that was shown to allow for model\n4On the other hand, in applications with fairness concerns it may be impossible to intervene on FoV representing sensitive and immutable attributes of individuals (gender, race, etc.); we refer to Madras et al. (2019) for a more complete discussion.\nselection via the (unsupervised) reconstruction loss. The method requires a pair of observations that differs in a known number of factors, without knowing which in particular.\nWeak supervision mitigates pairwise latent entanglement. We trained the three correlated Shapes3D variants (A, D, E) with pairwise correlations between object color, object size and azimuth with the same correlation strength settings. Due to the definition of the regularizer we limit this study to the β-VAE models with the same 6 hyperparameters and use 5 random seeds, yielding 360 additional models. For the generation of pairs we study the case where the difference in the observation pairs is present in one random FoV. Whenever we sample the difference to be in one of the correlated FoV, its respective value in each pair is drawn from the probability distribution conditioned on the other correlated FoV. This means the difference in this factor is typically very small and depends on the correlation strength. Note that this procedure assures that constructed pairs are consistent with the observational data such that the correlation is never broken. Fig. 7 summarizes the weak supervision results when imposing correlations in object size and azimuth. We consistently observe much better disentangled models, often achieving perfect DCI score irrespective of correlations in the dataset. The latent spaces tend to strongly align their coordinates with the ground truth label axis. 
These results suggest that weak supervision can provide a strong inductive bias capable of finding the right factorization and resolving spurious correlations for datasets of unknown degree of correlation. As a prominent example, this is an issue in the fairness context, where real-world datasets often display unknown discriminatory correlations. Additional results on the other datasets can be found in Appendix B.2, including two additional scenarios where one has intervention capabilities to generate the pair. We consistently observe the same strong trends regarding disentangled correlations in all of the above studies using weak supervision." }, { "heading": "5 CONCLUSION", "text": "We have presented the first large-scale empirical study examining how modern disentanglement learners cope with correlated observational data. We find that existing methods fail to learn disentangled representations of correlated factors of variation, and moreover that standard disentanglement metrics are insufficient to reveal these pairwise entanglements. We discuss practical implications for downstream tasks like fair decision making. Finally, we demonstrate how to correct for these latent correlations via various weakly supervised training scenarios. Our results thus support the importance and usefulness of learning independent mechanisms rather than independent factors of variation (Schölkopf, 2019; Parascandolo et al., 2018; Suter et al., 2019; Goyal et al., 2019). Besides the simple correlations studied in this work, future work is needed to address the open question of whether these results extend to more complex nonlinear correlations and to settings where many more variables are correlated simultaneously." }, { "heading": "B ADDITIONAL RESULTS", "text": "B.1 SECTION 3
Shortcomings of existing metrics. Following recent studies, we evaluate the trained models with the help of a broad range of disentanglement metrics that aim at quantifying overall success by a single scalar measure. Perhaps surprisingly, as can be seen in Fig. 9, there is no clear trend among all implemented disentanglement scores w.r.t. correlation strength. The metrics have been evaluated by sampling from the correlated data distribution, although they do not differ substantially when evaluated on the uncorrelated distribution. We thus argue that commonly used methods are insufficient to provide insight into the learned latent space.
Latent structure and pairwise entanglement. Our hypothesis that the latent representations are less correlated if the correlation strength is weaker is supported by a model trained on Shapes3D (A) with weak correlation, shown in Fig. 10. Here the latent traversals do not mirror the major and minor axes of the correlated joint distribution. This conclusion is backed by the thresholds of the pairwise entanglement metrics for the correlated pair vs. the median of all other pairs across all datasets, either when computing them using the GBT feature importances or the mutual information. See Fig. 11 for these respective results. Another pairwise metric that tracks the correlation strength in our scenario is the unfairness score between the correlated pair of factors, shown for datasets A, B and C in Fig. 12.
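A rough stand-in for these pairwise scores; the unfairness score in the paper is a demographic-parity variant, for which the discretized mutual information below is only a simple proxy of our own choosing.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def latent_factor_mi(latents, factor, n_bins=20):
    """Mutual information between each (discretized) latent dimension and one
    ground-truth factor; high values on the dimensions encoding the sensitive
    factor signal the kind of leakage that drives the unfairness score."""
    scores = []
    for z in latents.T:
        edges = np.histogram_bin_edges(z, bins=n_bins)
        scores.append(mutual_info_score(factor, np.digitize(z, edges[1:-1])))
    return np.array(scores)
```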
Generalization Properties. In order to support our conclusion from these results that disentanglement methods can generalize towards unseen FoV configurations, we show in Fig. 13 latent traversals originating from the OOD point with smallest object size and largest azimuth. We observe that changes in the remaining factors reliably yield the expected reconstructions. Additionally, we are interested in where samples from correlated models are located with respect to the ground truth value of each correlated variable. For this, we visualize the latent spaces, with similar extrapolation and generalization capabilities, of four models from the two strongest correlation dataset variants of Shapes3D (D) and Shapes3D (E) in Fig. 14.
B.2 SECTION 4
Post-hoc alignment correction with few labels. In Fig. 15, we see the axis alignment of the correlated latent space after fast adaptation using linear regression on a model trained on Shapes3D (A). Fast adaptation with linear regression substitution fails in some settings: when no two latent dimensions encode the applied correlation in isolation from the other latent codes, or when the correlated variables do not have a unique natural ordering (e.g. color or categorical variables). Additionally, the functional form of the latent manifolds beyond the training distribution is unknown and in general expected to be nonlinear. We test the possibility of fast adaptation in this case using as substitution function a one-hidden-layer MLP classifier with 100 units on the correlated Shapes3D variants. Under this method, we sample the FoV from a uniform independent distribution. A small number of such samples could practically be labeled manually. Using only 1000 labeled data points for our fast adaptation method shows a significant reduction in disentanglement thresholds for the correlated pair (Fig. 16).
Alignment during training using weak supervision. Using the studied weakly supervised Ada-GVAE method with k = 1 from Locatello et al. (2020), we showed that weak supervision can provide a strong inductive bias capable of finding the right factorization and resolving spurious correlations for datasets of unknown degree of correlation. In addition to the results shown on Shapes3D (A) in the main paper, results across all three correlation variants in Shapes3D (A, D, E) are shown in Fig. 17, as well as some representative latent space visualizations that show strong axis alignment in Fig. 18. This study contains a total of 360 trained models.
In addition to the experiment from the main paper, where pairs are constructed solely from the correlated observational data, we want to study two scenarios where we have limited intervention capabilities on the FoV to generate training pairs. The resulting distribution of FoVs (still exhibiting correlations) in these pairs depends on whether the correlation between the two factors is due to a causal link or due to a common confounder; a sketch contrasting the two sampling schemes follows after this paragraph.
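A sketch contrasting the two interventional pair constructions of the scenarios below; `joint` is again the discrete correlated joint, and which factor plays cause or effect is an assumption encoded in the arguments.

```python
import numpy as np

def intervene_pair(joint, fov, factor_sizes, c1, c2, scenario, rng):
    """Scenario I-1 (common confounder): intervening breaks the correlation, so
    the changed factor is drawn uniformly. Scenario I-2 (c1 causes c2): c1 may
    be resampled freely, while a change in c2 must follow its conditional given c1."""
    fov2 = fov.copy()
    if scenario == "I-1":
        k = rng.integers(len(fov))
        fov2[k] = rng.integers(factor_sizes[k])          # uniform under intervention
    elif scenario == "I-2":
        k = rng.integers(len(fov))
        if k == c2:  # c2 is causally determined by c1
            p = joint[fov[c1], :] / joint[fov[c1], :].sum()
            fov2[c2] = rng.choice(joint.shape[1], p=p)
        else:        # including k == c1: any value is reachable by intervention
            fov2[k] = rng.integers(factor_sizes[k])
    return fov, fov2
```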
We consistently observe much better disentangled models, often achieving perfect DCI score irrespective of correlations in the data set. The latent spaces tend to strongly align their coordinates with the ground truth label axis. We chose 10 random seeds per configuration in this study, yielding 720 models in total.\nScenario I-2: Let us assume C1 causes C2 in our examples, which manifests as the studied linear correlations. Within this setting we cannot sample uniformly in C2 if we intervene (or “fix”) all factors except for this causal factor. Intervening on all factors but C1, however, allows us to sample any value in C1 as it is not causally affected by C2. To test the hypothesis that this constraint still allows for disentangling the correlation, we trained on Shapes3D (A) and sample pairs consistent with this causal model. Besides observing visually disentangled factors in the latent traversals, we show a summary of our results in Fig. 21 with the same significant improvements regarding disentangling the correlated FoVs. Besides the above correlation strengths, we additionally trained the same models using a very strong correlation of σ = 0.1, yielding 300 models trained in this study." } ]
2020
ON DISENTANGLED REPRESENTATIONS LEARNED FROM CORRELATED DATA
SP:3f9266d190e590b01625de888376769d59737d81
[ "This paper provides a generalization of AGD to constant sectional curvature spaces (or subsets of them), and proves the same global rates of convergence that hold in the Euclidean space. Additionally, they provide reductions for the bounded sectional curvature case. Their basic strategy involves the use of geodesic maps to accumulate local linear lower bounds, in a way that accounts for the geometric distortion incurred by the map.", "This paper considered the problem of minimizing (strongly and non-strongly) geodesically convex functions on hyperbolic and spherical manifolds, manifolds of constant curvature 1 and -1, respectively, and proposed accelerated algorithms for such problems. In particular, the author(s) showed the proposed algorithms enjoy global accelerated rates that match their Euclidean counterparts. A key to the main result is Lemma 2.2 which asserts a certain quasar convexity-type condition of the pull-back of the objective function to some Euclidean domain through a geodesic map. Based on this lemma, the main result follows from combining techniques for developing accelerated algorithms in Euclidean space, such as the approximate duality gap technique and a certain discretization scheme for continuous dynamics. Some reduction results, which obtain accelerated algorithms for the strongly convex case from the non-strongly convex case, and vice versa, are also presented." ]
We further the research on the acceleration phenomenon on Riemannian manifolds by introducing the first global first-order method that achieves the same rates as accelerated gradient descent in the Euclidean space for the optimization of smooth and geodesically convex (g-convex) or strongly g-convex functions defined on the hyperbolic space or a subset of the sphere, up to constants and log factors. To the best of our knowledge, this is the first method that is proved to achieve these rates globally on functions defined on a Riemannian manifold $\mathcal{M}$ other than the Euclidean space. Additionally, for any Riemannian manifold of bounded sectional curvature, we provide reductions from optimization methods for smooth and g-convex functions to methods for smooth and strongly g-convex functions and vice versa.
[]
[ { "authors": [ "Kwangjun Ahn", "Suvrit Sra" ], "title": "From Nesterov’s estimate sequence to riemannian acceleration", "venue": "arXiv preprint arXiv:2001.08876,", "year": 2020 }, { "authors": [ "Foivos Alimisis", "Antonio Orvieto", "Gary Bécigneul", "Aurelien Lucchi" ], "title": "A continuous-time perspective for modeling acceleration in riemannian optimization", "venue": "arXiv preprint arXiv:1910.10782,", "year": 2019 }, { "authors": [ "Foivos Alimisis", "Antonio Orvieto", "Gary Bécigneul", "Aurelien Lucchi" ], "title": "Practical accelerated optimization on riemannian manifolds", "venue": "arXiv preprint arXiv:2002.04144,", "year": 2020 }, { "authors": [ "Zeyuan Allen-Zhu" ], "title": "Katyusha: The first direct acceleration of stochastic gradient methods", "venue": "J. Mach. Learn. Res.,", "year": 2017 }, { "authors": [ "Zeyuan Allen-Zhu. Natasha" ], "title": "Faster non-convex optimization than SGD", "venue": "In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Zeyuan Allen-Zhu" ], "title": "Katyusha X: practical momentum method for stochastic sum-of-nonconvex optimization", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Zeyuan Allen Zhu", "Elad Hazan" ], "title": "Optimal black-box reductions between optimization objectives", "venue": "In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems", "year": 2016 }, { "authors": [ "Zeyuan Allen Zhu", "Lorenzo Orecchia" ], "title": "Nearly-linear time positive LP solver with faster convergence rate", "venue": "In Proceedings of the Forty-Seventh Annual ACM on Symposium on Theory of Computing,", "year": 2015 }, { "authors": [ "Zeyuan Allen Zhu", "Lorenzo Orecchia" ], "title": "Linear coupling: An ultimate unification of gradient and mirror descent", "venue": "In 8th Innovations in Theoretical Computer Science Conference,", "year": 2017 }, { "authors": [ "Zeyuan Allen Zhu", "Zheng Qu", "Peter Richtárik", "Yang Yuan" ], "title": "Even faster accelerated coordinate descent using non-uniform sampling", "venue": "In Proceedings of the 33nd International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Zeyuan Allen-Zhu", "Yuanzhi Li", "Rafael Mendes de Oliveira", "Avi Wigderson" ], "title": "Much faster algorithms for matrix scaling", "venue": "IEEE Annual Symposium on Foundations of Computer Science,", "year": 2017 }, { "authors": [ "Zeyuan Allen-Zhu", "Ankit Garg", "Yuanzhi Li", "Rafael Oliveira", "Avi Wigderson" ], "title": "Operator scaling via geodesically convex optimization, invariant theory and polynomial identity testing", "venue": "In Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing,", "year": 2018 }, { "authors": [ "Herbert Busemann", "Bhalchandra Phadke" ], "title": "A general version of Beltrami’s theorem in the large", "venue": "Pacific Journal of Mathematics,", "year": 1984 }, { "authors": [ "Léopold Cambier", "Pierre-Antoine Absil" ], "title": "Robust low-rank matrix completion by riemannian optimization", "venue": "SIAM J. Scientific Computing,", "year": 2016 }, { "authors": [ "Yair Carmon", "John C. 
Duchi", "Oliver Hinder", "Aaron Sidford" ], "title": "Convex until proven guilty\": Dimension-free acceleration of gradient descent on non-convex functions", "venue": "In Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Anoop Cherian", "Suvrit Sra" ], "title": "Riemannian dictionary learning and sparse coding for positive definite matrices", "venue": "IEEE Trans. Neural Networks Learn. Syst.,", "year": 2017 }, { "authors": [ "Michael Cohen", "Jelena Diakonikolas", "Lorenzo Orecchia" ], "title": "On acceleration with noise-corrupted gradients", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Chris Criscitiello", "Nicolas Boumal" ], "title": "Efficiently escaping saddle points on manifolds", "venue": "In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Chris Criscitiello", "Nicolas Boumal" ], "title": "An accelerated first-order method for non-convex optimization on manifolds", "venue": "arXiv preprint arXiv:2008.02252,", "year": 2020 }, { "authors": [ "Ashok Cutkosky", "Tamás Sarlós" ], "title": "Matrix-free preconditioning in online learning", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Glaydston de Carvalho Bento", "Orizon P. Ferreira", "Jefferson G. Melo" ], "title": "Iteration-complexity of gradient, subgradient and proximal point methods on riemannian manifolds", "venue": "J. Optim. Theory Appl.,", "year": 2017 }, { "authors": [ "Jelena Diakonikolas", "Michael I. Jordan" ], "title": "Generalized momentum-based methods: A hamiltonian perspective", "venue": "CoRR, abs/1906.00436,", "year": 2019 }, { "authors": [ "Jelena Diakonikolas", "Lorenzo Orecchia" ], "title": "Accelerated extra-gradient descent: A novel accelerated first-order method", "venue": "January 11-14,", "year": 2018 }, { "authors": [ "Jelena Diakonikolas", "Lorenzo Orecchia" ], "title": "The approximate duality gap technique: A unified theory of first-order methods", "venue": "SIAM Journal on Optimization,", "year": 2019 }, { "authors": [ "Alan Edelman", "Tomás A. Arias", "Steven Thomas Smith" ], "title": "The geometry of algorithms with orthogonality constraints", "venue": "SIAM J. Matrix Analysis Applications,", "year": 1998 }, { "authors": [ "OP Ferreira", "MS Louzeiro", "LF Prudente" ], "title": "Gradient method for optimization on riemannian manifolds with lower bounded curvature", "venue": "SIAM Journal on Optimization,", "year": 2019 }, { "authors": [ "Alexander Gasnikov", "Pavel E. Dvurechensky", "Eduard A. Gorbunov", "Evgeniya A. Vorontsova", "Daniil Selikhanovych", "César A. Uribe", "Bo Jiang", "Haoyue Wang", "Shuzhong Zhang", "Sébastien Bubeck", "Qijia Jiang", "Yin Tat Lee", "Yuanzhi Li", "Aaron Sidford" ], "title": "Near optimal methods for minimizing convex functions with lipschitz $p$-th derivatives", "venue": "In Conference on Learning Theory, COLT 2019,", "year": 2019 }, { "authors": [ "Matthieu Genicot", "Wen Huang", "Nickolay T. 
Trendafilov" ], "title": "Weakly correlated sparse components with nearly orthonormal loadings", "venue": "In Geometric Science of Information - Second International Conference,", "year": 2015 }, { "authors": [ "Marvin J Greenberg" ], "title": "Euclidean and non-Euclidean geometries: Development and history", "venue": null, "year": 1993 }, { "authors": [ "Karsten Grove", "Peter Petersen", "Silvio Levy" ], "title": "Comparison geometry, volume 30", "venue": null, "year": 1997 }, { "authors": [ "Sergey Guminov", "Alexander Gasnikov" ], "title": "Accelerated methods for alpha-weakly-quasi-convex problems", "venue": "arXiv preprint arXiv:1710.00797,", "year": 2017 }, { "authors": [ "Gennadij Heidel", "Volker Schulz" ], "title": "A riemannian trust-region method for low-rank tensor completion", "venue": "Numerical Lin. Alg. with Applic.,", "year": 2018 }, { "authors": [ "Oliver Hinder", "Aaron Sidford", "Nimit Sharad Sohoni" ], "title": "Near-optimal methods for minimizing star-convex functions and beyond", "venue": null, "year": 1906 }, { "authors": [ "Reshad Hosseini", "Suvrit Sra" ], "title": "Matrix manifold optimization for gaussian mixtures", "venue": "In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems", "year": 2015 }, { "authors": [ "Reshad Hosseini", "Suvrit Sra" ], "title": "An alternative to EM for gaussian mixture models: Batch and stochastic riemannian optimization", "venue": "CoRR, abs/1706.03267,", "year": 2017 }, { "authors": [ "Wen Huang", "Ke Wei" ], "title": "Extending FISTA to riemannian optimization for sparse pca", "venue": "arXiv preprint arXiv:1909.05485,", "year": 2019 }, { "authors": [ "Wen Huang", "Ke Wei" ], "title": "Riemannian proximal gradient methods", "venue": "arXiv preprint arXiv:1909.06065,", "year": 2019 }, { "authors": [ "Ian T Jolliffe", "Nickolay T Trendafilov", "Mudassir Uddin" ], "title": "A modified principal component technique based on the lasso", "venue": "Journal of computational and Graphical Statistics,", "year": 2003 }, { "authors": [ "Jürgen Jost", "Jèurgen Jost" ], "title": "Riemannian geometry and geometric analysis, volume 42005", "venue": null, "year": 2005 }, { "authors": [ "Hiroyuki Kasai", "Pratik Jawanpuria", "Bamdev Mishra" ], "title": "Riemannian adaptive stochastic gradient algorithms on matrix manifolds", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Masoud Badiei Khuzani", "Na Li" ], "title": "Stochastic primal-dual method on riemannian manifolds of bounded sectional curvature", "venue": "IEEE International Conference on Machine Learning and Applications, ICMLA 2017,", "year": 2017 }, { "authors": [ "E. Kreyszig" ], "title": "Differential Geometry. Differential Geometry", "venue": "Dover Publications,", "year": 1991 }, { "authors": [ "Walid Krichene", "Alexandre M. Bayen", "Peter L. Bartlett" ], "title": "Accelerated mirror descent in continuous and discrete time", "venue": "In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems", "year": 2015 }, { "authors": [ "Mario Lezcano-Casado" ], "title": "Trivializations for gradient-based optimization on manifolds. 
In Advances in Neural Information Processing Systems", "venue": "Annual Conference on Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Mario Lezcano-Casado" ], "title": "Curvature-dependant global convergence rates for optimization on manifolds of bounded geometry", "venue": "arXiv preprint arXiv:2008.02517,", "year": 2020 }, { "authors": [ "Mario Lezcano-Casado", "David Martínez-Rubio" ], "title": "Cheap orthogonal constraints in neural networks: A simple parametrization of the orthogonal and unitary group", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Yuanyuan Liu", "Fanhua Shang", "James Cheng", "Hong Cheng", "Licheng Jiao" ], "title": "Accelerated first-order methods for geodesically convex optimization on riemannian manifolds", "venue": "Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Bamdev Mishra", "Rodolphe Sepulchre" ], "title": "R3MC: A riemannian three-factor algorithm for low-rank matrix completion", "venue": "IEEE Conference on Decision and Control,", "year": 2014 }, { "authors": [ "Yurii Nesterov" ], "title": "A method of solving a convex programming problem with convergence rate o(1/k2)", "venue": "In Soviet Mathematics Doklady,", "year": 1983 }, { "authors": [ "Yurii Nesterov", "Alexander Gasnikov", "Sergey Guminov", "Pavel Dvurechensky" ], "title": "Primal-dual accelerated gradient descent with line search for convex and nonconvex optimization problems", "venue": "arXiv preprint arXiv:1809.05895,", "year": 2018 }, { "authors": [ "Hiroyuki Sato", "Hiroyuki Kasai", "Bamdev Mishra" ], "title": "Riemannian stochastic variance reduced gradient", "venue": "CoRR, abs/1702.05594,", "year": 2017 }, { "authors": [ "Hiroyuki Sato", "Hiroyuki Kasai", "Bamdev Mishra" ], "title": "Riemannian stochastic variance reduced gradient algorithm with retraction and vector transport", "venue": "SIAM Journal on Optimization,", "year": 2019 }, { "authors": [ "Weijie Su", "Stephen P. Boyd", "Emmanuel J. Candès" ], "title": "A differential equation for modeling nesterov’s accelerated gradient method: Theory and insights", "venue": "J. Mach. Learn. Res.,", "year": 2016 }, { "authors": [ "Ju Sun", "Qing Qu", "John Wright" ], "title": "Complete dictionary recovery over the sphere II: recovery by riemannian trust-region method", "venue": "IEEE Trans. Inf. Theory,", "year": 2017 }, { "authors": [ "Yue Sun", "Nicolas Flammarion", "Maryam Fazel" ], "title": "Escaping from saddle points on riemannian manifolds. In Advances in Neural Information Processing Systems", "venue": "Annual Conference on Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Ilya Sutskever", "James Martens", "George E. Dahl", "Geoffrey E. Hinton" ], "title": "On the importance of initialization and momentum in deep learning", "venue": "In Proceedings of the 30th International Conference on Machine Learning,", "year": 2013 }, { "authors": [ "Mingkui Tan", "Ivor W. Tsang", "Li Wang", "Bart Vandereycken", "Sinno Jialin Pan" ], "title": "Riemannian pursuit for big matrix recovery", "venue": "In Proceedings of the 31th International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "Nilesh Tripuraneni", "Nicolas Flammarion", "Francis Bach", "Michael I. 
Jordan" ], "title": "Averaging stochastic gradient descent on riemannian manifolds", "venue": "CoRR, abs/1802.09128,", "year": 2018 }, { "authors": [ "Bart Vandereycken" ], "title": "Low-rank matrix completion by riemannian optimization", "venue": "SIAM Journal on Optimization,", "year": 2013 }, { "authors": [ "Di Wang", "Satish Rao", "Michael W. Mahoney" ], "title": "Unified acceleration method for packing and covering problems via diameter reduction", "venue": "In 43rd International Colloquium on Automata, Languages, and Programming,", "year": 2016 }, { "authors": [ "X.M. Wang", "C. Li", "J.C. Yao" ], "title": "Subgradient projection algorithms for convex feasibility on riemannian manifolds with lower bounded curvatures", "venue": "J. Optim. Theory Appl.,", "year": 2015 }, { "authors": [ "Melanie Weber", "Suvrit Sra" ], "title": "Frank-wolfe methods for geodesically convex optimization with application to the matrix geometric mean", "venue": "CoRR, abs/1710.10770,", "year": 2017 }, { "authors": [ "Melanie Weber", "Suvrit Sra" ], "title": "Nonconvex stochastic optimization on manifolds via riemannian frank-wolfe methods", "venue": "CoRR, abs/1910.04194,", "year": 2019 }, { "authors": [ "Ke Wei", "Jian-Feng Cai", "Tony F Chan", "Shingyu Leung" ], "title": "Guarantees of riemannian optimization for low rank matrix completion", "venue": "arXiv preprint arXiv:1603.06610,", "year": 2016 }, { "authors": [ "Andre Wibisono", "Ashia C. Wilson", "Michael I. Jordan" ], "title": "A variational perspective on accelerated methods in optimization", "venue": "CoRR, abs/1603.04245,", "year": 2016 }, { "authors": [ "Ami Wiesel" ], "title": "Geodesic convexity and covariance estimation", "venue": "IEEE Trans. Signal Process.,", "year": 2012 }, { "authors": [ "Hongyi Zhang", "Suvrit Sra" ], "title": "First-order methods for geodesically convex optimization", "venue": "In Proceedings of the 29th Conference on Learning Theory, COLT 2016,", "year": 2016 }, { "authors": [ "Hongyi Zhang", "Suvrit Sra" ], "title": "An estimate sequence for geodesically convex optimization", "venue": "Conference On Learning Theory, COLT 2018,", "year": 2018 }, { "authors": [ "Hongyi Zhang", "Sashank J. Reddi", "Suvrit Sra" ], "title": "Riemannian SVRG: fast stochastic optimization on riemannian manifolds", "venue": "In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems", "year": 2016 }, { "authors": [ "Jingzhao Zhang", "Hongyi Zhang", "Suvrit Sra" ], "title": "R-SPIDER: A fast riemannian stochastic optimization algorithm with curvature independent rate", "venue": "CoRR, abs/1811.04194,", "year": 2018 }, { "authors": [ "Pan Zhou", "Xiao-Tong Yuan", "Jiashi Feng" ], "title": "Faster first-order methods for stochastic non-convex optimization on riemannian manifolds", "venue": "In The 22nd International Conference on Artificial Intelligence and Statistics,", "year": 2019 } ]
[ { "heading": null, "text": "We further research on the acceleration phenomenon on Riemannian manifolds by introducing the first global first-order method that achieves the same rates as accelerated gradient descent in the Euclidean space for the optimization of smooth and geodesically convex (g-convex) or strongly g-convex functions defined on the hyperbolic space or a subset of the sphere, up to constants and log factors. To the best of our knowledge, this is the first method that is proved to achieve these rates globally on functions defined on a Riemannian manifoldM other than the Euclidean space. Additionally, for any Riemannian manifold of bounded sectional curvature, we provide reductions from optimization methods for smooth and gconvex functions to methods for smooth and strongly g-convex functions and vice versa." }, { "heading": "1 INTRODUCTION", "text": "Acceleration in convex optimization is a phenomenon that has drawn lots of attention and has yielded many important results, since the renowned Accelerated Gradient Descent (AGD) method of Nesterov (1983). Having been proved successful for deep learning Sutskever et al. (2013), among other fields, there have been recent efforts to better understand this phenomenon Allen Zhu & Orecchia (2017); Diakonikolas & Orecchia (2019); Su et al. (2016); Wibisono et al. (2016). These have yielded numerous new results going beyond convexity or the standard oracle model, in a wide variety of settings Allen-Zhu (2017; 2018a;b); Allen Zhu & Orecchia (2015); Allen Zhu et al. (2016); Allen-Zhu et al. (2017); Carmon et al. (2017); Cohen et al. (2018); Cutkosky & Sarlós (2019); Diakonikolas & Jordan (2019); Diakonikolas & Orecchia (2018); Gasnikov et al. (2019); Wang et al. (2016). This surge of research that applies tools of convex optimization to models going beyond convexity has been fruitful. One of these models is the setting of geodesically convex Riemannian optimization. In this setting, the function to optimize is geodesically convex (g-convex), i.e. convex restricted to any geodesic (cf. Definition 1.1).\nRiemannian optimization, g-convex and non-g-convex alike, is an extensive area of research. In recent years there have been numerous efforts towards obtaining Riemannian optimization algorithms that share analogous properties to the more broadly studied Euclidean first-order methods: deterministic de Carvalho Bento et al. (2017); Wei et al. (2016); Zhang & Sra (2016), stochastic Hosseini & Sra (2017); Khuzani & Li (2017); Tripuraneni et al. (2018), variance-reduced Sato et al. (2017; 2019); Zhang et al. (2016), adaptive Kasai et al. (2019), saddle-point-escaping Criscitiello & Boumal (2019); Sun et al. (2019); Zhang et al. (2018); Zhou et al. (2019); Criscitiello & Boumal (2020), and projection-free methods Weber & Sra (2017; 2019), among others. Unsurprisingly, Riemannian optimization has found many applications in machine learning, including low-rank matrix completion Cambier & Absil (2016); Heidel & Schulz (2018); Mishra & Sepulchre (2014); Tan et al. (2014); Vandereycken (2013), dictionary learning Cherian & Sra (2017); Sun et al. (2017), optimization under orthogonality constraints Edelman et al. (1998), with applications to Recurrent Neural Networks Lezcano-Casado (2019); Lezcano-Casado & Martínez-Rubio (2019), robust covariance estimation in Gaussian distributions Wiesel (2012), Gaussian mixture models Hosseini & Sra (2015), operator scaling Allen-Zhu et al. (2018), and sparse principal component analysis Genicot et al. 
(2015); Huang & Wei (2019b); Jolliffe et al. (2003).\nHowever, the acceleration phenomenon, largely celebrated in the Euclidean space, is still not understood in Riemannian manifolds, although there has been some progress on this topic recently (cf. Related work). This poses the following question, which is the central subject of this paper:\nCan a Riemannian first-order method enjoy the same rates as AGD in the Euclidean space?\nIn this work, we provide an answer in the affirmative for functions defined on hyperbolic and spherical spaces, up to constants depending on the curvature and the initial distance to an optimum, and up to log factors. In particular, the main results of this work are the following.\nMain Results:\n• Full acceleration. We design algorithms that provably achieve the same rates of convergence as AGD in the Euclidean space, up to constants and log factors. More precisely, we obtain the rates Õ(L/ √ ε) and O∗( √ L/µ log(µ/ε)) when optimizing L-smooth functions that\nare, respectively, g-convex and µ-strongly g-convex, defined on the hyperbolic space or a subset of the sphere. The notation Õ(·) and O∗(·) omits log(L/ε) and log(L/µ) factors, respectively, and constants. Previous approaches only showed local results Zhang & Sra (2018) or obtained results with rates in between the ones obtainable by Riemannian Gradient Descent (RGD) and AGD Ahn & Sra (2020). Moreover, these previous works only apply to functions that are smooth and strongly g-convex and not to smooth functions that are only g-convex. As a proxy, we design an accelerated algorithm under a condition between of convexity and quasar-convexity in the constrained setting, which is of independent interest.\n• Reductions. We present two reductions for any Riemannian manifold of bounded sectional curvature. Given an optimization method for smooth and g-convex functions they provide a method for optimizing smooth and strongly g-convex functions, and vice versa. This allows to focus on designing methods for one set of assumptions only.\nIt is often the case that methods and key geometric inequalities that apply to manifolds with bounded sectional curvatures are obtained from the ones existing for the spaces of constant extremal sectional curvature Grove et al. (1997); Zhang & Sra (2016; 2018). Consequently, our contribution is relevant not only because we establish an algorithm achieving global acceleration on functions defined on a manifold other than the Euclidean space, but also because understanding the constant sectional curvature case is an important step towards understanding the more general case of obtaining algorithms that optimize g-convex functions, strongly or not, defined on manifolds of bounded sectional curvature.\nOur main technique for designing the accelerated method consists of mapping the function domain to a subset B of the Euclidean space via a geodesic map: a transformation that maps geodesics to geodesics. Given the gradient of a point x ∈M, which defines a lower bound on the function that is linear over the tangent space of x, we find a lower bound of the function that is linear over B, despite the map being non-conformal, deforming distances, and breaking convexity. This allows to aggregate the lower bounds easily. We believe that effective lower bound aggregation is key to achieving Riemannian acceleration and optimality. 
Using this strategy, we are able to provide an algorithm along the lines of the one in Diakonikolas & Orecchia (2018) to define a continuous method that we discretize using an approximate implementation of the implicit Euler method, obtaining a method achieving the same rates as the Euclidean AGD, up to constants and log factors. Our reductions take into account the deformations produced by the geometry to generalize existing Euclidean reductions Allen Zhu & Hazan (2016); Allen Zhu & Orecchia (2017).\nBasic Geometric Definitions. We recall basic definitions of Riemannian geometry that we use in this work. For a thorough introduction we refer to Petersen et al. (2006). A Riemannian manifold (M, g) is a real smooth manifold M equipped with a metric g, which is a smoothly varying inner product. For x ∈ M and any two vectors v, w ∈ T_xM in the tangent space of M at x, the inner product ⟨v, w⟩_x is g(v, w). For v ∈ T_xM, the norm is defined as usual: ‖v‖_x := √⟨v, v⟩_x. Typically, x is known given v or w, so we will just write ⟨v, w⟩ or ‖v‖ if x is clear from context. A geodesic is a curve γ : [0, 1] → M of unit speed that is locally distance minimizing. A uniquely geodesic space is a space such that for every two points there is one and only one geodesic that joins them. In such a case the exponential map Exp_x : T_xM → M and inverse exponential map Exp_x^{-1} : M → T_xM are well defined for every pair of points, and are as follows. Given x, y ∈ M, v ∈ T_xM, and a geodesic γ of length ‖v‖ such that γ(0) = x, γ(1) = y, γ′(0) = v/‖v‖, we have that Exp_x(v) = y and Exp_x^{-1}(y) = v. Note, however, that Exp_x(·) might not be defined for every v ∈ T_xM. We denote by d(x, y) the distance between x and y; its value is the same as ‖Exp_x^{-1}(y)‖. Given a 2-dimensional subspace V ⊆ T_xM, the sectional curvature at x with respect to V is defined as the Gauss curvature of the manifold Exp_x(V) at x.\nNotation. Let M be a manifold and let B ⊆ R^d. We denote by h : M → B a geodesic map Kreyszig (1991), which is a diffeomorphism such that the image and the inverse image of a geodesic is a geodesic. Usually, given an initial point x_0 of our algorithm, we will have h(x_0) = 0. Given a point x ∈ M we use the notation x̃ = h(x) and vice versa; any point in B will use a tilde. Given two points x, y ∈ M and a vector v ∈ T_xM in the tangent space at x, we use the formal notation ⟨v, y − x⟩ := −⟨v, x − y⟩ := ⟨v, Exp_x^{-1}(y)⟩. Given a vector v ∈ T_xM, we call ṽ ∈ R^d the vector of the same norm such that {x̃ + λ̃ṽ | λ̃ ∈ R_+, x̃ + λ̃ṽ ∈ B} = {h(Exp_x(λv)) | λ ∈ I ⊆ R_+}, for some interval I. Likewise, given x and a vector ṽ ∈ R^d, we define v ∈ T_xM. Let x* be any minimizer of F : M → R. We denote by R ≥ d(x_0, x*) a bound on the distance between x* and the initial point x_0. Note that this implies x* ∈ Exp_{x_0}(B̄(0, R)), for the closed ball B̄(0, R) ⊆ T_{x_0}M. Consequently, we will work with the manifold that is a subset of a d-dimensional complete and simply connected manifold of constant sectional curvature K, namely a subset of the hyperbolic space or sphere Petersen et al. (2006), defined as Exp_{x_0}(B̄(0, R)), with the inherited metric. Denote by H this manifold in the former case and S in the latter, and note that we are not making explicit the dependence on d, R and K. We want to work with the standard choice of uniquely geodesic manifolds Ahn & Sra (2020); Liu et al. (2017); Zhang & Sra (2016; 2018). Therefore, in the case that the manifold is S, we restrict ourselves to R < π/(2√K), so S is contained in an open hemisphere.
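To make these definitions concrete, the following minimal numerical sketch (an illustration added here, not code from the paper) implements Exp_x and Exp_x^{-1} on the unit sphere (K = 1) embedded in R^{d+1} and checks the identity d(x, y) = ‖Exp_x^{-1}(y)‖; the hyperbolic case (K = −1) is analogous in the hyperboloid model, with cos and sin replaced by cosh and sinh.

```python
# Sketch of Exp_x and Exp_x^{-1} on the unit sphere (K = 1); an added
# illustration under our own conventions, not the paper's code.
import numpy as np

def exp_map(x, v):
    """Exp_x(v) for x on the unit sphere and v in T_x S (so <x, v> = 0)."""
    n = np.linalg.norm(v)
    if n < 1e-12:
        return x
    return np.cos(n) * x + np.sin(n) * v / n

def log_map(x, y):
    """Exp_x^{-1}(y): tangent vector at x pointing towards y, of norm d(x, y)."""
    c = np.clip(x @ y, -1.0, 1.0)
    theta = np.arccos(c)                 # geodesic distance d(x, y)
    u = y - c * x                        # component of y orthogonal to x
    n = np.linalg.norm(u)
    return np.zeros_like(x) if n < 1e-12 else theta * u / n

rng = np.random.default_rng(0)
x = rng.normal(size=4); x /= np.linalg.norm(x)
v = rng.normal(size=4); v -= (v @ x) * x     # project onto T_x S
v *= 0.3 / np.linalg.norm(v)                 # stay inside the injectivity radius

y = exp_map(x, v)
d = np.arccos(np.clip(x @ y, -1.0, 1.0))
print(np.allclose(d, np.linalg.norm(log_map(x, y))))  # d(x, y) = ||Exp_x^{-1}(y)||
print(np.allclose(log_map(x, y), v))                  # Exp/Exp^{-1} round trip
```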
The big O notations Õ(·) and O∗(·) omit log(L/ε) and log(L/µ) factors, respectively, and constant factors depending on R and K.\nWe now define the main properties that will be assumed on the function F to be minimized.\nDefinition 1.1 (Geodesic Convexity and Smoothness). Let F : M → R be a differentiable function defined on a Riemannian manifold (M, g). Given L ≥ µ > 0, we say that F is L-smooth, and respectively µ-strongly g-convex, if for any two points x, y ∈ M, F satisfies\nF(y) ≤ F(x) + ⟨∇F(x), y − x⟩ + (L/2)·d(x, y)², resp. F(y) ≥ F(x) + ⟨∇F(x), y − x⟩ + (µ/2)·d(x, y)².\nWe say F is g-convex if the second inequality above, i.e. µ-strong g-convexity, is satisfied with µ = 0. Note that we have used the formal notation above for the subtraction of points in the inner product.\nComparison with Related Work. There are a number of works that study the problem of first-order acceleration in Riemannian manifolds of bounded sectional curvature. The first study is Liu et al. (2017). In this work, the authors develop an accelerated method with the same rates as AGD for both g-convex and strongly g-convex functions, provided that at each step a given nonlinear equation can be solved. No algorithm for solving this equation has been found and, in principle, it could be intractable or infeasible. In Alimisis et al. (2019) a continuous method analogous to the continuous approach to accelerated methods is presented, but it is not known whether an accelerated discretization of it exists. In Alimisis et al. (2020), an algorithm is presented that is claimed to enjoy an accelerated rate of convergence, but the guarantee fails once the function value drops below a potentially large constant that depends on the manifold and the smoothness constant. In Huang & Wei (2019a) an accelerated algorithm is presented, but it relies on strong geometric inequalities that are not proved to be satisfied. Zhang & Sra (2018) obtain a local algorithm that optimizes L-smooth and µ-strongly g-convex functions achieving the same rates as AGD in the Euclidean space, up to constants. That is, the initial point needs to start close to the optimum: O((µ/L)^{3/4}) close, to be precise. Their approach consists of adapting Nesterov's estimate sequence technique by keeping a quadratic on T_{x_t}M that induces on M a regularized lower bound on F(x*) via Exp_{x_t}(·). They aggregate the information yielded by the gradient into it, and use a geometric lemma to find a quadratic in T_{x_{t+1}}M whose induced function lower bounds the other one. Ahn & Sra (2020) generalize the previous algorithm and, by using similar ideas for the lower bound, adapt it to work globally, obtaining strictly better rates than RGD and recovering the local acceleration of the previous paper, but not achieving global rates comparable to the ones of AGD. In fact, they prove that their algorithm eventually decreases the function value at a rate close to AGD, but this can take as many iterations as the ones needed by RGD to minimize the function. In our work, we take a step back and focus on the constant sectional curvature case to provide a global algorithm that achieves the same rates as AGD, up to constants and log factors. It is common to characterize the properties of spaces of bounded sectional curvature by using the ones of the spaces of constant extremal sectional curvature Grove et al. (1997); Zhang & Sra (2016; 2018), which makes the study of the constant sectional curvature case critical to the development of fully accelerated algorithms in the general bounded sectional curvature case.
Additionally, our work studies g-convexity besides strong g-convexity.\nAnother related work is the approximate duality gap technique Diakonikolas & Orecchia (2019), which presents a unified view of the analysis of first-order methods for the optimization of convex functions defined in the Euclidean space. It defines a continuous duality gap and, by enforcing a natural invariant, obtains accelerated continuous dynamics and their discretizations for most classical first-order methods. A derived work Diakonikolas & Orecchia (2018) obtains acceleration in a fundamentally different way from previous acceleration approaches, namely by using an approximate implicit Euler method for the discretization of the accelerated dynamics. The convergence analysis of Theorem 2.4 is inspired by these two works. We will see in the sequel that, for our manifolds of interest, g-convexity is related to a model known in the literature as quasar-convexity or weak-quasi-convexity Guminov & Gasnikov (2017); Hinder et al. (2019); Nesterov et al. (2018)." }, { "heading": "2 ALGORITHM", "text": "We study the minimization problem min_{x∈M} F(x) with a gradient oracle, for a smooth function F : M → R that is g-convex or strongly g-convex. In this section, M refers to a manifold that can be H or S, i.e. the subset of the hyperbolic space or sphere Exp_{x_0}(B̄(0, R)), for an initial point x_0. For simplicity, we do not use subdifferentials, so we assume F : M → R is a differentiable function that is defined over the manifold of constant sectional curvature M′ := Exp_{x_0}(B(0, R′)), for an R′ > R, and we avoid writing F : M′ → R. We defer the proofs of the lemmas and theorems in this and the following sections to the supplementary material. We assume without loss of generality that the sectional curvature of M is K ∈ {1, −1}, since for any other value of K and any function F : M → R defined on such a manifold, we can reparametrize F by a rescaling, so that it is defined over a manifold of constant sectional curvature K ∈ {1, −1}. The parameters L, µ and R are rescaled accordingly as a function of K, cf. Remark C.1. We denote the special cosine by C_K(·), which is cos(·) if K = 1 and cosh(·) if K = −1. We define X = h(M) ⊆ B ⊆ R^d. We use classical geodesic maps for the manifolds that we consider: the gnomonic projection for S and the Beltrami-Klein projection for H Greenberg (1993). They map an open hemisphere and the hyperbolic space of curvature K ∈ {1, −1} to B = R^d and B = B(0, 1) ⊆ R^d, respectively. We will derive our results from the following characterization Greenberg (1993). Let x̃, ỹ ∈ B be two points. Recall that we denote x = h^{-1}(x̃), y = h^{-1}(ỹ) ∈ M. Then we have that d(x, y), the distance between x and y with the metric of M, satisfies\nC_K(d(x, y)) = (1 + K⟨x̃, ỹ⟩) / (√(1 + K‖x̃‖²) · √(1 + K‖ỹ‖²)).  (1)\nObserve that the expression is symmetric with respect to rotations. In particular, the symmetry implies that X is a closed ball of radius R̃, with C_K(R) = (1 + KR̃²)^{-1/2}.
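As a numerical sanity check on (1) (again an added sketch under our own parametrization of the gnomonic map, not the paper's code), one can lift points of B = R^d to the open upper hemisphere and compare the spherical distance with the right-hand side of (1); the Beltrami-Klein case works the same way with cosh and the hyperboloid model.

```python
# Verifying the characterization (1) for K = 1 via the gnomonic projection;
# an added sketch, not the paper's code.
import numpy as np

def lift(xt):
    """Inverse gnomonic map: x~ in R^d -> point on the open upper hemisphere."""
    p = np.append(xt, 1.0)
    return p / np.linalg.norm(p)          # the norm is sqrt(1 + ||x~||^2)

def rhs_of_1(xt, yt, K=1.0):
    """Right-hand side of (1)."""
    return (1.0 + K * (xt @ yt)) / (
        np.sqrt(1.0 + K * (xt @ xt)) * np.sqrt(1.0 + K * (yt @ yt)))

rng = np.random.default_rng(1)
xt, yt = rng.normal(size=3), rng.normal(size=3)          # two points of B
d = np.arccos(np.clip(lift(xt) @ lift(yt), -1.0, 1.0))   # spherical distance
print(np.allclose(np.cos(d), rhs_of_1(xt, yt)))          # C_K(d(x, y)) matches (1)
```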
Consider a point x ∈ M and the lower bound provided by the g-convexity assumption when computing ∇F(x). Dropping the µ term in the case of strong g-convexity, this bound is linear over T_xM. We would like our algorithm to aggregate effectively the lower bounds it computes during the course of the optimization. The deformations of the geometry make this a difficult task, despite the fact that we have a simple description of each individual lower bound.\nWe deal with this problem in the following way: our approach is to obtain a lower bound that is looser by a constant depending on R, and that is linear over B. In this way the aggregation becomes easier. Then, we are able to combine this lower bound with decreasing upper bounds in the fashion in which some other accelerated methods work in the Euclidean space Allen Zhu & Orecchia (2017); Diakonikolas & Orecchia (2018; 2019); Nesterov (1983). Alternatively, we can see the approach in this work as the constrained non-convex optimization problem of minimizing the function f : X → R, x̃ ↦ F(h^{-1}(x̃)):\nminimize f(x̃), for x̃ ∈ X.\nIn the rest of the section, we will focus on the g-convex case. For simplicity, instead of solving the strongly g-convex case directly in an analogous way by finding a lower bound that is quadratic over B, we rely on the reductions of Section 3 to obtain the accelerated algorithm in this case.\nThe following two lemmas show that finding the aforementioned linear lower bound is possible, and that it is defined as a function of ∇f(x̃). We first gauge the deformations caused by the geodesic map h. Distances are deformed, the map h is not conformal and, in spite of it being a geodesic map, the image of the geodesic Exp_x(λ∇F(x)) is not mapped into the image of the geodesic x̃ + λ̃∇f(x̃), i.e. the direction of the gradient changes. We are able to find the linear lower bound after bounding these deformations.\nLemma 2.1. Let x, y ∈ M be two different points, and in part b) different from x_0. Let α̃ be the angle ∠x̃_0x̃ỹ, formed by the vectors x̃_0 − x̃ and ỹ − x̃. Let α be the corresponding angle between the vectors Exp_x^{-1}(x_0) and Exp_x^{-1}(y). Assume without loss of generality that x̃ ∈ span{ẽ_1} and ∇f(x̃) ∈ span{ẽ_1, ẽ_2} for the canonical orthonormal basis {ẽ_i}_{i=1}^d. Let e_i ∈ T_xM be the unit vector such that h maps the image of the geodesic Exp_x(λe_i) to the image of the geodesic x̃ + λ̃ẽ_i, for i = 1, ..., d, and λ, λ̃ ≥ 0. Then, the following holds.\na) Distance deformation: K·C_K²(R) ≤ K·d(x, y)/‖x̃ − ỹ‖ ≤ K.\nb) Angle deformation: sin(α) = sin(α̃)·√((1 + K‖x̃‖²)/(1 + K‖x̃‖² sin²(α̃))) and cos(α) = cos(α̃)·√(1/(1 + K‖x̃‖² sin²(α̃))).\nc) Gradient deformation: ∇F(x) = (1 + K‖x̃‖²)·∇f(x̃)_1 e_1 + √(1 + K‖x̃‖²)·∇f(x̃)_2 e_2, and e_i ⊥ e_j for i ≠ j. Moreover, if v ∈ T_xM is a vector normal to ∇F(x), then ṽ is normal to ∇f(x̃).\nThe following lemma uses the deformations described in the previous one to obtain the linear lower bound on the function, given a gradient at a point x̃. Note that Lemma 2.1.c implies that we have ⟨∇f(x̃), ỹ − x̃⟩ = 0 if and only if ⟨∇F(x), y − x⟩ = 0. In the proof we lower bound, in general, linear functions defined on T_xM by linear functions in the Euclidean space B. This generality allows us to obtain a result with constants that depend only on R.\nLemma 2.2. Let F : M → R be a differentiable function and let f = F ∘ h^{-1}. Then, there are constants γ_n, γ_p ∈ (0, 1] depending on R such that for all x, y ∈ M satisfying ⟨∇f(x̃), ỹ − x̃⟩ ≠ 0 we have:\nγ_p ≤ ⟨∇F(x), y − x⟩ / ⟨∇f(x̃), ỹ − x̃⟩ ≤ 1/γ_n.  (2)\nIn particular, if F is g-convex we have:\nf(x̃) + (1/γ_n)·⟨∇f(x̃), ỹ − x̃⟩ ≤ f(ỹ) if ⟨∇f(x̃), ỹ − x̃⟩ ≤ 0,\nf(x̃) + γ_p·⟨∇f(x̃), ỹ − x̃⟩ ≤ f(ỹ) if ⟨∇f(x̃), ỹ − x̃⟩ ≥ 0.  (3)\nThe two inequalities in (3) constitute the linear lower bound. Only the first one is needed to bound f(x̃*) = F(x*). The first inequality applied to ỹ = x̃* defines a model known in the literature as quasar-convexity or weak-quasi-convexity Guminov & Gasnikov (2017); Hinder et al. (2019); Nesterov et al.
(2018), for which accelerated algorithms exist in the unconstrained case, provided smoothness is also satisfied. However, to the best of our knowledge, there is no known algorithm for solving the constrained case in an accelerated way. The condition in (3) is, trivially, a relaxation of convexity that is stronger than quasar-convexity. We will make use of (3) in order to obtain acceleration in the constrained setting, which is of independent interest. Recall that we need the constraint to guarantee bounded deformation due to the geometry. We also require smoothness of f. The following lemma shows that f is as smooth as F, up to a constant depending on R.\nLemma 2.3. Let F : M → R be an L-smooth function and f = F ∘ h^{-1}. Assume there is a point x* ∈ M such that ∇F(x*) = 0. Then f is O(L)-smooth.\nUsing the approximate duality gap technique Diakonikolas & Orecchia (2019) we obtain accelerated continuous dynamics for the optimization of the function f. Then we adapt AXGD to obtain an accelerated discretization. AXGD Diakonikolas & Orecchia (2018) is a method that is based on an implicit Euler discretization of continuous accelerated dynamics and is fundamentally different from AGD and from techniques such as Linear Coupling Allen Zhu & Orecchia (2017) or Nesterov's estimate sequence Nesterov (1983). The latter techniques use a balancing gradient step at each iteration, and our use of a looser lower bound complicates keeping such a gradient step within the constraints. We state the accelerated theorem and provide a sketch of the proof in Section 2.1.\nTheorem 2.4. Let Q ⊆ R^d be a convex set of diameter 2R. Let f : Q → R be an L̃-smooth function satisfying (3) with constants γ_n, γ_p ∈ (0, 1]. Assume there is a point x̃* ∈ Q such that ∇f(x̃*) = 0. Then, we can obtain an ε-minimizer of f using Õ(√(L̃/(γ_n²γ_pε))) queries to the gradient oracle of f.\nFinally, we have Riemannian acceleration as a direct consequence of Theorem 2.4, Lemma 2.2 and Lemma 2.3.\nTheorem 2.5 (g-Convex Acceleration). Let F : M → R be an L-smooth and g-convex function and assume there is a point x* ∈ M satisfying ∇F(x*) = 0. Algorithm 1 computes a point x_t ∈ M satisfying F(x_t) − F(x*) ≤ ε using Õ(√(L/ε)) queries to the gradient oracle.\nWe observe that if there is a geodesic map taking a manifold into a convex subset of the Euclidean space, then the manifold must necessarily have constant sectional curvature, cf. Beltrami's Theorem Busemann & Phadke (1984); Kreyszig (1991). This precludes a straightforward generalization of our method to the case of non-constant bounded sectional curvature.\nAlgorithm 1 Accelerated g-Convex Minimization\nInput: Smooth and g-convex function F : M → R, for M = H or M = S. Initial point x_0; constants L̃, γ_p, γ_n. Geodesic map h satisfying (1) and h(x_0) = 0. Bound on the distance to a minimum R ≥ d(x_0, x*). Accuracy ε and number of iterations t.\n1: X := h(Exp_{x_0}(B(0, R))) ⊆ B; f := F ∘ h^{-1} and ψ(x̃) := (1/2)‖x̃‖²\n2: z̃_0 ← ∇ψ(x̃_0); A_0 ← 0\n3: for i from 0 to t − 1 do\n4: a_{i+1} ← (i + 1)γ_n²γ_p/(2L̃)\n5: A_{i+1} ← A_i + a_{i+1}\n6: λ ← BinaryLineSearch(x̃_i, z̃_i, f, X, a_{i+1}, A_i, ε, L̃, γ_n, γ_p) (cf. Algorithm 2 in Appendix A)\n7: χ̃_i ← (1 − λ)x̃_i + λ∇ψ∗(z̃_i)\n8: ζ̃_i ← z̃_i − (a_{i+1}/γ_n)∇f(χ̃_i)\n9: x̃_{i+1} ← (1 − λ)x̃_i + λ∇ψ∗(ζ̃_i)  [∇ψ∗(p̃) = argmin_{z̃∈X} ‖z̃ − p̃‖ = Π_X(p̃)]\n10: z̃_{i+1} ← z̃_i − (a_{i+1}/γ_n)∇f(x̃_{i+1})\n11: end for\n12: return x_t."
}, { "heading": "2.1 SKETCH OF THE PROOF OF THEOREM 2.4.", "text": "Inspired by the approximate duality gap technique Diakonikolas & Orecchia (2019), let αt be an increasing function of time t, and denote At = ∫ t t0 dατ = ∫ t t0 α̇τdτ . We define a continuous method that keeps a solution x̃t, along with a differentiable upper bound Ut on f(xt) and a lower bound Lt on f(x̃∗). In our case f is differentiable so we can just take Ut = f(xt). The lower bound comes from\nf(x̃∗) ≥ ∫ t t0 f(x̃τ )dατ\nAt +\n∫ t t0 1 γn 〈∇f(x̃τ ), x̃∗ − x̃τ 〉dατ\nAt , (4)\nafter applying some desirable modifications, like regularization with a 1-strongly convex function ψ and removing the unknown x̃∗ by taking a minimum over X . Note (4) comes from averaging (3) for ỹ = x̃∗. Then, if we define the gap Gt = Ut − Lt and design a method that forces αtGt to be non-increasing, we can deduce f(xt)− f(x∗) ≤ Gt ≤ αt0Gt0/αt. By forcing ddt (αtGt) = 0, we naturally obtain the following continuous dynamics, where zt is a mirror point and ψ∗ is the Fenchel\ndual of ψ, cf. Definition A.2.\n˙̃zt = − 1\nγn α̇t∇f(x̃t); ˙̃xt =\n1 γn α̇t ∇ψ∗(z̃t)− x̃t αt ; z̃t0 = ∇ψ(x̃t0), x̃t0 ∈ X (5)\nWe note that except for the constant γn, these dynamics match the accelerated dynamics used in the optimization of convex functions Diakonikolas & Orecchia (2019; 2018); Krichene et al. (2015). The AXGD algorithm Diakonikolas & Orecchia (2018), designed for the accelerated optimization of convex functions, discretizes the latter dynamics following an approximate implementation of implicit Euler discretization. This has the advantage of not needing a gradient step per iteration to compensate for some positive discretization error. Note that in our case we must use (3) instead of convexity for a discretization. We are able to obtain the following discretization coming from an approximate implicit Euler discretization:{\nχ̃i = γ̂iAi\nAiγ̂i+ai+1/γn x̃i + ai+1/γn Aiγ̂i+ai+1/γn ∇ψ∗(z̃i); ζ̃i = z̃i − ai+1γn ∇f(χ̃i) x̃i+1 =\nγ̂iAi Aiγ̂i+ai+1/γn x̃i + ai+1/γn Aiγ̂i+ai+1/γn ∇ψ∗(ζ̃i); z̃i+1 = z̃i − ai+1γn ∇f(x̃i+1)\n(6)\nwhere γ̂i ∈ [γp, 1/γn] is a parameter, x̃0 ∈ X is an arbitrary point, z̃0 = ∇ψ(x̃0) and now αt is a discrete measure and α̇t is a weighted sum of Dirac delta functions α̇t = ∑∞ i=1 aiδ(t− (t0 + i− 1)). Compare (6) with the discretization in AXGD Diakonikolas & Orecchia (2018) that is equal to our discretization but with no γn or γ̂i. Or equivalently with γ̂i = 1/γn and with no γn for the mirror descent updates of ζ̃i and z̃i+1. However, not having convexity, in order to have per-iteration discretization error less than ε̂/AT , we require γ̂i to be such that x̃i+1 satisfies\nf(x̃i+1)− f(x̃i) ≤ γ̂i〈∇f(x̃i+1), x̃i+1 − x̃i〉+ ε̂, (7) where ε̂ is chosen so that the accumulated discretization error is < ε/2, after having performed the steps necessary to obtain an ε/2 minimizer. We would like to use (3) to find such a γ̂i but we need to take into account that we only know x̃i+1 a posteriori. Indeed, using (3) we conclude that setting γ̂i to 1/γn or γp then we either satisfy (7) or there is a point γ̂i ∈ (γp, 1/γn) for which 〈∇f(x̃i+1), x̃i+1 − x̃i〉 = 0, which satisfies the equation for ε̂ = 0. Then, using smoothness of f , existence of x∗ (that satisfies∇f(x∗) = 0), and boundedness of X we can guarantee that a binary search finds a point satisfying (7) in O(log(L̃i/γnε̂)) iterations. Each iteration of the binary search requires to run (6), that is, one step of the discretization. 
Computing the final discretization error, we obtain acceleration after choosing appropriate learning rates a_i. Algorithm 1 contains the pseudocode of this algorithm, along with the reduction of the problem from minimizing F to minimizing f. We chose ψ(x̃) := (1/2)‖x̃‖² as our strongly convex regularizer." }, { "heading": "3 REDUCTIONS", "text": "The construction of reductions proves to be very useful for facilitating the design of algorithms in different settings. Moreover, reductions are a helpful tool to infer new lower bounds without extra ad hoc analysis. We present two reductions. We will see in Corollary 3.2 and Example 3.4 that one can obtain fully accelerated methods to minimize smooth and strongly g-convex functions from methods for smooth and g-convex functions, and vice versa. These are generalizations of reductions designed to work in the Euclidean space Allen Zhu & Hazan (2016); Allen Zhu & Orecchia (2017). The reduction to strongly g-convex functions takes into account the effect of the deformation of the space on the strong convexity of the function F_y(x) = d(x, y)²/2, for x, y ∈ M. The reduction to g-convexity requires the rate of the algorithm that applies to g-convex functions to be proportional to the distance between the initial point and the optimum, d(x_0, x*). The proofs of the statements in this section can be found in the supplementary material. We will use Time_ns(·) and Time(·) to denote the time that the algorithms A_ns and A below require, respectively, to perform the tasks we define below.\nTheorem 3.1. Let M be a Riemannian manifold, let F : M → R be an L-smooth and µ-strongly g-convex function, and let x* be its minimizer. Let x_0 be a starting point such that d(x_0, x*) ≤ R. Suppose we have an algorithm A_ns to minimize F such that in time T = Time_ns(L, µ, R) it produces a point x̂_T satisfying F(x̂_T) − F(x*) ≤ µ·d(x_0, x*)²/4. Then we can compute an ε-minimizer of F in time O(Time_ns(L, µ, R)·log(R²µ/ε)).\nTheorem 3.1 implies that if we forget about the strong g-convexity of a function and treat it as if it were just g-convex, we can run in stages an algorithm designed for optimizing g-convex functions. The fact that the function is strongly g-convex is only used between stages, as the following corollary shows by making use of Algorithm 1.\nCorollary 3.2. We can compute an ε-minimizer of an L-smooth and µ-strongly g-convex function F : M → R in O∗(√(L/µ)·log(µ/ε)) queries to the gradient oracle, where M = S or M = H.\nWe note that in the strongly convex case, by decreasing the function value by a factor we can guarantee that we decrease the distance to x* by another factor, so we can periodically recenter the geodesic map to reduce the constants produced by the deformations of the geometry; see the proof of Corollary 3.2.
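A minimal sketch of the restart scheme behind Theorem 3.1 (an added illustration with a hypothetical solver interface, not the paper's code): each stage runs the g-convex method until the suboptimality is at most µ·d(x_0, x*)²/4, which by µ-strong g-convexity at least halves the squared distance to the optimum, so O(log(R²µ/ε)) stages suffice.

```python
# Restart scheme of Theorem 3.1; an added sketch. `solver_ns` is a
# hypothetical black box that, from x with d(x, x*) <= r, returns a point
# whose suboptimality is at most mu * d(x, x*)^2 / 4.
import math

def restarted_minimize(solver_ns, x0, R, mu, eps):
    x, r2 = x0, R ** 2
    stages = max(1, math.ceil(math.log2(max(mu * R ** 2 / eps, 2.0))))
    for _ in range(stages):                # O(log(R^2 mu / eps)) stages
        x = solver_ns(x, math.sqrt(r2))
        # mu-strong g-convexity: d(x, x*)^2 <= (2/mu) * (F(x) - F*) <= r2 / 2.
        r2 /= 2.0
        # One could also recenter the geodesic map at x here, reducing the
        # curvature-dependent constants (cf. the proof of Corollary 3.2).
    return x

# Toy Euclidean check with F(x) = (mu/2) * (x - c)^2:
mu, c = 0.5, 3.0
def solver_ns(x, r):
    # One gradient step with step size 1/mu is exact for this quadratic; a
    # real A_ns only needs to reach suboptimality mu * r^2 / 4.
    return x - (mu * (x - c)) / mu

print(abs(restarted_minimize(solver_ns, 0.0, 10.0, mu, 1e-6) - c))  # ~0
```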
Finally, we show the reverse reduction.\nTheorem 3.3. Let M be a Riemannian manifold of bounded sectional curvature, let F : M → R be an L-smooth and g-convex function, and assume there is a point x* ∈ M such that ∇F(x*) = 0. Let x_0 be a starting point such that d(x_0, x*) ≤ R and let ∆ satisfy F(x_0) − F(x*) ≤ ∆. Assume we have an algorithm A that, given an L-smooth and µ-strongly g-convex function F̂ : M → R with minimizer in Exp_{x_0}(B̄(0, R)) and any initial point x̂_0 ∈ M, produces a point x̂ ∈ Exp_{x_0}(B̄(0, R)) in time T̂ = Time(L, µ, M, R) satisfying F̂(x̂) − min_{x∈M} F̂(x) ≤ (F̂(x̂_0) − min_{x∈M} F̂(x))/4. Let T = ⌈log₂(∆/ε)/2⌉ + 1. Then, we can compute an ε-minimizer in time Σ_{t=0}^{T−1} Time(L + 2^{−t}∆K_R^−/R², 2^{−t}∆K_R^+/R², M, R), where K_R^+ and K_R^− are constants that depend on R and the bounds on the sectional curvature of M.\nExample 3.4. Applying the reduction of Theorem 3.3 to the algorithm in Corollary 3.2, we can optimize L-smooth and g-convex functions defined on H or S with a gradient oracle complexity of Õ(√(L/ε)).\nNote that this reduction cannot be applied to the locally accelerated algorithm in (Zhang & Sra, 2018), which we discussed in the related work section. The reduction runs in stages by adding decreasing µ_i-strongly convex regularizers until we reach µ_i = O(ε), and the local assumption required by the algorithm in (Zhang & Sra, 2018) on the closeness to the minimum cannot be guaranteed. In (Ahn & Sra, 2020), the authors give an unconstrained global algorithm whose rates are strictly better than those of RGD. The reduction could be applied to a constrained version of this algorithm to obtain a method for smooth and g-convex functions defined on manifolds of bounded sectional curvature whose rates are strictly better than those of RGD." }, { "heading": "4 CONCLUSION", "text": "In this work we proposed a first-order method with the same rates as AGD, for the optimization of smooth and g-convex or strongly g-convex functions defined on a manifold other than the Euclidean space, up to constants and log factors. We focused on the hyperbolic and spherical spaces, which have constant sectional curvature. The study of geometric properties for the constant sectional curvature case can usually be employed to conclude that a space of bounded sectional curvature satisfies a property that is in between the ones for the cases of constant extremal sectional curvature. Several previous algorithms have been developed for optimization in Riemannian manifolds of bounded sectional curvature by utilizing this philosophy, for instance Ahn & Sra (2020); Ferreira et al. (2019); Wang et al. (2015); Zhang & Sra (2016; 2018). In future work, we will attempt to use the techniques and insights developed here to give an algorithm with the same rates as AGD for manifolds of bounded sectional curvature.\nThe key technique of our algorithm is effective lower bound aggregation. Indeed, lower bound aggregation is the main hurdle to obtaining accelerated first-order methods on Riemannian manifolds. Whereas the process of obtaining effective decreasing upper bounds on the function works similarly to the Euclidean space (the same approach of locally minimizing the upper bound given by the smoothness assumption is used), obtaining adequate lower bounds proves to be a difficult task. We usually want a simple lower bound such that it, or a regularized version of it, can be easily optimized globally. We also want the lower bound to combine the knowledge that g-convexity or strong g-convexity provides at all the queried points, commonly through an average. These Riemannian convexity assumptions provide simple lower bounds, namely linear or quadratic ones, but each only with respect to the tangent space of the queried point. The deformations of the space complicate the aggregation of the lower bounds. Our work deals with this problem by finding appropriate lower bounds via the use of a geodesic map, and takes the deformations incurred into account to derive a fully accelerated algorithm. We also needed to deal with other technical problems.
Firstly, we needed a lower bound on the whole function and not only on F(x*), for which we had to construct two different linear lower bounds, obtaining a relaxation of convexity. Secondly, we had to use an implicit discretization of the accelerated continuous dynamics, since at least the vanilla application of the usual approaches like Linear Coupling Allen Zhu & Orecchia (2017) or Nesterov's estimate sequence Nesterov (1983), which can be seen as a forward Euler discretization of the accelerated dynamics combined with a balancing gradient step Diakonikolas & Orecchia (2019), did not work in our constrained case. We interpret the difficulty as arising from trying to keep the gradient step inside the constraints while being able to compensate for a lower bound that is looser by a constant factor." } ]
2020
null
SP:759f85692cb4edfe6521d013dbbb55e20a458a4b
[ "This paper examines the impact of forcing units in a CNN to be more or less “class-selective” – i.e. respond preferentially to one image class compared to another. The approach taken is to include a regularizer in the loss that directly penalizes or encourages class selectivity in individual units. They report that penalizing class selectivity at intermediate layers has little-to-no effect on classification performance, and in some cases mildly improves performance. They authors conclude that class selectivity is not an essential component of successful performance in CNNs, and that methods which use class selectivity to interpret CNNs should be approached with caution. ", "This paper asks the interesting question of whether you need individual neuron (or even population level) class selectivity at intermediate stages in order to have good classification performance. The authors introduce a regularization term to the loss that controls the amount of selectivity in the units of the network. They find that the selectivity of the units in standard networks can be reduced while maintaining classification performance. " ]
The properties of individual neurons are often analyzed in order to understand the biological and artificial neural networks in which they’re embedded. Class selectivity—typically defined as how different a neuron’s responses are across different classes of stimuli or data samples—is commonly used for this purpose. However, it remains an open question whether it is necessary and/or sufficient for deep neural networks (DNNs) to learn class selectivity in individual units. We investigated the causal impact of class selectivity on network function by directly regularizing for or against class selectivity. Using this regularizer to reduce class selectivity across units in convolutional neural networks increased test accuracy by over 2% in ResNet18 and 1% in ResNet50 trained on Tiny ImageNet. For ResNet20 trained on CIFAR10 we could reduce class selectivity by a factor of 2.5 with no impact on test accuracy, and reduce it nearly to zero with only a small (∼2%) drop in test accuracy. In contrast, regularizing to increase class selectivity significantly decreased test accuracy across all models and datasets. These results indicate that class selectivity in individual units is neither sufficient nor strictly necessary, and can even impair DNN performance. They also encourage caution when focusing on the properties of single units as representative of the mechanisms by which DNNs function.
[ { "affiliations": [], "name": "Matthew L. Leavitt" }, { "affiliations": [], "name": "Ari S. Morcos" } ]
[ { "authors": [ "Alessandro Achille", "Matteo Rovere", "Stefano Soatto" ], "title": "Critical Learning Periods in Deep Networks. September 2018", "venue": "URL https://openreview.net/forum?id=BkeStsCcKQ", "year": 2018 }, { "authors": [ "E.D. Adrian" ], "title": "The impulses produced by sensory nerve endings", "venue": "The Journal of Physiology,", "year": 1926 }, { "authors": [ "Rana Ali Amjad", "Kairen Liu", "Bernhard C. Geiger" ], "title": "Understanding Individual Neuron Importance Using Information Theory. April 2018", "venue": "URL https://arxiv.org/abs/1804.06679v3", "year": 2018 }, { "authors": [ "H B Barlow" ], "title": "Single Units and Sensation: A Neuron Doctrine for Perceptual Psychology? Perception, 1(4):371–394", "venue": "ISSN 0301-0066. doi: 10.1068/p010371. URL https:// doi.org/10.1068/p010371", "year": 1972 }, { "authors": [ "Anthony Bau", "Yonatan Belinkov", "Hassan Sajjad", "Nadir Durrani", "Fahim Dalvi", "James Glass" ], "title": "Identifying and Controlling Important Neurons in Neural Machine Translation", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "David Bau", "Bolei Zhou", "Aditya Khosla", "Aude Oliva", "Antonio Torralba" ], "title": "Network Dissection: Quantifying Interpretability of Deep Visual Representations", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "David Bau", "Jun-Yan Zhu", "Hendrik Strobelt", "Bolei Zhou", "Joshua B. Tenenbaum", "William T. Freeman", "Antonio Torralba" ], "title": "GAN Dissection: Visualizing and Understanding Generative Adversarial Networks", "venue": "In Proceedings of the International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "David Bau", "Jun-Yan Zhu", "Hendrik Strobelt", "Agata Lapedriza", "Bolei Zhou", "Antonio Torralba. Understanding the role of individual units in a deep neural network. Proceedings of the National Academy of Sciences", "September" ], "title": "ISSN 0027-8424, 1091-6490", "venue": "doi:", "year": 2020 }, { "authors": [ "Thomas M Cover" ], "title": "Elements of information theory", "venue": null, "year": 1999 }, { "authors": [ "Fahim Dalvi", "Nadir Durrani", "Hassan Sajjad", "Yonatan Belinkov", "Anthony Bau", "James Glass" ], "title": "What Is One Grain of Sand in the Desert? Analyzing Individual Neurons in Deep NLP Models", "venue": "Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Kedar Dhamdhere", "Mukund Sundararajan", "Qiqi Yan" ], "title": "How Important is a Neuron", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Jonathan Donnelly", "Adam Roegiest" ], "title": "On Interpretability and Feature Representations: An Analysis of the Sentiment Neuron", "venue": "Advances in Information Retrieval, Lecture Notes in Computer Science,", "year": 2019 }, { "authors": [ "Dumitru Erhan", "Yoshua Bengio", "Aaron C. Courville", "Pascal Vincent" ], "title": "Visualizing Higher-Layer Features of a Deep Network", "venue": null, "year": 2009 }, { "authors": [ "Ruth Fong", "Andrea Vedaldi" ], "title": "Net2Vec: Quantifying and Explaining How Concepts are Encoded by Filters in Deep Neural Networks", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Jonathan Frankle", "David J. Schwab", "Ari S. Morcos" ], "title": "The Early Phase of Neural Network Training. 
arXiv:2002.10365 [cs, stat], February 2020", "venue": "URL http://arxiv.org/abs/2002.10365", "year": 2002 }, { "authors": [ "Stefano Fusi", "Earl K. Miller", "Mattia Rigotti" ], "title": "Why neurons mix: high dimensionality for higher cognition", "venue": "Current opinion in neurobiology,", "year": 2016 }, { "authors": [ "Marianne Fyhn", "Sturla Molden", "Menno P. Witter", "Edvard I. Moser", "May-Britt Moser" ], "title": "Spatial Representation in the Entorhinal Cortex", "venue": "doi: 10.1126/science.1099901. URL https:// science.sciencemag.org/content/305/5688/1258. Publisher: American Association for the Advancement of Science Section: Research Article", "year": 2004 }, { "authors": [ "Ella Gale", "Ryan Blything", "Nicholas Martin", "Jeffrey S. Bowers", "Anh Nguyen" ], "title": "Selectivity metrics provide misleading estimates of the selectivity of single units in neural networks", "venue": "Proceedings of the 41th Annual Meeting of the Cognitive Science Society,", "year": 2019 }, { "authors": [ "Juan A. Gallego", "Matthew G. Perich", "Stephanie N. Naufel", "Christian Ethier", "Sara A. Solla", "Lee E. Miller" ], "title": "Cortical population activity within a preserved neural manifold underlies multiple motor behaviors", "venue": "Nature Communications,", "year": 2018 }, { "authors": [ "Ragnar Granit" ], "title": "Receptors and sensory perception. Receptors and sensory perception", "venue": null, "year": 1955 }, { "authors": [ "Guy Gur-Ari", "Daniel A. Roberts", "Ethan Dyer" ], "title": "Gradient Descent Happens in a Tiny Subspace", "venue": "URL http://arxiv.org/abs/1812.04754", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "David J. Heeger", "Wayne E. Mackey" ], "title": "Oscillatory recurrent gated neural integrator circuits (organics), a unifying theoretical framework for neural dynamics", "venue": "Proceedings of the National Academy of Sciences,", "year": 2019 }, { "authors": [ "Sara Hooker", "Dumitru Erhan", "Pieter-Jan Kindermans", "Been Kim" ], "title": "A Benchmark for Interpretability Methods in Deep Neural Networks", "venue": "Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "D.H. Hubel", "T.N. Wiesel" ], "title": "Receptive fields of single neurones in the cat’s striate cortex", "venue": "The Journal of Physiology,", "year": 1959 }, { "authors": [ "D.H. Hubel", "T.N. Wiesel" ], "title": "Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex", "venue": "The Journal of Physiology,", "year": 1962 }, { "authors": [ "David H. Hubel" ], "title": "Exploration of the primary visual cortex, 1955–78", "venue": "Nature,", "year": 1982 }, { "authors": [ "Michele N Insanally", "Ioana Carcea", "Rachel E Field", "Chris C Rodgers", "Brian DePasquale", "Kanaka Rajan", "Michael R DeWeese", "Badr F Albanna", "Robert C Froemke" ], "title": "Spike-timing-dependent ensemble encoding by non-classically responsive cortical neurons. eLife, 8:e42409, January 2019", "venue": "ISSN 2050-084X. doi: 10.7554/eLife.42409. URL https://doi.org/10.7554/ eLife.42409", "year": 2050 }, { "authors": [ "Yuta Kanda", "Kota S. 
Sasaki", "Izumi Ohzawa", "Hiroshi Tamura" ], "title": "Deleting object selective units in a fully-connected layer of deep convolutional networks improves classification performance", "venue": "[q-bio],", "year": 2020 }, { "authors": [ "E R Kandel", "J H Schwartz", "Jessica Chao" ], "title": "Principles of neural science", "venue": null, "year": 2000 }, { "authors": [ "Andrej Karpathy", "Justin Johnson", "Li Fei-Fei" ], "title": "Visualizing and Understanding Recurrent Networks", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In Yoshua Bengio and Yann LeCun, editors, 3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning Multiple Layers of Features from Tiny Images", "venue": "Technical report,", "year": 2009 }, { "authors": [ "Matthew L Leavitt", "Florian Pieper", "Adam J Sachs", "Julio C Martinez-Trujillo" ], "title": "Correlated variability modifies working memory fidelity in primate prefrontal neuronal ensembles", "venue": "Proceedings of the National Academy of Sciences of the United States of America,", "year": 1619 }, { "authors": [ "Mario Lezcano-Casado" ], "title": "Trivializations for gradient-based optimization on manifolds", "venue": "In Advances in Neural Information Processing Systems, NeurIPS,", "year": 2019 }, { "authors": [ "Yixuan Li", "Jason Yosinski", "Jeff Clune", "Hod Lipson", "John Hopcroft" ], "title": "Convergent learning: Do different neural networks learn the same representations", "venue": "Proceedings of the 1st International Workshop on Feature Extraction: Modern Questions and Challenges at NIPS 2015,", "year": 2015 }, { "authors": [ "Peter E. Lillian", "Richard Meyes", "Tobias Meisen" ], "title": "Ablation of a Robot’s Brain: Neural Networks Under a Knife", "venue": "URL https://arxiv.org/abs/1812.05687v2", "year": 2018 }, { "authors": [ "Lu Lu", "Yeonjong Shin", "Yanhui Su", "George Em Karniadakis" ], "title": "Dying ReLU and Initialization: Theory and Numerical Examples. arXiv:1903.06733 [cs, math, stat], November 2019", "venue": "URL http://arxiv.org/abs/1903.06733", "year": 1903 }, { "authors": [ "Andrew L. Maas", "Awni Y. Hannun", "Andrew Y. Ng" ], "title": "Rectifier nonlinearities improve neural network acoustic models", "venue": "In ICML Workshop on Deep Learning for Audio, Speech and Language Processing,", "year": 2013 }, { "authors": [ "Neil A Macmillan", "C Douglas Creelman" ], "title": "Detection theory: A user’s guide", "venue": "Psychology press,", "year": 2004 }, { "authors": [ "Richard Meyes", "Melanie Lu", "Constantin Waubert de Puiseau", "Tobias Meisen" ], "title": "Ablation Studies in Artificial Neural Networks. arXiv:1901.08644 [cs, q-bio], February 2019", "venue": "URL http:// arxiv.org/abs/1901.08644", "year": 1901 }, { "authors": [ "Sohie Lee Moody", "Steven P. Wise", "Giuseppe di Pellegrino", "David Zipser" ], "title": "A Model That Accounts for Activity in Primate Frontal Cortex during a Delayed Matching-to-Sample Task", "venue": "Journal of Neuroscience,", "year": 1998 }, { "authors": [ "Ari Morcos", "Maithra Raghu", "Samy Bengio" ], "title": "Insights on representational similarity in neural networks with canonical correlation", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Ari S. Morcos", "Christopher D. 
Harvey" ], "title": "History-dependent variability in population dynamics during evidence accumulation in cortex", "venue": "Nature Neuroscience,", "year": 2016 }, { "authors": [ "Ari S. Morcos", "David G.T. Barrett", "Neil C. Rabinowitz", "Matthew Botvinick" ], "title": "On the importance of single directions for generalization", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Seil Na", "Yo Joong Choe", "Dong-Hyun Lee", "Gunhee Kim" ], "title": "Discovery of Natural Language Concepts in Individual Units of CNNs", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Anh Nguyen", "Alexey Dosovitskiy", "Jason Yosinski", "Thomas Brox", "Jeff Clune" ], "title": "Synthesizing the preferred inputs for neurons in neural networks via deep generator networks", "venue": "Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "Ramon Nogueira", "Nicole E. Peltier", "Akiyuki Anzai", "Gregory C. DeAngelis", "Julio Martínez-Trujillo", "Rubén Moreno-Bote" ], "title": "The Effects of Population Tuning and Trial-by-Trial Variability on Information Encoding and Behavior", "venue": "The Journal of Neuroscience,", "year": 2020 }, { "authors": [ "J. O’Keefe", "J. Dostrovsky" ], "title": "The hippocampus as a spatial map. Preliminary evidence from unit activity in the freely-moving rat", "venue": "Brain Research,", "year": 1971 }, { "authors": [ "Chris Olah", "Alexander Mordvintsev", "Ludwig Schubert" ], "title": "Feature Visualization. Distill, 2(11):e7, November 2017", "venue": "ISSN 2476-0757. doi: 10.23915/distill.00007. URL https://distill.pub/", "year": 2017 }, { "authors": [ "Chris Olah", "Arvind Satyanarayan", "Ian Johnson", "Shan Carter", "Ludwig Schubert", "Katherine Ye", "Alexander Mordvintsev" ], "title": "The Building Blocks of Interpretability. Distill, 3(3):e10, March 2018", "venue": "ISSN 2476-0757. doi: 10.23915/distill.00010. URL https://distill.pub/2018/ building-blocks", "year": 2018 }, { "authors": [ "J Andrew Pruszynski", "Joel Zylberberg" ], "title": "The language of the brain: real-world neural population codes", "venue": "Current Opinion in Neurobiology,", "year": 2019 }, { "authors": [ "Alec Radford", "Rafal Jozefowicz", "Ilya Sutskever" ], "title": "Learning to Generate Reviews and Discovering Sentiment", "venue": "[cs],", "year": 2017 }, { "authors": [ "Ivet Rafegas", "Maria Vanrell", "Luis A. 
Alexandre", "Guillem Arias" ], "title": "Understanding trained CNNs by indexing neuron selectivity", "venue": "Pattern Recognition Letters, page S0167865519302909,", "year": 2019 }, { "authors": [ "Maithra Raghu", "Justin Gilmer", "Jason Yosinski", "Jascha Sohl-Dickstein" ], "title": "SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "David Raposo", "Matthew T Kaufman", "Anne K Churchland" ], "title": "A category-free neural population supports evolving demands during decision-making", "venue": "Nature Neuroscience,", "year": 2014 }, { "authors": [ "Shreya Saxena", "John P Cunningham" ], "title": "Towards the neural population doctrine", "venue": "Current Opinion in Neurobiology,", "year": 2019 }, { "authors": [ "Krishna V Shenoy", "Maneesh Sahani", "Mark M Churchland" ], "title": "Cortical control of arm movements: a dynamical systems perspective", "venue": "Annual Review of Neuroscience,", "year": 2013 }, { "authors": [ "Charles S. Sherrington" ], "title": "The integrative action of the nervous system. The integrative action of the nervous system", "venue": null, "year": 1906 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "In Yoshua Bengio and Yann LeCun, editors, 3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Karen Simonyan", "Andrea Vedaldi", "Andrew Zisserman" ], "title": "Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps", "venue": "[cs],", "year": 2014 }, { "authors": [ "Stephen M. Smith", "Thomas E. Nichols", "Diego Vidaurre", "Anderson M. Winkler", "Timothy E.J. Behrens", "Matthew F. Glasser", "Kamil Ugurbil", "Deanna M. Barch", "David C. Van Essen", "Karla L. Miller" ], "title": "A positive-negative mode of population covariation links brain connectivity, demographics and behavior", "venue": "Nature Neuroscience,", "year": 2015 }, { "authors": [ "Harold Stanislaw", "Natasha Todorov" ], "title": "Calculation of signal detection theory measures", "venue": "Behavior Research Methods, Instruments,", "year": 1999 }, { "authors": [ "David Sussillo", "Mark M Churchland", "Matthew T Kaufman", "Krishna V Shenoy" ], "title": "A neural network that finds a naturalistic solution for the production of muscle activity", "venue": "Nature Neuroscience,", "year": 2015 }, { "authors": [ "Jumpei Ukita" ], "title": "Causal importance of orientation selectivity for generalization in image recognition", "venue": "URL https://openreview.net/forum?id=Bkx_Dj09tQ", "year": 2018 }, { "authors": [ "tero", "Charles R. Harris", "Anne M. Archibald", "Antônio H. Ribeiro", "Fabian Pedregosa", "Paul van" ], "title": "Mulbregt, and SciPy 1 0 Contributors. SciPy 1.0–Fundamental Algorithms for Scientific Computing in Python. arXiv:1907.10121 [physics], July 2019", "venue": "URL http://arxiv.org/abs/ 1907.10121", "year": 1907 }, { "authors": [ "T.N. Wiesel" ], "title": "Postnatal development of the visual cortex and the influence of environment", "venue": "Nature,", "year": 1982 }, { "authors": [ "Jason Yosinski", "Jeff Clune", "Anh Nguyen", "Thomas Fuchs", "Hod Lipson" ], "title": "Understanding Neural Networks Through Deep Visualization", "venue": "In ICML Workshop on Visualization for Deep Learning,", "year": 2015 }, { "authors": [ "Matthew D. 
Zeiler", "Rob Fergus" ], "title": "Visualizing and Understanding Convolutional Networks", "venue": "Computer Vision – ECCV", "year": 2014 }, { "authors": [ "B. Zhou", "D. Bau", "A. Oliva", "A. Torralba" ], "title": "Interpreting Deep Visual Representations via Network Dissection", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2019 }, { "authors": [ "Bolei Zhou", "Aditya Khosla", "Agata Lapedriza", "Aude Oliva", "Antonio Torralba" ], "title": "Object Detectors Emerge in Deep Scene CNNs", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Bolei Zhou", "Yiyou Sun", "David Bau", "Antonio Torralba" ], "title": "Revisiting the Importance of Individual Units in CNNs via Ablation", "venue": "[cs],", "year": 2018 }, { "authors": [ "Joel Zylberberg" ], "title": "The role of untuned neurons in sensory information coding. bioRxiv, page 134379, May 2018. doi: 10.1101/134379", "venue": "URL https://www.biorxiv.org/content/10.1101/ 134379v6", "year": 2018 }, { "authors": [ "Joel Zylberberg", "Jon Cafaro", "Maxwell H Turner", "Eric Shea-Brown", "Fred Rieke" ], "title": "DirectionSelective Circuits Shape Noise to Ensure a Precise Population", "venue": "Code. Neuron,", "year": 2016 }, { "authors": [ "Figure A" ], "title": "Removing the dead units in ResNet20 makes the relationship between regularization and selectivity in ResNet20 more consistent at large regularization scales (see Appendix A8). The presence of dead units is not unexpected, as units with the ReLU activation function are known to suffer from the \"dying ReLU problem\"(Lu et al., 2019): If, during training, a weight update causes a unit to cease activating in response", "venue": null, "year": 2019 }, { "authors": [ "Zhou" ], "title": "2019) used N = 100. We chose to use the number of samples per class in the test set data and thus the largest possible sample size. This yielded N = 1000 for CIFAR10 and N = 50 for Tiny Imagenet", "venue": null, "year": 2019 }, { "authors": [ "Morcos" ], "title": "2018b) did not observe a clear relationship between mutual information, class selectivity, and individual unit importance (as measured by impact of ablation), we hesitate to make strong conclusions about the role of mutual information in our selectivity regularizer. While there are additional class selectivity metrics that we could have used to further assess the effect of our regularizer, many of them are based on relating the activity of a neuron to the accuracy", "venue": null, "year": 2018 }, { "authors": [ "Gale" ], "title": "2019) and class correlation Li et al", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Our ability to understand deep learning systems lags considerably behind our ability to obtain practical outcomes with them. A breadth of approaches have been developed in attempts to better understand deep learning systems and render them more comprehensible to humans (Yosinski et al., 2015; Bau et al., 2017; Olah et al., 2018; Hooker et al., 2019). Many of these approaches examine the properties of single neurons and treat them as representative of the networks in which they’re embedded (Erhan et al., 2009; Zeiler and Fergus, 2014; Karpathy et al., 2016; Amjad et al., 2018; Lillian et al., 2018; Dhamdhere et al., 2019; Olah et al., 2020).\nThe selectivity of individual units (i.e. the variability in a neuron’s responses across data classes or dimensions) is one property that has been of particular interest to researchers trying to better understand deep neural networks (DNNs) (Zhou et al., 2015; Olah et al., 2017; Morcos et al., 2018b; Zhou et al., 2018; Meyes et al., 2019; Na et al., 2019; Zhou et al., 2019; Rafegas et al., 2019; Bau et al., 2020). This focus on individual neurons makes intuitive sense, as the tractable, semantic nature of selectivity is extremely alluring; some measure of selectivity in individual units is often provided as an explanation of \"what\" a network is \"doing\". One notable study highlighted a neuron selective for sentiment in an LSTM network trained on a word prediction task (Radford et al., 2017). Another attributed visualizable, semantic features to the activity of individual neurons across GoogLeNet trained on ImageNet (Olah et al., 2017). Both of these examples influenced many subsequent studies, demonstrating the widespread, intuitive appeal of \"selectivity\" (Amjad et al., 2018; Meyes et al., 2019; Morcos et al., 2018b; Zhou et al., 2015; 2018; Bau et al., 2017; Karpathy et al., 2016; Na et al., 2019; Radford et al., 2017; Rafegas et al., 2019; Morcos et al., 2018b; Olah et al., 2017; 2018; 2020).\n∗Work performed as part of the Facebook AI Residency\nFinding intuitive ways of representing the workings of DNNs is essential for making them understandable and accountable, but we must ensure that our approaches are based on meaningful properties of the system. Recent studies have begun to address this issue by investigating the relationships between selectivity and measures of network function such as generalization and robustness to perturbation (Morcos et al., 2018b; Zhou et al., 2018; Dalvi et al., 2019). Selectivity has also been used as the basis for targeted modulation of neural network function through individual units (Bau et al., 2019a;b).\nHowever there is also growing evidence from experiments in both deep learning (Fong and Vedaldi, 2018; Morcos et al., 2018b; Gale et al., 2019; Donnelly and Roegiest, 2019) and neuroscience (Leavitt et al., 2017; Zylberberg, 2018; Insanally et al., 2019) that single unit selectivity may not be as important as once thought. Previous studies examining the functional role of selectivity in DNNs have often measured how selectivity mediates the effects of ablating single units, or used indirect, correlational approaches that modulate selectivity indirectly (e.g. batch norm) (Morcos et al., 2018b; Zhou et al., 2018; Lillian et al., 2018; Meyes et al., 2019; Kanda et al., 2020). 
But single unit ablation in trained networks has two critical limitations: it cannot address whether the presence of selectivity is beneficial, nor whether networks need to learn selectivity to function properly. It can only address the effect of removing a neuron from a network whose training process assumed the presence of that neuron. And even then, the observed effect might be misleading. For example, a property that is critical to network function may be replicated across multiple neurons. This redundancy means that ablating any one of these neurons would show little effect, and could thus lead to the erroneous conclusion that the examined property has little impact on network function.\nWe were motivated by these issues to pursue a series of experiments investigating the causal importance of class selectivity in artificial neural networks. To do so, we introduced a term to the loss function that allows us to directly regularize for or against class selectivity, giving us a single knob to control class selectivity in the network. The selectivity regularizer sidesteps the limitations of single unit ablation and other indirect techniques, allowing us to conduct a series of experiments evaluating the causal impact of class selectivity on DNN performance. Our findings are as follows:\n• Performance can be improved by reducing class selectivity, suggesting that naturally-learned levels of class selectivity can be detrimental. Reducing class selectivity could improve test accuracy by over 2% in ResNet18 and 1% in ResNet50 trained on Tiny ImageNet.\n• Even when class selectivity isn’t detrimental to network function, it remains largely unnecessary. We reduced the mean class selectivity of units in ResNet20 trained on CIFAR10 by a factor of ∼2.5 with no impact on test accuracy, and by a factor of ∼20—nearly to a mean of 0—with only a 2% change in test accuracy.\n• Our regularizer does not simply cause networks to preserve class-selectivity by rotating it off of unit-aligned axes (i.e. by distributing selectivity linearly across units), but rather seems to suppress selectivity more generally, even when optimizing for high-selectivity basis sets . This demonstrates the viability of low-selectivity representations distributed across units.\n• We show that regularizing to increase class selectivity, even by small amounts, has significant negative effects on performance. Trained networks seem to be perched precariously at a performance cliff with regard to class selectivity. These results indicate that the levels of class selectivity learned by individual units in the absence of explicit regularization are at the limit of what will impair the network.\nOur findings collectively demonstrate that class selectivity in individual units is neither necessary nor sufficient for convolutional neural networks (CNNs) to perform image classification tasks, and in some cases can actually be detrimental. This alludes to the possibility of class selectivity regularization as a technique for improving CNN performance. More generally, our results encourage caution when focusing on the properties of single units as representative of the mechanisms by which CNNs function, and emphasize the importance of analyses that examine properties across neurons (i.e. distributed representations). Most importantly, our results are a reminder to verify that the properties we do focus on are actually relevant to CNN function." 
}, { "heading": "2 RELATED WORK", "text": "" }, { "heading": "2.1 SELECTIVITY IN DEEP LEARNING", "text": "Examining some form of selectivity in individual units constitutes the bedrock of many approaches to understanding DNNs. Sometimes the goal is simply to visualize selectivity, which has been pursued\nusing a breadth of methods. These include identifying the input sample(s) (e.g. images) or sample subregions that maximally activate a given neuron (Zhou et al., 2015; Rafegas et al., 2019), and numerous optimization-based techniques for generating samples that maximize unit activations (Erhan et al., 2009; Zeiler and Fergus, 2014; Simonyan et al., 2014; Yosinski et al., 2015; Nguyen et al., 2016; Olah et al., 2017; 2018). While the different methods for quantifying single unit selectivity are often conceptually quite similar (measuring how variable are a neuron’s responses across different classes of data samples), they have been applied across a broad range of contexts (Amjad et al., 2018; Meyes et al., 2019; Morcos et al., 2018b; Zhou et al., 2015; 2018; Bau et al., 2017; Karpathy et al., 2016; Na et al., 2019; Radford et al., 2017; Rafegas et al., 2019). For example, Bau et al. (2017) quantified single unit selectivity for \"concepts\" (as annotated by humans) in networks trained for object and scene recognition. Olah et al. (2018; 2020) have pursued a research program examining single unit selectivity as a building block for understanding DNNs. And single units in models trained to solve natural language processing tasks have been found to exhibit selectivity for syntactical and semantic features (Karpathy et al., 2016; Na et al., 2019), of which the \"sentiment-selective neuron\" reported by Radford et al. (2017) is a particularly recognized example.\nThe relationship between individual unit selectivity and various measures of DNN performance have been examined in prior studies, but the conclusions have not been concordant. Morcos et al. (2018b), using single unit ablation and other techniques, found that a network’s test set generalization is negatively correlated (or uncorrelated) with the class selectivity of its units, a finding replicated by Kanda et al. (2020). In contrast, though Amjad et al. (2018) confirmed these results for single unit ablation, they also performed cumulative ablation analyses which suggested that selectivity is beneficial, suggesting that redundancy across units may make it difficult to interpret single unit ablation studies.\nIn a follow-up study, Zhou et al. (2018) found that ablating class-selective units impairs classification accuracy for specific classes (though interestingly, not always the same class the unit was selective for), but a compensatory increase in accuracy for other classes can often leave overall accuracy unaffected. Ukita (2018) found that orientation selectivity in individual units is correlated with generalization performance in convolutional neural networks (CNNs), and that ablating highly orientation-selective units impairs classification accuracy more than ablating units with low orientation-selectivity. But while orientation selectivity and class selectivity can both be considered types of feature selectivity, orientation selectivity is far less abstract and focuses on specific properties of the image (e.g., oriented edges) rather than semantically meaningful concepts and classes. Nevertheless, this study still demonstrates the importance of some types of selectivity.\nResults are also variable for models trained on NLP tasks. 
Dalvi et al. (2019) found that ablating units selective for linguistic features causes greater performance deficits than ablating less-selective units, while Donnelly and Roegiest (2019) found that ablating the "sentiment neuron" of Radford et al. (2017) has equivocal effects on performance. These findings seem challenging to reconcile.\nAll of these studies examining class selectivity in single units are hamstrung by their reliance on single unit ablation, which could account for their conflicting results. As discussed earlier, single unit ablation can only address whether class selectivity affects performance in trained networks, and not whether individual units need to learn class selectivity for optimal network function. And even then, the conclusions obtained from single neuron ablation analyses can be misleading due to redundancy across units (Amjad et al., 2018; Meyes et al., 2019)." }, { "heading": "2.2 SELECTIVITY IN NEUROSCIENCE", "text": "Measuring the responses of single neurons to a relevant set of stimuli has been the canonical first-order approach for understanding the nervous system (Sherrington, 1906; Adrian, 1926; Granit, 1955; Hubel and Wiesel, 1959; Barlow, 1972; Kandel et al., 2000); its application has yielded multiple Nobel Prizes (Hubel and Wiesel, 1959; 1962; Hubel, 1982; Wiesel, 1982; O'Keefe and Dostrovsky, 1971; Fyhn et al., 2004). But recent experimental findings have raised doubts about the necessity of selectivity for high-fidelity representations in neuronal populations (Leavitt et al., 2017; Insanally et al., 2019; Zylberberg, 2018), and neuroscience research seems to be moving beyond characterizing neural systems at the level of single neurons, towards population-level phenomena (Shenoy et al., 2013; Raposo et al., 2014; Fusi et al., 2016; Morcos and Harvey, 2016; Pruszynski and Zylberberg, 2019; Heeger and Mackey, 2019; Saxena and Cunningham, 2019).\nSingle unit selectivity-based approaches are ubiquitous in attempts to understand artificial and biological neural systems, but growing evidence has led to questions about the importance of focusing on selectivity and its role in DNN function. These factors, combined with the limitations of prior approaches, lead to the question: is class selectivity necessary and/or sufficient for DNN function?" }, { "heading": "3 APPROACH", "text": "Networks naturally seem to learn solutions that result in class-selective individual units (Zhou et al., 2015; Olah et al., 2017; Morcos et al., 2018b; Zhou et al., 2018; Meyes et al., 2019; Na et al., 2019; Zhou et al., 2019; Rafegas et al., 2019; Amjad et al., 2018; Bau et al., 2017; Karpathy et al., 2016; Radford et al., 2017; Olah et al., 2018; 2020). We examined whether learning class-selective representations in individual units is actually necessary for networks to function properly. Motivated by the limitations of single unit ablation techniques and the indirectness of using batch norm or dropout to modulate class selectivity (e.g. Morcos et al. (2018b); Zhou et al. (2018); Lillian et al. (2018); Meyes et al. (2019)), we developed an alternative approach for examining the necessity of class selectivity for network performance. By adding a term to the loss function that serves as a regularizer to suppress (or increase) class selectivity, we demonstrate that it is possible to directly modulate the amount of class selectivity in all units in aggregate.
We then used this approach as the basis for a series of experiments in which we modulated levels of class selectivity across individual units and measured the resulting effects on the network. Critically, the selectivity regularizer sidesteps the limitations of single unit ablation-based approaches, allowing us to answer otherwise-inaccessible questions such as whether single units actually need to learn class selectivity, and whether increased levels of class selectivity are beneficial.\nUnless otherwise noted: all experimental results were derived from the test set with the parameters from the epoch that achieved the highest validation set accuracy over the training epochs; 20 replicates with different random seeds were run for each hyperparameter set; error bars and shaded regions denote bootstrapped 95% confidence intervals; selectivity regularization was not applied to the output (logits), nor was the output included in any of our analyses, because by definition the output must be class selective in a classification task. Selectivity regularization was only applied to intermediate (hidden) layers with non-linearities." }, { "heading": "3.1 MODELS AND DATASETS", "text": "Our experiments were performed on ResNet18 and ResNet50 (He et al., 2016) trained on Tiny ImageNet (Fei-Fei et al., 2015), and ResNet20 (He et al., 2016) and a VGG16-like network (Simonyan and Zisserman, 2015), both trained on CIFAR10 (Krizhevsky, 2009). Additional details about hyperparameters, data, training, and software are in Appendix A.1. We focus on ResNet18 trained on Tiny ImageNet in the main text, but results were qualitatively similar across models and datasets except where noted." }, { "heading": "3.2 DEFINING CLASS SELECTIVITY", "text": "There is a breadth of approaches for quantifying class selectivity in individual units (Moody et al., 1998; Zhou et al., 2015; Li et al., 2015; Zhou et al., 2018; Gale et al., 2019). We chose the neuroscience-inspired approach of Morcos et al. (2018b) because it is similar to many widely-used metrics, easy to compute, and most importantly, differentiable (the utility of this is addressed in the next section). We also confirmed the efficacy of our regularizer on a different, non-differentiable selectivity metric (see Appendix A.13). For a single convolutional feature map (which we refer to as a "unit"), we computed the mean activation across elements of the feature map in response to a single sample, after the non-linearity. Then the class-conditional mean activation (i.e. the mean activation for each class) was calculated across all samples in the test set, and the class selectivity index (SI) was calculated as follows:\n$$\mathrm{SI} = \frac{\mu_{max} - \mu_{-max}}{\mu_{max} + \mu_{-max} + \epsilon} \qquad (1)$$\nwhere µmax is the largest class-conditional mean activation, µ−max is the mean response to the remaining (i.e. non-µmax) classes, and ε is a small value to prevent division by zero (we used 10−7) in the case of a dead unit. The selectivity index can range from 0 to 1. A unit with identical average activity for all classes would have a selectivity of 0, and a unit that only responded to a single class would have a selectivity of 1.\nAs Morcos et al. (2018b) note, this selectivity index is not a perfect measure of information content in single units. For example, a unit with some information about many classes would have a low selectivity index. But it achieves the goal of identifying units that are class-selective in a similarly intuitive way as prior studies (Zhou et al., 2018), while also being differentiable with respect to the model parameters.
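\nTo make the metric concrete, here is a minimal PyTorch sketch of a differentiable per-unit class selectivity index for one layer. This is an illustrative reconstruction rather than the authors' released code; the tensor names and the helper itself are our assumptions.
```python
import torch

def class_selectivity_index(acts, labels, num_classes, eps=1e-7):
    """Class selectivity index (Eq. 1) for every unit in one layer.

    acts:   (num_samples, num_units) post-nonlinearity activations, already
            averaged over the spatial dimensions of each feature map.
    labels: (num_samples,) integer class labels.
    Returns a (num_units,) tensor of selectivity indices in [0, 1].
    """
    # Class-conditional mean activation for every (class, unit) pair.
    class_means = torch.stack(
        [acts[labels == c].mean(dim=0) for c in range(num_classes)]
    )  # shape: (num_classes, num_units)
    mu_max, _ = class_means.max(dim=0)  # mean response to the preferred class
    mu_not_max = (class_means.sum(dim=0) - mu_max) / (num_classes - 1)
    return (mu_max - mu_not_max) / (mu_max + mu_not_max + eps)
```
Because every operation above is differentiable with respect to the activations, the per-unit indices can be averaged within and then across layers (Eq. 3 in the next section) and added directly to the objective, e.g. loss = F.cross_entropy(logits, labels) - alpha * mu_si.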
" }, { "heading": "3.3 A SINGLE KNOB TO CONTROL CLASS SELECTIVITY", "text": "Because the class selectivity index is differentiable, we can insert it into the loss function, allowing us to directly regularize for or against class selectivity. Our loss function, which we seek to minimize, thus takes the following form:\n$$loss = -\sum_{c=1}^{C} y_c \cdot \log(\hat{y}_c) - \alpha\mu_{SI} \qquad (2)$$\nThe left-hand term in the loss function is the traditional cross-entropy between the softmax of the output units and the true class labels, where c is the class index, C is the number of classes, yc is the true class label, and ŷc is the predicted class probability. We refer to the right-hand component of the loss function, −αµSI, as the class selectivity regularizer (or regularizer, for brevity). The regularizer consists of two terms: the selectivity term,\n$$\mu_{SI} = \frac{1}{L}\sum_{l=1}^{L}\frac{1}{U}\sum_{u=1}^{U} SI_u \qquad (3)$$\nwhere l is a convolutional layer, L is the number of layers, u is a unit (i.e. feature map), U is the number of units in a given layer, and SIu is the class selectivity index of unit u. The selectivity term of the regularizer is obtained by computing the selectivity index for each unit in a layer, then computing the mean selectivity index across units within each layer, then computing the mean selectivity index across layers. Computing the mean within layers before computing the mean across layers (as compared to computing the mean across all units in the network) mitigates the biases induced by the larger numbers of units in deeper layers. The remaining term in the regularizer is α, the regularizer scale. The sign of α determines whether class selectivity is promoted or discouraged: negative values of α discourage class selectivity in individual units, while positive values promote it. The magnitude of α controls the contribution of the selectivity term to the overall loss. α thus serves as a single knob with which we can modulate class selectivity across all units in the network in aggregate. During training, the class selectivity index was computed on each minibatch (for regularization); for the results presented here, the class selectivity index was computed across the entire test set. We also tried restricting regularization to the first or final three layers (Appendix A.15), and warming up the class selectivity regularization over the initial training epochs (Appendix A.16), all of which yielded qualitatively similar results." }, { "heading": "4 RESULTS", "text": "" }, { "heading": "4.1 TEST ACCURACY IS IMPROVED OR UNAFFECTED BY REDUCING CLASS SELECTIVITY", "text": "Prior research has yielded equivocal results regarding the importance of class selectivity in individual units. We sidestepped the limitations of previous approaches by regularizing against selectivity directly in the loss function through the addition of the selectivity term (see Approach 3.3), giving us a knob with which to causally manipulate class selectivity. We first verified that the regularizer works as intended (Figure A1). Indeed, class selectivity across units in a network decreases as α becomes more negative. We also confirmed that our class selectivity regularizer has similar effects when measured using a different class selectivity metric and mutual information (see Appendix A.13), and when regularizing to control d′—a measure of class discriminability—in individual units (Appendix A.14).
The consistency of our observations across metrics of selectivity indicates that our results are not unique to the metric used in our regularizer. The regularizer thus allows us to examine the causal impact of class selectivity on test accuracy.\nRegularizing against class selectivity could yield three possible outcomes: If the previously-reported anti-correlation between selectivity and generalization is causal, then test accuracy should increase. But if class selectivity is necessary for high-fidelity class representations, then we should observe a decrease in test accuracy. Finally, if class selectivity is an emergent phenomenon and/or irrelevant to network performance, test accuracy should remain unchanged.\nSurprisingly, we observed that reducing selectivity significantly improves test accuracy in ResNet18 trained on Tiny ImageNet for all examined values of α ∈ [−2.5, −0.1] (Figure 1; p < 0.01, Bonferroni-corrected t-test). Test accuracy increases with the magnitude of α, reaching a maximum at α = −1.0 (test accuracy at α−1.0 = 53.60 ± 0.13, α0 (i.e. no regularization) = 51.57 ± 0.18), at which point there is a 1.6x reduction in class selectivity (mean class selectivity at α−1.0 = 0.22 ± 0.0009, α0 = 0.35 ± 0.0007). Test accuracy then begins to decline; at α−3.0 test accuracy is statistically indistinct from α0, despite a 3x decrease in class selectivity (mean class selectivity at α−3.0 = 0.12 ± 0.0007, α0 = 0.35 ± 0.0007). Further reducing class selectivity beyond α = −3.5 (mean class selectivity = 0.10 ± 0.0007) has increasingly detrimental effects on test accuracy. These results show that the amount of class selectivity naturally learned by a network (i.e. the amount learned in the absence of explicit regularization) can actually constrain the network's performance.\nResNet20 trained on CIFAR10 also learned superfluous class selectivity. Although reducing class selectivity does not improve performance, it causes minimal detriment, except at extreme regularization scales (α ≤ −30; Figure A2). Increasing the magnitude of α decreases mean class selectivity across the network, with little impact on test accuracy until mean class selectivity reaches 0.003 ± 0.0002 at α−30 (Figure A1d). Reducing class selectivity only begins to have a statistically significant effect on performance at α−1.0 (Figure A2a), at which point mean class selectivity across the network has decreased from 0.22 ± 0.002 at α0 (i.e. no regularization) to 0.07 ± 0.0013 at α−1.0—a factor of more than 3 (Figure A2c; p = 0.03, Bonferroni-corrected t-test). This implies that ResNet20 learns more than three times the amount of class selectivity required for maximum test accuracy.\nWe observed qualitatively similar results for VGG16 (see Appendix A.12). Although the difference is significant at α = −0.1 (p = 0.004, Bonferroni-corrected t-test), it is possible to reduce mean class selectivity by a factor of 5 with only a 0.5% decrease in test accuracy, and by a factor of 10 with only a ∼1% drop in test accuracy. These differences may be due to VGG16's naturally higher levels of class selectivity (see Figure A16 for comparisons between VGG16 and ResNet20). We also observed qualitatively similar results when using our regularization approach to decrease d′, a measure of class discriminability, in ResNet20 trained on CIFAR10 and ResNet18 trained on Tiny ImageNet (Appendix A.14). Furthermore, we also find that regularizing to reduce class selectivity improves test accuracy in ResNet50 trained on Tiny ImageNet (Appendix A.18).
Together, these results demonstrate that class selectivity in individual units is largely unnecessary for optimal performance in CNNs trained on image classification tasks." }, { "heading": "4.2 DOES SELECTIVITY SHIFT TO A DIFFERENT BASIS SET?", "text": "We were able to reduce mean class selectivity in all examined networks by a factor of at least three with minimal negative impact on test accuracy (∼1%, at worst, for VGG16). However, one trivial solution for reducing class selectivity is for the network to "hide" it from the regularizer by rotating it off of unit-aligned axes or performing some other linear transformation. In this scenario the selectivity in individual units would be reduced, but remain accessible through linear combinations of activity across units. In order to test this possibility, we used CCA (see Appendix A.3), which is invariant to rotation and other invertible affine transformations, to compare the representations in regularized (i.e. low-selectivity) networks to the representations in unregularized networks.\nWe first established a meaningful baseline for comparison by computing the CCA distances between each pair of 20 replicate networks for a given value of α (we refer to this set of distances as ρ(αr, αr)). If regularizing against class selectivity causes the network to move selectivity off-axis, the CCA distances between regularized and unregularized networks—which we term ρ(αr, α0)—should be similar to ρ(αr, αr). Alternatively, if class selectivity is suppressed via some non-affine transformation of the representation, ρ(αr, α0) should exceed ρ(αr, αr).\nOur analyses confirm the latter hypothesis: we find that ρ(αr, α0) significantly exceeds ρ(αr, αr) for all values of α except α = −0.1 in ResNet18 trained on Tiny ImageNet (Figure 2; p < 1.3 × 10−5, paired t-test). The effect is even more striking in ResNet20 trained on CIFAR10; all tested values of α are significant (Figure A3; p < 5 × 10−6, paired t-test). Furthermore, the size of the effect is proportional to α in both models; larger α values yield representations that are more dissimilar to unregularized representations. These results support the conclusion that our regularizer doesn't just cause class selectivity to be rotated off of unit-aligned axes, but also suppresses it.\nAs an additional control to ensure that our regularizer did not simply shift class selectivity to off-axis directions in activation space, we calculated an upper bound on the amount of class selectivity that could be recovered by finding the linear projection of unit activations that maximizes class selectivity (see Appendix A.7 for methodological details). For both ResNet18 trained on Tiny ImageNet (Figure 2c) and ResNet20 trained on CIFAR10 (Figure A4b), the amount of class selectivity in the optimized projection decreases as a function of increasing |α|, indicating that regularizing against class selectivity does not simply rotate the selectivity off-axis. Interestingly, the upper bound on class selectivity is very similar across regularization scales in the final two convolutional layers in both models (Figure A4a; A4c), indicating that immediate proximity to the logits (output) may mitigate the effect of class selectivity regularization.
While we also found that the amount of class selectivity in the optimized projection is consistently higher than the observed axis-aligned class selectivity, we consider this to be an expected result, as the optimized projection represents an upper bound on the amount of class selectivity that could be recovered from the models' representations. However, the decreasing upper bound as a function of increasing |α| indicates that our class selectivity regularizer decreases selectivity across all basis sets, and not just along unit-aligned axes." }, { "heading": "4.3 INCREASED CLASS SELECTIVITY CONSIDERED HARMFUL", "text": "We have demonstrated that class selectivity can be significantly reduced with minimal impact on test accuracy. However, we only examined the effects of reducing selectivity. What are the effects of increasing selectivity? We examined this question by regularizing for class selectivity, instead of against it. This is achieved quite easily, as it requires only a change in the sign of α. We first confirmed that changing the sign of the scale term in the loss function causes the intended effect of increasing class selectivity in individual units (see Appendix A.8).\nDespite class selectivity not being strictly necessary for high performance, its ubiquity across biological and artificial neural networks leads us to suspect it may still be sufficient. We thus expect that increasing it would either improve test accuracy or yield no effect. For the same reason, we would consider it unexpected if increasing selectivity impairs test accuracy.\nSurprisingly, we observe the latter outcome: increasing class selectivity negatively impacts network performance in ResNet18 trained on Tiny ImageNet (Figure 3a). Scaling the regularization has an immediate effect: test accuracy declines significantly even at the smallest tested value of α (p ≤ 6 × 10−5 for all α, Bonferroni-corrected t-test) and falls catastrophically to ∼25% by α = 5.0. The effect proceeds even more dramatically in ResNet20 trained on CIFAR10 (Figure A6a). Note that we observed a correlation between the strength of regularization and the presence of dead units in ResNet20 (but not ResNet18); however, further analyses ruled this out as an explanation for the decline in test accuracy (see Appendix A.10).\nOne solution to generate a very high selectivity index is if a unit is silent for the vast majority of inputs and has low activations for the remaining inputs. If this were the case, we would expect that regularizing to increase selectivity would cause units to be silent for the majority of inputs. However, we found that the majority of units were active for ≥80% of inputs even at α = 0.7, after significant performance deficits have emerged in both ResNet18 and ResNet20 (Appendix A.11). These findings rule out sparsity as a potential explanation for our results. It is also possible that regularizing to increase class selectivity could discourage individual neurons from changing their preferred class during training, even if changing their preferred class would improve performance. If regularizing to increase class selectivity did indeed lock units in to their initial preferred class, this could impose a constraint on performance. We tested for this possibility in two ways (Appendix A.16): by examining the statistics of units' changes in preferred class during training (Figures A28, A29), and slowly warming up class selectivity regularization over the initial training epochs (Figures A30, A31).
Approximately 100% of units across all examined models and regularization scales change their preferred class at least once during training (Figure A28), and the relationship between class selectivity regularization and the number of changes in a unit's preferred class over training is inconsistent (Figure A29). Furthermore, warming up the regularization has qualitatively similar effects to using a constant α (Figure A31). None of these analyses indicate that an inability to change preferred classes can fully explain the class selectivity-induced test accuracy impairment (see Appendix A.16 for additional details).\nThe effects of regularizing to increase class selectivity are qualitatively similar for VGG16 (see Appendix A.12) and ResNet50 (Appendix A.18); we observed across all models that increasing class selectivity beyond the levels that are learned naturally (i.e. without regularization, α = 0) impairs network performance.\nRecapitulation We directly compare the effects of increasing vs. decreasing class selectivity in Figure 4 and Appendix A.17. The effects diverge immediately at |α| = 0.1, and suppressing class selectivity yields a 6% increase in test accuracy relative to increasing class selectivity by |α| = 2.0." }, { "heading": "5 DISCUSSION", "text": "We examined the causal role of class selectivity in CNN performance by adding a term to the loss function that allows us to directly manipulate class selectivity across all neurons in the network. We found that class selectivity is not strictly necessary for networks to function, and that reducing it can even improve test accuracy. In ResNet18 trained on Tiny ImageNet, reducing class selectivity by 1.6× improved test accuracy by over 2%. In ResNet20 trained on CIFAR10, we could reduce the mean class selectivity of units in a network by a factor of ∼2.5 with no impact on test accuracy, and by a factor of ∼20—nearly to a mean of 0—with only a 2% change in test accuracy. We confirmed that our regularizer seems to suppress class selectivity, and not simply cause the network to rotate it off of unit-aligned axes. We also found that regularizing a network to increase class selectivity in individual units has negative effects on performance. These results resolve questions about class selectivity that remained inaccessible to previous approaches: class selectivity in individual units is neither necessary nor sufficient for—and can sometimes even constrain—CNN performance.\nOne caveat to our results is that they are limited to CNNs trained to perform image classification. It's possible that our findings are due to idiosyncrasies of benchmark datasets, and wouldn't generalize to more naturalistic datasets and tasks. Given that class selectivity is ubiquitous across DNNs trained on different tasks and datasets, future work should examine how broadly our results generalize, and the viability of class selectivity regularization as a general-purpose tool to improve DNN performance.\nThe presence of non-selective units in a network trained to perform image classification could appear puzzling. It may be difficult to envision how non-selective units could shape representations in a manner that is useful for a classification task. One possibility is that the class-conditional joint distribution of activations across units facilitates readout. Put another way, the correlations between units' activations can help separate the distributions of activations for different classes.
Indeed, there is evidence that correlated variability between neurons can facilitate information readout in the brain (Zylberberg et al., 2016; Leavitt et al., 2017; Zylberberg, 2018; Nogueira et al., 2020).\nWe know that class selectivity in individual units naturally emerges over the course of learning. The single unit ablation studies show that class selectivity can have an effect on the performance of trained networks. And while it is possible that networks trained with selectivity regularization learn different solutions from networks trained without it, our results show that learning class-selective representations in individual units is not strictly necessary for networks to function properly. This finding naturally leads to a compelling question: if class selectivity is unnecessary, why does it emerge?\nOur results also make a broader point about the potential pitfalls of focusing on the properties of single units when trying to understand DNNs, emphasizing instead the importance of analyses that focus on distributed representations. While we consider it essential to find tractable, intuitive approaches for understanding complex systems, it's critical to empirically verify that these approaches actually reflect functionally relevant properties of the system being examined." }, { "heading": "ACKNOWLEDGEMENTS", "text": "We would like to thank Tatiana Likhomanenko, Tiffany Cai, Eric Mintun, Janice Lan, Mike Rabbat, Sergey Edunov, Yuandong Tian, and Lyndon Duong for their productive scrutiny and insightful feedback." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 MODELS, TRAINING, DATASETS, AND SOFTWARE", "text": "Our experiments were performed on ResNet18 and ResNet50 (He et al., 2016) trained on Tiny ImageNet (Fei-Fei et al., 2015), and ResNet20 (He et al. (2016); code modified from Idelbayev (2020)) and a VGG16-like network (Simonyan and Zisserman, 2015), both trained on CIFAR10 (Krizhevsky, 2009). All models were trained using stochastic gradient descent (SGD) with momentum = 0.9 and weight decay = 0.0001.\nThe maxpool layer after the first batchnorm layer (see He et al. (2016)) was removed because of the smaller size of Tiny ImageNet images compared to standard ImageNet images (64x64 vs. 256x256, respectively). ResNet18 was trained for 90 epochs with a minibatch size of 4096 samples and a learning rate of 0.1, multiplied (annealed) by 0.1 at epochs 35, 50, 65, and 80. ResNet50 was trained identically, except with a batch size of 1400 samples. Tiny ImageNet (Fei-Fei et al., 2015) consists of 500 training images and 50 validation images for each of its 200 classes. We used the validation set for testing and created a new validation set by taking 50 images per class from the training set, selected randomly for each training run.\nThe VGG16-like network is identical to the batch norm VGG16 in Simonyan and Zisserman (2015), except the final two fully-connected layers of 4096 units each were replaced with a single 512-unit layer. ResNet20 and VGG16 were trained for 200 epochs using a minibatch size of 256 samples. ResNet20 was trained with a learning rate of 0.1 and VGG16 with a learning rate of 0.01, both multiplied by 0.1 at epochs 100 and 150. We split the 50k CIFAR10 training samples into a 45k sample training set and a 5k validation set, similar to our approach with Tiny ImageNet.
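\nFor concreteness, the optimizer and learning-rate schedule described above for the ResNet18/Tiny ImageNet configuration might be set up as in the sketch below. This is illustrative rather than the authors' released code; the maxpool removal and data pipeline are omitted.
```python
import torch
from torchvision.models import resnet18

model = resnet18(num_classes=200)  # Tiny ImageNet has 200 classes

optimizer = torch.optim.SGD(
    model.parameters(),
    lr=0.1,            # initial learning rate
    momentum=0.9,
    weight_decay=1e-4,
)
# Learning rate is multiplied by 0.1 at epochs 35, 50, 65, and 80.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[35, 50, 65, 80], gamma=0.1
)

for epoch in range(90):
    # ... one full pass over the training set would go here ...
    scheduler.step()
```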
All experimental results were derived from the test set with the parameters from the epoch that achieved the highest validation set accuracy over the training epochs. 20 replicates with different random seeds were run for each hyperparameter set, except for ResNet50, which only used 5 replicates per hyperparameter set. Selectivity regularization was not applied to the output (logit) layer, nor was the output layer included in any of our analyses.\nExperiments were conducted using PyTorch (Paszke et al., 2019), analyzed using the SciPy ecosystem (Virtanen et al., 2019), and visualized using Seaborn (Waskom et al., 2017)." }, { "heading": "A.2 EFFECT OF SELECTIVITY REGULARIZER ON TRAINING TIME", "text": "We quantified the number of training epochs required to reach 95% of maximum test accuracy (t95). The t95 without selectivity regularization (t95 at α = 0) for ResNet20 is 45 ± 15 epochs (median ± IQR). α in [-2, 0.7] had overlapping IQRs with α = 0. For ResNet18, t95 at α = 0 was 35 ± 1, while t95 for α in [-2, 1] was as high as 51 ± 1.5. Beyond these ranges, the t95 exceeded 1.5× the unregularized t95 and/or was highly variable." }, { "heading": "A.3 CCA", "text": "" }, { "heading": "A.3.1 AN INTUITION", "text": "We used Canonical Correlation Analysis (CCA) to examine the effects of class selectivity regularization on hidden layer representations. CCA is a statistical method that takes two sets of multidimensional variates and finds the linear combinations of these variates that have maximum correlation with each other (Hotelling, 1936). Critically, CCA is invariant to rotation and other invertible affine transformations. CCA has been productively applied to analyze and compare representations in (and between) biological and artificial neural networks (Sussillo et al., 2015; Smith et al., 2015; Raghu et al., 2017; Morcos et al., 2018a; Gallego et al., 2018).\nWe use projection-weighted CCA (PWCCA), a variant of CCA introduced in Morcos et al. (2018a) that has been shown to be more robust to noise than traditional CCA and other CCA variants (though for brevity we just use the term "CCA" in the main text). PWCCA generates a scalar value, ρ, that can be thought of as the distance or dissimilarity between the two sets of multidimensional variates, L1 and L2. For example, if L2 = L1, then ρL1,L2 = 0. Now let R be a rotation matrix. Because CCA is invariant to rotation and other invertible affine transformations, if L2 = RL1 (i.e. if L2 is a rotation of L1), then ρL1,L2 = 0. In contrast, traditional similarity metrics such as Pearson's correlation and cosine similarity would obtain different values if L2 = L1 compared to L2 = RL1. We use the PWCCA implementation available at https://github.com/google/svcca/, as provided in Morcos et al. (2018a)." }, { "heading": "A.3.2 OUR APPLICATION", "text": "As an example for the analyses in our experiments, L1 is the activation matrix for a layer in a network that was not regularized against class selectivity (i.e. α = 0), and L2 is the activation matrix for the same layer in a network that was structured and initialized identically, but subject to regularization against class selectivity (i.e. α < 0). If regularizing against class selectivity causes the network's representations to be rotated (or to undergo some other invertible affine transformation), then ρL1,L2 = 0. In practice ρL1,L2 > 0 due to differences in random seeds and/or other stochastic factors in the training process, so we can determine a threshold value ε and say ρL1,L2 ≤ ε. If regularizing against class selectivity instead causes a non-affine transformation to the network's representations, then ρL1,L2 > ε.\nIn our experiments we empirically establish a distribution of ε values by computing the PWCCA distances ρL2a,L2b, where L2a and L2b are two networks from the set of 20 replicates for a given hyperparameter combination that differ only in their initial random seed values (and thus have the same α). This gives $\binom{20}{2} = 190$ values of ε. We then compute the PWCCA distance between each {L1, L2} replicate pair, yielding a distribution of 20 × 20 = 400 values of ρL1,L2, which we compare to the distribution of ε.
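\nAs an illustration, the two distance distributions might be assembled as in the sketch below. It assumes the compute_pwcca helper from the google/svcca repository cited above (whose exact signature may differ), and uses random matrices as stand-ins for the per-replicate activation matrices.
```python
import itertools
import numpy as np
from pwcca import compute_pwcca  # from https://github.com/google/svcca/

def pwcca_distance(acts1, acts2):
    """PWCCA distance between two (neurons, samples) activation matrices."""
    pwcca_similarity, _, _ = compute_pwcca(acts1, acts2, epsilon=1e-10)
    return 1.0 - pwcca_similarity

# Stand-ins for 20 replicate networks per condition (one layer each):
rng = np.random.default_rng(0)
reg_acts = [rng.standard_normal((64, 1000)) for _ in range(20)]    # alpha < 0
unreg_acts = [rng.standard_normal((64, 1000)) for _ in range(20)]  # alpha = 0

# rho(alpha_r, alpha_r): C(20, 2) = 190 within-condition baseline distances.
baseline = [pwcca_distance(a, b) for a, b in itertools.combinations(reg_acts, 2)]
# rho(alpha_r, alpha_0): 20 x 20 = 400 cross-condition distances.
cross = [pwcca_distance(a, b) for a in reg_acts for b in unreg_acts]
print("CCA distance ratio:", np.mean(cross) / np.mean(baseline))
```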
" }, { "heading": "A.3.3 FORMALLY", "text": "For the case of our analyses, let us start with a dataset X, which consists of M data samples {x_1, ..., x_M}. Using the notation from Raghu et al. (2017), the scalar outputs (activations) of a single neuron i on layer ι in response to each data sample collectively form the vector\n$$z_i^{\iota} = (z_i^{\iota}(x_1), \ldots, z_i^{\iota}(x_M))$$\nWe then collect the activation vector z_i^ι of every neuron in layer ι into a matrix L = {z_1^ι, ..., z_N^ι} of size N × M, where N is the number of neurons in layer ι and M is the number of data samples. Given two such activation matrices L1, of size Na × M, and L2, of size Nb × M, CCA finds the vectors w (in R^Na) and s (in R^Nb) that maximize the correlation\n$$\rho_{corr} = \frac{\langle w^T L_1, s^T L_2 \rangle}{\|w^T L_1\| \cdot \|s^T L_2\|},$$\nand the corresponding CCA distance is ρ = 1 − ρ_corr." }, { "heading": "A.4 REGULARIZING TO DECREASE CLASS SELECTIVITY IN RESNET18 AND RESNET20", "text": "Figure A1: Manipulating class selectivity by regularizing against it in the loss function. (a) Mean class selectivity index (y-axis) as a function of layer (x-axis) for different regularization scales (α; denoted by intensity of blue) for ResNet18. (b) Similar to (a), but mean is computed across all units in a network instead of per layer. (c) and (d) are identical to (a) and (b), respectively, but for ResNet20. Error bars denote bootstrapped 95% confidence intervals." }, { "heading": "A.5 DECREASING CLASS SELECTIVITY WITHOUT DECREASING TEST ACCURACY IN RESNET20", "text": "Figure A2: Effects of reducing class selectivity on test accuracy in ResNet20 trained on CIFAR10.
(a) Test accuracy (y-axis) as a function of regularization scale (α, x-axis and intensity of blue). (b) Identical to (a), but for a subset of α values. The center of each violin plot contains a boxplot, in which the darker central lines denote the central two quartiles. (c) Test accuracy (y-axis) as a function of mean class selectivity (x-axis) for different values of α. Error bars denote 95% confidence intervals. *p < 0.05, **p < 5 × 10−6 difference from α = 0, t-test, Bonferroni-corrected." }, { "heading": "A.6 CCA RESULTS FOR RESNET20", "text": "Figure A3: Using CCA to check whether class selectivity is rotated off-axis in ResNet20 trained on CIFAR10. Similar to Figure 2, we plot the average CCA distance ratio (y-axis) as a function of α (x-axis, intensity of blue). The distance ratio is significantly greater than the baseline for all values of α (p < 5 × 10−6, paired t-test). Error bars = 95% confidence intervals." }, { "heading": "A.7 CALCULATING AN UPPER BOUND FOR OFF-AXIS SELECTIVITY", "text": "As an additional control to ensure that our regularizer did not simply shift class selectivity to off-axis directions in activation space, we calculated an upper bound on the amount of class selectivity that could be recovered by finding the linear projection of unit activations that maximizes class selectivity. To do so, we first collected the validation set activation vector z_i^ι of every neuron in layer ι into a matrix Aval = {z_1^ι, ..., z_N^ι} of size M × N, where M is the number of data samples in the validation set and N is the number of neurons in layer ι. We then found the projection matrix W ∈ R^{N×N} that minimizes the loss\n$$loss = 1 - \mathrm{SI}(A_{val}W) \quad \text{such that} \quad \|W^T W - I\|^2 = 0,$$\ni.e. W is orthonormal, where SI is the selectivity index from Equation 1. We constrained W to be orthonormal because the non-orthonormal solution to maximizing selectivity is degenerate: project all axes onto the single direction in activation space with the highest class selectivity. We used Lezcano-Casado (2019)'s toolbox to constrain W to be orthonormal. Because SI requires inputs ≥ 0, we shifted the columns of AvalW by subtracting the columnwise minimum value before computing SI. The optimization was performed using Adam (Kingma and Ba, 2015) with a learning rate of 0.001 for 3500 steps or until the magnitude of the change in loss was less than 10−6 for 10 steps. W was then used to project the activation matrix for the test set Atest, and the selectivity index was calculated for each axis of the new activation space (i.e. each column of AtestW) after shifting the columns of AtestW to be ≥ 0. A separate W was obtained for each layer of each model and for each replicate and value of α.
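\nTo illustrate, a sketch of this constrained optimization follows. It assumes the geotorch package (an implementation of orthogonality parametrizations in the spirit of Lezcano-Casado (2019)) and reuses the hypothetical class_selectivity_index helper sketched in Section 3.2; the early-stopping criterion on the loss is omitted for brevity.
```python
import torch
import geotorch  # orthogonality parametrizations for PyTorch modules

def fit_selectivity_maximizing_projection(acts_val, labels, num_classes,
                                          steps=3500, lr=1e-3):
    """Fit an orthonormal projection W that maximizes the mean selectivity
    index of the projected activations (Appendix A.7).

    acts_val: (M, N) tensor of validation-set activations for one layer.
    labels:   (M,) tensor of integer class labels.
    """
    n = acts_val.shape[1]
    proj = torch.nn.Linear(n, n, bias=False)
    geotorch.orthogonal(proj, "weight")  # constrain W to stay orthonormal
    opt = torch.optim.Adam(proj.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        projected = proj(acts_val)
        # SI requires inputs >= 0: shift each column by its minimum.
        projected = projected - projected.min(dim=0).values
        si = class_selectivity_index(projected, labels, num_classes)
        loss = 1.0 - si.mean()
        loss.backward()
        opt.step()
    return proj.weight.detach()
```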
Figure A4: An upper bound for off-axis class selectivity. (a) Upper bound on class selectivity (y-axis) as a function of layer (x-axis) for different regularization scales (α; denoted by intensity of blue) for ResNet18 trained on Tiny ImageNet. (b) Mean class selectivity (y-axis) as a function of regularization scale (α; x-axis) for ResNet20 trained on CIFAR10. Diamond-shaped data points denote the upper bound on class selectivity for a linear projection of activations as described in Appendix A.7, while circular points denote the amount of axis-aligned class selectivity for the corresponding values of α. (c) Identical to (a), but for ResNet20 trained on CIFAR10. Error bars = 95% confidence intervals." }, { "heading": "A.8 REGULARIZING TO INCREASE CLASS SELECTIVITY IN RESNET18 AND RESNET20", "text": "Figure A5: Regularizing to increase class selectivity. (a) Mean class selectivity index (y-axis) as a function of layer (x-axis) for different regularization scales (α; denoted by intensity of red) for ResNet18. (b) Similar to (a), but mean is computed across all units in a network instead of per layer. (c) and (d) are identical to (a) and (b), respectively, but for ResNet20. Note that the inconsistent effect of larger α values in (c) and (d) is addressed in Appendix A.10. Error bars denote bootstrapped 95% confidence intervals." }, { "heading": "A.9 ADDITIONAL EFFECTS OF CLASS SELECTIVITY REGULARIZATION ON TEST ACCURACY IN RESNET20", "text": "Figure A6: Effects of increasing class selectivity on test accuracy in ResNet20 trained on CIFAR10. (a) Test accuracy (y-axis) as a function of regularization scale (α; x-axis, intensity of red). (b) Identical to (a), but for a subset of α values. The center of each violin plot contains a boxplot, in which the darker central lines denote the central two quartiles. (c) Test accuracy (y-axis) as a function of mean class selectivity (x-axis) for different values of α. Error bars denote 95% confidence intervals. *p < 2 × 10−4, **p < 5 × 10−7 difference from α = 0, t-test, Bonferroni-corrected." }, { "heading": "A.10 SINGLE UNIT NECROMANCY", "text": "Lethal ReLUs The inconsistent relationship between α and class selectivity for larger values of α led us to question whether the performance deficits were due to an alternative factor, such as the optimization process, rather than class selectivity per se.
Interestingly, we observed that ResNet20 regularized to increase selectivity contained significantly higher proportions of dead units, and the number of dead units is roughly proportional to α (see Figure A8a). Regularizing to increase class selectivity did not cause units to die in ResNet18 trained on Tiny ImageNet except at very extreme values of α (α ≥ 30), though even at these values the proportion of dead units never exceeded 0.03 (Figure A7). Removing the dead units makes the relationship between regularization and selectivity in ResNet20 more consistent at large regularization scales (see Figure A8).\nThe presence of dead units is not unexpected, as units with the ReLU activation function are known to suffer from the "dying ReLU problem" (Lu et al., 2019): if, during training, a weight update causes a unit to cease activating in response to all training samples, the unit will be unaffected by subsequent weight updates because the ReLU gradient at x ≤ 0 is zero, and thus the unit's activation will forever remain zero. The dead units could explain the decrease in performance from regularizing to increase selectivity as simply a decrease in model capacity.\nFruitless resuscitation One solution to the dying ReLU problem is to use a leaky-ReLU activation function (Maas et al., 2013), which has a non-zero slope, b (and thus non-zero gradient) for x ≤ 0. Accordingly, we re-ran the previous experiment using units with a leaky-ReLU activation in an attempt to control for the potential confound of dead units. Note that because the class selectivity index assumes activations ≥ 0, we shifted activations by subtracting the minimum activation when computing selectivity for leaky-ReLUs. If the performance deficits from regularizing for selectivity are simply due to dead units, then using leaky-ReLUs should rescue performance. Alternatively, if dead units are not the cause of the performance deficits, then leaky-ReLUs should not have an effect.\nWe first confirmed that using leaky-ReLUs solves the dead unit problem. Indeed, the proportion of dead units is reduced to 0 in all networks across all tested values of b. Despite complete recovery of the dead units, however, using leaky-ReLUs does not rescue class selectivity-induced performance deficits (Figure A9). While the largest negative slope value improved test accuracy for larger values of α, the improvement was minor, and increasing α still had catastrophic effects. These results confirm that dead units cannot explain the rapid and catastrophic effects of increased class selectivity on performance.\nFigure A7: Regularizing to increase class selectivity does not cause units to die in ResNet18 trained on Tiny ImageNet. (a) Proportion of dead units (y-axis) for different regularization scales (α; x-axis, intensity of red).
Error bars denote 95% confidence intervals.\nFigure A8: Removing dead units partially stabilizes the effects of large positive regularization scales in ResNet20. (a) Proportion of dead units (y-axis) as a function of layer (x-axis) for different regularization scales (α, intensity of red). (b) Mean class selectivity index (y-axis) as a function of regularization scale (α; x-axis and intensity of red) after removing dead units. Removing dead units from the class selectivity calculation establishes a more consistent relationship between α and the mean class selectivity index (compare to Figure A5d). (c) Test accuracy (y-axis) as a function of mean class selectivity (x-axis) for different values of α after removing dead units from the class selectivity calculation. Error bars denote 95% confidence intervals. *p < 2 × 10−4, **p < 5 × 10−7 difference from α = 0, t-test, Bonferroni-corrected. All results shown are for ResNet20.\nFigure A9: Reviving dead units does not rescue the performance deficits caused by increasing selectivity in ResNet20. (a) Test accuracy (y-axis) as a function of regularization scale (α; x-axis) for different leaky-ReLU negative slopes (intensity of red). Leaky-ReLUs completely solve the dead unit problem but do not fully rescue test accuracy for networks with α > 0. (b) Mean class selectivity index (y-axis) as a function of regularization scale (α; x-axis and intensity of red) for leaky-ReLU negative slope = 0.5. *p < 0.001, **p < 2 × 10−4, ***p < 5 × 10−10 difference from α = 0, t-test, Bonferroni-corrected. Error bars denote bootstrapped 95% confidence intervals." }, { "heading": "A.11 RULING OUT A DEGENERATE SOLUTION FOR INCREASING SELECTIVITY", "text": "One degenerate solution to generate a very high selectivity index is for a unit to be silent for the vast majority of inputs, and have low activations for the small set of remaining inputs. We refer to this scenario as "activation minimization". We verified that activation minimization does not fully account for our results by examining the proportion of samples which elicit non-zero activations in the units in our models. If our regularizer is indeed using activation minimization to generate high selectivity in individual units, then regularizing to increase class selectivity should cause most units to have non-zero activations for only a very small proportion of samples.
In ResNet18 trained on Tiny ImageNet, we found that sparse units, defined as units that do not respond to at least half of the data samples, only constitute more than 10% of the total population at extreme positive regularization scales (α ≥ 10; Figure A10a), well after large performance deficits (>4%) emerge (Figure 3). In ResNet20 trained on CIFAR10, networks regularized to have higher class selectivity (i.e. positive α) did indeed have more sparse units (Figure A10b). However, this effect does not explain away our findings: by α = 0.7, the majority of units respond to over 80% of samples (i.e. they are not sparse), but test accuracy has already decreased by 5% (Figure A6). These results indicate that activation minimization does not explain class selectivity-related changes in test accuracy.\nFigure A10: Activation minimization does not explain selectivity-induced performance changes. (a) Proportion of samples eliciting a non-zero activation (y-axis) vs. regularization scale (α; x-axis) in ResNet18 trained on Tiny ImageNet. Data points denote individual units. Boxes denote IQR, whiskers extend 2×IQR. Note that the boxes are very compressed because the distribution is confined almost entirely to y=1.0 for all values of x. (b) Identical to (a), but for ResNet20 trained on CIFAR10." }, { "heading": "A.12 RESULTS FOR VGG16", "text": "Modulating class selectivity in VGG16 yielded results qualitatively similar to those we observed in ResNet20. The regularizer reliably decreases class selectivity for negative values of α (Figure A11), and class selectivity can be drastically reduced with little impact on test accuracy (Figure A12). Although test accuracy decreases significantly at α = −0.1 (p = 0.004, Bonferroni-corrected t-test), the effect is small: it is possible to reduce mean class selectivity by a factor of 5 with only a 0.5% decrease in test accuracy, and by a factor of 10—to 0.03—with only a ∼1% drop in test accuracy. Regularizing to increase class selectivity also has similar effects in VGG16 and ResNet20. Increasing α causes class selectivity to increase, and the effect becomes less consistent at large values of α (Figure A5). Although the class selectivity-induced collapse in test accuracy does not emerge quite as rapidly in VGG16 as it does in ResNet20, the decrease in test accuracy is still significant at the smallest tested value of α (α = 0.1, p = 0.02, Bonferroni-corrected t-test), and the effects on test accuracy of regularizing to promote vs. discourage class selectivity become significantly different at α = 0.3 (p = 10−4, Wilcoxon rank-sum test; Figure A15). Our observations that class selectivity is neither necessary nor sufficient for performance in both VGG16 and ResNet20 indicate that this is likely a general property of CNNs.\nIt is worth noting that VGG16 exhibits greater class selectivity than ResNet20. In the absence of regularization (i.e. α = 0), mean class selectivity in ResNet20 is 0.22, while in VGG16 it is 0.35, a 1.6x increase. This could explain why positive values of α seem to have a stronger effect on class selectivity in VGG16 relative to ResNet20 (compare Figure A5 and Figure A13; also see Figure A16b)."
}, { "heading": "A.13 ADDITIONAL MEASURES OF CLASS INFORMATION IN SINGLE UNITS", "text": "In order to confirm that the effect of the regularizer is not unique to our chosen class selectivity metric, we also examined the effect of our regularizer on two different measures of class information in single units: the \"precision\" metric for class selectivity (Zhou et al., 2015; 2018; Gale et al., 2019), and mutual information (Cover, 1999).\nThe precision metric is calculated by finding the N images that most strongly activate a given unit, then finding the image class Ci that constitutes the largest proportion of the N images. Precision is defined as this proportion. For example, ifN = 200, and the \"cats\" class, with 74 samples, constitutes the largest proportion of those 200 activations for a unit, then the precision of the unit is 74200 = 0.34. Note that for a given number of classes C, precision is bounded by [ 1C , 1], thus in our experiments the lower bound on precision is 0.1. Zhou et al. (2015) used N = 60, while Gale et al. (2019) used N = 100. We chose to use the number of samples per class in the test set data and thus the largest possible sample size. This yielded N = 1000 for CIFAR10 and N = 50 for Tiny Imagenet.\nThe class selectivity regularizer has similar effects on precision as it does on the class selectivity index. Regularizing against class selectivity has a consistent effect on precision (Figure A17), while regularizing to promote class selectivity has a consistent effect in ResNet18 trained on Tiny ImageNet and for smaller values of α in ResNet20 trained on CIFAR10. However, the relationship between precision and the class selectivity index becomes less consistent for larger positive values of α in ResNet20 trained on CIFAR10. One explanation for this is that activation sparsity is a valid solution for maximizing the class selectivity index but not precision. For example, a unit that responded only to ten samples from the class \"cat\" and not at all to the remaining samples would have a class selectivity index of 1, but a precision value of 0.11. This seems likely given the increase in sparsity observed for very large positive values of α (see Appendix A.11).\nThe effects of the class selectivity regularizer on mutual information are nearly identical to its effects on precision. Regularizing for and against class selectivity has a consistent effect on mutual information in ResNet18 trained on Tiny ImageNet (Figure A18). Regularizing to reduce class selectivity also has a consistent effect in ResNet20 trained on CIFAR10, but the relationship between mutual information and the class selectivity index becomes less consistent for larger positive values of α. However, given that class selectivity and mutual information are very different quantities, and that Morcos et al. (2018b) did not observe a clear relationship between mutual information, class selectivity, and individual unit importance (as measured by impact of ablation), we hesitate to make strong conclusions about the role of mutual information in our selectivity regularizer.\nWhile there are additional class selectivity metrics that we could have used to further assess the effect of our regularizer, many of them are based on relating the activity of a neuron to the accuracy of the network’s output(s) (e.g. top class selectivity Gale et al. (2019) and class correlation Li et al. (2015); Zhou et al. (2018)), confounding classification accuracy and class selectivity. 
}, { "heading": "A.14 REGULARIZING TO CONTROL CLASS DISCRIMINABILITY", "text": "We further confirmed that our results are not unique to our chosen class selectivity metric by examining a related, but distinct metric: class discriminability. d′ is a measure of the discriminability between two distributions (Stanislaw and Todorov, 1999; Macmillan and Creelman, 2004), defined as\nd′ = (µmax − µ−max) / (√(½(σ²max + σ²−max)) + ε). (4)\nThe numerator of d′ is identical to the numerator of the selectivity index (Equation 1): the difference between µmax, the largest class-conditional mean activation, and µ−max, the mean response to the remaining (i.e. non-µmax) classes. The denominator differs: σ² denotes the variance across activations,¹ and ε is a small value to prevent division by zero (we used 10⁻⁷).\nBecause d′ is differentiable with respect to the model parameters, we can control the amount of class discriminability learned by individual units using the same approach as when regularizing to control class selectivity. Instead of calculating the class selectivity index for each unit, we calculated d′. This leads to the following loss function:\nloss = −∑_c^C yc · log(ŷc) − αµd′ (5)\nThis loss is identical to Equation 2, except the selectivity term, µSI, is replaced with µd′:\nµd′ = (1/L) ∑_l^L (1/U) ∑_u^U d′u (6)\nAs when computing µSI (Equation 3), l is a convolutional layer, L is the number of layers, u is a unit (i.e. feature map), and U is the number of units in a given layer. The procedure for regularizing d′ is otherwise identical to regularizing class selectivity (Section 3.3).\nRegularizing to control d′ has the intended effect in both ResNet18 trained on Tiny ImageNet (Figure A19a) and ResNet20 trained on CIFAR10 (Figure A19c); the amount of class discriminability learned by individual units varies as a function of the sign and scale of regularization (α). d′ also correlates strongly with class selectivity across units (Spearman's ρ = 0.83 for ResNet18 trained on Tiny ImageNet; ρ = 0.90 for ResNet20 trained on CIFAR10; p < 10⁻⁵ for both). We observe very similar effects on test accuracy as when manipulating class selectivity: increasing d′ has rapid negative effects on test accuracy, while decreasing d′ has more modest effects. Though we do not observe an improvement in test accuracy as we did when regularizing to decrease class selectivity in ResNet18 trained on Tiny ImageNet, the asymmetry in effect between increasing vs. decreasing d′ is nevertheless consistent, indicating that neither class selectivity nor discriminability is sufficient or strictly necessary for CNN performance.\n¹Traditional signal detection terminology describes d′ as being computed for a \"signal\" distribution (µsignal, σsignal) and a \"noise\" distribution (µnoise, σnoise). In our setting, max is the signal distribution and −max is the noise distribution."
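A minimal sketch of the d′ regularizer of Equations 4–6 is shown below (hypothetical PyTorch-style code; how the class-conditional statistics are gathered, and averaging the per-class variances of the non-max classes, are assumptions of the sketch, not the authors' implementation):

```python
import torch
import torch.nn.functional as F

EPS = 1e-7  # the paper uses 10^-7 in the denominator of Eq. 4

def d_prime_per_unit(class_means: torch.Tensor, class_vars: torch.Tensor) -> torch.Tensor:
    """Eq. 4, vectorized over units.

    class_means, class_vars: [n_units, n_classes] tensors holding the mean and
    variance of each unit's activation, conditioned on class. Returns a
    [n_units] tensor of d' values.
    """
    mu_max, idx = class_means.max(dim=1)                      # largest class-conditional mean
    n_classes = class_means.shape[1]
    mu_rest = (class_means.sum(dim=1) - mu_max) / (n_classes - 1)   # mean response to remaining classes
    var_max = class_vars.gather(1, idx.unsqueeze(1)).squeeze(1)
    var_rest = (class_vars.sum(dim=1) - var_max) / (n_classes - 1)  # one simple choice for sigma^2_{-max}
    return (mu_max - mu_rest) / (torch.sqrt(0.5 * (var_max + var_rest)) + EPS)

def d_prime_regularized_loss(logits, targets, d_primes_per_layer, alpha):
    # Eqs. 5-6: cross-entropy minus alpha times mu_d', where mu_d' averages d'
    # within each layer and then across layers.
    mu_d = torch.stack([d.mean() for d in d_primes_per_layer]).mean()
    return F.cross_entropy(logits, targets) - alpha * mu_d
```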
}, { "heading": "A.15 RESTRICTING CLASS SELECTIVITY REGULARIZATION TO THE FIRST THREE OR FINAL THREE LAYERS", "text": "To investigate the layer-specificity of the effects of class selectivity regularization, we also examined the effects of restricting class selectivity regularization to the first three or last three layers of the networks. Interestingly, we found that much of the effect of regularizing for or against selectivity on test accuracy was replicated even when the regularization was restricted to the first or final three layers. For example, reducing class selectivity in the first three layers either improves test accuracy (in ResNet18 trained on Tiny ImageNet) or has little-to-no effect on test accuracy (in ResNet20 trained on CIFAR10; Figures A20 and A21). Likewise, regularizing to increase class selectivity in the first three layers had an immediate negative impact on test accuracy in both models (Figures A22 and A23). Regularizing against class selectivity in the final three layers (Figures A24 and A25) caused a modest increase in test accuracy over a narrow range of α in ResNet18 trained on Tiny ImageNet: less than half a percent gain at most (at α = −0.2), and no longer present by α = −0.4 (Figure A25b). In ResNet20, regularizing against class selectivity in the final three layers actually causes a decrease in test accuracy (Figures A21c and A21d). Given that the logit (output) layer of CNNs trained for image classification is by definition class-selective, we thought that regularizing to increase class selectivity in the final three layers could improve performance, but surprisingly it causes an immediate drop in test accuracy in both models (Figures A26 and A27). Our observation that regularizing to decrease class selectivity provides greater benefits (in the case of ResNet18) or less impairment (in the case of ResNet20) in the first three layers compared to the final three layers leads to the conclusion that class selectivity is less necessary (or more detrimental) in early layers compared to late layers. In practice, restricting the regularizer to a subset of layers simply changes which layers contribute to the regularization term, as sketched after the figure caption below.\nFigure A20: Regularizing to decrease class selectivity in the first three network layers. (a) Mean class selectivity (y-axis) as a function of layer (x-axis) for different values of α (intensity of blue) when class selectivity regularization is restricted to the first three network layers in ResNet18 trained on Tiny ImageNet. (b) Mean class selectivity in the first three layers (y-axis) as a function of α (x-axis) in ResNet18 trained on Tiny ImageNet. (c) and (d) are identical to (a) and (b), respectively, but for ResNet20 trained on CIFAR10. Error bars = 95% confidence intervals."
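A minimal sketch of the layer restriction (hypothetical code; the representation of per-layer selectivity scores is an assumption of the sketch):

```python
import torch

def restricted_regularizer(per_layer_unit_scores, layer_indices):
    """Regularization term (e.g., mu_SI or mu_d') computed over a subset of
    layers, e.g., layer_indices = range(3) for the first three layers, or
    range(L - 3, L) for the final three layers of an L-layer network.

    per_layer_unit_scores: list of 1-D tensors, one per convolutional layer,
    holding the per-unit selectivity (or d') values for that layer.
    """
    selected = [per_layer_unit_scores[i] for i in layer_indices]
    # Average within each selected layer, then across the selected layers,
    # mirroring the two-level average used for the full-network regularizer.
    return torch.stack([scores.mean() for scores in selected]).mean()
```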
}, { "heading": "A.16 AN INABILITY TO CHANGE PREFERRED CLASSES DURING TRAINING DOES NOT EXPLAIN SELECTIVITY-INDUCED PERFORMANCE DEFICITS", "text": "One potential limitation of our selectivity regularizer is that regularizing to increase class selectivity could discourage individual neurons from changing their preferred class during training; units would be locked in to the class they preferred at initialization. If changing preferred classes is necessary to improve test accuracy, then regularizing to increase class selectivity could impose a constraint on performance. We controlled for this possibility in two ways. First, we analyzed the statistics of preferred class changes that occur as a function of the class selectivity regularization scale. We examined the proportion of units that change classes at least once during training when α ≥ 0. The mean proportion is one for all examined regularization scales and models (Figure A28), indicating that regularizing to increase selectivity does not lock units into their initial preferred class. We also examined whether regularizing to increase selectivity affects the number of times a unit changes its preferred class during training. In ResNet18 trained on Tiny ImageNet, we found that the mean number of preferred class changes decreases as a function of α for α ≥ 0.3 (Figure A29a). However, we note that the number of preferred class changes is roughly equal for α = {0, 0.1, 0.2}, even though test accuracy for α = {0.1, 0.2} is significantly lower than for α = 0, indicating that preferred class changes cannot fully account for decreased test accuracy. Furthermore, in ResNet20 trained on CIFAR10 there is no clear relationship between α and the number of preferred class changes (Figure A29b). These results indicate that the test accuracy impairment caused by regularizing to increase class selectivity cannot be fully explained by the regularizer preventing units from changing their preferred class during training.\nWe also controlled for the possibility of preferred class \"lock-in\" by warming up selectivity regularization during training, which would allow units' preferred classes to change during the early period of training, when learning-induced changes are most significant (Achille et al., 2018; Gur-Ari et al., 2018; Frankle et al., 2020). We implemented selectivity regularization warmup in a manner analogous to learning rate warmup: by scaling α (see Equation 2) linearly over the interval [0, 1] across the first five epochs of training; a sketch of this schedule is shown below. We observed qualitatively similar results as when we did not warm up selectivity regularization (Figure A30, Figure A31). Regularizing to reduce class selectivity either has minimal negative effects (ResNet20 trained on CIFAR10) or improves test accuracy (ResNet18 trained on Tiny ImageNet), while regularizing to increase class selectivity impairs test accuracy in all examined models (Figure A31). Interestingly, the severity of the test accuracy impairment is slightly reduced when using regularization warmup. This indicates that some of the test accuracy deficit from regularizing to increase class selectivity is attributable to the regularizer forcing the network into suboptimal solutions early in training. Nevertheless, the test accuracy deficit imparted by increased class selectivity remains even when the class selectivity regularizer is warmed up, further supporting the claim that class selectivity is not sufficient for network performance.
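A minimal sketch of the warmup schedule described above (hypothetical code; only the five-epoch linear ramp is taken from the text):

```python
def warmed_up_alpha(alpha: float, epoch: int, warmup_epochs: int = 5) -> float:
    """Linearly scale the regularization strength over the first
    `warmup_epochs` epochs, analogous to learning rate warmup."""
    return alpha * min(1.0, (epoch + 1) / warmup_epochs)

# With alpha = 1.0, the effective scale over epochs 0-5 is:
# 0.2, 0.4, 0.6, 0.8, 1.0, 1.0
```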
Figure A30: Effects of warming up class selectivity regularization. (a) Mean class selectivity (y-axis) as a function of layer (x-axis) for different values of α (hue) when class selectivity regularization is warmed up over the first 5 epochs in ResNet18 trained on Tiny ImageNet. (b) Mean class selectivity across all layers (y-axis) as a function of α (x-axis) when warming up selectivity regularization in ResNet18 trained on Tiny ImageNet. (c) and (d) are identical to (a) and (b), respectively, but for ResNet20 trained on CIFAR10. Error bars = 95% confidence intervals." }, { "heading": "A.17 DIRECTLY COMPARING THE EFFECTS OF REGULARIZING TO INCREASE VS. DECREASE CLASS SELECTIVITY", "text": "" }, { "heading": "A.18 EFFECTS OF CLASS SELECTIVITY REGULARIZATION IN RESNET50 TRAINED ON TINY IMAGENET", "text": "We also examined the effect of regularizing to control class selectivity in ResNet50 trained on Tiny ImageNet. Our results were qualitatively similar to those observed in ResNet18: class selectivity in trained networks correlates with α (Figure A34a), and regularizing to decrease class selectivity can improve test accuracy, while regularizing to increase class selectivity impairs test accuracy (Figure A34b)." } ]
2021
SELECTIVITY CONSIDERED HARMFUL: EVALUATING THE CAUSAL IMPACT OF CLASS SELECTIVITY IN DNNS
SP:a39d669cce510debfadda370c1cb47d2eb960795
[ "This paper proposes a method for domain adaptation in RL where the source and target domains differ only in the transition distriubtions. A theoretical derivation based on RL as probabilistic inference is presented that starts with the objective of matching the desired distribution of trajectories in the target domain with the distribution achieved by the policy in the source domain. The final objective appears as a modification to the reward function while training in the source domain and is implemented easily with just two binary classifiers that predict the domain given either state-action or state-action-next-state tuples. Theorem 4.1 provides a theoretical guarantee on the performance of a policy trained on such a modified reward in the source domain by giving a bound on the performance in the target domain, under a very mild assumption that the optimal policy on the target domain achieves similar rewards when put in the source domain. Experiments are presented that show improved performance in terms of rewards vs experience on target domain on environments such as broken reacher, broken ant, etc (where the target domain has some \"broken\" component). Further, it is also shown that the reward modification on source visually matches the reward expected in target (Fig 4), that without the reward modification the policy usually exploits the source domain's transitions which cannot be exploited in the target domain, and finally, that safety emerges from the proposed objective.", "The paper introduces DARC, a domain transfer algorithm motivated by maximum entropy RL. By introducing classifiers for the target and source domain, the reward function in the source domain can be modified such that it restricts the behavior of the optimized policy to transitions that reflect the target domain. In this way, the method achieves good domain transfer without having an explicit model." ]
We propose a simple, practical, and intuitive approach for domain adaptation in reinforcement learning. Our approach stems from the idea that the agent’s experience in the source domain should look similar to its experience in the target domain. Building off of a probabilistic view of RL, we achieve this goal by compensating for the difference in dynamics by modifying the reward function. This modified reward function is simple to estimate by learning auxiliary classifiers that distinguish source-domain transitions from target-domain transitions. Intuitively, the agent is penalized for transitions that would indicate that the agent is interacting with the source domain, rather than the target domain. Formally, we prove that applying our method in the source domain is guaranteed to obtain a near-optimal policy for the target domain, provided that the source and target domains satisfy a lightweight assumption. Our approach is applicable to domains with continuous states and actions and does not require learning an explicit model of the dynamics. On discrete and continuous control tasks, we illustrate the mechanics of our approach and demonstrate its scalability to high-dimensional tasks.
[ { "affiliations": [], "name": "DOMAIN CLASSIFIERS" }, { "affiliations": [], "name": "Benjamin Eysenbach" }, { "affiliations": [], "name": "Shreyas Chaudhari" }, { "affiliations": [], "name": "Swapnil Asawa" } ]
[ { "authors": [ "Abbas Abdolmaleki", "Jost Tobias Springenberg", "Yuval Tassa", "Remi Munos", "Nicolas Heess", "Martin Riedmiller" ], "title": "Maximum a posteriori policy optimisation", "venue": "arXiv preprint arXiv:1806.06920,", "year": 2018 }, { "authors": [ "Joshua Achiam", "David Held", "Aviv Tamar", "Pieter Abbeel" ], "title": "Constrained policy optimization", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Felix Berkenkamp", "Matteo Turchetta", "Angela Schoellig", "Andreas Krause" ], "title": "Safe model-based reinforcement learning with stability guarantees", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Steffen Bickel", "Michael Brückner", "Tobias Scheffer" ], "title": "Discriminative learning for differing training and test distributions", "venue": "In Proceedings of the 24th international conference on Machine learning,", "year": 2007 }, { "authors": [ "Konstantinos Bousmalis", "Alex Irpan", "Paul Wohlhart", "Yunfei Bai", "Matthew Kelcey", "Mrinal Kalakrishnan", "Laura Downs", "Julian Ibarz", "Peter Pastor", "Kurt Konolige" ], "title": "Using simulation and domain adaptation to improve efficiency of deep robotic grasping", "venue": "IEEE International Conference on Robotics and Automation (ICRA),", "year": 2018 }, { "authors": [ "Yevgen Chebotar", "Ankur Handa", "Viktor Makoviychuk", "Miles Macklin", "Jan Issac", "Nathan Ratliff", "Dieter Fox" ], "title": "Closing the sim-to-real loop: Adapting simulation randomization with real world experience", "venue": "In 2019 International Conference on Robotics and Automation (ICRA),", "year": 2019 }, { "authors": [ "Kurtland Chua", "Roberto Calandra", "Rowan McAllister", "Sergey Levine" ], "title": "Deep reinforcement learning in a handful of trials using probabilistic dynamics models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Ignasi Clavera", "Anusha Nagabandi", "Ronald S Fearing", "Pieter Abbeel", "Sergey Levine", "Chelsea Finn" ], "title": "Learning to adapt: Meta-learning for model-based control", "venue": "arXiv preprint arXiv:1803.11347,", "year": 2018 }, { "authors": [ "Corinna Cortes", "Mehryar Mohri" ], "title": "Domain adaptation and sample bias correction theory and algorithm for regression", "venue": "Theoretical Computer Science,", "year": 2014 }, { "authors": [ "Gabriela Csurka" ], "title": "Domain adaptation for visual applications: A comprehensive survey", "venue": "arXiv preprint arXiv:1702.05374,", "year": 2017 }, { "authors": [ "Mark Cutler", "Thomas J Walsh", "Jonathan P How" ], "title": "Reinforcement learning with multi-fidelity simulators", "venue": "In 2014 IEEE International Conference on Robotics and Automation (ICRA),", "year": 2014 }, { "authors": [ "Christoph Dann", "Gerhard Neumann", "Jan Peters" ], "title": "Policy evaluation with temporal differences: A survey and comparison", "venue": "Journal of Machine Learning Research,", "year": 2014 }, { "authors": [ "Marc Deisenroth", "Carl E Rasmussen" ], "title": "Pilco: A model-based and data-efficient approach to policy search", "venue": "In Proceedings of the 28th International Conference on machine learning", "year": 2011 }, { "authors": [ "Yan Duan", "John Schulman", "Xi Chen", "Peter L Bartlett", "Ilya Sutskever", "Pieter Abbeel" ], "title": "Rl 2̂: Fast reinforcement learning via slow reinforcement learning", "venue": "arXiv preprint arXiv:1611.02779,", "year": 2016 }, { 
"authors": [ "Miroslav Dudı́k", "John Langford", "Lihong Li" ], "title": "Doubly robust policy evaluation and learning", "venue": "arXiv preprint arXiv:1103.4601,", "year": 2011 }, { "authors": [ "Benjamin Eysenbach", "Shixiang Gu", "Julian Ibarz", "Sergey Levine" ], "title": "Leave no trace: Learning to reset for safe and autonomous reinforcement learning", "venue": "arXiv preprint arXiv:1711.06782,", "year": 2017 }, { "authors": [ "Alon Farchy", "Samuel Barrett", "Patrick MacAlpine", "Peter Stone" ], "title": "Humanoid robots learning to walk faster: From the real world to simulation and back", "venue": "In Proceedings of the 2013 international conference on Autonomous agents and multi-agent systems,", "year": 2013 }, { "authors": [ "Basura Fernando", "Amaury Habrard", "Marc Sebban", "Tinne Tuytelaars" ], "title": "Unsupervised visual domain adaptation using subspace alignment", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2013 }, { "authors": [ "Justin Fu", "Katie Luo", "Sergey Levine" ], "title": "Learning robust rewards with adversarial inverse reinforcement learning", "venue": "arXiv preprint arXiv:1710.11248,", "year": 2017 }, { "authors": [ "Scott Fujimoto", "David Meger", "Doina Precup" ], "title": "Off-policy deep reinforcement learning without exploration", "venue": "arXiv preprint arXiv:1812.02900,", "year": 2018 }, { "authors": [ "Shani Gamrian", "Yoav Goldberg" ], "title": "Transfer learning for related reinforcement learning tasks via image-toimage translation", "venue": "arXiv preprint arXiv:1806.07377,", "year": 2018 }, { "authors": [ "Yaroslav Ganin", "Evgeniya Ustinova", "Hana Ajakan", "Pascal Germain", "Hugo Larochelle", "François Laviolette", "Mario Marchand", "Victor Lempitsky" ], "title": "Domain-adversarial training of neural networks", "venue": "The Journal of Machine Learning Research,", "year": 2016 }, { "authors": [ "Sergio Guadarrama", "Anoop Korattikara", "Oscar Ramirez", "Pablo Castro", "Ethan Holly", "Sam Fishman", "Ke Wang", "Ekaterina Gonina", "Chris Harris", "Vincent Vanhoucke" ], "title": "Tf-agents: A library for reinforcement learning in tensorflow, 2018", "venue": null, "year": 2018 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Pieter Abbeel", "Sergey Levine" ], "title": "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "arXiv preprint arXiv:1801.01290,", "year": 2018 }, { "authors": [ "Danijar Hafner", "Timothy Lillicrap", "Ian Fischer", "Ruben Villegas", "David Ha", "Honglak Lee", "James Davidson" ], "title": "Learning latent dynamics for planning from pixels", "venue": "arXiv preprint arXiv:1811.04551,", "year": 2018 }, { "authors": [ "Irina Higgins", "Arka Pal", "Andrei Rusu", "Loic Matthey", "Christopher Burgess", "Alexander Pritzel", "Matthew Botvinick", "Charles Blundell", "Alexander Lerchner. 
Darla" ], "title": "Improving zero-shot transfer in reinforcement learning", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Jonathan Ho", "Stefano Ermon" ], "title": "Generative adversarial imitation learning", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Judy Hoffman", "Dequan Wang", "Fisher Yu", "Trevor Darrell" ], "title": "Fcns in the wild: Pixel-level adversarial and constraint-based adaptation", "venue": "arXiv preprint arXiv:1612.02649,", "year": 2016 }, { "authors": [ "Michael Janner", "Justin Fu", "Marvin Zhang", "Sergey Levine" ], "title": "When to trust your model: Model-based policy optimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Natasha Jaques", "Shixiang Gu", "Dzmitry Bahdanau", "José Miguel Hernández-Lobato", "Richard E Turner", "Douglas Eck" ], "title": "Sequence tutor: Conservative fine-tuning of sequence generation models with kl-control", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Hilbert J Kappen" ], "title": "Path integrals and symmetry breaking for optimal control theory", "venue": "Journal of statistical mechanics: theory and experiment,", "year": 2005 }, { "authors": [ "Taylor W Killian", "Samuel Daulton", "George Konidaris", "Finale Doshi-Velez" ], "title": "Robust and efficient transfer learning with hidden parameter markov decision processes", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Daphne Koller", "Nir Friedman" ], "title": "Probabilistic graphical models: principles and techniques", "venue": "MIT press,", "year": 2009 }, { "authors": [ "Sylvain Koos", "Jean-Baptiste Mouret", "Stéphane Doncieux" ], "title": "The transferability approach: Crossing the reality gap in evolutionary robotics", "venue": "IEEE Transactions on Evolutionary Computation,", "year": 2012 }, { "authors": [ "Wouter Marco Kouw", "Marco Loog" ], "title": "A review of domain adaptation without target labels", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2019 }, { "authors": [ "Alessandro Lazaric" ], "title": "Knowledge transfer in reinforcement learning", "venue": "PhD thesis, PhD thesis, Politecnico di Milano,", "year": 2008 }, { "authors": [ "Sergey Levine" ], "title": "Reinforcement learning and control as probabilistic inference: Tutorial and review", "venue": "arXiv preprint arXiv:1805.00909,", "year": 2018 }, { "authors": [ "Sergey Levine", "Peter Pastor", "Alex Krizhevsky", "Julian Ibarz", "Deirdre Quillen" ], "title": "Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection", "venue": "The International Journal of Robotics Research,", "year": 2018 }, { "authors": [ "Zachary C Lipton", "Yu-Xiang Wang", "Alex Smola" ], "title": "Detecting and correcting for label shift with black box predictors", "venue": "arXiv preprint arXiv:1802.03916,", "year": 2018 }, { "authors": [ "Lennart Ljung" ], "title": "System identification. 
"venue": "Wiley encyclopedia of electrical and electronics engineering,", "year": 1999 }, { "authors": [ "Michael G Madden", "Tom Howley" ], "title": "Transfer of experience between reinforcement learning environments with progressive difficulty", "venue": "Artificial Intelligence Review,", "year": 2004 }, { "authors": [ "Oliver Mihatsch", "Ralph Neuneier" ], "title": "Risk-sensitive reinforcement learning", "venue": "Machine learning,", "year": 2002 }, { "authors": [ "Nikhil Mishra", "Mostafa Rohaninejad", "Xi Chen", "Pieter Abbeel" ], "title": "A simple neural attentive meta-learner", "venue": "arXiv preprint arXiv:1707.03141,", "year": 2017 }, { "authors": [ "Shakir Mohamed", "Balaji Lakshminarayanan" ], "title": "Learning in implicit generative models", "venue": "arXiv preprint arXiv:1610.03483,", "year": 2016 }, { "authors": [ "Rémi Munos", "Tom Stepleton", "Anna Harutyunyan", "Marc Bellemare" ], "title": "Safe and efficient off-policy reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Xue Bin Peng", "Marcin Andrychowicz", "Wojciech Zaremba", "Pieter Abbeel" ], "title": "Sim-to-real transfer of robotic control with dynamics randomization", "venue": "IEEE international conference on robotics and automation (ICRA),", "year": 2018 }, { "authors": [ "Theodore J Perkins", "Doina Precup" ], "title": "Using options for knowledge transfer in reinforcement learning", "venue": null, "year": 1999 }, { "authors": [ "Aravind Rajeswaran", "Sarvjeet Ghotra", "Balaraman Ravindran", "Sergey Levine" ], "title": "Epopt: Learning robust neural network policies using model ensembles", "venue": "arXiv preprint arXiv:1610.01283,", "year": 2016 }, { "authors": [ "Kate Rakelly", "Aurick Zhou", "Chelsea Finn", "Sergey Levine", "Deirdre Quillen" ], "title": "Efficient off-policy meta-reinforcement learning via probabilistic context variables", "venue": "In International conference on machine learning,", "year": 2019 }, { "authors": [ "Balaraman Ravindran", "Andrew G Barto" ], "title": "An algebraic approach to abstraction in reinforcement learning", "venue": "PhD thesis, University of Massachusetts at Amherst,", "year": 2004 }, { "authors": [ "Konrad Rawlik", "Marc Toussaint", "Sethu Vijayakumar" ], "title": "On stochastic optimal control and reinforcement learning by approximate inference", "venue": "In Twenty-Third International Joint Conference on Artificial Intelligence,", "year": 2013 }, { "authors": [ "Stephane Ross", "J Andrew Bagnell" ], "title": "Agnostic system identification for model-based reinforcement learning", "venue": "arXiv preprint arXiv:1203.1007,", "year": 2012 }, { "authors": [ "Fereshteh Sadeghi", "Sergey Levine" ], "title": "Cad2rl: Real single-image flight without a single real image", "venue": "arXiv preprint arXiv:1611.04201,", "year": 2016 }, { "authors": [ "Sosale Shankara Sastry", "Alberto Isidori" ], "title": "Adaptive control of linearizable systems", "venue": "IEEE Transactions on Automatic Control,", "year": 1989 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "Oliver G Selfridge", "Richard S Sutton", "Andrew G Barto" ], "title": "Training and tracking in robotics", "venue": "In IJCAI,", "year": 1985 }, { "authors": [ "Alexander A Sherstov", "Peter Stone" ], "title": "Improving action selection in mdp’s via
knowledge transfer", "venue": "In AAAI,", "year": 2005 }, { "authors": [ "Casper Kaae Sønderby", "Jose Caballero", "Lucas Theis", "Wenzhe Shi", "Ferenc Huszár" ], "title": "Amortised map inference for image super-resolution", "venue": "arXiv preprint arXiv:1610.04490,", "year": 2016 }, { "authors": [ "H Francis Song", "Abbas Abdolmaleki", "Jost Tobias Springenberg", "Aidan Clark", "Hubert Soyer", "Jack W Rae", "Seb Noury", "Arun Ahuja", "Siqi Liu", "Dhruva Tirumala" ], "title": "V-mpo: On-policy maximum a posteriori policy optimization for discrete and continuous control", "venue": "arXiv preprint arXiv:1909.12238,", "year": 2019 }, { "authors": [ "Funlade T Sunmola", "Jeremy L Wyatt" ], "title": "Model transfer for markov decision tasks via parameter matching", "venue": "In Proceedings of the 25th Workshop of the UK Planning and Scheduling Special Interest Group (PlanSIG", "year": 2006 }, { "authors": [ "Aviv Tamar", "Huan Xu", "Shie Mannor" ], "title": "Scaling up robust mdps by reinforcement learning", "venue": "arXiv preprint arXiv:1306.6189,", "year": 2013 }, { "authors": [ "Jie Tan", "Zhaoming Xie", "Byron Boots", "C Karen Liu" ], "title": "Simulation-based design of dynamic controllers for humanoid balancing", "venue": "In 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS),", "year": 2016 }, { "authors": [ "Fumihide Tanaka", "Masayuki Yamamura" ], "title": "Multitask reinforcement learning on the distribution of mdps", "venue": "In Proceedings 2003 IEEE International Symposium on Computational Intelligence in Robotics and Automation. Computational Intelligence in Robotics and Automation for the New Millennium (Cat. No. 03EX694),", "year": 2003 }, { "authors": [ "Marko Tanaskovic", "Lorenzo Fagiano", "Roy Smith", "Paul Goulart", "Manfred Morari" ], "title": "Adaptive model predictive control for constrained linear systems", "venue": "European Control Conference (ECC),", "year": 2013 }, { "authors": [ "Matthew E Taylor", "Peter Stone" ], "title": "Transfer learning for reinforcement learning domains: A survey", "venue": "Journal of Machine Learning Research,", "year": 2009 }, { "authors": [ "Louis C Tiao", "Edwin V Bonilla", "Fabio Ramos" ], "title": "Cycle-consistent adversarial learning as approximate bayesian inference", "venue": "arXiv preprint arXiv:1806.01771,", "year": 2018 }, { "authors": [ "Josh Tobin", "Rachel Fong", "Alex Ray", "Jonas Schneider", "Wojciech Zaremba", "Pieter Abbeel" ], "title": "Domain randomization for transferring deep neural networks from simulation to the real world", "venue": "IEEE/RSJ international conference on intelligent robots and systems (IROS),", "year": 2017 }, { "authors": [ "Emanuel Todorov" ], "title": "Linearly-solvable markov decision problems", "venue": "In Advances in neural information processing systems,", "year": 2007 }, { "authors": [ "Marc Toussaint" ], "title": "Robot trajectory optimization using approximate inference", "venue": "In Proceedings of the 26th annual international conference on machine learning,", "year": 2009 }, { "authors": [ "Masatoshi Uehara", "Issei Sato", "Masahiro Suzuki", "Kotaro Nakayama", "Yutaka Matsuo" ], "title": "Generative adversarial nets from a density ratio estimation perspective", "venue": "arXiv preprint arXiv:1610.02920,", "year": 2016 }, { "authors": [ "Anirudh Vemula", "Yash Oza", "J Andrew Bagnell", "Maxim Likhachev" ], "title": "Planning and execution using inaccurate models with provable guarantees", "venue": "arXiv preprint arXiv:2003.04394,", "year": 2020 }, { 
"authors": [ "Paul J Werbos" ], "title": "Neural networks for control and system identification", "venue": "In Proceedings of the 28th IEEE Conference on Decision and Control,,", "year": 1989 }, { "authors": [ "Martha White" ], "title": "Unifying task specification in reinforcement learning", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Grady Williams", "Andrew Aldrich", "Evangelos Theodorou" ], "title": "Model predictive path integral control using covariance variable importance sampling", "venue": "arXiv preprint arXiv:1509.01149,", "year": 2015 }, { "authors": [ "Björn Wittenmark" ], "title": "Adaptive dual control methods: An overview", "venue": "In Adaptive Systems in Control and Signal Processing", "year": 1995 }, { "authors": [ "Markus Wulfmeier", "Alex Bewley", "Ingmar Posner" ], "title": "Addressing appearance change in outdoor robotics with adversarial domain adaptation", "venue": "IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS),", "year": 2017 }, { "authors": [ "Markus Wulfmeier", "Ingmar Posner", "Pieter Abbeel" ], "title": "Mutual alignment transfer learning", "venue": "arXiv preprint arXiv:1707.07907,", "year": 2017 }, { "authors": [ "Wenhao Yu", "Jie Tan", "C Karen Liu", "Greg Turk" ], "title": "Preparing for the unknown: Learning a universal policy with online system identification", "venue": "arXiv preprint arXiv:1702.02453,", "year": 2017 }, { "authors": [ "Bianca Zadrozny" ], "title": "Learning and evaluating classifiers under sample selection bias", "venue": "In Proceedings of the twenty-first international conference on Machine learning,", "year": 2004 }, { "authors": [ "Jun-Yan Zhu", "Taesung Park", "Phillip Isola", "Alexei A Efros" ], "title": "Unpaired image-to-image translation using cycle-consistent adversarial networks", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Shaojun Zhu", "Andrew Kimmel", "Kostas E Bekris", "Abdeslam Boularias" ], "title": "Fast model identification via physics engines for data-efficient policy search", "venue": "arXiv preprint arXiv:1710.08893,", "year": 2017 }, { "authors": [ "Brian D. Ziebart" ], "title": "Modeling Purposeful Adaptive Behavior with the Principle of Maximum Causal Entropy", "venue": "PhD thesis,", "year": 2010 }, { "authors": [ "1Tiao" ], "title": "2018) show that observation adaptation using CycleGan (Zhu et al., 2017a) minimizes a JensenShannon divergence. Assuming sufficiently expressive models, the Jensen-Shannon divergence and the reverse KL divergence above have the same optimum", "venue": null, "year": 2017 }, { "authors": [ "Guadarrama" ], "title": "HYPERPARAMETERS Our implementation of DARC is built on top of the implementation of SAC from Guadarrama et al. (2018)", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Reinforcement learning (RL) can automate the acquisition of complex behavioral policies through real-world trial-and-error experimentation. However, many domains where we would like to learn policies are not amenable to such trial-and-error learning, because the errors are too costly: from autonomous driving to flying airplanes to devising medical treatment plans, safety-critical RL problems necessitate some type of transfer learning, where a safer source domain, such as a simulator, is used to train a policy that can then function effectively in a target domain. In this paper, we examine a specific transfer learning scenario that we call domain adaptation, by analogy to domain adaptation problems in computer vision (Csurka, 2017), where the training process in a source domain can be modified so that the resulting policy is effective in a given target domain.\nRL algorithms today require a large amount of experience in the target domain. However, for many tasks we may have access to a different but structurally similar source domain. While the source domain has different dynamics than the target domain, experience in the source domain is much cheaper to collect. However, transferring policies from one domain to another is challenging because strategies which are effective in the source domain may not be effective in the target domain. For example, aggressive driving may work well on a dry racetrack but fail catastrophically on an icy road. ∗Equal contribution.\nWhile prior work has studied the domain adaptation of observations in RL (Bousmalis et al., 2018; Ganin et al., 2016; Higgins et al., 2017), it ignores the domain adaptation of the dynamics.\nThis paper presents a simple approach for domain adaptation in RL, illustrated in Fig. 1. Our main idea is that the agent’s experience in the source domain should look similar to its experience in the target domain. Building off of a probabilistic view of RL, we formally show that we can achieve this goal by compensating for the difference in dynamics by modifying the reward function. This modified reward function is simple to estimate by learning auxiliary classifiers that distinguish sourcedomain transitions from target-domain transitions. Because our method learns a classifier, rather than a dynamics model, we expect it to handle high-dimensional tasks better than model-based methods, a conjecture supported by experiments on the 111-dimensional Ant task. Unlike prior work based on similar intuition (Koos et al., 2012; Wulfmeier et al., 2017b), a key contribution of our work is a formal guarantee that our method yields a near-optimal policy for the target domain.\nThe main contribution of this work is an algorithm for domain adaptation to dynamics changes in RL, based on the idea of compensating for differences in dynamics by modifying the reward function. We call this algorithm Domain Adaptation with Rewards from Classifiers, or DARC for short. DARC does not estimate transition probabilities, but rather modifies the reward function using a pair of classifiers. We formally analyze the conditions under which our method produces nearoptimal policies for the target domain. On a range of discrete and continuous control tasks, we both illustrate the mechanics of our approach and demonstrate its scalability to higher-dimensional tasks." 
}, { "heading": "2 RELATED WORK", "text": "While our work will focus on domain adaptation applied to RL, we start by reviewing more general ideas in domain adaptation, and defer to Kouw & Loog (2019) for a recent review of the field. Two common approaches to domain adaptation are importance weighting and domain-agnostic features. Importance-weighting methods (e.g., (Zadrozny, 2004; Cortes & Mohri, 2014; Lipton et al., 2018)) estimate the likelihood ratio of examples under the target domain versus the source domain, and use this ratio to re-weight examples sampled from the source domain. Similar to prior work on importance weighting (Bickel et al., 2007; Sønderby et al., 2016; Mohamed & Lakshminarayanan, 2016; Uehara et al., 2016), our method will use a classifier to estimate a probability ratio. Since we will need to estimate the density ratio of conditional distributions (transition probabilities), we will learn two classifiers. Importantly, we will use the logarithm of the density ratio to modify the reward function instead of weighting samples by the density ratio, which is often numerically unstable (see, e.g., Schulman et al. (2017, §3)) and led to poor performance in our experiments. Prior methods for applying domain adaptation to RL include approaches based on system identification, domain randomization, and observation adaptation. Perhaps the most established approach, system identification (Ljung, 1999), uses observed data to tune the parameters of a simulator (Feldbaum, 1960; Werbos, 1989; Wittenmark, 1995; Ross & Bagnell, 2012; Tan et al., 2016; Zhu et al., 2017b; Farchy et al., 2013) More recent work has successfully used this strategy to bridge the sim2real gap (Chebotar et al., 2019; Rajeswaran et al., 2016). Closely related is work on online system identification and meta-learning, which directly uses the inferred system parameters to update the policy (Yu et al., 2017; Clavera et al., 2018; Tanaskovic et al., 2013; Sastry & Isidori, 1989). However, these approaches typically require either a model of the environment or a manually-specified distribution over potential test-time dynamics, requirements that our method will lift. Another approach, domain randomization, randomly samples the parameters of the source domain and then finds the best policy for this randomized environment (Sadeghi & Levine, 2016; Tobin et al., 2017; Peng et al., 2018; Cutler et al., 2014). While often effective, this method is sensitive to the choice of which parameters are randomized, and the distributions from which these simulator parameters are sampled. A third approach, observation adaptation, modifies the observations of the source domain to appear similar to those in the target domain (Fernando et al., 2013; Hoffman et al., 2016; Wulfmeier et al., 2017a). While this approach has been successfully applied to video games (Gamrian & Goldberg, 2018) and robot manipulation (Bousmalis et al., 2018), it ignores the fact that the source and target domains may have differing dynamics.\nFinally, our work is similar to prior work on transfer learning (Taylor & Stone, 2009) and metalearning in RL, but makes less strict assumptions than most prior work. For example, most work on meta-RL (Killian et al., 2017; Duan et al., 2016; Mishra et al., 2017; Rakelly et al., 2019) and\nsome work on transfer learning (Perkins et al., 1999; Tanaka & Yamamura, 2003; Sunmola & Wyatt, 2006) assume that the agent has access to many source tasks, all drawn from the same distribution as the target task. Selfridge et al. 
(1985); Madden & Howley (2004) assume a manually-specified curriculum of tasks, Ravindran & Barto (2004) assume that the source and target domains have the same dynamics locally, and Sherstov & Stone (2005) assume that the set of actions that are useful in the source domain is the same as the set of actions that will be useful in the target domain. Our method does not require these assumptions, allowing it to successfully learn in settings where these prior works would fail. For example, the assumption of Sherstov & Stone (2005) is violated in our experiments with broken robots: actions which move a joint are useful in the source domain (where the robot is fully-functional) but not useful in the target domain (where that joint is disabled). Our method will significantly outperform an importance weighting baseline (Lazaric, 2008). Unlike Vemula et al. (2020), our method does not require learning a dynamics model and is applicable to stochastic environments and those with continuous states and actions. Our algorithm bears a resemblance to that in Wulfmeier et al. (2017b), but a crucial algorithmic difference allows us to prove that our method acquires a near-optimal policy in the target domain, and also leads to improved performance empirically.\nThe theoretical derivation of our method is inspired by prior work which formulates control as a problem of probabilistic inference (e.g., (Toussaint, 2009; Rawlik et al., 2013; Levine et al., 2018)). Algorithms for model-based RL (e.g., (Deisenroth & Rasmussen, 2011; Hafner et al., 2018; Janner et al., 2019)) and off-policy RL (e.g., (Munos et al., 2016; Fujimoto et al., 2018; Dann et al., 2014; Dudík et al., 2011)) similarly aim to improve the sample efficiency of RL, but do not use the source domain to accelerate learning. Our method is applicable to any maximum entropy RL algorithm, including on-policy (Song et al., 2019), off-policy (Abdolmaleki et al., 2018; Haarnoja et al., 2018), and model-based (Janner et al., 2019; Williams et al., 2015) algorithms. We will use SAC (Haarnoja et al., 2018) in our experiments and compare against model-based baselines." }, { "heading": "3 PRELIMINARIES", "text": "In this section, we introduce notation and formally define domain adaptation for RL. Our problem setting will consider two MDPs: Msource represents the source domain (e.g., a practice facility, simulator, or learned approximate model of the target domain) while Mtarget represents the target domain. We assume that the two domains have the same state space S, action space A, reward function r, and initial state distribution p1(s1); the only difference between the domains is the dynamics, psource(st+1 | st, at) and ptarget(st+1 | st, at). We will learn a Markovian policy πθ(a | s), parametrized by θ. Our objective is to learn a policy π that maximizes the expected discounted sum of rewards on Mtarget, E_{π,Mtarget}[∑_t γ^t r(st, at)]. We now formally define our problem setting: Definition 1. Domain Adaptation for RL is the problem of using interactions in the source MDP Msource together with a small number of interactions in the target MDP Mtarget to acquire a policy that achieves high reward in the target MDP, Mtarget.\nWe will assume every transition with non-zero probability in the target domain will have non-zero probability in the source domain:\nptarget(st+1 | st, at) > 0 =⇒ psource(st+1 | st, at) > 0 for all st, st+1 ∈ S, at ∈ A. (1)
This assumption is common in work on importance sampling (Koller & Friedman, 2009, §12.2.2), and the converse need not hold: transitions that are possible in the source domain need not be possible in the target domain. If this assumption did not hold, then the optimal policy for the target domain might involve behaviors that are not possible in the source domain, so it is unclear how one could learn a near-optimal policy by practicing in the source domain." }, { "heading": "4 A VARIATIONAL PERSPECTIVE ON DOMAIN ADAPTATION IN RL", "text": "The probabilistic inference interpretation of RL (Kappen, 2005; Todorov, 2007; Toussaint, 2009; Ziebart, 2010; Rawlik et al., 2013; Levine, 2018) treats the reward function as defining a desired distribution over trajectories. The agent's task is to sample from this distribution by picking trajectories with probability proportional to their exponentiated reward. This section will reinterpret this model in the context of domain transfer, showing that domain adaptation of dynamics can be done by modifying the rewards. To apply this model to domain adaptation, define p(τ) as the desired distribution over trajectories in the target domain,\np(τ) ∝ p1(s1) (∏_t ptarget(st+1 | st, at)) exp(∑_t r(st, at)),\nand q(τ) as our agent's distribution over trajectories in the source domain,\nq(τ) = p1(s1) ∏_t psource(st+1 | st, at) πθ(at | st).\nAs noted in Section 3, we assume both trajectory distributions have the same initial state distribution. Our aim is to learn a policy whose behavior in the source domain both receives high reward and has high likelihood under the target domain dynamics. We codify this objective by minimizing the reverse KL divergence between these two distributions:\nmin_{π(a|s)} DKL(q ‖ p) = −E_{psource}[∑_t r(st, at) + Hπ[at | st] + ∆r(st+1, st, at)] + c, (2)\nwhere ∆r(st+1, st, at) ≜ log ptarget(st+1 | st, at) − log psource(st+1 | st, at).\nThe constant c is the partition function of p(τ), which is independent of the policy and dynamics. While ∆r is defined in terms of transition probabilities, in Sec. 5 we show how to estimate ∆r by learning a classifier. We therefore call our method domain adaptation with rewards from classifiers (DARC), and will use π∗DARC to refer to the policy that maximizes the objective in Eq. 2.\nWhere the source and target dynamics are equal, the correction term ∆r is zero and we recover maximum entropy RL (Ziebart, 2010; Todorov, 2007). The reward correction is different from prior work that adds log β(a | s) to the reward to regularize the policy to be close to the behavior policy β (e.g., Jaques et al. (2017); Abdolmaleki et al. (2018)). In the case where the source dynamics are not equal to the true dynamics, this objective is not the same as maximum entropy RL on trajectories sampled from the source domain. Instead, this objective suggests a corrective term ∆r that should be added to the reward function to account for the discrepancy between the source and target dynamics. The correction term, ∆r, is quite intuitive. If a transition (st, at, st+1) has equal probability in the source and target domains, then ∆r(st, at, st+1) = 0, so no correction is applied. For transitions that are likely in the source but are unlikely in the target domain, ∆r < 0, and the agent is penalized for “exploiting” inaccuracies or discrepancies in the source domain by taking these transitions. For the example environment in Figure 1, transitions through the center of the environment are blocked in the target domain but not in the source domain.
For these transitions, ∆r would serve as a large penalty, discouraging the agent from taking these transitions and instead learning to navigate around the wall. Appendix A presents additional interpretations of ∆r in terms of coding theory, mutual information, and a constraint on the discrepancy between the source and target dynamics. Appendix C discusses how prior work on domain agnostic feature learning can be viewed as a special case of our framework." }, { "heading": "4.1 THEORETICAL GUARANTEES", "text": "We now analyze when maximizing the modified reward r + ∆r in the source domain yields a near-optimal policy for the target domain. Our proof relies on the following lightweight assumption: Assumption 1. Let π∗ = arg max_π E_{ptarget}[∑ r(st, at)] be the reward-maximizing policy in the target domain. Then the expected reward in the source and target domains differs by at most 2Rmax√(ε/2):\n|E_{π∗, psource}[∑ r(st, at)] − E_{π∗, ptarget}[∑ r(st, at)]| ≤ 2Rmax√(ε/2).\nThe variable Rmax refers to the maximum entropy-regularized return of any trajectory. This assumption says that the optimal policy in the target domain is still a good policy for the source domain, and its expected reward is similar in both domains. We do not require that the opposite be true: the optimal policy in the source domain does not need to receive high reward in the target domain. If there are multiple optimal policies, we only require that this assumption hold for one of them. We now state our main result:\nAlgorithm 1 Domain Adaptation with Rewards from Classifiers [DARC]\n1: Input: source MDP Msource and target MDP Mtarget; ratio r of experience from source vs. target.\n2: Initialize: replay buffers for source and target transitions, Dsource, Dtarget; policy π; parameters θ = (θSAS, θSA) for classifiers qθSAS(target | st, at, st+1) and qθSA(target | st, at).\n3: for t = 1, · · · , num iterations do\n4: Dsource ← Dsource ∪ ROLLOUT(π, Msource) . Collect source data.\n5: if t mod r = 0 then . Periodically, collect target data.\n6: Dtarget ← Dtarget ∪ ROLLOUT(π, Mtarget)\n7: θ ← θ − η∇θ ℓ(θ) . Update both classifiers.\n8: r̃(st, at, st+1) ← r(st, at) + ∆r(st, at, st+1) . ∆r is computed with Eq. 3.\n9: π ← MAXENT RL(π, Dsource, r̃)\n10: return π\nTheorem 4.1. Let π∗DARC be the policy that maximizes the modified (entropy-regularized) reward in the source domain, let π∗ be the policy that maximizes the (unmodified, entropy-regularized) reward in the target domain, and assume that π∗ satisfies Assumption 1. Then the following holds:\nE_{ptarget, π∗DARC}[∑ r(st, at) + H[at | st]] ≥ E_{ptarget, π∗}[∑ r(st, at) + H[at | st]] − 4Rmax√(ε/2).\nSee Appendix B for the proof and the definition of ε. This result says that π∗DARC attains near-optimal (entropy-regularized) reward on the target domain. Thus, we can expect that modifying the reward function should allow us to adapt to different dynamics. The next section will present a practical algorithm for acquiring π∗DARC by estimating and effectively maximizing the modified reward in the source domain." }, { "heading": "5 DOMAIN ADAPTATION IN RL WITH A LEARNED REWARD", "text": "The variational perspective on model-based RL in the previous section suggests that we should modify the reward in the source domain by adding ∆r. In this section we develop a practical algorithm for off-dynamics RL by showing how ∆r can be estimated without learning an explicit dynamics model.\nTo estimate ∆r, we will use a pair of (learned) binary classifiers, which will infer whether transitions came from the source or target domain.
The key idea is that the transition probabilities are related to the classifier probabilities via Bayes' rule:\np(target | st, at, st+1) = p(st+1 | st, at, target) p(st, at | target) p(target) / p(st, at, st+1),\nwhere p(st+1 | st, at, target) = ptarget(st+1 | st, at). We estimate the term p(st, at | target) on the RHS via another classifier, p(target | st, at):\np(st, at | target) = p(target | st, at) p(st, at) / p(target).\nSubstituting these expressions into our definition for ∆r and simplifying, we obtain an estimate for ∆r that depends solely on the predictions of these two classifiers:\n∆r(st, at, st+1) = [log p(target | st, at, st+1) − log p(source | st, at, st+1)] − [log p(target | st, at) − log p(source | st, at)]. (3)\nThe first bracketed term is the difference in logits from the classifier conditioned on (st, at, st+1), while the second bracketed term is the difference in logits from the classifier conditioned on just (st, at). Intuitively, ∆r answers the following question: for the task of predicting whether a transition came from the source or target domain, how much better can you perform after observing st+1? We make this connection precise in Appendix A.2 by relating ∆r to mutual information. Ablation experiments (Fig. 7) confirm that both classifiers are important to the success of our method. The use of transition classifiers makes our method look somewhat similar to adversarial imitation learning (Ho & Ermon, 2016; Fu et al., 2017). While our method is not solving an imitation learning problem (we do not assume access to any expert experience), our method can be interpreted as learning a policy such that the dynamics observed by that policy in the source domain imitate the dynamics of the target domain.\nAlgorithm Summary. Our algorithm modifies an existing MaxEnt RL algorithm to additionally learn two classifiers, qθSAS(target | st, at, st+1) and qθSA(target | st, at), parametrized by θSAS and θSA respectively, to minimize the standard cross-entropy loss:\nℓSAS(θSAS) ≜ −E_{Dtarget}[log qθSAS(target | st, at, st+1)] − E_{Dsource}[log qθSAS(source | st, at, st+1)],\nℓSA(θSA) ≜ −E_{Dtarget}[log qθSA(target | st, at)] − E_{Dsource}[log qθSA(source | st, at)].\nOur algorithm, Domain Adaptation with Rewards from Classifiers (DARC), is presented in Alg. 1 and illustrated in Fig. 2. To simplify notation, we define θ ≜ (θSAS, θSA) and ℓ(θ) ≜ ℓSAS(θSAS) + ℓSA(θSA). At each iteration, we collect transitions from the source and (less frequently) target domain, storing the transitions in separate replay buffers. We then sample a batch of experience from both buffers to update the classifiers. We use the classifiers to modify the rewards from the source domain, and apply MaxEnt RL to this experience. We use SAC (Haarnoja et al., 2018) as our MaxEnt RL algorithm, but emphasize that DARC is applicable to any MaxEnt RL algorithm (e.g., on-policy, off-policy, and model-based). When training the classifiers, we add Gaussian input noise to prevent overfitting to the small number of target-domain transitions (see Fig. 7 for an ablation)."
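To make Eq. 3 and the classifier losses concrete, here is a minimal PyTorch-style sketch of how ∆r could be computed from the two classifiers. This is hypothetical code: the network architecture, names, and batch handling are our assumptions, not the paper's released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_classifier(in_dim: int) -> nn.Module:
    # Small binary classifier over domains; logit index 0 = source, index 1 = target.
    return nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, 2))

class RewardCorrection(nn.Module):
    """Estimates the DARC reward correction Delta r (Eq. 3) from two classifiers."""

    def __init__(self, s_dim: int, a_dim: int):
        super().__init__()
        self.q_sas = make_classifier(2 * s_dim + a_dim)  # q(domain | s, a, s')
        self.q_sa = make_classifier(s_dim + a_dim)       # q(domain | s, a)

    def delta_r(self, s, a, s_next):
        # Eq. 3: (target - source) logit difference of the (s, a, s') classifier,
        # minus the same difference for the (s, a) classifier.
        sas = self.q_sas(torch.cat([s, a, s_next], dim=-1))
        sa = self.q_sa(torch.cat([s, a], dim=-1))
        return (sas[:, 1] - sas[:, 0]) - (sa[:, 1] - sa[:, 0])

    def classifier_loss(self, source_batch, target_batch, noise_std=1.0):
        # Standard cross-entropy on domain labels, with Gaussian input noise
        # added to avoid overfitting the small target replay buffer.
        loss = 0.0
        for (s, a, s_next), label in ((source_batch, 0), (target_batch, 1)):
            labels = torch.full((s.shape[0],), label, dtype=torch.long)
            sas_in = torch.cat([s, a, s_next], dim=-1)
            sa_in = torch.cat([s, a], dim=-1)
            loss = loss + F.cross_entropy(self.q_sas(sas_in + noise_std * torch.randn_like(sas_in)), labels)
            loss = loss + F.cross_entropy(self.q_sa(sa_in + noise_std * torch.randn_like(sa_in)), labels)
        return loss
```

The MaxEnt RL update (Alg. 1, lines 8–9) would then train on source-domain transitions with the modified reward r̃ = r + ∆r.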
 }, { "heading": "6 EXPERIMENTS", "text": "We start with a didactic experiment to build intuition for the mechanics of our method, and then evaluate on more complex tasks. Our experiments will show that DARC outperforms alternative approaches, such as directly applying RL to the target domain or learning importance weights. We will also show that our method can account for domain shift in the termination condition, and confirm the importance of learning two classifiers.\nIllustrative example. We start with a simple gridworld example, shown on the right, where we can apply our method without function approximation. The goal is to navigate from the top left to the bottom left. The real environment contains an obstacle (shown in red), which is not present in the source domain. If we simply apply RL on the source domain, we obtain a policy that navigates directly to the goal (blue arrows), and will fail when used in the target domain. We then apply our method: we collect trajectories from the source domain and the real world to fit the two tabular classifiers. These classifiers give us a modified reward, which we use to learn a policy in the source domain. The modified reward causes our learned policy to navigate around the obstacle, which succeeds in the target environment.\nVisualizing the reward modification in stochastic domains. In our next experiment, we use an \"archery\" task to visualize how the modified reward accounts for differences in dynamics. The task, shown in Fig. 4, requires choosing an angle at which to shoot an arrow. The practice range (i.e., the source domain) is outdoors, with wind that usually blows from left to right. The competition range (i.e., the target domain) is indoors with no wind. The reward is the negative distance to the target. We
The Importance Weighting baseline performs RL on importance-weighted samples from the source domain; the importance weights are $\exp(\Delta r)$. Recall that DARC collects many (r = 10) transitions in the source domain and performs many gradient updates for each single transition collected in the target domain (Alg. 1 Line 5). We therefore compared against an RL on Target (10x) baseline that likewise performs many (r = 10) gradient updates per transition in the target domain. Next, we compared against two recent model-based RL methods: MBPO (Janner et al., 2019) and PETS (Chua et al., 2018). Finally, we also compared against MATL (Wulfmeier et al., 2017b), which is similar in spirit to our method but uses a different modified reward.

We show the results of this experiment in Fig. 6, plotting the reward on the target domain as a function of the number of transitions in the target domain. In this figure, the transparent lines correspond to different random seeds, and the darker lines are the average of these random seeds. On all tasks, the RL on source baseline (shown as a dashed line because it observes no target transitions) performs considerably worse than the optimal policy from RL on the target domain, suggesting that good policies for the source domain are suboptimal for the target domain. Nonetheless, on three of the four tasks our method matches (or even surpasses) the asymptotic performance of doing RL on the target domain, despite never doing RL on experience from the target domain, and despite observing 5–10x less experience from the target domain. On the broken reacher and broken half cheetah tasks, finetuning on the target domain performs on par with our method. On the simpler broken reacher task, just doing RL on the target domain with a large number of gradient steps works quite well (we did not tune this parameter for our method). While the model-based baselines (PETS and MBPO) also performed well on the low-dimensional tasks (broken reacher, broken half cheetah), they perform quite poorly on more challenging tasks like broken ant, supporting our intuition that classification is easier than learning a dynamics model in high-dimensional tasks. Finally, DARC outperforms MATL on all tasks.

Ablation Experiments. Our next experiment examines the importance of using two classifiers to estimate $\Delta r$. We compared our method to an ablation that does not learn the SA classifier, effectively ignoring the terms in Eq. 3 that come from the classifier conditioned on just $(s_t, a_t)$. As shown in Fig. 7 (left), this ablation performs considerably worse than our method. Intuitively, this makes sense: we might predict that a transition came from the source domain not because the next state had higher likelihood under the source dynamics, but rather because the state or action was visited more frequently in the source domain. The second classifier used in our method corrects for this distribution shift.

Next, we examine the importance of input noise regularization in the classifiers. As we observe only a handful of transitions from the target domain, we hypothesized that regularization would be important to prevent overfitting. We test this hypothesis in Fig. 7 (right) by training our method on the broken reacher environment with varying amounts of input noise. With no or little noise our method performs poorly (likely due to overfitting); too much noise also performs poorly (likely due to underfitting). We used a value of 1 in all our experiments, and did not tune this value.
See Appendix D for more plots of both ablation experiments.

To gain more intuition for our method, we recorded the reward correction $\Delta r$ throughout training on the broken reacher environment. In this experiment, we ran RL on the source domain for 100k steps before switching to our method. Said another way, we ignored $\Delta r$ for the first 100k steps of training. As shown in Fig. 8, $\Delta r$ steadily decreases during these first 100k steps, suggesting that the agent is learning a strategy that takes transitions where the source domain and target domain have different dynamics: the agent is making use of its broken joint. After 100k steps, when we maximize the combination of task reward and $\Delta r$, we observe that $\Delta r$ increases, so the agent's transitions in the source domain are increasingly consistent with target domain dynamics. After around 1e6 training steps $\Delta r$ is zero: the agent has learned a strategy that uses transitions that are indistinguishable between the source and target domains.

Safety emerges from domain adaptation to the termination condition. In many safety-critical applications, the real world and the simulator have different safeguards, which kick in to stop the agent and terminate the episode. For an agent to effectively transfer from the simulator to the real world, it cannot rely on safeguards which are present in one domain but not the other. Since this termination condition is part of the dynamics (White, 2017), we can readily apply DARC to this setting.

We use the humanoid shown in Fig. 9 for this experiment and set the task reward to 0. In the source domain episodes have a fixed length of 300 steps; in the target domain the episode terminates when the robot falls. The scenario mimics the real-world setting where robots have freedom to practice in a safe, cushioned practice facility, but are preemptively stopped when they try to take unsafe actions in the real world. Our aim is for the agent to learn to avoid unsafe transitions in the source domain that would result in episode termination in the target domain. As shown in Fig. 9, our method learns to remain standing for nearly the entire episode. As expected, baselines that maximize the zero reward on the source and target domains fall immediately. While DARC was not designed as a method for safe RL (Tamar et al., 2013; Achiam et al., 2017; Eysenbach et al., 2017; Berkenkamp et al., 2017), this experiment suggests that safety may emerge automatically from DARC, without any manual reward function design.

Comparison with Prior Transfer Learning Methods. We are not the first work that modifies the reward function to perform transfer in RL (Koos et al., 2012), nor the first work to learn how to modify the reward function (Wulfmeier et al., 2017a). However, these prior works lack theoretical justification. In contrast, our approach maximizes a well-defined variational objective and our analysis guarantees that agents learned with our method will achieve similar rewards in the source and target domains. Our formal guarantees (Sec. 4) do not apply to MATL (Wulfmeier et al., 2017b) because their classifier is not conditioned on the action. Indeed, our results on the four tasks in Fig. 6 indicate that DARC outperforms MATL on all tasks. To highlight this difference, we compared DARC and MATL on a gridworld (right), where the source and target domains differ by assigning opposite effects to the "up" and "down" actions in the purple state.
We collected data from a uniform random policy, so the marginal distribution $p(s_{t+1} \mid s_t)$ was the same in the source and target domains, even though the dynamics $p(s_{t+1} \mid s_t, a_t)$ were different. In this domain, MATL fails to recognize that the source and target domains are different. DARC succeeds in this task for 80% of trials while MATL succeeds for 0% of trials. We conclude that conditioning on the action, as suggested by our analysis, is especially important when using experience collected from stochastic policies." }, { "heading": "7 DISCUSSION", "text": "In this paper, we proposed a simple, practical, and intuitive approach for domain adaptation to changing dynamics in RL. We motivate this method from a novel variational perspective on domain adaptation in RL, which suggests that we can compensate for differences in dynamics via the reward function. Moreover, we formally prove that, subject to a lightweight assumption, our method is guaranteed to yield a near-optimal policy for the target domain. Experiments on a range of control tasks show that our method can leverage the source domain to learn policies that will work well in the target domain, despite observing only a handful of transitions from the target domain.

Limitations The main limitation of our method is that the source dynamics must be sufficiently stochastic, an assumption that can usually be satisfied by adding noise to the dynamics, or ensembling a collection of sources. Empirically, we found that our method worked best on tasks that could be completed in many ways in the source domain, but some of these strategies were not compatible with the target dynamics. The main takeaway of this work is that inaccuracies in dynamics can be compensated for via the reward function. In future work we aim to use the variational perspective on domain adaptation (Sec. 4) to learn the dynamics for the source domain.

Acknowledgements. We thank Anirudh Vemula for early discussions; we thank Karol Hausman, Vincent Vanhoucke and anonymous reviewers for feedback on drafts of this work. We thank Barry Moore for providing containers with MuJoCo and Dr. Paul Munro for granting access to compute at CRC. This work is supported by the Fannie and John Hertz Foundation, University of Pittsburgh Center for Research Computing (CRC), NSF (DGE1745016, IIS1763562), ONR (N000141812861), and US Army. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

Contributions. BE proposed the idea of using rewards to correct for dynamics, designed and ran many of the experiments in the paper, and wrote much of the paper. SA did the initial literature review, wrote and designed some of the DARC experiments and environments, developed visualizations of the modified reward function, and ran the MBPO experiments. SC designed some of the initial environments, helped with the implementation of DARC, and ran the PETS experiments. RS and SL provided guidance throughout the project, and contributed to the structure and writing of the paper." }, { "heading": "A ADDITIONAL INTERPRETATIONS OF THE REWARD CORRECTION", "text": "This section presents four additional interpretations of the reward correction, $\Delta r$." }, { "heading": "A.1 CODING THEORY", "text": "The reward correction $\Delta r$ can also be understood from the perspective of coding theory.
Suppose that we use a data-efficient replay buffer that exploits the fact that the next state $s_{t+1}$ is highly redundant with the current state and action, $s_t, a_t$. If we assume that the replay buffer compression has been optimized to store transitions from the target environment, (negative) $\Delta r$ is the number of additional bits (per transition) needed for our source replay buffer, as compared with our target replay buffer. Thus, an agent which maximizes $\Delta r$ will seek those transitions that can be encoded most efficiently, minimizing the size of the source replay buffer." }, { "heading": "A.2 MUTUAL INFORMATION", "text": "We can gain more intuition about the modified reward by writing the expected value of $\Delta r$ from Eq. 3 in terms of mutual information:
$$\mathbb{E}[\Delta r(s_t, a_t, s_{t+1})] = I(s_{t+1}; \text{target} \mid s_t, a_t) - I(s_{t+1}; \text{source} \mid s_t, a_t).$$
The mutual information $I(s_{t+1}; \text{target} \mid s_t, a_t)$ reflects how much better you can predict the next state if you know that you are interacting with the target domain, instead of the source domain. Our approach does exactly this, rewarding the agent for taking transitions that provide information about the target domain while penalizing transitions that hint to the agent that it is interacting with a source domain rather than the target domain: we don't want our agent to find bugs in the Matrix." }, { "heading": "A.3 LOWER BOUND ON THE RISK-SENSITIVE REWARD OBJECTIVE.", "text": "While we derived DARC by minimizing a reverse KL divergence (Eq. 2), we can also show that DARC maximizes a lower bound on a risk-sensitive reward objective (Mihatsch & Neuneier, 2002):
$$\log \mathbb{E}_{\substack{s' \sim p_\text{target}(s' \mid s, a) \\ a \sim \pi(a \mid s)}}\Big[\exp\Big(\sum_t r(s_t, a_t)\Big)\Big] = \log \mathbb{E}_{\substack{s' \sim p_\text{source}(s' \mid s, a) \\ a \sim \pi(a \mid s)}}\Big[\Big(\prod_t \frac{p_\text{target}(s_{t+1} \mid s_t, a_t)}{p_\text{source}(s_{t+1} \mid s_t, a_t)}\Big) \exp\Big(\sum_t r(s_t, a_t)\Big)\Big]$$
$$= \log \mathbb{E}_{\substack{s' \sim p_\text{source}(s' \mid s, a) \\ a \sim \pi(a \mid s)}}\Big[\exp\Big(\sum_t r(s_t, a_t) + \underbrace{\log p_\text{target}(s_{t+1} \mid s_t, a_t) - \log p_\text{source}(s_{t+1} \mid s_t, a_t)}_{\Delta r(s_t, a_t, s_{t+1})}\Big)\Big] \quad (4)$$
$$\geq \mathbb{E}_{\substack{s' \sim p_\text{source}(s' \mid s, a) \\ a \sim \pi(a \mid s)}}\Big[\sum_t r(s_t, a_t) + \Delta r(s_t, a_t, s_{t+1})\Big]. \quad (5)$$
The inequality on the last line is an application of Jensen's inequality. One interesting question is when it would be preferable to maximize Eq. 4 rather than Eq. 5. While Eq. 5 provides a looser bound on the risk-sensitive objective, empirically it may avoid the risk-seeking behavior that can be induced by risk-sensitive objectives. We leave the investigation of this trade-off as future work." }, { "heading": "A.4 A CONSTRAINT ON DYNAMICS DISCREPANCY", "text": "Our method regularizes the policy to visit states where the transition dynamics are similar between the source domain and target domain:
$$\max_\pi \; \mathbb{E}_{\substack{a \sim \pi(a \mid s) \\ s' \sim p(s' \mid s, a)}}\Big[\sum_t r(s_t, a_t) + \underbrace{\log p_\text{target}(s_{t+1} \mid s_t, a_t) - \log p_\text{source}(s_{t+1} \mid s_t, a_t)}_{-D_{KL}(p_\text{source} \,\|\, p_\text{target})} + \mathcal{H}_\pi[a_t \mid s_t]\Big].$$
This objective can equivalently be expressed as applying MaxEnt RL to only those policies which avoid exploiting the dynamics discrepancy. More precisely, the KKT conditions guarantee that there exists a positive constant $\epsilon > 0$ such that our objective is equivalent to the following constrained objective:
$$\max_{\pi \in \Pi_\text{DARC}} \; \mathbb{E}_{\substack{a \sim \pi(a \mid s) \\ s' \sim p(s' \mid s, a)}}\Big[\sum_t r(s_t, a_t) + \mathcal{H}_\pi[a_t \mid s_t]\Big],$$
where $\Pi_\text{DARC}$ denotes the set of policies that do not exploit the dynamics discrepancy:
$$\Pi_\text{DARC} \triangleq \Big\{\pi \;\Big|\; \mathbb{E}_{\substack{a \sim \pi(a \mid s) \\ s' \sim p(s' \mid s, a)}}\Big[\sum_t D_{KL}\big(p_\text{source}(s_{t+1} \mid s_t, a_t) \,\|\, p_\text{target}(s_{t+1} \mid s_t, a_t)\big)\Big] \leq \epsilon\Big\}. \quad (6)$$
One potential benefit of considering our method as the unconstrained objective is that it provides a principled method for increasing or decreasing the weight on the $\Delta r$ term, depending on how much the policy is currently exploiting the dynamics discrepancy. We leave this investigation as future work." }, { "heading": "B PROOFS OF THEORETICAL GUARANTEES", "text": "In this section we present our analysis showing that maximizing the modified reward $r + \Delta r$ in the source domain yields a near-optimal policy for the target domain, subject to Assumption 1. To start, we show that maximizing the modified reward in the source domain is equivalent to maximizing the unmodified reward, subject to the constraint that the policy not exploit the dynamics:

Lemma B.1. Let a reward function $r(s, a)$, source dynamics $p_\text{source}(s' \mid s, a)$, and target dynamics $p_\text{target}(s' \mid s, a)$ be given. Then there exists $\epsilon > 0$ such that the optimization problem in Eq. 2 is equivalent to
$$\max_{\pi \in \Pi_\text{no exploit}} \; \mathbb{E}_{p_\text{source}, \pi}\Big[\sum r(s_t, a_t) + \mathcal{H}_\pi[a_t \mid s_t]\Big],$$
where $\Pi_\text{no exploit}$ denotes the set of policies that do not exploit the dynamics:
$$\Pi_\text{no exploit} \triangleq \Big\{\pi \;\Big|\; \mathbb{E}_{\substack{a \sim \pi(a \mid s) \\ s' \sim p(s' \mid s, a)}}\Big[\sum_t D_{KL}\big(p_\text{source}(s_{t+1} \mid s_t, a_t) \,\|\, p_\text{target}(s_{t+1} \mid s_t, a_t)\big)\Big] \leq \epsilon\Big\}.$$
The proof is a straightforward application of the KKT conditions. This lemma says that maximizing the modified reward can be equivalently viewed as restricting the set of policies to those that do not exploit the dynamics. Next, we will show that policies that do not exploit the dynamics have an expected (entropy-regularized) reward that is similar in the source and target domains:

Lemma B.2. Let policy $\pi \in \Pi_\text{no exploit}$ be given, and let $R_\text{max}$ be the maximum (entropy-regularized) return of any trajectory. Then the following inequality holds:
$$\Big|\mathbb{E}_{p_\text{source}}\Big[\sum r(s_t, a_t) + \mathcal{H}_\pi[a_t \mid s_t]\Big] - \mathbb{E}_{p_\text{target}}\Big[\sum r(s_t, a_t) + \mathcal{H}_\pi[a_t \mid s_t]\Big]\Big| \leq 2 R_\text{max} \sqrt{\epsilon / 2}.$$
This Lemma proves that all policies in $\Pi_\text{no exploit}$ satisfy the same condition as the optimal policy (Assumption 1).

Proof. To simplify notation, define $\tilde r(s, a) = r(s, a) - \log \pi(a \mid s)$. We then apply Holder's inequality and Pinsker's inequality to obtain the desired result:
$$\mathbb{E}_{p_\text{source}}\Big[\sum \tilde r(s_t, a_t)\Big] - \mathbb{E}_{p_\text{target}}\Big[\sum \tilde r(s_t, a_t)\Big] = \sum_\tau \big(p_\text{source}(\tau) - p_\text{target}(\tau)\big)\Big(\sum \tilde r(s_t, a_t)\Big)$$
$$\leq \Big\|\sum \tilde r(s_t, a_t)\Big\|_\infty \cdot \big\|p_\text{source}(\tau) - p_\text{target}(\tau)\big\|_1 \leq \Big(\max_\tau \sum \tilde r(s_t, a_t)\Big) \cdot 2\sqrt{\tfrac{1}{2} D_{KL}\big(p_\text{source}(\tau) \,\|\, p_\text{target}(\tau)\big)} \leq 2 R_\text{max} \sqrt{\epsilon / 2},$$
where the last step uses the fact that, for a fixed policy, the KL divergence between trajectory distributions equals the expected sum of per-step KL divergences between the dynamics (the policy terms cancel), which is at most $\epsilon$ for $\pi \in \Pi_\text{no exploit}$.

We restate our main result:

Theorem 4.1 (Repeated from main text). Let $\pi^*_\text{DARC}$ be the policy that maximizes the modified (entropy-regularized) reward in the source domain, let $\pi^*$ be the policy that maximizes the (unmodified, entropy-regularized) reward in the target domain, and assume that $\pi^*$ satisfies Assumption 1. Then $\pi^*_\text{DARC}$ receives near-optimal (entropy-regularized) reward on the target domain:
$$\mathbb{E}_{p_\text{target}, \pi^*_\text{DARC}}\Big[\sum r(s_t, a_t) + \mathcal{H}[a_t \mid s_t]\Big] \geq \mathbb{E}_{p_\text{target}, \pi^*}\Big[\sum r(s_t, a_t) + \mathcal{H}[a_t \mid s_t]\Big] - 4 R_\text{max} \sqrt{\epsilon / 2}.$$
Proof. Assumption 1 guarantees that the optimal policy for the target domain, $\pi^*$, lies within $\Pi_\text{no exploit}$. Among all policies in $\Pi_\text{no exploit}$, $\pi^*_\text{DARC}$ is (by definition) the one that receives highest reward on the source dynamics, so
$$\mathbb{E}_{p_\text{source}, \pi^*_\text{DARC}}\Big[\sum r(s_t, a_t) + \mathcal{H}[a_t \mid s_t]\Big] \geq \mathbb{E}_{p_\text{source}, \pi^*}\Big[\sum r(s_t, a_t) + \mathcal{H}[a_t \mid s_t]\Big].$$
Since both $\pi^*_\text{DARC}$ and $\pi^*$ lie inside the constraint set, Lemma B.2 dictates that their rewards on the target domain differ by at most $2 R_\text{max} \sqrt{\epsilon / 2}$ from their source domain rewards.
In the worst case, the reward for $\pi^*_\text{DARC}$ decreases by this amount and the reward for $\pi^*$ increases by this amount:
$$\mathbb{E}_{p_\text{source}, \pi^*_\text{DARC}}\Big[\sum r(s_t, a_t) + \mathcal{H}[a_t \mid s_t]\Big] \leq \mathbb{E}_{p_\text{target}, \pi^*_\text{DARC}}\Big[\sum r(s_t, a_t) + \mathcal{H}[a_t \mid s_t]\Big] + 2 R_\text{max} \sqrt{\epsilon / 2}$$
$$\mathbb{E}_{p_\text{source}, \pi^*}\Big[\sum r(s_t, a_t) + \mathcal{H}[a_t \mid s_t]\Big] \geq \mathbb{E}_{p_\text{target}, \pi^*}\Big[\sum r(s_t, a_t) + \mathcal{H}[a_t \mid s_t]\Big] - 2 R_\text{max} \sqrt{\epsilon / 2}$$
Substituting these inequalities into the LHS and RHS of the previous inequality and rearranging terms, we obtain the desired result." }, { "heading": "C THE SPECIAL CASE OF AN OBSERVATION MODEL", "text": "To highlight the relationship between domain adaptation of dynamics versus observations, we now consider a special case. In this subsection, we will assume that the state $s_t \triangleq (z_t, o_t)$ is a combination of the system latent state $z_t$ (e.g., the poses of all objects in a scene) and an observation $o_t$ (e.g., a camera observation). We will define $q(o_t \mid z_t)$ and $p(o_t \mid z_t)$ as the observation models for the source and target domains. In this special case, we can decompose the KL objective (Eq. 2) into three terms:
$$D_{KL}(q \,\|\, p) = -\mathbb{E}_q\Big[\sum_t \underbrace{r(s_t, a_t) + \mathcal{H}_\pi[a_t \mid s_t]}_{\text{MaxEnt RL objective}} + \underbrace{\log p_\text{target}(o_t \mid z_t) - \log p_\text{source}(o_t \mid z_t)}_{\text{Observation Adaptation}} + \underbrace{\log p_\text{target}(z_{t+1} \mid z_t, a_t) - \log p_\text{source}(z_{t+1} \mid z_t, a_t)}_{\text{Dynamics Adaptation}}\Big].$$
Prior methods that perform observation adaptation (Bousmalis et al., 2018; Gamrian & Goldberg, 2018) effectively minimize the observation adaptation term,¹ but ignore the effect of dynamics. In contrast, the $\Delta r$ reward correction in our method provides one method to address both dynamics and observations. These approaches could be combined; we leave this as future work.

¹Tiao et al. (2018) show that observation adaptation using CycleGan (Zhu et al., 2017a) minimizes a Jensen-Shannon divergence. Assuming sufficiently expressive models, the Jensen-Shannon divergence and the reverse KL divergence above have the same optimum." }, { "heading": "D ADDITIONAL EXPERIMENTS", "text": "Figures 11 and 12 show the results of the ablation experiment from Fig. 7 run on all environments. The results support our conclusion in the main text regarding the importance of using two classifiers and using input noise. Figure 13 is a copy of Fig. 8 from the main text, modified to also show the agent's reward on the target domain. We observe that the reward does not start increasing until we start using DARC." }, { "heading": "E EXPERIMENT DETAILS AND HYPERPARAMETERS", "text": "Our implementation of DARC is built on top of the implementation of SAC from Guadarrama et al. (2018). Unless otherwise specified, all hyperparameters are taken from Guadarrama et al. (2018). All neural networks (actor, critics, and classifiers) have two hidden layers with 256 units each and ReLU activations. Since we ultimately will use the difference in the predictions of the two classifiers, we use a residual parametrization for the SAS classifier $q(\text{target} \mid s_t, a_t, s_{t+1})$. Using $f_{SAS}(s_t, a_t, s_{t+1}), f_{SA}(s_t, a_t) \in \mathbb{R}^2$ to denote the outputs of the two classifier networks, we compute the classifier predictions as follows:
$$q_{\theta_{SA}}(\cdot \mid s_t, a_t) = \text{SOFTMAX}(f_{SA}(s_t, a_t))$$
$$q_{\theta_{SAS}}(\cdot \mid s_t, a_t, s_{t+1}) = \text{SOFTMAX}(f_{SAS}(s_t, a_t, s_{t+1}) + f_{SA}(s_t, a_t))$$
For the SAS classifier we propagate gradients back through both networks' parameters, $\theta_{SAS}$ and $\theta_{SA}$. Both classifiers use Gaussian input noise with $\sigma = 1$ (see the sketch below). Optimization of all networks is done with Adam (Kingma & Ba, 2014) with a learning rate of 3e-4 and batch size of 128. Most experiments with DARC collected 1 step in the target domain every 10 steps in the source domain (i.e., r = 10).
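A minimal PyTorch sketch of the residual parametrization and input-noise regularization above; the paper's implementation builds on Guadarrama et al. (2018), so the class below, its names, and the choice to add noise to all inputs are illustrative assumptions:

```python
import torch
import torch.nn as nn

def two_hidden_layer_mlp(in_dim, out_dim=2, hidden=256):
    # Matches the stated architecture: two hidden layers of 256 units with ReLU.
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))

class DomainClassifiers(nn.Module):
    def __init__(self, s_dim, a_dim, noise_std=1.0):
        super().__init__()
        self.f_sa = two_hidden_layer_mlp(s_dim + a_dim)       # f_SA(s_t, a_t)
        self.f_sas = two_hidden_layer_mlp(2 * s_dim + a_dim)  # f_SAS(s_t, a_t, s_{t+1})
        self.noise_std = noise_std

    def forward(self, s, a, s_next):
        if self.training:  # Gaussian input noise (sigma = 1) against overfitting
            s, a, s_next = (x + self.noise_std * torch.randn_like(x)
                            for x in (s, a, s_next))
        sa_logits = self.f_sa(torch.cat([s, a], dim=-1))
        # Residual parametrization: the SAS logits include f_SA, so gradients
        # flow back through both theta_SAS and theta_SA.
        sas_logits = self.f_sas(torch.cat([s, a, s_next], dim=-1)) + sa_logits
        return sas_logits, sa_logits  # softmax over the last dim gives q(.|...)
```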
The one exception is the half cheetah obstacle domain, where we tried increasing r beyond 10 to 30, 100, 300, and 1000. We found a large benefit from increasing r to 30 and 100, but did not run the other experiments long enough to draw any conclusions. Fig. 6 uses r = 30 for half cheetah obstacle. We did not tune this parameter, and expect that tuning it would result in significant improvements in sample efficiency.

We found that DARC was slightly more stable if we warm-started the method by applying RL on the source task without $\Delta r$ for the first $t_\text{warmup}$ iterations. We used $t_\text{warmup}$ = 1e5 for all tasks except the broken reacher, where we used $t_\text{warmup}$ = 2e5. This discrepancy was caused by a typo in an experiment, and subsequent experiments found that DARC is relatively robust to different values of $t_\text{warmup}$; we did not tune this parameter." }, { "heading": "E.1 BASELINES", "text": "The RL on Source and RL on Target baselines are implemented identically to our method, with the exception that $\Delta r$ is not added to the reward function. The RL on Target (10x) baseline is identical to RL on Target, with the exception that we take 10 gradient steps per environment interaction (instead of 1). The Importance Weighting baseline estimates the importance weights as $p_\text{target}(s_{t+1} \mid s_t, a_t) / p_\text{source}(s_{t+1} \mid s_t, a_t) \approx \exp(\Delta r)$. The importance weight is used to weight transitions in the SAC actor and critic losses.

PETS (Chua et al., 2018) The PETS baseline is implemented using the default configurations from Chua et al. (2018) for the environments evaluated. The broken-half-cheetah environment uses the hyperparameters used for the half-cheetah environment in Chua et al. (2018). The broken-ant environment uses the same set of hyperparameters, namely: task horizon = 1000, number of training iterations = 300, number of planning (real) steps per iteration = 30, number of particles to be used in particle propagation methods = 20. The PETS codebase can be found at https://github.com/kchua/handful-of-trials.

MBPO (Janner et al., 2019) We used the authors' implementation with the default hyperparameters: https://github.com/JannerM/mbpo. We kept the environment configurations the same as their default unmodified MuJoCo environments, except for the domain and task name. We added our custom environment XMLs in the mbpo/env/assets/ folder, and their corresponding environment Python files in the mbpo/env/ folder. Their static files were added under mbpo/static/. These environments can be registered as gym environments in the init file under mbpo_odrl/mbpo/env/ or can be initialized directly in softlearning/environments/adapters/gym_adapter.py. We set the time limit to max_episode_steps=1000 for the Broken Half Cheetah, Broken Ant and Half Cheetah Obstacle environments and to 100 for the Broken Reacher environment." }, { "heading": "E.2 ENVIRONMENTS", "text": "Broken Reacher This environment uses the 7-DOF robot arm from the Pusher environment in OpenAI Gym. The observation space is the positions and velocities of all joints and the goal. The reward function is
$$r(s, a) = -\tfrac{1}{2}\|s_\text{end effector} - s_\text{goal}\|_2 - \tfrac{1}{10}\|a\|_2^2,$$
and episodes are 100 steps long. In the target domain the 2nd joint (0-indexed) is broken: zero torque is applied to this joint, regardless of the commanded torque.

Broken Half Cheetah This environment is based on the HalfCheetah environment in OpenAI Gym. We modified the observation to include the agent's global X coordinate so the agent can infer its relative position to the obstacle. Episodes are 1000 steps long.
In the target domain the 0th joint (0-indexed) is broken: zero torque is applied to this joint, regardless of the commanded torque.

Broken Ant This environment is based on the Ant environment in OpenAI Gym. We use the standard termination condition and cap the maximum episode length at 1000 steps. In the target domain the 3rd joint (0-indexed) is broken: zero torque is applied to this joint, regardless of the commanded torque.

In all the broken joint environments, we choose which joint to break by computing which joint caused the "RL on Source" baseline to perform worst on the target domain, as compared with the "RL on Target" baseline.

Half Cheetah Obstacle This environment is based on the HalfCheetah environment in OpenAI Gym. Episodes are 1000 steps long. We modified the standard reward function to use the absolute value in place of the velocity, resulting in the following reward function:
$$r(s, a) = |s_{x\text{-vel}}| \cdot \Delta t - \|a\|_2^2,$$
where $s_{x\text{-vel}}$ is the velocity of the agent along the forward-aft axis and $\Delta t = 0.01$ is the time step of the simulator. In the target domain, we added a wall at $x = -3$m, roughly 3 meters behind the agent.

Humanoid For the experiment in Fig. 9, we used a modified version of Humanoid from OpenAI Gym. The source domain modified this environment to ignore the default termination condition and instead terminate after exactly 300 time steps. The target domain uses the unmodified environment, which terminates when the agent falls." }, { "heading": "E.3 FIGURES", "text": "Unless otherwise noted, all experiments were run with three random seeds. Figures showing learning curves (Figures 6, 7, 8, 11, and 12) plot the mean over the three random seeds, and also plot the results for each individual random seed with semi-transparent lines." }, { "heading": "E.4 ARCHERY EXPERIMENT", "text": "We used a simple physics model for the archery experiment. The target was located 70m North of the agent, and wind was applied along the East-West axis. The system dynamics are:
$$s_{t+1} = 70\sin(\theta) + \frac{f}{\cos(\theta)^2}, \qquad \begin{cases} f \sim \mathcal{N}(\mu = 1, \sigma = 1) & \text{in the source domain} \\ f \sim \mathcal{N}(\mu = 0, \sigma = 0.3) & \text{in the target domain} \end{cases}$$
We trained the classifier by sampling $\theta \sim U[-2, 2]$ (measured in degrees) for 10k episodes in the source domain and 10k episodes in the target domain. The classifier was a neural network with 1 hidden layer with 32 hidden units and ReLU activation. We optimized the classifier using the Adam optimizer with a learning rate of 3e-3 and a batch size of 1024. We trained until the validation loss increased for 3 consecutive epochs, which took 16 epochs in our experiment. We generated Fig. 4 by sampling 10k episodes for each value of $\theta$ and aggregating the rewards using $J(\theta) = \log \mathbb{E}_{p(s' \mid \theta)}[\exp(r(s'))]$. We found that aggregating rewards by taking the mean did not yield meaningful results, perhaps because the mean corresponds to a (possibly loose) lower bound on J (see Appendix A.3)." } ]
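As an illustration of the archery model and the $J(\theta)$ aggregation above, here is a minimal NumPy sketch; it assumes $\theta$ is given in degrees (as in the text) and converted to radians for the trig functions, and takes the reward as the negative lateral miss distance, both of which are reconstructions rather than confirmed details:

```python
import numpy as np

def shoot(theta_deg, domain, n=10_000, rng=np.random.default_rng(0)):
    # s_{t+1} = 70*sin(theta) + f / cos(theta)^2, with domain-specific wind f.
    theta = np.deg2rad(theta_deg)
    if domain == "source":            # windy outdoor practice range
        f = rng.normal(1.0, 1.0, n)
    else:                             # indoor competition range
        f = rng.normal(0.0, 0.3, n)
    return 70 * np.sin(theta) + f / np.cos(theta) ** 2

def J(theta_deg, domain):
    # J(theta) = log E[exp(r(s'))], with r the negative distance to the target,
    # computed with a numerically stable log-mean-exp over sampled episodes.
    r = -np.abs(shoot(theta_deg, domain))
    m = r.max()
    return m + np.log(np.exp(r - m).mean())
```

With these assumptions, the source-domain optimum sits near $\theta \approx -0.8$ degrees (compensating the mean wind of 1m) while the target-domain optimum is $\theta \approx 0$, matching the description in Section 6.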
2021
null
SP:4341b2c3554d27983bb5077f0cb3448c0c764823
[ "This paper targets the node embedding problem in disassortative graphs. A non-local aggregation framework is proposed, since local aggregation may be harmful for some disassortative graphs. To address the high computational cost of the recent Geom-GCN model, which has an attention-like step that computes the Euclidean distance between every pair of nodes, an idea of attention-guided sorting is introduced. It learns an ordering of nodes, such that distant but informative nodes are put near each other. The sorting order depends on the attention scores computed with the local embedding vector of a node. Then the Conv(·) function is applied on the sorted sequence of local node embeddings to obtain the non-local embedding. The final node embedding is then the concatenation of the local and non-local embeddings, which is used for node classification. ", "The goal of the paper is to perform node classification for graphs. The authors propose a strategy to augment message-passing graph neural networks with information from non-local nodes in the graph, with a focus on disassortative graphs. Disassortative graphs are graph datasets where nodes with identical node labels are distant from each other in terms of edge connectivity. " ]
Modern graph neural networks (GNNs) learn node embeddings through multilayer local aggregation and achieve great success in applications on assortative graphs. However, tasks on disassortative graphs usually require non-local aggregation. In addition, we find that local aggregation is even harmful for some disassortative graphs. In this work, we propose a simple yet effective non-local aggregation framework with an efficient attention-guided sorting for GNNs. Based on it, we develop various non-local GNNs. We perform thorough experiments to analyze disassortative graph datasets and evaluate our non-local GNNs. Experimental results demonstrate that our non-local GNNs significantly outperform previous state-of-the-art methods on six benchmark datasets of disassortative graphs, in terms of both model performance and efficiency.
[]
[ { "authors": [ "Paszke Adam", "Gross Sam", "Chintala Soumith", "Chanan Gregory", "Yang Edward", "D Zachary", "Lin Zeming", "Desmaison Alban", "Antiga Luca", "Lerer Adam" ], "title": "Automatic differentiation in pytorch", "venue": "In Proceedings of Neural Information Processing Systems Autodiff Workshop,", "year": 2017 }, { "authors": [ "Deli Chen", "Yankai Lin", "Wei Li", "Peng Li", "Jie Zhou", "Xu Sun" ], "title": "Measuring and relieving the oversmoothing problem for graph neural networks from the topological view", "venue": "In Thirty-Fourth AAAI Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "François Chollet" ], "title": "Xception: Deep learning with depthwise separable convolutions", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Michaël Defferrard", "Xavier Bresson", "Pierre Vandergheynst" ], "title": "Convolutional neural networks on graphs with fast localized spectral filtering", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Federico Errica", "Marco Podda", "Davide Bacciu", "Alessio Micheli" ], "title": "A fair comparison of graph neural networks for graph classification", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Matthias Fey", "Jan E. Lenssen" ], "title": "Fast graph representation learning with PyTorch Geometric", "venue": "In ICLR Workshop on Representation Learning on Graphs and Manifolds,", "year": 2019 }, { "authors": [ "Hongyang Gao", "Shuiwang Ji" ], "title": "Graph representation learning via hard and channel-wise attention networks", "venue": "In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2019 }, { "authors": [ "Hongyang Gao", "Zhengyang Wang", "Shuiwang Ji" ], "title": "Large-scale learnable graph convolutional networks", "venue": "In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2018 }, { "authors": [ "Will Hamilton", "Zhitao Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Thomas N. 
Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Boris Knyazev", "Graham W Taylor", "Mohamed Amer" ], "title": "Understanding attention and generalization in graph neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2012 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Junhyun Lee", "Inyeop Lee", "Jaewoo Kang" ], "title": "Self-attention graph pooling", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Qimai Li", "Zhichao Han", "Xiao-Ming Wu" ], "title": "Deeper insights into graph convolutional networks for semi-supervised learning", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Mark EJ Newman" ], "title": "Assortative mixing in networks", "venue": "Physical review letters,", "year": 2002 }, { "authors": [ "Maximillian Nickel", "Douwe Kiela" ], "title": "Poincaré embeddings for learning hierarchical representations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Hongbin Pei", "Bingzhe Wei", "Kevin Chen-Chuan Chang", "Yu Lei", "Bo Yang" ], "title": "Geom-GCN: Geometric graph convolutional networks", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Leonardo FR Ribeiro", "Pedro HP Saverese", "Daniel R Figueiredo" ], "title": "struc2vec: Learning node representations from structural identity", "venue": "In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2017 }, { "authors": [ "Benedek Rozemberczki", "Carl Allen", "Rik Sarkar" ], "title": "Multi-scale attributed node embedding", "venue": "arXiv preprint arXiv:1909.13021,", "year": 2019 }, { "authors": [ "Kristof Schütt", "Pieter-Jan Kindermans", "Huziel Enoc Sauceda Felix", "Stefan Chmiela", "Alexandre Tkatchenko", "Klaus-Robert Müller" ], "title": "Schnet: A continuous-filter convolutional neural network for modeling quantum interactions", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Prithviraj Sen", "Galileo Namata", "Mustafa Bilgic", "Lise Getoor", "Brian Galligher", "Tina Eliassi-Rad" ], "title": "Collective classification in network data", "venue": "AI magazine,", "year": 2008 }, { "authors": [ "Jie Tang", "Jimeng Sun", "Chi Wang", "Zi Yang" ], "title": "Social influence analysis in large-scale networks", "venue": "In Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining,", "year": 2009 }, { "authors": [ "Joshua B Tenenbaum", "Vin De Silva", "John C Langford" ], "title": "A global geometric framework for nonlinear dimensionality reduction", "venue": null, "year": 2000 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in Neural 
Information Processing Systems,", "year": 2017 }, { "authors": [ "Petar Veličković", "Guillem Cucurull", "Arantxa Casanova", "Adriana Romero", "Pietro Lio", "Yoshua Bengio" ], "title": "Graph attention networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Xiaolong Wang", "Ross Girshick", "Abhinav Gupta", "Kaiming He" ], "title": "Non-local neural networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Zhengyang Wang", "Na Zou", "Dinggang Shen", "Shuiwang Ji" ], "title": "Non-local U-Nets for biomedical image segmentation", "venue": "In Thirty-Fourth AAAI Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Zonghan Wu", "Shirui Pan", "Fengwen Chen", "Guodong Long", "Chengqi Zhang", "Philip S Yu" ], "title": "A comprehensive survey on graph neural networks", "venue": null, "year": 1901 }, { "authors": [ "Keyulu Xu", "Chengtao Li", "Yonglong Tian", "Tomohiro Sonobe", "Ken-ichi Kawarabayashi", "Stefanie Jegelka" ], "title": "Representation learning on graphs with jumping knowledge networks", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Keyulu Xu", "Weihua Hu", "Jure Leskovec", "Stefanie Jegelka" ], "title": "How powerful are graph neural networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Pinar Yanardag", "SVN Vishwanathan" ], "title": "Deep graph kernels", "venue": "In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2015 }, { "authors": [ "Zichao Yang", "Diyi Yang", "Chris Dyer", "Xiaodong He", "Alex Smola", "Eduard Hovy" ], "title": "Hierarchical attention networks for document classification", "venue": "In Proceedings of the 2016 Conference of the North American chapter of the Association for Computational Linguistics: Human Language Technologies,", "year": 2016 }, { "authors": [ "Zhitao Ying", "Jiaxuan You", "Christopher Morris", "Xiang Ren", "Will Hamilton", "Jure Leskovec" ], "title": "Hierarchical graph representation learning with differentiable pooling", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Hao Yuan", "Shuiwang Ji" ], "title": "StructPool: Structured graph pooling via conditional random fields", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Muhan Zhang", "Zhicheng Cui", "Marion Neumann", "Yixin Chen" ], "title": "An end-to-end deep learning architecture for graph classification", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Si Zhang", "Hanghang Tong", "Jiejun Xu", "Ross Maciejewski" ], "title": "Graph convolutional networks: Algorithms, applications and open challenges", "venue": "In International Conference on Computational Social Networks,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Graph neural networks (GNNs) process graphs and map each node to an embedding vector (Zhang et al., 2018b; Wu et al., 2019). These node embeddings can be directly used for node-level applications, such as node classification (Kipf & Welling, 2017) and link prediction (Schütt et al., 2017). In addition, they can be used to learn the graph representation vector with graph pooling (Ying et al., 2018; Zhang et al., 2018a; Lee et al., 2019; Yuan & Ji, 2020), in order to fit graph-level tasks (Yanardag & Vishwanathan, 2015). Many variants of GNNs have been proposed, such as ChebNets (Defferrard et al., 2016), GCNs (Kipf & Welling, 2017), GraphSAGE (Hamilton et al., 2017), GATs (Veličković et al., 2018), LGCN (Gao et al., 2018) and GINs (Xu et al., 2019). Their advantages have been shown on various graph datasets and tasks (Errica et al., 2020). However, these GNNs share a multilayer local aggregation framework, which is similar to convolutional neural networks (CNNs) (LeCun et al., 1998) on grid-like data such as images and texts.

In recent years, the importance of non-local aggregation has been demonstrated in many applications in the field of computer vision (Wang et al., 2018; 2020) and natural language processing (Vaswani et al., 2017). In particular, the attention mechanism has been widely explored to achieve non-local aggregation and capture long-range dependencies from distant locations. Basically, the attention mechanism measures the similarity between every pair of locations and enables information to be communicated among distant but similar locations. In terms of graphs, non-local aggregation is also crucial for disassortative graphs, while previous studies of GNNs focus on assortative graph datasets (Section 2.2). In addition, we find that local aggregation is even harmful for some disassortative graphs (Section 4.3). The recently proposed Geom-GCN (Pei et al., 2020) explores capturing long-range dependencies in disassortative graphs. It contains an attention-like step that computes the Euclidean distance between every pair of nodes. However, this step is computationally prohibitive for large-scale graphs, as the computational complexity is quadratic in the number of nodes. In addition, Geom-GCN employs pre-trained node embeddings (Tenenbaum et al., 2000; Nickel & Kiela, 2017; Ribeiro et al., 2017) that are not task-specific, limiting the effectiveness and flexibility.

In this work, we propose a simple yet effective non-local aggregation framework for GNNs. At the heart of the framework lies an efficient attention-guided sorting, which enables non-local aggregation through classic local aggregation operators in general deep learning. The proposed framework can be flexibly used to augment common GNNs with low computational costs. Based on the framework, we build various efficient non-local GNNs. In addition, we perform detailed analysis on existing disassortative graph datasets, and apply different non-local GNNs accordingly. Experimental results show that our non-local GNNs significantly outperform previous state-of-the-art methods on node classification tasks on six benchmark datasets of disassortative graphs." }, { "heading": "2 BACKGROUND AND RELATED WORK", "text": "" }, { "heading": "2.1 GRAPH NEURAL NETWORKS", "text": "We focus on learning the embedding vector for each node through graph neural networks (GNNs).
Most existing GNNs are inspired by convolutional neural networks (CNNs) (LeCun et al., 1998) and follow a local aggregation framework. In general, each layer of GNNs scans every node in the graph and aggregates local information from directly connected nodes, i.e., the 1-hop neighbors.

Specifically, a common layer of GNNs performs two-step processing similar to the depthwise separable convolution (Chollet, 2017): spatial aggregation and feature transformation. The first step updates each node embedding using embedding vectors of spatially neighboring nodes. For example, GCNs (Kipf & Welling, 2017) and GATs (Veličković et al., 2018) compute a weighted sum of node embeddings within the 1-hop neighborhood, where weights come from the degree of nodes and the interaction between nodes, respectively. GraphSAGE (Hamilton et al., 2017) applies the max pooling, while GINs (Xu et al., 2019) simply sum the node embeddings. The feature transformation step is similar to the 1x1 convolution, where each node embedding vector is mapped into a new feature space through a shared linear transformation (Kipf & Welling, 2017; Hamilton et al., 2017; Veličković et al., 2018) or multilayer perceptron (MLP) (Xu et al., 2019) (see the sketch below). Different from these studies, LGCN (Gao et al., 2018) explores directly applying the regular convolution through top-k ranking.

Nevertheless, each layer of these GNNs only aggregates local information within the 1-hop neighborhood. While stacking multiple layers can theoretically enable communication between nodes across the multi-hop neighborhood, the aggregation is essentially local. In addition, deep GNNs usually suffer from the over-smoothing problem (Xu et al., 2018; Li et al., 2018; Chen et al., 2020)." }, { "heading": "2.2 ASSORTATIVE AND DISASSORTATIVE GRAPHS", "text": "There are many kinds of graphs in the literature, such as citation networks (Kipf & Welling, 2017), community networks (Chen et al., 2020), co-occurrence networks (Tang et al., 2009), and webpage linking networks (Rozemberczki et al., 2019). We focus on graph datasets corresponding to node classification tasks. In particular, we categorize graph datasets into assortative and disassortative ones (Newman, 2002; Ribeiro et al., 2017) according to the node homophily in terms of labels, i.e., how likely nodes with the same label are to be near each other in the graph.

Assortative graphs refer to those with a high node homophily. Common assortative graph datasets are citation networks and community networks. On the other hand, graphs in disassortative graph datasets contain more nodes that have the same label but are distant from each other. Example disassortative graph datasets are co-occurrence networks and webpage linking networks.

As introduced above, most existing GNNs perform local aggregation only and achieve good performance on assortative graphs (Kipf & Welling, 2017; Hamilton et al., 2017; Veličković et al., 2018; Gao et al., 2018). However, they may fail on disassortative graphs, where informative nodes in the same class tend to be out of the local multi-hop neighborhood and non-local aggregation is needed. Thus, in this work, we explore non-local GNNs." }, { "heading": "2.3 ATTENTION MECHANISM", "text": "The attention mechanism (Vaswani et al., 2017) has been widely used in GNNs (Veličković et al., 2018; Gao & Ji, 2019; Knyazev et al., 2019) as well as other deep learning models (Yang et al., 2016; Wang et al., 2018; 2020).
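As a concrete illustration of the two-step local aggregation layer from Section 2.1 (the sketch referenced above), consider the following mean-aggregation variant in PyTorch; the function name and the dense-adjacency normalization are illustrative assumptions rather than any specific published model:

```python
import torch

def local_aggregation_layer(x, adj, weight):
    # x: [n, f] node embeddings; adj: [n, n] adjacency with self-loops;
    # weight: [f, f_out] shared across all nodes.
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1)  # node degrees
    h = (adj @ x) / deg              # step 1: spatial aggregation over 1-hop neighbors
    return torch.relu(h @ weight)    # step 2: shared feature transformation ("1x1 conv")
```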
A typical attention mechanism takes three groups of vectors as inputs, namely the query vector $q$, key vectors $(k_1, k_2, \ldots, k_n)$, and value vectors $(v_1, v_2, \ldots, v_n)$. Note that key and value vectors have a one-to-one correspondence and can be the same sometimes. The attention mechanism computes the output vector $o$ as
$$a_i = \text{ATTEND}(q, k_i) \in \mathbb{R}, \quad i = 1, 2, \ldots, n; \qquad o = \sum_i a_i v_i, \quad (1)$$
where the ATTEND(·) function could be any function that outputs a scalar attention score $a_i$ from the interaction between $q$ and $k_i$, such as dot product (Gao & Ji, 2019) or even a neural network (Veličković et al., 2018). The definition of the three groups of input vectors depends on the models and applications.

Notably, existing GNNs usually use the attention mechanism for local aggregation (Veličković et al., 2018; Gao & Ji, 2019). Specifically, when aggregating information for node $v$, the query vector is the embedding vector of $v$ while the key and value vectors come from node embeddings of $v$'s directly connected nodes. The process is iterated for each $v \in V$. It is worth noting that the attention mechanism can be easily extended for non-local aggregation (Wang et al., 2018; 2020), by letting the key and value vectors correspond to all the nodes in the graph when aggregating information for each node. However, it is computationally prohibitive given large-scale graphs, as iterating it for each node in a graph of $n$ nodes requires $O(n^2)$ time. In this work, we propose a novel non-local aggregation method that only requires $O(n \log n)$ time." }, { "heading": "3 THE PROPOSED METHOD", "text": "" }, { "heading": "3.1 NON-LOCAL AGGREGATION WITH ATTENTION-GUIDED SORTING", "text": "We consider a graph $G = (V, E)$, where $V$ is the set of nodes and $E$ is the set of edges. Each edge $e \in E$ connects two nodes so that $E \subseteq V \times V$. Each node $v \in V$ has a node feature vector $x_v \in \mathbb{R}^d$. The $k$-hop neighborhood of $v$ refers to the set of nodes $N_k(v)$ that can reach $v$ within $k$ edges. For example, the set of $v$'s directly connected nodes is its 1-hop neighborhood $N_1(v)$. Our proposed non-local aggregation framework is composed of three steps, namely local embedding, attention-guided sorting, and non-local aggregation. In the following, we describe them one by one.

Local Embedding: Our proposed framework is built upon a local embedding step that extracts local node embeddings from the node feature vectors. The local embedding step can be as simple as
$$z_v = \text{MLP}(x_v) \in \mathbb{R}^f, \quad \forall v \in V. \quad (2)$$
The MLP(·) function is a multilayer perceptron (MLP), and $f$ is the dimension of the local node embedding $z_v$. Note that the MLP(·) function is shared across all the nodes in the graph. Applying MLP only takes the node itself into consideration without aggregating information from the neighborhood. This property is very important on some disassortative graphs, as shown in Section 4.3.

On the other hand, graph neural networks (GNNs) can be used as the local embedding step as well, so that our proposed framework can be easily employed to augment existing GNNs. As introduced in Section 2.1, modern GNNs perform multilayer local aggregation. Typically, for each node, one layer of a GNN aggregates information from its 1-hop neighborhood. Stacking $L$ such local aggregation layers allows each node to access information that is $L$ hops away. To be specific, the $\ell$-th layer of an $L$-layer GNN
($\ell = 1, 2, \ldots, L$) can be described as
$$z_v^{(\ell)} = \text{TRANSFORM}^{(\ell)}\Big(\text{AGGREGATE}^{(\ell)}\big(\{z_u^{(\ell-1)} : u \in N_1(v) \cup \{v\}\}\big)\Big) \in \mathbb{R}^f, \quad \forall v \in V, \quad (3)$$
where $z_v^{(0)} = x_v$, and $z_v = z_v^{(L)}$ represents the local node embedding. The AGGREGATE$^{(\ell)}(\cdot)$ and TRANSFORM$^{(\ell)}(\cdot)$ functions represent the spatial aggregation and feature transformation step introduced in Section 2.1, respectively. With the above framework, GNNs can capture the node feature information from nodes within a local neighborhood as well as the structural information.

When either MLP or GNNs is used as the local embedding step, the local node embedding $z_v$ only contains local information of a node $v$. However, $z_v$ can be used to guide non-local aggregation, as distant but informative nodes are likely to have similar node features and local structures. Based on this intuition, we propose the attention-guided sorting to enable the non-local aggregation.

Attention-Guided Sorting: The basic idea of the attention-guided sorting is to learn an ordering of nodes, where distant but informative nodes are put near each other. Specifically, given the local node embedding $z_v$ obtained through the local embedding step, we compute one set of attention scores by
$$a_v = \text{ATTEND}(c, z_v) \in \mathbb{R}, \quad \forall v \in V, \quad (4)$$
where $c$ is a calibration vector that is randomly initialized and jointly learned during training (Yang et al., 2016). In this attention operator, $c$ serves as the query vector and $z_v$ are the key vectors. In addition, we also treat $z_v$ as the value vectors. However, unlike the attention mechanism introduced in Section 2.3, we use the attention scores to sort the value vectors instead of computing a weighted sum to aggregate them. Note that originally there is no ordering among nodes in a graph. To be specific, as $a_v$ and $z_v$ have a one-to-one correspondence through Equation (4), sorting the attention scores in non-decreasing order into $(a_1, a_2, \ldots, a_n)$ provides an ordering among nodes, where $n = |V|$ is the number of nodes in the graph. The resulting sequence of local node embeddings can be denoted as $(z_1, z_2, \ldots, z_n)$.

The attention process in Equation (4) can also be understood as a projection of local node embeddings onto a 1-dimensional space. The projection depends on the concrete ATTEND(·) function and the calibration vector $c$. As indicated by its name, the calibration vector $c$ is used to calibrate the 1-dimensional space, in order to push distant but informative nodes close to each other in this space. This goal is fulfilled through the following non-local aggregation step and the training of the calibration vector $c$, as demonstrated below.

Non-Local Aggregation: We point out that, with the attention-guided sorting, the non-local aggregation can be achieved by convolution, the most common local aggregation operator in deep learning. Specifically, given the sorted sequence of local node embeddings $(z_1, z_2, \ldots, z_n)$, we compute
$$(\hat z_1, \hat z_2, \ldots, \hat z_n) = \text{CONV}(z_1, z_2, \ldots, z_n), \quad (5)$$
where the CONV(·) function represents a 1D convolution with appropriate padding. Note that the CONV(·) function can be replaced by a 1D convolutional neural network as long as the number of input and output vectors remains the same.

To see how the CONV(·) function performs non-local aggregation with the attention-guided sorting, we take an example where the CONV(·) function is a 1D convolution of kernel size $2s + 1$. In this case, $\hat z_i$ is computed from $(z_{i-s}, \ldots, z_{i+s})$, corresponding to the receptive field of the CONV(·) function. As a result, if the attention-guided sorting leads to $(z_{i-s}, \ldots$
$, z_{i+s})$ containing nodes that are distant but informative to $z_i$, the output $\hat z_i$ aggregates non-local information. Another view is that we can consider the attention-guided sorting as re-connecting nodes in the graph, where $(z_{i-s}, \ldots, z_{i+s})$ can be treated as the 1-hop neighborhood of $z_i$. After the CONV(·) function, $\hat z_i$ and $z_i$ are concatenated as the input to a classifier to predict the label of the corresponding node, where both non-local and local dependencies can be captured. In order to enable the end-to-end training of the calibration vector $c$, we modify Equation (5) into
$$(\hat z_1, \hat z_2, \ldots, \hat z_n) = \text{CONV}(a_1 z_1, a_2 z_2, \ldots, a_n z_n), \quad (6)$$
where we multiply the attention score with the corresponding local node embedding. As a result, the calibration vector $c$ receives gradients through the attention scores during training.

The remaining question is how to make sure that the attention-guided sorting pushes distant but informative nodes together. The short answer is that it is not necessary to guarantee this, as the requirement of non-local aggregation depends on the concrete graphs. In fact, our proposed framework grants GNNs the ability of non-local aggregation but lets the end-to-end training process determine whether to use non-local information. The back-propagation from the supervised loss will tune the calibration vector $c$ and encourage $\hat z_i$ to capture useful information that is not encoded by $z_i$. In the case of disassortative graphs, $\hat z_i$ usually needs to aggregate information from distant but informative nodes. Hence, the calibration vector $c$ tends to arrange the attention-guided sorting to put distant but informative nodes together, as demonstrated experimentally in Section 4.5. On the other hand, nodes within the local neighborhood are usually much more informative than distant nodes in assortative graphs. In this situation, $\hat z_i$ may simply perform local aggregation that is similar to GNNs.

In Section 4, we demonstrate the effectiveness of our proposed non-local aggregation framework on six disassortative graph datasets. In particular, we achieve the state-of-the-art performance on all the datasets with significant improvements over previous methods." }, { "heading": "3.2 TIME COMPLEXITY ANALYSIS", "text": "We perform theoretical analysis of the time complexity of our proposed framework. As discussed in Section 2.3, using the attention mechanism (Vaswani et al., 2017; Wang et al., 2018; 2020) to achieve non-local aggregation requires $O(n^2)$ time for a graph of $n$ nodes. Essentially, the $O(n^2)$ time complexity is due to the fact that the ATTEND(·) function needs to be computed between every pair of nodes. In particular, the recently proposed Geom-GCN (Pei et al., 2020) contains a similar non-local aggregation step. For each $v \in V$, Geom-GCN finds the set of nodes from which the Euclidean distance to $v$ is less than a pre-defined number, where the Euclidean distance between every pair of nodes needs to be computed. As the computation of the Euclidean distance between two nodes can be understood as the ATTEND(·) function, Geom-GCN has at least $O(n^2)$ time complexity.

In contrast, our proposed non-local aggregation framework requires only $O(n \log n)$ time. To see this, note that the ATTEND(·) function in Equation (4) only needs to be computed once, instead of iterating it for each node. As a result, computing the attention scores only takes $O(n)$ time. Therefore, the time complexity of sorting, i.e., $O(n \log n)$, dominates the total time complexity of our proposed framework.
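As a minimal PyTorch sketch of Equations (4)-(6) and the complexity argument above; a single convolution stands in for the 2-layer CNN used in our models, and the class name and shapes are illustrative assumptions:

```python
import torch
import torch.nn as nn

class NonLocalAggregation(nn.Module):
    def __init__(self, f, kernel_size=3):
        super().__init__()
        self.c = nn.Parameter(torch.randn(f))  # calibration vector, learned jointly
        self.conv = nn.Conv1d(f, f, kernel_size, padding=kernel_size // 2)

    def forward(self, z):
        # z: [n, f] local node embeddings from MLP or a GNN.
        a = z @ self.c                          # Eq. (4): a_v = c^T z_v, O(n)
        a_sorted, order = torch.sort(a)         # attention-guided sorting, O(n log n)
        zs = z[order] * a_sorted.unsqueeze(-1)  # Eq. (6): scale embeddings by scores
        z_hat = self.conv(zs.t().unsqueeze(0)).squeeze(0).t()  # conv along the ordering
        out = torch.empty_like(z_hat)
        out[order] = z_hat                      # map back to the original node order
        return out                              # concatenated with z for classification
```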
In Section 4.6, we compare the real running time on different datasets among common GNNs, Geom-GCN, and our non-local GNNs as introduced in the next section." }, { "heading": "3.3 EFFICIENT NON-LOCAL GRAPH NEURAL NETWORKS", "text": "We apply our proposed non-local aggregation framework to build efficient non-local GNNs. Recall that our proposed framework starts with the local embedding step, followed by the attention-guided sorting and the non-local aggregation step.

In particular, the local embedding step can be implemented by either MLP or common GNNs, such as GCNs (Kipf & Welling, 2017) or GATs (Veličković et al., 2018). MLP extracts the local node embedding only from the node feature vector and excludes the information from nodes within the local neighborhood. This property can be helpful on some disassortative graphs, where nodes within the local neighborhood provide more noise than useful information. On other disassortative graphs, informative nodes are located in both the local neighborhood and distant locations. In this case, GNNs are more suitable as the local embedding step. Depending on the disassortative graphs in hand, we build different non-local GNNs with either MLP or GNNs as the local embedding step. In Section 4.3, we show that these two categories of disassortative graphs can be distinguished through simple experiments, where we apply different non-local GNNs accordingly. Specifically, the number of layers is set to 2 for both MLP and GNNs.

In terms of the attention-guided sorting, we only need to specify the ATTEND(·) function in Equation (4). In order to make it as efficient as possible, we choose the ATTEND(·) function as
$$a_v = \text{ATTEND}(c, z_v) = c^T z_v \in \mathbb{R}, \quad \forall v \in V, \quad (7)$$
where $c$ is part of the training parameters, as described in Section 3.1.

With the attention-guided sorting, we can implement the non-local aggregation step through convolution, as explained in Section 3.1 and shown in Equation (6). Specifically, the CONV(·) function is set as a 2-layer convolutional neural network composed of two 1D convolutions. The kernel size is set to 3 or 5 depending on the datasets. The activation function is ReLU (Krizhevsky et al., 2012).

Finally, we use a linear classifier that takes the concatenation of $\hat z_i$ and $z_i$ as inputs and makes a prediction for the corresponding node. Depending on the local embedding step, we build three efficient non-local GNNs, namely non-local MLP (NLMLP), non-local GCN (NLGCN), and non-local GAT (NLGAT). The models can be trained end-to-end with the classification loss." }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 DATASETS", "text": "We perform experiments on six disassortative graph datasets (Rozemberczki et al., 2019; Tang et al., 2009; Pei et al., 2020) (Chameleon, Squirrel, Actor, Cornell, Texas, Wisconsin) and three assortative graph datasets (Sen et al., 2008) (Cora, Citeseer, Pubmed). These datasets are commonly used to evaluate GNNs on node classification tasks (Kipf & Welling, 2017; Veličković et al., 2018; Gao et al., 2018; Pei et al., 2020). We provide detailed descriptions of disassortative graph datasets in Appendix A.1. In order to distinguish assortative and disassortative graph datasets, Pei et al. (2020) propose a metric to measure the homophily of a graph $G$, defined as
$$H(G) = \frac{1}{|V|}\sum_{v \in V} \frac{\text{Number of } v\text{'s directly connected nodes who have the same label as } v}{\text{Number of } v\text{'s directly connected nodes}}. \quad (8)$$
Intuitively, a large $H(G)$ indicates an assortative graph, and vice versa.
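A direct implementation of Eq. (8) as a sketch in plain Python; the function name and the edge-list input format are assumptions, and nodes without neighbors are skipped since the ratio is undefined for them:

```python
from collections import defaultdict

def homophily(edges, labels):
    """H(G) from Eq. (8): average over nodes of the fraction of directly
    connected neighbors that share the node's label."""
    neighbors = defaultdict(list)
    for u, v in edges:           # treat edges as undirected
        neighbors[u].append(v)
        neighbors[v].append(u)
    ratios = [sum(labels[u] == labels[v] for u in nbrs) / len(nbrs)
              for v, nbrs in neighbors.items()]
    return sum(ratios) / len(ratios)
```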
The H(G) and other statistics are summarized in Table 1.\nIn our experiments, we focus on comparing the model performance on disassortative graph datasets, in order to demonstrate the effectiveness of our non-local aggregation framework. The performances on assortative graph datasets are provided for reference, indicating that the proposed framework will not hurt the performance when non-local aggregation is not strongly desired." }, { "heading": "4.2 BASELINES", "text": "We compare our proposed non-local MLP (NLMLP), non-local GCN (NLGCN), and non-local GAT (NLGAT) with various baselines: (1) MLP is the simplest deep learning model. It makes prediction solely based on the node feature vectors, without aggregating any local or non-local information. (2) GCN (Kipf & Welling, 2017) and GAT (Veličković et al., 2018) are the most common GNNs. As introduced in Section 2.1, they only perform local aggregation. (3) Geom-GCN (Pei et al., 2020) is a recently proposed GNN that can capture long-range dependencies. It is the current stateof-the-art model on several disassortative graph datasets. Geom-GCN requires the use of different node embedding methods, such as Isomap (Tenenbaum et al., 2000), Poincare (Nickel & Kiela, 2017), and struc2vec (Ribeiro et al., 2017). We simply report the best results from Pei et al. (2020) for Geom-GCN and the following two variants without specifying the node embedding method. (4) Geom-GCN-g (Pei et al., 2020) is a variant of Geom-GCN that performs local aggregation only. It is similar to common GNNs. (5) Geom-GCN-s (Pei et al., 2020) is a variant of Geom-GCN that does not force local aggregation. The designed functionality is similar to our NLMLP.\nWe implement MLP, GCN, GAT, and our methods using Pytorch (Adam et al., 2017) and Pytorch Geometric (Fey & Lenssen, 2019). As has been discussed1, in fair settings, the results of GCN and GAT differ from those in Pei et al. (2020).\nOn each dataset, we follow Pei et al. (2020) and randomly split nodes of each class into 60%, 20%, and 20% for training, validation, and testing. The experiments are repeatedly run 10 times with different random splits and the average test accuracy over these 10 runs are reported. Testing is performed when validation accuracy achieves maximum on each run. Apart from the details specified in Section 3.3, we tune the following hyperparameters individually for our proposed models: (1) the number of hidden unit ∈ {16, 48, 96}, (2) dropout rate ∈ {0, 0.5, 0.8}, (3) weight decay ∈ {0, 5e-4, 5e-5, 5e-6}, and (4) learning rate ∈ {0.01, 0.05}." }, { "heading": "4.3 ANALYSIS OF DISASSORTATIVE GRAPH DATASETS", "text": "As discussed in Section 3.3, the disassortative graph datasets can be divided into two categories. Nodes within the local neighborhood provide more noises than useful information in disassortative graphs belonging to the first category. Therefore, local aggregation should be avoided in models on such disassortative graphs. As for the second category, informative nodes locate in both local neighborhood and distant locations. Intuitively, a graph with lower H(G) is more likely to be in the first category. However, it is not an accurate way to determine the two categories.\n1https://openreview.net/forum?id=S1e2agrFvS&noteId=8tGKV1oSzCr\nKnowing the exact category of a disassortative graph is crucial, as we need to apply non-local GNNs accordingly. As analyzed above, the key difference lies in whether the local aggregation is useful. 
Hence, we can distinguish two categories of disassortative graphs by comparing the performance between MLP and common GNNs (GCN, GAT) on each of the six disassortative graph datasets.\nThe results are summarized in Table 2. We can see that Actor, Cornell, Texas, and Wisconsin fall into the first category, while Chameleon and Squirrel belong to the second category. We add the performance on assortative graph datasets for reference, where the local aggregation is effective so that GNNs tend to outperform MLP." }, { "heading": "4.4 COMPARISONS WITH BASELINES", "text": "According to the insights from Section 4.3, we apply different non-local GNNs according to the category of disassortative graph datasets, and make comparisons with corresponding baselines.\nSpecifically, we employ NLMLP on Actor, Cornell, Texas, and Wisconsin. The corresponding baselines are MLP, Geom-GCN, and Geom-GCN-s, as Table 2 has shown that GCN and GAT perform much worse than MLP on these datasets. And Geom-GCN-g is similar to GCN and has worse performance than Geom-GCN-s, which is shown in Ap-\npendix A.2. The comparison results are reported in Table 3. While Geom-GCN-s are the previous state-of-the-art GNNs on these datasets (Pei et al., 2020), we find that MLP consistently outperforms Geom-GCN-s by large margins. In particular, although Geom-GCN-s does not explicitly perform local aggregation, it is still outperformed by MLP. A possible explanation is that Geom-GCN-s uses pre-trained node embeddings, which aggregates information from the local neighborhood implicitly. In contrast, our NLMLP is built upon MLP with the proposed non-local aggregation framework, which excludes the local noises and collects useful information from non-local informative nodes. The NLMLP sets the new state-of-the-art performance on these disassortative graph datasets.\nOn Chameleon and Squirrel that belong to the second category of disassortative graph datasets, we apply NLGCN and NLGAT accordingly. The baselines are GCN, GAT, Geom-GCN, and Geom-GCN-g. On these datasets, these baselines that explicitly perform local aggregation show advantages over MLP and Geom-GCN-s, as shown in Appendix A.2. As shown in Table 4, our proposed NLGCN achieves the best performance on both datasets. In addition, it is worth noting that our NLGCN and NLGAT are built upon GCN and GAT, respectively. They show improvements over their counterparts, which indicates that the advantages of our proposed non-local aggregation framework are general for common GNNs.\nWe provide the results of all the models on all datasets in Appendix A.2 for reference." }, { "heading": "4.5 ANALYSIS OF THE ATTENTION-GUIDED SORTING", "text": "We analyze the results of the attention-guided sorting in our proposed framework, in order to show that our non-local GNNs indeed perform non-local aggregation.\nSuppose the attention-guided sorting leads to the sorted sequence (z1, z2, . . . , zn), which goes through a convolution or CNN into (ẑ1, ẑ2, . . . , ẑn). As discussed in Section 3.1, we can consider the sequence (z1, z2, . . . , zn) as a re-connected graph Ĝ, where we treat nodes within the receptive field of ẑi as directly connected to zi, i.e. zi’s 1-hop neighborhood. The information within this new 1-hop neighborhood will be aggregated. If our non-local GNNs indeed perform non-local aggregation, the homophily of the re-connected graph should be larger than the original graph. Therefore, we compute H(Ĝ) for each dataset to verify this statement. 
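Measuring H(Ĝ) only requires swapping the original adjacency for windows in the sorted sequence; a sketch, where the half-width r stands in for the actual receptive field of the convolution (an assumption on our part):

import numpy as np

def homophily_reconnected(order, labels, r=1):
    # order: node ids after attention-guided sorting; labels: (n,) integer labels.
    # Sorted position k is treated as connected to positions k-r .. k+r (its
    # receptive field in the 1D convolution), excluding itself.
    lab = np.asarray(labels)[np.asarray(order)]
    n = len(lab)
    fracs = []
    for k in range(n):
        lo, hi = max(0, k - r), min(n, k + r + 1)
        nbrs = np.concatenate([lab[lo:k], lab[k + 1:hi]])
        if len(nbrs):
            fracs.append(np.mean(nbrs == lab[k]))
    return float(np.mean(fracs))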
Following Section 4.4, we apply NLMLP on Actor, Cornell, Texas, and Wisconsin and NLGCN on Chameleon and Squirrel.\nFigure 1 compares H(Ĝ) with H(G) for each dataset. We can observe that H(Ĝ) is much larger than H(G), indicating that distant but informative nodes are near each other in the re-connected graph Ĝ. We also provide the visualizations of the sorted sequence for Cornell and Texas. We can see that nodes with the same label tend to be clustered together. These facts indicate that our non-local GNNs perform non-local aggregation with the attention-guided sorting.\n4.6 EFFICIENCY COMPARISONS\nAs analyzed in Section 3.2, our proposed nonlocal aggregation framework is more efficient than previous methods based on the original attention mechanism, such as Geom-GCN (Pei et al., 2020). Concretely, our method requires only O(n log n) computation time in contrast to O(n2). In this section, we compare the real running time to verify our analysis. Specifi-\ncally, we compare NLGCN with Geom-GCN as well as GCN and GAT. For Geom-GCN, we use the code provided in Pei et al. (2020). Each model is trained for 500 epochs on each dataset and the average training time per epoch is reported.\nThe results are shown in Table 5. Although our NLGCN is built upon GCN, it is just slightly slower than GCN and faster than GAT, showing the efficiency of our non-local aggregation framework. On the other hand, Geom-GCN is significantly slower due to the fact that it has O(n2) time complexity." }, { "heading": "5 CONCLUSION", "text": "In this work, we propose a simple yet effective non-local aggregation framework for GNNs. The core of the framework is an efficient attention-guided sorting, which enables non-local aggregation through convolution. The proposed framework can be easily used to build non-local GNNs with low computational costs. We perform thorough experiments on node classification tasks to evaluate our proposed method. In particular, we experimentally analyze existing disassortative graph datasets and apply different non-local GNNs accordingly. The results show that our non-local GNNs significantly outperform previous state-of-the-art methods on six benchmark datasets of disassortative graphs, in terms of both accuracy and speed." }, { "heading": "A APPENDIX", "text": "A.1 DETAILS OF DISASSORTATIVE GRAPH DATASETS\nHere are the details of disassortative graph datasets used in our experiments:\n• Chameleon and Squirrel are Wikipedia networks (Rozemberczki et al., 2019) where nodes represent web pages from Wikipedia and edges indicate mutual links between pages. Node feature vectors are bag-of-word representation of informative nouns in the corresponding pages. Each node is labeled with one of five classes according to the number of the average monthly traffic of the web page.\n• Actor is an actor co-occurrence network, where nodes denote actors and edges indicate cooccurrence on the same web page from Wikipedia. It is extracted from the film-directoractor-writer network proposed by Tang et al. (Tang et al., 2009). Node feature vectors are bag-of-word representation of keywords in the actors’ Wikipedia pages. Each node is labeled with one of five classes according to the topic of the actor’s Wikipedia page.\n• Cornell, Texas, and Wisconsin come from the WebKB dataset collected by Carnegie Mellon University. Nodes represent web pages and edges denote hyperlinks between them. Node feature vectors are bag-of-word representation of the corresponding web pages. 
Each node is labeled with one of five classes: student, project, course, staff, or faculty.\nA.2 FULL EXPERIMENTAL RESULTS" } ]
2020
null
SP:ef9027da9feec26a1fe583b9cd8c77e260bdc00f
[ "This paper studies the loss landscapes of sparse linear networks. It proves that under squared loss, (1) spurious local minimum does not exist when the output dimension is one, or with separated first layer and orthogonal training data; and (2) for two-layer sparse linear networks, the good property in (1) does not exist anymore when the conditions are violated. The authors also report experimental results to show that two-layer sparse linear networks with two hidden neurons have spurious local minima.", "This paper studies the optimization landscape of (deep) sparse linear networks. The study of sparse neural networks is well motivated: on the one hand, there is a lot of experimental evidence that the loss of the trained network does not decrease much after removing a large subset of the connections; on the other hand, there is little theoretical evidence of what makes this behaviour possible (or how to provably construct sparse networks from dense ones). As a result, an investigation of the optimization landscape of sparse networks, even in the simple case of a linear activation function, is timely and interesting to the ICLR community, since it can potentially shed light on the questions above." ]
Network pruning, and the sparse networks it produces, has a long history and practical significance in modern applications. Although the loss functions of neural networks can have poor landscapes due to non-convexity, we focus on linear activations, for which dense networks are already known to enjoy a benign landscape. Under no unrealistic assumptions, we establish the following for the squared-loss objective of general sparse linear neural networks: 1) every local minimum is a global minimum for scalar output with any sparse structure, or for a non-intersecting sparse first layer with dense remaining layers and orthogonal training data; 2) sparse linear networks admit sub-optimal local minima when only the first layer is sparse, due to an induced low-rank constraint, or when the output has three or more dimensions, where the global minimum of a sub-network becomes a spurious local minimum. Overall, sparsity breaks the usual structure and cuts off decreasing paths that exist in the original fully-connected network.
[]
[ { "authors": [ "Zeyuan Allen-Zhu", "Yuanzhi Li", "Zhao Song" ], "title": "A convergence theory for deep learning via overparameterization", "venue": "arXiv preprint arXiv:1811.03962,", "year": 2018 }, { "authors": [ "Sanjeev Arora", "Nadav Cohen", "Noah Golowich", "Wei Hu" ], "title": "A convergence analysis of gradient descent for deep linear neural networks", "venue": "arXiv preprint arXiv:1810.02281,", "year": 2018 }, { "authors": [ "Pierre Baldi", "Kurt Hornik" ], "title": "Neural networks and principal component analysis: Learning from examples without local minima", "venue": "Neural networks,", "year": 1989 }, { "authors": [ "Pierre Baldi", "Zhiqin Lu" ], "title": "Complex-valued autoencoders", "venue": "Neural Networks,", "year": 2012 }, { "authors": [ "Alon Brutzkus", "Amir Globerson" ], "title": "Globally optimal gradient descent for a convnet with gaussian inputs", "venue": "arXiv preprint arXiv:1702.07966,", "year": 2017 }, { "authors": [ "Miguel A Carreira-Perpinán", "Yerlan Idelbayev" ], "title": "learning-compression” algorithms for neural net pruning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "IEEE conference on computer vision and pattern recognition,", "year": 2009 }, { "authors": [ "Simon S Du", "Wei Hu" ], "title": "Width provably matters in optimization for deep linear neural networks", "venue": "arXiv preprint arXiv:1901.08572,", "year": 2019 }, { "authors": [ "Simon S Du", "Jason D Lee", "Haochuan Li", "Liwei Wang", "Xiyu Zhai" ], "title": "Gradient descent finds global minima of deep neural networks", "venue": "arXiv preprint arXiv:1811.03804,", "year": 2018 }, { "authors": [ "Carl Eckart", "Gale Young" ], "title": "The approximation of one matrix by another of lower rank", "venue": null, "year": 1936 }, { "authors": [ "Armin Eftekhari" ], "title": "Training linear neural networks: Non-local convergence and complexity results", "venue": "arXiv preprint arXiv:2002.09852,", "year": 2020 }, { "authors": [ "Utku Evci", "Fabian Pedregosa", "Aidan Gomez", "Erich Elsen" ], "title": "The difficulty of training sparse neural networks", "venue": "arXiv preprint arXiv:1906.10732,", "year": 2019 }, { "authors": [ "Jonathan Frankle", "Michael Carbin" ], "title": "The lottery ticket hypothesis: Finding sparse, trainable neural networks", "venue": "arXiv preprint arXiv:1803.03635,", "year": 2018 }, { "authors": [ "Trevor Gale", "Erich Elsen", "Sara Hooker" ], "title": "The state of sparsity in deep neural networks", "venue": "arXiv preprint arXiv:1902.09574,", "year": 2019 }, { "authors": [ "Rong Ge", "Jason D Lee", "Tengyu Ma" ], "title": "Learning one-hidden-layer neural networks with landscape design", "venue": "arXiv preprint arXiv:1711.00501,", "year": 2017 }, { "authors": [ "Song Han", "Huizi Mao", "William J Dally" ], "title": "Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding", "venue": "arXiv preprint arXiv:1510.00149,", "year": 2015 }, { "authors": [ "Song Han", "Jeff Pool", "John Tran", "William Dally" ], "title": "Learning both weights and connections for efficient neural network", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Kenji Kawaguchi" ], "title": "Deep learning without poor local minima", "venue": "In Advances in neural 
information processing systems,", "year": 2016 }, { "authors": [ "Namhoon Lee", "Thalaiyasingam Ajanthan", "Philip HS Torr" ], "title": "Snip: Single-shot network pruning based on connection sensitivity", "venue": "arXiv preprint arXiv:1810.02340,", "year": 2018 }, { "authors": [ "Dawei Li", "Tian Ding", "Ruoyu Sun" ], "title": "On the benefit of width for neural networks: Disappearance of bad basins", "venue": "arXiv, pp", "year": 2018 }, { "authors": [ "Zhuang Liu", "Mingjie Sun", "Tinghui Zhou", "Gao Huang", "Trevor Darrell" ], "title": "Rethinking the value of network pruning", "venue": "arXiv preprint arXiv:1810.05270,", "year": 2018 }, { "authors": [ "Christos Louizos", "Karen Ullrich", "Max Welling" ], "title": "Bayesian compression for deep learning", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Christos Louizos", "Max Welling", "Diederik P Kingma" ], "title": "Learning sparse neural networks through l 0 regularization", "venue": "arXiv preprint arXiv:1712.01312,", "year": 2017 }, { "authors": [ "Haihao Lu", "Kenji Kawaguchi" ], "title": "Depth creates no bad local minima", "venue": "arXiv preprint arXiv:1702.08580,", "year": 2017 }, { "authors": [ "Eran Malach", "Gilad Yehudai", "Shai Shalev-Shwartz", "Ohad Shamir" ], "title": "Proving the lottery ticket hypothesis: Pruning is all you need", "venue": "arXiv preprint arXiv:2002.00585,", "year": 2020 }, { "authors": [ "Song Mei", "Andrea Montanari", "Phan-Minh Nguyen" ], "title": "A mean field view of the landscape of twolayer neural networks", "venue": "Proceedings of the National Academy of Sciences,", "year": 2018 }, { "authors": [ "Leon Mirsky" ], "title": "Symmetric gauge functions and unitarily invariant norms", "venue": "The quarterly journal of mathematics,", "year": 1960 }, { "authors": [ "Dmitry Molchanov", "Arsenii Ashukha", "Dmitry Vetrov" ], "title": "Variational dropout sparsifies deep neural networks", "venue": "arXiv preprint arXiv:1701.05369,", "year": 2017 }, { "authors": [ "Itay Safran", "Ohad Shamir" ], "title": "Spurious local minima are common in two-layer relu neural networks", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Andrew M Saxe", "James L McClelland", "Surya Ganguli" ], "title": "Exact solutions to the nonlinear dynamics of learning in deep linear neural networks", "venue": "arXiv preprint arXiv:1312.6120,", "year": 2013 }, { "authors": [ "Shai Shalev-Shwartz", "Ohad Shamir", "Shaked Shammah" ], "title": "Weight sharing is crucial to succesful optimization", "venue": "arXiv preprint arXiv:1706.00687,", "year": 2017 }, { "authors": [ "Ruoyu Sun", "Dawei Li", "Shiyu Liang", "Tian Ding", "Rayadurgam Srikant" ], "title": "The global landscape of neural networks: An overview", "venue": "IEEE Signal Processing Magazine,", "year": 2020 }, { "authors": [ "Chulhee Yun", "Suvrit Sra", "Ali Jadbabaie" ], "title": "Small nonlinearities in activation functions create bad local minima in neural networks", "venue": "arXiv preprint arXiv:1802.03487,", "year": 2018 }, { "authors": [ "Michael Zhu", "Suyog Gupta" ], "title": "To prune, or not to prune: exploring the efficacy of pruning for model compression", "venue": "arXiv preprint arXiv:1710.01878,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep neural networks (DNNs) have achieved remarkable empirical successes in the domains of computer vision, speech recognition, and natural language processing, sparking great interests in the theory behind their architectures and training. However, DNNs are often found to be highly overparameterized, making them computationally expensive with large amounts of memory and computational power. For example, it may take up to weeks on a modern multi-GPU server for large datasets such as ImageNet (Deng et al., 2009). Hence, DNNs are often unsuitable for smaller devices like embedded electronics, and there is a pressing demand for techniques to optimize models with reduced model size, faster inference and lower power consumption.\nSparse networks, that is, neural networks in which a large subset of the model parameters are zero, have emerged as one of the leading approaches for reducing model parameter count. It has been shown empirically that deep neural networks can achieve state-of-the-art results under high levels of sparsity (Han et al., 2015b; Gale et al., 2019; Louizos et al., 2017a). Modern sparse networks are mainly obtained from network pruning (Zhu & Gupta, 2017; Lee et al., 2018; Liu et al., 2018; Frankle & Carbin, 2018), which has been the subject of a great deal of work in recent years. However, training a sparse network with fixed sparsity patterns is difficult (Evci et al., 2019) and few theoretical understanding of general sparse networks has been provided.\nPrevious work has already analyze deep neural networks, showing that the non-convexity of the associated loss functions may cause complicated and strange optimization landscapes. However, the property of general sparse networks is poorly understood. Saxe et al. (2013) empirically showed that the optimization of deep linear models exhibits similar properties as deep nonlinear models, and for theoretical development, it is natural to begin with linear models before studying nonlinear models (Baldi & Lu, 2012). In addition, several works (Sun et al., 2020) have show bad minimum exists with nonlinear activation. Hence, it is natural to begin with linear activation to understand the impact of sparsity.\nIn this article, we go further to consider the global landscape of general sparse linear neural networks. We need to emphasize that dense deep linear networks already satisfy that every local minimum is a global minimum under mild conditions (Kawaguchi, 2016; Lu & Kawaguchi, 2017), but findings are different and complicated for sparse linear network. The goal of this paper is to study the relation between sparsity and local minima with the following contributions:\n• First, we point out that every local minimum is a global minimum in scalar target case with any depths, any widths and any sparse structure. Besides, we also briefly show that\nsimilar results hold for non-overlapping filters and orthogonal data feature when sparsity only occurs in the first layer. • Second, we find out that sparse connections would already give sub-optimal local minima\nin general non-scalar case through analytic and numerical examples built on convergence analyze. 
These local minima arise in two ways: from a sub-sparse linear network whose global minimum becomes a local minimum of the original sparse network, and from a rank-deficient solution coupling different data features through the sparse connections. Both cases confirm that sparsity cuts off decreasing paths that exist in the original fully-connected network.\nOverall, we hope this work contributes to a better understanding of the landscape of sparse networks, starting from simple neural networks, and provides insights for future research.\nThe remainder of the paper is organized as follows. In Section 2, we derive positive results for shallow sparse linear networks, showing a landscape similar to that of dense linear networks. In Section 3, we give several examples demonstrating the existence of bad local minima in the non-scalar case. In Section 4, we briefly generalize the results from shallow to deep sparse linear networks. Some proofs are deferred to the Appendix." }, { "heading": "1.1 RELATED WORK", "text": "There is a rapidly growing literature on the loss surfaces of neural network objectives, and surveying all of it is well outside our scope. We therefore only briefly review the works most related to ours.\nEvery local minimum is global. The study of linear network landscapes dates back to Baldi & Hornik (1989), who proved that shallow linear neural networks do not suffer from bad local minima. Kawaguchi (2016) generalized this result to deep linear neural networks, and several subsequent works (Arora et al., 2018; Du & Hu, 2019; Eftekhari, 2020) established direct algorithmic convergence guarantees built on this benign property, though algorithm analysis is beyond the scope of this paper. With nonlinear activations the situation is more complicated. Multiple works (Ge et al., 2017; Safran & Shamir, 2018; Yun et al., 2018) show that spurious local minima can occur even in two-layer networks under population or empirical loss; some of these constructions are specific to two layers and hard to generalize to the multilayer case. Another line of work (Arora et al., 2018; Allen-Zhu et al., 2018; Du & Hu, 2019; Du et al., 2018; Li et al., 2018; Mei et al., 2018) studies the landscape of neural networks in the overparameterized setting, finding benign landscapes with or without gradient methods. Since modern sparse networks retain few parameters, in contrast to overparameterization, a fundamental view of sparsity is still needed. Our standpoint is that spurious local minima can appear under specific sparsity patterns even in linear networks.\nSparse networks. Sparse networks (Han et al., 2015b;a; Zhu & Gupta, 2017; Frankle & Carbin, 2018; Liu et al., 2018) have a long history, though mostly on the empirical side, and are closely tied to network pruning, which is of practical importance for reducing parameter counts and deploying models on diverse devices. However, training sparse networks from scratch is notoriously difficult. Frankle & Carbin (2018) recommend reusing the sparsity pattern found through pruning and training the sparse network from the same initialization as the original training (the ‘lottery ticket’) to obtain comparable performance and avoid bad solutions. For fixed sparsity patterns, Evci et al. (2019) attempt to find a decreasing objective path from ‘bad’ solutions to ‘good’ ones within the sparse subspace but fail, indicating that bad local minima can be produced by pruning; we give a more direct view via simple examples that verify this.
Moreover, several recent works provide a variety of methods (Molchanov et al., 2017; Louizos et al., 2017b; Lee et al., 2018; Carreira-Perpinán & Idelbayev, 2018) for choosing which weights or sparse structures to keep while preserving performance. On the theoretical side, Malach et al. (2020) prove that a sufficiently over-parameterized neural network with random weights contains a subnetwork with roughly the same accuracy as the target network, giving a guarantee that ‘good’ sparse networks exist. Some works analyze convolutional networks (Shalev-Shwartz et al., 2017; Du et al., 2018) as a specific sparse structure. Brutzkus & Globerson (2017) analyze non-overlapping and overlapping structures as we do, but with weight sharing to simulate a CNN-type architecture, and in a teacher-student setting with population risk. We do not restrict to CNN-type networks but instead study general sparse networks, still with linear activations, to obtain clean results." }, { "heading": "2 LANDSCAPE OF SHALLOW SPARSE LINEAR NETWORKS", "text": "" }, { "heading": "2.1 PRELIMINARIES AND NOTATION", "text": "We use bold-faced letters (e.g., w and a) to denote vectors and capital letters (e.g., W = [w_{ij}] and A = [a_{ij}]) for matrices. Let P_X be the orthogonal projection onto the column space of the matrix X, and let λ_i(H) be the i-th smallest eigenvalue of a real symmetric matrix H.\nWe consider training samples and outputs {(x_i, y_i)}_{i=1}^n ⊂ R^{d_x} × R^{d_y}, which may come from an unknown distribution D. We form the data matrices X = [x_1, . . . , x_n]^T ∈ R^{n×d_x} and Y = [y_1, . . . , y_n]^T ∈ R^{n×d_y}. In our analysis in Sections 2 and 3, we consider a two-layer (sparse) linear neural network with squared loss:\n\\min_{W,A} L(W,A) := \\frac{1}{2}\\|Y - XWA\\|_F^2, (1)\nwhere the first-layer weight matrix is W = [w_1, . . . , w_d] ∈ R^{d_x×d} and the second-layer weight matrix is A = [a_1, . . . , a_d]^T ∈ R^{d×d_y}. After weight pruning or under a sparsity constraint, many weight parameters become zero and are not updated during retraining. We write S_j := {k : w_{kj} = 0} for the pruned dimensions in the j-th column of W, and −S_j := S_j^c = [d_x] \\ S_j, where [d] := {1, . . . , d}. In addition, w_{j,S} denotes the sub-vector of w_j restricted to the positions in S, and X_S the sub-matrix of X restricted to the column indices in S. We let p_j = d_x − |S_j|, where |S| is the cardinality of the set S. Then w_{j,−S_j} ∈ R^{p_j} is the j-th first-layer column with the pruned dimension set S_j removed. Similarly, X_{−S_j} ∈ R^{n×p_j} is the data matrix restricted to the features connected to the j-th node of the first layer.\nFinally, for simplicity, when there is no ambiguity we write X_{−j} = X_{−S_j}, w_{−j} = w_{j,−S_j}, and use a tilde for a pruned weight matrix whose zero entries are never updated.\nBefore we begin, a note on the sparse structures we consider: there may be unnecessary connections and nodes, such as a node with zero out-degree, which can be removed by propagating from the final layer back to the first; other such cases are described in Appendix C. We therefore exclude them from the subsequent proofs and assume every data dimension has a valid connection to the output, i.e., ∩_{j=1}^d S_j = ∅." }, { "heading": "2.2 SCALAR CASE", "text": "In the scalar case, assume d_y = 1 and simplify A = (a_1, . . . , a_d)^T. If a weight a_i in the second layer is pruned, the output of the i-th node of the first layer contributes nothing to the final output, so w_i can be pruned as well. Without loss of generality, we therefore assume the second-layer parameters are not pruned.
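In code, such pruning amounts to holding a fixed binary mask on W; a minimal NumPy sketch of the objective in Eq. (1) under a mask (the names and the mask representation are our illustrative choices):

import numpy as np

def sparse_loss(W, A, mask, X, Y):
    # mask[k, j] = 0 exactly when k is in S_j (the weight is pruned); masked
    # entries stay zero and receive no gradient updates during retraining.
    return 0.5 * np.linalg.norm(Y - X @ (W * mask) @ A, 'fro') ** 2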
After pruning several parameters, the original problem becomes\nmin w−i,ai\nL(W̃ ,A) := 1\n2 ∥∥∥∥∥∥∥Y − (X−1, . . . , X−i, . . . , X−d) a1w−1... adw−d ∥∥∥∥∥∥∥ 2\nF\n. (2)\nTheorem 1 For a two-layer linear neural network with scalar output and any sparse structure, every local minimum is a global minimum.\nProof: From Eq. (2), if a local minimizer satisfies ai = 0 for some 1 ≤ i ≤ d, then based on the second order condition for a local minima, we have\n ∂2L ∂a2i\n∂2L\n∂ai∂wT−i ∂L\n∂w−i∂ai\n∂L\n∂w−i∂wT−i\n 0, (3)\nwhich implies that wT−iX−iXT−iw−i −(Y −∑di=1X−iw−iai)T X−i −XT−i ( Y − ∑d i=1X−iw−iai ) 0 0. (4) Then XT−i ( Y − ∑d i=1X−iw−iai ) = 0, which is the global minimizer condition of w−iai.\nOtherwise, ai 6= 0, then from the first-order condition for a local minima,\n∂L\n∂w−i = aiX\nT −i\n( Y −\nd∑ i=1 X−iw−iai\n) = 0,\nshowing that XT−i ( Y − ∑d i=1X−iw−iai ) = 0, which also gives the global minimizer condition of w−iai. Hence every local minimum is a global minimum." }, { "heading": "2.3 NON-SCALAR CASE WITH DENSE SECOND LAYER", "text": "Now we discuss the case of non-scalar outputs. By the intractable and various sparse structure, we first consider pruning only the first layer while retaining the dense second layer. Then the remaining problem is formulated as follows:\nmin w−i,ai\nL(W̃ ,A) := 1\n2 ∥∥∥∥∥Y − d∑\ni=1\nX−iw−ia T i ∥∥∥∥∥ 2\nF\n. (5)\nIntuitively, if we can separate the weight parameters into d parts, based on linear network results, we can still guarantee no bad local-min. We show that non-overlapping first layer weight or disjoint feature extractor, as the left graph of Figure 1 depicts, and orthogonal data feature meet requirements.\nTheorem 2 For a two-layer sparse linear neural network with dense second layer, assume that X is full column rank, and ∀ i 6= j, XT−iX−j = 0. Then every local minimum is global.\nProof: Since ∀ i 6= j, XT−iX−j = 0 and X is full column rank, X−i and X−j share no same columns. Additionally, from our assumption ∩dj=1Sj = ∅, we have ∩dj=1Sj = [dx], meaning that (X−1, . . . , X−d) is X with different arrangement of columns. Hence,\nY = PXY + (I − PX)Y = X(XTX)−1XTY + (I − PX)Y = d∑\ni=1\nX−iZi + (I − PX)Y, (6)\nwhere Zi = ( (XTX)−1XTY ) −Si\nis the sub-matrix choosing the row indices in −Si. Then we only need to consider the objective:\nmin w−i,ai\nL(W̃ ,A) = 1\n2 ∥∥∥∥∥ d∑\ni=1\nX−i ( Zi −w−iaTi )∥∥∥∥∥ 2\nF\n= 1\n2 d∑ i=1 ∥∥X−i (Zi −w−iaTi )∥∥2F . (7)\nHidden layer 1\nInput layer\nOutput layer\nHidden layer 1 Hidden layer 2 Hidden layer 3 Input layer Output layer\nWe will see that the objective has already been separated into d parts, while each part is a two-layer dense linear network with full column rank data matrix. Based on Theorem 2.2 in Lu & Kawaguchi (2017) or Eckart-Young-Mirsky theorem (Eckart & Young, 1936; Mirsky, 1960), we obtain that every local minimum is a global minimum.\nAdditionally, we need to explain the assumption that non-overlapping filters in the first layer involves convolution networks (Brutzkus & Globerson, 2017) if weight sharing is used. Otherwise, we will show a bad local minima exists when the first layer is overlapped or training data is not orthogonal in Section 3.\n2.4 GENERAL CASE WITH dy = 2\nPrevious findings imply positive results with one-dimension outputs, or specific sparse structure and data assumption. In this subsection, we discuss an arbitrary sparse structure with outputs of dimension dy = 2. 
We first prove that some connections still owe common benign landscape which can be removed.\nTheorem 3 For a sparse two-layer network, a node with full out-degree and one in-degree can be removed if we consider the remaining structure with objective under some projected data, having no influence for spurious local minima.\nPrevious result simplifies the sparse structure including such a hidden node with one connection to input and full connection to output. Next, we will provide another type reduction with only one connection to output when sparsity is applied to both layers with dy = 2.\nWe mention the final layer output as Node 1 and Node 2, and the hidden nodes which have only one connection to the output layer as R1 and R2 while the remaining full connection set as R. Set T1 = ∩j∈R1Sj , T2 = ∩j∈R2Sj . We define U(T ) = {j : wij 6= 0, i ∈ T } as the hidden node set connected with data feature in T . When the hidden node sets connected to T1 and T2 satisfy the condition below, we are able to simply the sparse structure into the dense layer case.\nTheorem 4 For a sparse two-layer linear network with dy = 2, if U(T1 ∩ T2) ∩ U(T1 \\ T2) = ∅ and U(T1 ∩ T2) ∩ U(T2 \\ T1) = ∅, then there is a sub-network with dense second layer optimized with some projected training data, sharing the same local minimum for the remaining parameters.\nThe formal proof of the theorem can be found in Appendix E. Additionally, from the proof of Theorem 4, the objective is converted into two objectives with weight sharing in the first layer even the assumption does not meet. Weight sharing structure has been shown in some related work (Shalev-Shwartz et al., 2017; Brutzkus & Globerson, 2017), so we do not give detailed description and leave it as future work.\nNow for a sparse two-layer linear network with dy = 2, we focus on the case which has dense second layer. If one hidden node only has one in-degree, then based on Theorem 3, we can remove such node and consider the objective optimized with some projected data. Therefore, each hidden node should have at least two in-degree. Because one hidden node obviously leads to no bad local minima, the least sparse structure has two hidden nodes with totally eight connections (e.g., two constructions in Figure 1). We will show the existence of bad local-minima in Section 3.\n2.5 GENERAL CASE WITH dy ≥ 3\nWe finish this section by discovering that a sub-network with its global minima might yield a spurious local minima of the original sparse network when dy ≥ 3.\nTheorem 5 There exists a spurious local minima that is a global minimum of sub-network in twolayer sparse network when output dimension dy ≥ 3.\nProof: We consider the sparse structure in Figure 2 with only eight connections. The objective is\nmin wi\nL(w1, . . . , w8) := 1\n2 ∥∥∥∥∥Y −X ( w1 0 w2 w3 0 w4 )( w5 w6 0 0 w7 w8 )∥∥∥∥∥ 2\nF\n= 1\n2 ∥∥∥∥∥Y −X ( w1w5 w1w6 0 w2w5 w2w6 + w3w7 w3w8\n0 w4w7 w4w8\n)∥∥∥∥∥ 2\nF\n.\nLet X = I3 and Y = ( 1 2 0 2 10 0 0 0 4 ) . Clearly, X and Y have full column rank that is the common\nassumption in previous work. Then ( w1 0 w2 w3 0 w4 ) = ( 1 0 2 2 0 0 ) , ( w5 w6 0 0 w7 w8 ) = ( 1 2 0 0 3 0 ) satisfy ∇L = 0, and L(w1, . . . , w8) = 8. In addition, for any small disturbances δi, i = 1 . . . , 8,\n2L(w1 + δ1, . . . 
, w8 + δ8) = (δ1 + δ5 + δ1δ5) 2 + (2δ1 + δ6 + δ1δ6) 2 + (δ2 + 2δ5 + δ2δ5) 2\n+ (2δ2 + 2δ6 + δ2δ6 + 3δ3 + 2δ7 + δ3δ7) 2\n+ (2 + δ3) 2δ28 + (3 + δ7) 2δ24 + (δ4δ8 − 4)2\n≥ (2 + δ3)2δ28 + (3 + δ7)2δ24 + (δ4δ8 − 4)2 ≥ 2 [(2 + δ3) (3 + δ7)− 4] |δ4δ8|+ 16.\nSince the perturbations δi are small, we have (2 + δ3) (3 + δ7)−4 > 0. Hence, L(w1+δ1, . . . , w8+ δ8) ≥ 8, verifying the local minimizer.\nHowever, when ( w1 0 w2 w3 0 w4 ) = √10/5 0√10 0 0 2 , (w5 w6 0 0 w7 w8 ) = (√ 10/5 √ 10 0 0 0 2 ) ,\nL(w1, . . . , w8) = 0.18 < 8. Hence, a bad local minimum exists.\nWe underline that the bad local minimum is produced from the sub-network when w4=w8=0. Since we encounter no bad local minimum in a dense linear network, sparse connections indeed destroy the benign landscape because sparsity obstructs the decreasing path as Evci et al. (2019) mentioned from experiments." }, { "heading": "3 BAD LOCAL MINIMUM WITH SPARSE FIRST LAYER", "text": "Now we turn back to the dense second layer case in Section 2.3 with dy=2, and assume X has full column rank. We give an algorithm to check the existence of spurious local minima when ∃ i 6= j, s.t.,XT−iX−j 6= 0.\nAlgorithm 1 Sparse-2-Opt (Z1, Z2, D): Obtain the solution of two-layer sparse linear neural network with two hidden neurons.\n1: Input: Target matrix Z1, Z2 and covariance diagonal matrix D. 2: Initialize w2, d2,a2; 3: while not converge do 4: w1, d1,a1 = SV D(Z1 +D(Z2 − d2w2aT2 )); 5: w2, d2,a2 = SV D(Z2 +DT (Z1 − d1w1aT1 )); 6: end while 7: w1 = d1w1,w2 = d2w2. 8: if λ1(∇2L), λ2(∇2L) ≈ 0, λ3(∇2L) > 0 then 9: Return the solution w1,a1,w2,a2.\n10: else 11: Try again from another initialization. 12: end if 13: Output: w1,a1,w2,a2.\nNotice that we have no rank constraint for the Zi in Eq. (5). Suppose singular value decomposition of X−i as X−i = UiDiV Ti with Ui ∈ Rn×pi , Di ∈ Rpi×pi , Vi ∈ Rpi×pi . Since Di has full rank, we take DiViZi and DiViwi as new targets and variables. With a slight abuse of notation, then the problem becomes\nmin W̃ ,A\n1\n2 ∥∥∥∥∥ d∑\ni=1\nUi ( Zi −wiaTi )∥∥∥∥∥ 2\nF\n. (8)\nIn the following, we show d = 2 is enough to give counter examples. Similarly, using the singular value decomposition of UT1 U2 as U T 1 U2 = ŪDV̄\nT with a rectangle diagonal matrix D ∈ Rp1×p2 . Notice that U1, U2 are column orthogonal matrices, thus Dii ≤ 1, and |{i : Dii = 1}| equals to the overlapping columns between X−1 and X−2. Finally, the objective becomes\nL(w1,w2,a1,a2) = 1\n2 ‖Z1−w1aT1 ‖2F +\n1 2 ‖Z2−w2aT2 ‖2F + tr[(Z1−w1aT1 )TD(Z2−w2aT2 )].\nIf we fix w2 and a2, we can see w1 and a1 are the best rank-1 approximation ofZ1+D(Z2−w2aT2 ), since w1 and a1 are the solution of\narg min w1,a1\n‖Z1 +D(Z2 −w2aT2 )−w1aT1 ‖2F .\nSimilarly, w2 and a2 are the best rank-1 approximation of Z2 +DT (Z1 −w1aT1 ). Empirically, we use alternating update method to find the solution in Algorithm 1 for two blocks, where SV D(·) is a classical method getting the largest singular value and the corresponding singular vectors.\nSince each update does not increase the loss, this makes the convergence of sequence w1,a1,w2,a2. Once the algorithm converges, the first-order condition is satisfied and two eigenvectors with zero eigenvalue of the Hessian matrix are decided. Moreover, we can also prove that the convergent solution is indeed a local minima (detail see Appendix B). 
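Algorithm 1 is easy to realize with a standard SVD routine; the NumPy sketch below alternates the two rank-1 block updates of lines 4-5 for a fixed number of iterations, while the Hessian check of lines 8-12 is omitted and performed separately. The initialization and stopping rule here are our simplifications.

import numpy as np

def best_rank1(M):
    # leading singular triple of M, so that d * np.outer(w, a) is the best
    # rank-1 approximation of M
    U, S, Vt = np.linalg.svd(M)
    return U[:, 0], S[0], Vt[0]

def sparse_2_opt(Z1, Z2, D, iters=500, seed=0):
    rng = np.random.default_rng(seed)
    w2 = rng.standard_normal(Z2.shape[0])
    a2 = rng.standard_normal(Z2.shape[1])
    d2 = 1.0
    for _ in range(iters):
        w1, d1, a1 = best_rank1(Z1 + D @ (Z2 - d2 * np.outer(w2, a2)))    # line 4
        w2, d2, a2 = best_rank1(Z2 + D.T @ (Z1 - d1 * np.outer(w1, a1)))  # line 5
    return d1 * w1, a1, d2 * w2, a2                                       # line 7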
When this check is inconclusive, we additionally probe the candidate with gradient descent (or another optimizer) started from a noisy perturbation, if necessary.\nUsing Algorithm 1, we find several cases with bad local minima, including the overlapping case (∃ i, D_{ii} = 1). The results are shown in Table 1. We observe distinct gaps between the local minima because the entries of the Z_i are chosen small. In the non-overlapping setting, the algorithm reaches a local minimum quickly and yields several different examples. For the overlapping setting, a simple construction sets the entries of Z_i corresponding to the repeated feature to zero, although Row 3 of Table 1 also shows a bad minimum with genuinely overlapping data features.\nInterestingly, for d = 2 we find at most two local minima, and the alternating update method extends readily to general d (Appendix D), where we observe the analogous pattern: at most d local minima for a sparse-first-layer network with d hidden nodes. A proof of this observation is left as future work. Overall, sparsity breaks the original matrix structure, inducing an additional low-rank constraint in this case, and again cuts off the decreasing paths of the fully-connected network.\nAdditionally, a descent algorithm may still diverge to infinity. For instance, the example in Appendix A exhibits a sequence diverging to infinity along which the function values are decreasing and convergent." }, { "heading": "4 LANDSCAPE OF DEEP SPARSE LINEAR NETWORKS", "text": "In this section, we briefly extend Theorems 1 and 2 to deep sparse linear networks, deferring proofs to Appendix F. The intuition is that deep linear networks have landscape properties similar to the shallow case (Lu & Kawaguchi, 2017; Eftekhari, 2020). However, understanding the landscape of an arbitrary deep sparse linear network remains complicated.\nTheorem 6 For a deep sparse linear neural network with scalar output (d_y = 1) and any sparse structure, every local minimum is a global minimum.\nThe proof proceeds by induction from the shallow case. The theorem shows that sparsity introduces no bad local minima for scalar targets. In addition, we give a simple recipe for obtaining a global minimizer.\nHow to obtain a global minimizer in the scalar case: set the first-layer weights to the global minimizer of the two-layer problem with a_i = 1, and let each remaining layer distribute the output of every node uniformly to the next layer; for example, a node with k outgoing connections assigns weight 1/k to each. The sum of the outputs of each layer then remains the best approximation of the target Y (see the right graph of Figure 2 for an example).\nTheorem 7 For a deep sparse linear neural network with a sparse first layer and dense remaining layers, assume that X and Y have full column rank and that X_{−i}^T X_{−j} = 0 for all i ≠ j. If d_i ≥ min{d_1, d_y} for all i ≥ 1, where d_i is the hidden width of the i-th layer, then every local minimum is a global minimum.\nNote that under the assumption d_i ≥ min{d_1, d_y} for all i ≥ 1, the deep linear network we study has the same solution as the shallow linear network once the first-layer weights are fixed. Hence, the optimal value of our objective equals the optimal value of the shallow problem." }, { "heading": "5 DISCUSSION", "text": "We have analyzed the landscape of sparse linear networks from several angles.
On the positive side, spurious local minimum does not exist when the objective applied with scalar target, or with separated first layer and orthogonal training data. On the negative side, we have discovered the bad local minimum when the previous conditions are violated in a general sparse two-layer linear network, that is, one is generated from low rank constraint, another is produced from sub-sparse structure. Both the cases show that sparsity cuts out the decreasing path in the original fully-connected network. Since dense linear networks possess benign landscape, we have concluded that sparsity or network pruning destroys the favourable solutions. However, some heuristic algorithms combining training and pruning still work well in practice, leading to mystery of modern network pruning methods and sparse network design. Other interesting questions for future research include understanding the gap between global minimum and spurious local minimum, or showing a similar performance of bad-min, particularly, combining with pruning algorithms." }, { "heading": "A DECREASING PATH OF SPARSE LINEAR NETWORK WITH SPARSE FIRST LAYER", "text": "In addition, there still exists decreasing path to infinity:\nmin wi\nL(w1, . . . , w8) : = 1\n2 ∥∥∥∥∥Y −X ( w1 0 w2 w3 0 w4 )( w5 w6 w7 w8 )∥∥∥∥∥ 2\nF\n= 1\n2 ∥∥∥∥∥Y −X ( w1w5 w1w6 w2w5 + w3w7 w2w6 + w3w8\nw4w7 w4w8\n)∥∥∥∥∥ 2\nF\n(9)\nX = I3, Y = ( 1 0 0 1 1 1 ) , Choose (w1, w2, w3, w4, w5, w6, w7, w8) = (− 1√k ,− √ k, 1, 1, 1√ k , 0, 1, 1), with k ∈ N+. then 2L(w1, . . . , w8) = ( 1k + 1) 2 > 1 decreases when k increases. Since minwi L(w1, . . . , w8) = 0, hence we get a decreasing path to infinity, but not a global minimum." }, { "heading": "B ALGORITHM ANALYSIS", "text": "We built algorithm guarantee in the following:\nFirst, since each update step, the objective doesn’t increase, then the algorithm will converge.\nSecond, we verify that the convergent solution (w∗1,a ∗ 1,w ∗ 2,a ∗ 2) satisfy zero gradient. Recall the first-order condition:\n− ∂L ∂w1\n= ( Z1 −w1aT1 +D(Z2 −w2aT2 ) ) a1 = ( Z1 +D(Z2 −w2aT2 ) ) a1 − aT1 a1w1,\n− ∂L ∂a1\n= ( Z1 −w1aT1 +D(Z2 −w2aT2 ) )T w1 = ( Z1 +D(Z2 −w2aT2 ) )T w1 −wT1 w1a1,\n− ∂L ∂w2\n= ( Z2 −w2aT2 +DT (Z1 −w1aT1 ) ) a2 = ( Z2 +D T (Z1 −w1aT1 ) ) a2 − aT2 a2w2,\n− ∂L ∂a2\n= ( Z2 −w2aT2 +DT (Z1 −w1aT1 ) )T w2 = ( Z2 +D T (Z1 −w1aT1 ) )T\nw2 −wT2 w2a2. (10)\nNotice that w∗1,a ∗ 1 is the best rank-1 approximation of Z1 + D(Z2 − w∗2a∗T2 ), and w∗2,a∗2 are the best rank-1 approximation of Z2 + DT (Z1 − w∗1a∗T1 ). Then we have already got a solution (w∗1,a ∗ 1,w ∗ 2,a ∗ 2) with zero gradient.\nThird, we verify that the convergent solution is a local minimizer through the conditions we checked. Set r1 = Z1 +D(Z2 −w2aT2 ), r2 = Z2 +DT (Z1 −w1aT1 ). Then\n∇2L(w1,a1,w2,a2) = aT1 a1Ip1 −r1 + 2w1aT1 aT1 a2D Dw2aT1( −r1 + 2w1aT1 )T wT1 w1Idy a2w T 1D w T 1Dw2Idy\naT1 a2D T DTw1a T 2 a T 2 a2Ip2 −r2 + 2w2aT2 a1w T 2D T wT1Dw2Idy ( −r2 + 2w2aT2 )T wT2 w2Idy (11)\nSet H∗ := ∇2L(w∗1,a∗1,w∗2,a∗2). Observe that( w∗T1 ,−a∗T1 ,0T ,0T ) H∗ = 0, ( 0T ,0T ,w∗T2 ,−a∗T2 ) H∗ = 0,\nshowing that H∗ has zero eigenvalue with at least two eigenvectors v1 = ( w∗T1 ,−a∗T1 ,0T ,0T )T and v2 = ( 0T ,0T ,w∗T2 ,−a∗T2 )T .\nSuppose the third smallest eigenvalue is λ3 ≥ > 0, then for any direction v with ‖v‖2 = 1, we have v = α1v̄1 + α2v̄2 + α3v̄3 with v3⊥v1,v3⊥v2, ∑3 i=1 α 2 i = 1, and w̄ := w/‖w‖2.\nIf α3 6= 0, then vHv = α23v̄3Hv̄3 ≥ α23λ3 ≥ α23 ≥ 0. 
Otherwise, we set v = δ1v1 + δ2v2 with small δ1, δ2 as perturbation, and the perturbed parameters are notated as w̃1, ã1, w̃2, ã2 = (1− δ1)w∗1, (1− δ2)w∗2, (1 + δ1)a∗1, (1 + δ2)a∗2, which yields\nL(w̃1, ã1, w̃2, ã2) = ‖X1 ( Z1 − (1− δ21)w1aT1 ) +X2 ( Z2 − (1− δ22)w2aT2 ) ‖2F\n= ‖X1 ( Z1 −w1aT1 ) +X2 ( Z2 −w2aT2 ) ‖2F + ‖δ21w1aT1 + δ22w2aT2 ‖2F\n+ 2δ21 tr[ ( w1a T 1 )T ( r1 −w1aT1 ) ] + 2δ22 tr[ ( w2a T 2 )T ( r2 −w2aT2 ) ]\n= ‖X1 ( Z1 −w1aT1 ) +X2 ( Z2 −w2aT2 ) ‖2F + ‖δ21w1aT1 + δ22w2aT2 ‖2F\n= L(w1,a1,w2,a2) + ‖δ2α21w1aT1 + δ2α21w2aT2 ‖2F ≥ L(w1,a1,w2,a2). (12)\nThird equality holds for the rank-1 approximation of the solution. Hence, the convergent solution is a local minimizer.\nFourth, due to numerical error, we can not obtain exact convergent solution, but we are able to obtain approximate solution (wt1,a t 1,w t 2,a t 2) after t iterations with L (w t 1,a t 1,w t 2,a t 2) − L (w∗1,a ∗ 1,w\n∗ 2,a ∗ 2) ≤ 2, and then use Weyl’s inequality (Safran & Shamir, 2018, Theorem 2),∣∣λi(∇2L(wt1,at1,wt2,at2))− λi(∇2L(w∗1,a∗1,w∗2,a∗2))∣∣ < O( ),\nwhere λi(H) is the i-th smallest eigenvalue of the real symmetric matrix H . Therefore, if the approximate solution is approximate positive semi-definite with a large third smallest eigenvalue, we conclude the convergent solution is a local minimizer." }, { "heading": "C USELESS CONNECTIONS AND NODES IN SPARSE NETWORK", "text": "In this section, we explain several kinds of unnecessary connections suffered from sparsity or network pruning.\nHidden layer 1\nHidden layer 2 Input layer\nOutput layer\n1. Zero out-degree I: if a node have zero out-degree, such as h(2)1 in Figure 3, we can eliminate the input connections.\n2. Zero out-degree II: if a node have zero out-degree when removed output connections in latter layers, such as h(1)1 in Figure 3. Though it owes one output connection, the connected node h(2)1 is zero out-degree, hence the connection can be removed, leading to zero outdegree. We can eliminate the input connections of h(1)1 as well.\n3. Zero in-degree I: if a node have zero in-degree, such as h(2)4 and h (1) 4 in Figure 3, we can\neliminate the output connections, but notice that when the node has a bias term, then we can not remove output connections since the bias constant will still propagate to subsequent layers.\n4. Zero in-degree II: if a node have zero in-degree when removed input connections in former layers, such as h(2)3 in Figure 3. Though it owes one input connection, the connected node h (1) 4 is zero in-degree, hence the connection can be removed, leading to zero in-degree. We\ncan eliminate the output connections of h(2)3 as well.\nD GENERAL d BLOCKS ALGORITHM\nAlgorithm 2 Sparse-d-Opt (X1, . . . , Xd, Z1, . . . , Zd): Obtain the solution of two-layer sparse linear neural network with d hidden neurons.\n1: Input: Input matrix X1, . . . , Xd. Target matrix Z1, . . . , Zd; 2: Initialize wi, di,ai, i = 2, . . . , d; 3: while not converge do 4: for i = 1, . . . , d do 5: wi, di,ai = SV D(Zi + ∑ j 6=iX T i Xj(Zj − djwjaTj )); 6: end for 7: end while 8: wi = diwi, i = 1, . . . , d. 9: if λ1(∇2L), . . . , λd(∇2L) ≈ 0, λd+1(∇2L) > 0 then\n10: Return the solution wi,ai, i = 1, . . . , d. 11: else 12: Try again from another initialization. 13: end if 14: Output: wi,ai, i = 1, . . . , d.\nThe analysis that the convergent solution is a local minimizer is similar to d = 2, so we are not going to repeat the details. 
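For the small examples considered here, the eigenvalue certificate used by Algorithms 1 and 2 (two near-zero eigenvalues from the scaling symmetry, the remainder positive) can also be checked numerically; a finite-difference sketch, with a hand-picked step size eps:

import numpy as np

def hessian_fd(loss, x, eps=1e-5):
    # symmetric finite-difference Hessian of a scalar loss at the flattened
    # parameter vector x; only intended for the small examples in this paper
    n = len(x)
    H = np.zeros((n, n))
    E = np.eye(n) * eps
    for i in range(n):
        for j in range(n):
            H[i, j] = (loss(x + E[i] + E[j]) - loss(x + E[i] - E[j])
                       - loss(x - E[i] + E[j]) + loss(x - E[i] - E[j])) / (4 * eps ** 2)
    return (H + H.T) / 2

# spec = np.linalg.eigvalsh(hessian_fd(loss, x_star))
# expect the two smallest eigenvalues near 0 (scaling directions), the third positive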
We list some examples searched for d > 2 below.\nd = 3: Target: Z1 = (\n0 0 0 1\n) , Z2 = ( 1 0 1 1 ) , Z3 = ( 1 0 −1 −1 ) .\nTraining data: XTX = 1.0 0.0 −0.088 −0.599 −0.234 0.178 0.0 1.0 −0.163 0.429 −0.529 0.431 −0.088 −0.163 1.0 −0.0 0.558 0.193 −0.599 0.429 −0.0 1.0 −0.357 −0.244 −0.234 −0.529 0.558 −0.357 1.0 −0.0 0.178 0.431 0.193 −0.244 −0.0 1.0 Local minimum found:" }, { "heading": "E MISSING PROOFS FOR SECTION 2", "text": "E.1 PROOF OF THEOREM 3\nProof: Suppose the j-th node has full out-degree and one in-degree, so that w−j ∈ R. We treat objective with fixed other weights and only consider optimizing w−j ,aj .\nmin wj ,aj\n`(wj ,aj) = 1\n2 ∥∥∥Ỹ −X−jwjaTj ∥∥∥2 F , Ỹ := Y − ∑ i6=j X−iw−ia T i . (13)\nBased on the proof of scalar case, a local minimizer (w∗j ,a ∗ j ) of `(wj ,aj) must satisfy the condition\nXT−j ( Ỹ −X−jw∗ja∗Tj ) = 0, showing that `(w∗j ,a ∗ j ) = 1 2‖ ( I − PX−j ) Ỹ ‖2F . Therefore, the objective with remaining weights becomes:\nmin w−i,ai,i6=j\n1\n2 ∥∥∥∥∥∥(In − PX−j)Y − ∑ i 6=j ( In − PX−j ) X−iw−ia T i ∥∥∥∥∥∥ 2\nF\n. (14)\nWe define (( In − PX−j ) XSj , ( In − PX−j ) Y )\nas new training dataset which is the projection into the orthogonal complement ofX−j , and remove some elements in w−i corresponding to the column in X−j . Moreover, if X has full column rank, then projected data ( In − PX−j ) XSj still has full column rank. Hence, removing the above connections doesn’t affect the spurious local minima since these connections preserve certain solution.\nE.2 PROOF OF THEOREM 4\nProof: The original loss function can be formulated as below,\n2L(W̃ , Ã) = ∥∥∥∥∥∥Y1− ∑ i∈R1 X−iw−iai1− ∑ j∈R X−jw−jaj1 ∥∥∥∥∥∥ 2\nF\n+ ∥∥∥∥∥∥Y2− ∑ i∈R2 X−iw−iai2− ∑ j∈R X−jw−jaj2 ∥∥∥∥∥∥ 2\nF\n.\nUnder similar analysis as scalar case,\n∀i ∈ R1, XT−i Y1 − ∑ i∈R1 X−iw−iai1 − ∑ j∈R X−jw−jaj1 = 0. ∀i ∈ R2, XT−i\nY2 − ∑ i∈R2 X−iw−iai2 − ∑ j∈R X−jw−jaj2 = 0. Hence, w−i1ai11 ... w−ijaij1 ...\nw−i|R1|ai|R1|1\n = ( X−i1 , · · · , X−ij , · · · , X−i|R1| )+Y1 −∑ j∈R X−jw−jaj1\n , ij ∈ R1. w−i1ai12 ... w−ijaij2 ...\nw−i|R2|ai|R1|1\n = ( X−i1 , · · · , X−ij , · · · , X−i|R2| )+Y2 −∑ j∈R X−jw−jaj2\n , ij ∈ R2. Then the objective becomes:∥∥∥∥∥∥(In − PX−T1 ) Y1 −∑ j∈R X−jw−jaj1 ∥∥∥∥∥∥ 2\n2\n+ ∥∥∥∥∥∥(In − PX−T2 ) Y2 −∑ j∈R X−jw−jaj2 ∥∥∥∥∥∥ 2\n2\n.\nWe can see the objective is separated into two parts with shared sparse first-layer weights. Notice that if i /∈ T1, then Xi ∈ XT1 , hence ( In − PX−T1 ) Xi = 0. Therefore, we simplify the problem as\nmin W̃ ,Ã\nL(W̃ , Ã) = 1\n2 ∥∥∥∥∥∥(In − PX−T1 ) Y1 −∑\ni∈T1\nXi ∑\nj:i/∈Sj\nwijaj1 ∥∥∥∥∥∥ 2\n2\n+ 1\n2 ∥∥∥∥∥∥(In − PX−T2 ) Y2 −∑\ni∈T2\nXi ∑\nj:i/∈Sj\nwijaj2 ∥∥∥∥∥∥ 2\n2\n.\n(15)\nUse previous analysis again on T1 \\ T2 in first output dimension and T2 \\ T1 in second output dimension since no overlap in parameters by the condition U(T1 ∩ T2) ∩ U(T1 \\ T2) = ∅ and U(T1 ∩ T2) ∩ U(T2 \\ T1) = ∅. Therefore, we simplify the problem again as\nmin W̃ ,Ã\nL(W̃ , Ã) = 1\n2 ∥∥∥∥∥∥ ( In−P(In−PX−T1 )XT1\\T2 )( In−PX−T1 )Y1− ∑ i∈T1∩T2 Xi ∑ j:i/∈Sj wijaj1 ∥∥∥∥∥∥ 2\n2\n+ 1\n2 ∥∥∥∥∥∥ ( In−P(In−PX−T1 )XT2\\T1 )( In−PX−T2 )Y2− ∑ i∈T1∩T2 Xi ∑ j:i/∈Sj wijaj2 ∥∥∥∥∥∥ 2\n2\n.\nUsing the fact that (In − PW1) (In − PW2) = In−P(W1,W2) if WT1 W2 = 0. Hence the remaining problem is same as\nmin W̃ ,Ã\nL(W̃ , Ã) = 1\n2 2∑ k=1 ∥∥∥∥∥∥(In − PX−T1∩T2 ) Yk − ∑ i∈T1∩T2 Xi ∑ j:i/∈Sj wijajk ∥∥∥∥∥∥ 2\n2\n.\nTherefore, the remaining network structure has dense second layer." 
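As a complementary sanity check, the Theorem 5 construction from Section 2.5 (with X = I_3) can be replayed numerically; the snippet below evaluates the loss at the claimed local minimizer (value 8) and at the better point (value 0.18):

import numpy as np

Y = np.array([[1., 2., 0.],
              [2., 10., 0.],
              [0., 0., 4.]])

def loss(W, A):
    # X = I_3, so L = 0.5 * ||Y - W A||_F^2
    return 0.5 * np.linalg.norm(Y - W @ A) ** 2

s = np.sqrt(10)
W_loc = np.array([[1., 0.], [2., 2.], [0., 0.]])        # the claimed local minimizer
A_loc = np.array([[1., 2., 0.], [0., 3., 0.]])
W_alt = np.array([[s / 5, 0.], [s, 0.], [0., 2.]])      # the strictly better point
A_alt = np.array([[s / 5, s, 0.], [0., 0., 2.]])
print(loss(W_loc, A_loc), loss(W_alt, A_alt))           # 8.0 and 0.18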
}, { "heading": "F MISSING PROOFS FOR SECTION 4", "text": "The objective of a deep linear network with squared loss is\nmin W (1),...,W (L)\n1 2 ‖Y −XW (1) · · ·W (L)‖2F , (16)\nwhere the i-th layer weight matrix W (i) ∈ Rdi−1×di , d0 = dx, dL = dy , Data matrix X ∈ Rn×dx , Y ∈ Rn×dy . We adopt S(i)j = {k : W (i) kj = 0} as pruned dimensions in j-th column of W (i). Besides, W (i)j,−S as the remaining j-th column in i-th layer weight which leaves out pruned\ndimension set S. For simplification, we denote w(i)−j = w (i) j,−S(i)j ∈ R\n( di−1−|S(i)j | ) , w(i)jk = W (i) jk ,\nand W̃ (i) as the pruned weight matrix with several zero elements as before.\nF.1 PROOF OF THEOREM 6\nProof: Using induction. Base on Theorem 1, we have already proof two layer case. If the result holds for (L− 1)-layer sparse linear network, we consider L layer case. We denote Xnew := XW̃ (1) as new training set, and ` := Y −XW̃ (1) · · · W̃ (L). Then based on inductive assumption, `TXnew = 0, showing that\n`TX−iw (1) −i = 0, ∀1 ≤ i ≤ d1. (17)\nCombined with first-order condition:\n∂L\n∂w (1) −i\n= −`TX−i(W̃ (2) · · · W̃ (L))i = 0.\nIf (W̃ (2) · · · W̃ (L))i 6= 0, then `TX−i = 0, which satisfies the global minimizer condition. Otherwise, any value of w(1)−i doesn’t change the loss since the forward path already contribute zero to the final output. Hence, arbitrary choice of w(1)−i owes same objective value. Thus, from Eq. (17), we still obtain `TX−i = 0. Thus any local minimum is a global minimum for the pruned sparse model.\nF.2 PROOF OF THEOREM 7\nProof: Since ∀ i 6= j, X−i, X−j share no same columns and XTX = Id, then ∀ i 6= j, XT−iX−j = 0. Besides, from our assumption ∩mj=1Sj = ∅, then ∩mj=1Sj = {1, . . . , d}, meaning that (X−1, . . . , X−d) is X with different arrangement of columns. Hence\nY = PXY + (I −PX)Y = X(XTX)−1XTY + (I −PX)Y , d∑\ni=1\nX−iZi + (I −PX)Y, (18)\nSet W (2) = [a1, . . . ,ad1 ] T , then the objective becomes\n1\n2 d1∑ i=1 ‖Zi −w−iaTi W (3) · · ·W (L)‖2F .\nWe set X̃ = XW̃ (1) = (X−1w−1, . . . , X−d1w−d1). Now we show the following problems have same local minimizer condition for w−1.\n(P) L(W̃ (1),W (2), . . . ,W (L)) = 1\n2 ∥∥∥Y − X̃ (W (2) · · ·W (L))∥∥∥2 F ,\n(P1) L2(W̃ ,A) = 1\n2 ∥∥∥∥∥Y − d1∑ i=1 X−iw−ia T i ∥∥∥∥∥ 2\nF\n.\n(19)\nIf there is a local minimizer w−1, . . . ,w−d1 6= 0, for problem (P), since di ≥ min{d1, dy},∀i ≥ 1 and X̃, Y have full column rank, then based on Theorem 2.3 in Lu & Kawaguchi (2017), a local minimizer of L(W̃ (1),W (2), . . . ,W (L)) is obtained when\nW (2) · · ·W (L) = ( X̃T X̃ )−1 X̃TY.\nNotice that X̃T X̃ = diag(wT−1w−1, . . . ,w T −d1w−d1). Then the objective is simplified as\n2L(W̃ (1)) = ∥∥∥∥Y − X̃ (X̃T X̃)−1 X̃TY ∥∥∥∥2 F = ∥∥∥∥∥Y − d1∑ i=1 (X−iw−i)(X−iw−i) TY wT−iw−i ∥∥∥∥∥ 2\nF\n= d1∑ i=1 ∥∥∥∥X−iZi − (X−iw−i)(X−iw−i)TX−iZiwT−iw−i ∥∥∥∥2 F\n= d1∑ i=1 ∥∥∥∥X−iZi − X−iwT−iw−iZiwT−iw−i ∥∥∥∥2 F\n(20)\nFor problem (P1), similarly, a local minimizer of L2(W̃ ,A) is obtained when (X−jw−j) T ( Y − ∑d1 i=1X−iw−ia T i ) = 0. Then aTj = (X−jw−j) TY\nwT−jw−j , showing same loss\nobjective as\n2L2(W̃ ) = ∥∥∥∥∥Y − d1∑ i=1 (X−iw−i)(X−iw−i) TY wT−iw−i ∥∥∥∥∥ 2\nF\n= d1∑ i=1 ∥∥∥∥X−iZi − (X−iw−i)(X−iw−i)TX−iZiwT−iw−i ∥∥∥∥2 F = 2L(W̃ (1)).\n(21)\nFinally, based on Theorem 2, every local minimum of (P1) is a global minimum. Hence every local minimum of (P) is a global minimum.\nIf there exists i0, such that w−i0 = 0, we show that Zi0 = 0 below. Without loss of generality, we assume i0 = 1, then the value of a1 does not affect the objective, we take a1 = 0 as well. 
In order\nto show the result, we only perturb w−1,a1, W (3), . . . ,W (L) into w−1 + ∆w,a1 + ∆a,W (3) + ∆3, . . . ,W (L) + ∆L and analyze the difference of loss as ∆L. We set\n∆W , L∏\ni=3\n( W (i) + ∆i ) − L∏ i=3 W (i), W o , L∏ i=3 W (i). (22)\nThen the perturbation leads to\n2∆L(∆w,∆a,∆W ) = ‖Z1 −∆w∆aT (W o + ∆W ) ‖2F − ‖Z1‖2F + ∑ i 6=1 ‖Zi −w−iaTi (W o + ∆W ) ‖2F − ‖Zi −w−iaTi W o‖2F\n= −2tr[ZT1 ∆w∆aT (W o + ∆W )] + ‖∆w∆aT (W o + ∆W ) ‖2F − 2 ∑ i 6=1 tr[∆WTaiw T −i ( Zi −w−iaTi W o ) ] + ‖w−iaTi ∆W‖2F .\n(23)\nApplying the first case to the remaining parameters excluding w−1 and a1 (If there are several w−is are zero, we can leave them all out), we have\naTi W o =\n(X−iw−i) TY\nwT−iw−i =\n(w−i) TZi\nwT−iw−i ,\nwhich agrees with wT−i ( Zi −w−iaTi W o ) = 0, i 6= 1.\nHence the second term in the final row of Eq. (23) is zero. Besides, let us note the first-order term of ∆w, showing that tr[ZT1 ∆w∆a\nT (W o + ∆W )] = 0. Otherwise, given w−1 = Θ(t−1),a−1 = Θ(t−1), ∆W = Θ(t−3), as t→∞, the sign in the final expansions of Eq. (23) depends on the fist term that is indefinite.\nTherefore, ∆a (W o + ∆W )ZT1 = 0, then (W o + ∆W )ZT1 = 0, leading to W oZT1 = 0 and ∆WZT1 = 0.\nIn view of expression ∆W , it holds that\n∆WZT1 = d1∑ i=3 ( W (3) · · ·W (i−1)∆iW (i+1) · · ·W (L) ) ZT1 + . . .\n= L−2∑ t=1 ft(∆3, . . . ,∆L)Z T 1 ,\n(24)\nwhere ft(∆3, . . . ,∆L) is the sum of the product in W (3), . . . ,W (L),∆3, . . . ,∆L that contains exactly t different ∆is. Then from small-order terms to high-order terms, we obtain ft(∆3, . . . ,∆L)Z1 = 0. It follows from fL−2 = ∆3 · · ·∆L, di ≥ min{d1, dL}, and the arbitrary of ∆3 · · ·∆L, we get Zi = 0. Finally, when Z1 = 0, It is evident that w1 = 0 already satisfies the global minimizer condition since the objective is separated as ∑d1 i=1 ‖Zi−w−iaTi W (3) · · ·W (L)‖2F . This completes the proof." } ]
2,020
null
SP:598a0c59ed1b2fb08626115179948768d09f0e45
[ "This work tackles the problem of adversarial attacks against ML models for code-understanding tasks, such as function summarization. It formulates the problem as the adversarial application of existing semantics-preserving program transformations (e.g., renaming variables), by jointly optimizing on the location of such a transformation, and the argument to the transformation (e.g., what to replace an existing identifier with). It shows that such adversarial examples increase the attack success rate over baseline approaches, and training with such examples increases the robustness of the resulting model to the same or baseline attacks.", "* This paper proposes an optimization problem to adopt insert/replace operations (program obfuscations) to generate adversarial programs. They apply it to the task of program summarization, and show that they outperform the existing baseline published in 2020. In particular, one of the main contributions is the identification of the site-perturbation and site-selection process, and formalizing them as practical optimization based on PGD. " ]
Machine learning (ML) models that learn and predict properties of computer programs are increasingly being adopted and deployed. In this work, we investigate principled ways to adversarially perturb a computer program to fool such learned models, and thus determine their adversarial robustness. We use program obfuscations, which have conventionally been used to avoid attempts at reverse engineering programs, as adversarial perturbations. These perturbations modify programs in ways that do not alter their functionality but can be crafted to deceive an ML model when making a decision. We provide a general formulation for an adversarial program that allows applying multiple obfuscation transformations to a program in any language. We develop first-order optimization algorithms to efficiently determine two key aspects – which parts of the program to transform, and what transformations to use. We show that it is important to optimize both these aspects to generate the best adversarially perturbed program. Due to the discrete nature of this problem, we also propose using randomized smoothing to improve the attack loss landscape to ease optimization. We evaluate our work on Python and Java programs on the problem of program summarization.1 We show that our best attack proposal achieves a 52% improvement over a state-of-the-art attack generation approach for programs trained on a SEQ2SEQ model. We further show that our formulation is better at training models that are robust to adversarial attacks.
[ { "affiliations": [], "name": "OPTIMIZED OBFUSCATIONS" }, { "affiliations": [], "name": "Shashank Srikant" }, { "affiliations": [], "name": "Sijia Liu" }, { "affiliations": [], "name": "Tamara Mitrovska" }, { "affiliations": [], "name": "Shiyu Chang" }, { "affiliations": [], "name": "Quanfu Fan" }, { "affiliations": [], "name": "Gaoyuan Zhang" }, { "affiliations": [], "name": "Una-May O’Reilly" } ]
[ { "authors": [ "Miltiadis Allamanis", "Hao Peng", "Charles Sutton" ], "title": "A convolutional attention network for extreme summarization of source code", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "Miltiadis Allamanis", "Earl T Barr", "Premkumar Devanbu", "Charles Sutton" ], "title": "A survey of machine learning for big code and naturalness", "venue": "ACM Computing Surveys (CSUR),", "year": 2018 }, { "authors": [ "Uri Alon", "Shaked Brody", "Omer Levy", "Eran Yahav" ], "title": "code2seq: Generating sequences from structured representations of code", "venue": "arXiv preprint arXiv:1808.01400,", "year": 2018 }, { "authors": [ "Matej Balog", "Alexander L Gaunt", "Marc Brockschmidt", "Sebastian Nowozin", "Daniel Tarlow" ], "title": "Deepcoder: Learning to write programs", "venue": "arXiv preprint arXiv:1611.01989,", "year": 2016 }, { "authors": [ "James C Bezdek", "Richard J Hathaway" ], "title": "Convergence of alternating optimization. Neural", "venue": "Parallel & Scientific Computations,", "year": 2003 }, { "authors": [ "Pavol Bielik", "Martin Vechev" ], "title": "Adversarial robustness for code", "venue": "arXiv preprint arXiv:2002.04694,", "year": 2020 }, { "authors": [ "Christian Blum", "Andrea Roli" ], "title": "Metaheuristics in combinatorial optimization: Overview and conceptual comparison", "venue": "ACM computing surveys (CSUR),", "year": 2003 }, { "authors": [ "Stephen Boyd", "Neal Parikh", "Eric Chu" ], "title": "Distributed optimization and statistical learning via the alternating direction method of multipliers", "venue": "Now Publishers Inc,", "year": 2011 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": "In 2017 ieee symposium on security and privacy (sp),", "year": 2017 }, { "authors": [ "John C Duchi", "Peter L Bartlett", "Martin J Wainwright" ], "title": "Randomized smoothing for stochastic optimization", "venue": "SIAM Journal on Optimization,", "year": 2012 }, { "authors": [ "Javid Ebrahimi", "Anyi Rao", "Daniel Lowd", "Dejing Dou" ], "title": "Hotflip: White-box adversarial examples for text classification", "venue": "arXiv preprint arXiv:1712.06751,", "year": 2017 }, { "authors": [ "Logan Engstrom", "Andrew Ilyas", "Anish Athalye" ], "title": "Evaluating and understanding the robustness of adversarial logit pairing", "venue": "arXiv preprint arXiv:1807.10272,", "year": 2018 }, { "authors": [ "Saeed Ghadimi", "Guanghui Lan", "Hongchao Zhang" ], "title": "Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization", "venue": "Mathematical Programming,", "year": 2016 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "arXiv preprint arXiv:1412.6572,", "year": 2014 }, { "authors": [ "Rahul Gupta", "Soham Pal", "Aditya Kanade", "Shirish Shevade" ], "title": "Deepfix: Fixing common c language errors by deep learning", "venue": "In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence,", "year": 2017 }, { "authors": [ "Zhen Li", "Deqing Zou", "Shouhuai Xu", "Xinyu Ou", "Hai Jin", "Sujuan Wang", "Zhijun Deng", "Yuyi Zhong" ], "title": "Vuldeepecker: A deep learning-based system for vulnerability detection", "venue": "arXiv preprint arXiv:1801.01681,", "year": 2018 }, { "authors": [ "Han Liu", "Chengnian Sun", "Zhendong Su", "Yu Jiang", "Ming Gu", "Jiaguang Sun" ], "title": "Stochastic 
optimization of program obfuscation", "venue": "IEEE/ACM 39th International Conference on Software Engineering (ICSE),", "year": 2017 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Renaud Pawlak", "Martin Monperrus", "Nicolas Petitprez", "Carlos Noguera", "Lionel Seinturier" ], "title": "Spoon: A library for implementing analyses and transformations of java source code", "venue": "Software: Practice and Experience,", "year": 2016 }, { "authors": [ "Palle Martin Pedersen" ], "title": "Methods and systems for identifying an area of interest in protectable", "venue": "content, September", "year": 2010 }, { "authors": [ "Fabio Pierazzi", "Feargus Pendlebury", "Jacopo Cortellazzi", "Lorenzo Cavallaro" ], "title": "Intriguing properties of adversarial ml attacks in the problem space", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2020 }, { "authors": [ "Michael Pradel", "Koushik Sen" ], "title": "Deepbugs: A learning approach to name-based bug detection", "venue": "Proceedings of the ACM on Programming Languages,", "year": 2018 }, { "authors": [ "Erwin Quiring", "Alwin Maier", "Konrad Rieck" ], "title": "Misleading authorship attribution of source code using adversarial learning", "venue": "In 28th {USENIX} Security Symposium ({USENIX} Security 19),", "year": 2019 }, { "authors": [ "Md Rabin", "Rafiqul Islam", "Mohammad Amin Alipour" ], "title": "Evaluation of generalizability of neural program analyzers under semantic-preserving transformations", "venue": "arXiv preprint arXiv:2004.07313,", "year": 2020 }, { "authors": [ "Goutham Ramakrishnan", "Jordan Henkel", "Zi Wang", "Aws Albarghouthi", "Somesh Jha", "Thomas Reps" ], "title": "Semantic robustness of models of source code", "venue": "arXiv preprint arXiv:2002.03043,", "year": 2020 }, { "authors": [ "Veselin Raychev", "Pavol Bielik", "Martin Vechev" ], "title": "Probabilistic model for code with decision trees", "venue": "ACM SIGPLAN Notices,", "year": 2016 }, { "authors": [ "Xujie Si", "Hanjun Dai", "Mukund Raghothaman", "Mayur Naik", "Le Song" ], "title": "Learning loop invariants for program verification", "venue": "In Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Shashank Srikant", "Varun Aggarwal" ], "title": "A system to grade computer programming skills using machine learning", "venue": "In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining,", "year": 2014 }, { "authors": [ "Ke Wang", "Mihai Christodorescu" ], "title": "Coset: A benchmark for evaluating neural program embeddings", "venue": "arXiv preprint arXiv:1905.11445,", "year": 2019 }, { "authors": [ "Kaidi Xu", "Hongge Chen", "Sijia Liu", "Pin-Yu Chen", "Tsui-Wei Weng", "Mingyi Hong", "Xue Lin" ], "title": "Topology attack and defense for graph neural networks: An optimization perspective", "venue": null, "year": 1906 }, { "authors": [ "Noam Yefet", "Uri Alon", "Eran Yahav" ], "title": "Adversarial examples for models of code", "venue": "arXiv preprint arXiv:1910.07517,", "year": 2019 }, { "authors": [ "Huangzhao Zhang", "Zhuo Li", "Ge Li", "Lei Ma", "Yang Liu", "Zhi Jin" ], "title": "Generating adversarial examples for holding robustness of source code processing models", "venue": "In Proceedings of the AAAI Conference on Artificial 
Intelligence,", "year": 2020 }, { "authors": [ "Yaqin Zhou", "Shangqing Liu", "Jingkai Siow", "Xiaoning Du", "Yang Liu" ], "title": "Devign: Effective vulnerability identification by learning comprehensive program semantics via graph neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "C P" ], "title": "and∞ otherwise. We then derive Karush–Kuhn–Tucker (KKT) conditions of the above problems for their optimal solutions; see (Parikh & Boyd, 2014; Xu et al., 2019) for details. In (11) and (12), we need to call an internal solver to find the root of a scalar equation, e.g., 1 [z−", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Machine learning (ML) models are increasingly being used for software engineering tasks. Applications such as refactoring programs, auto-completing them in editors, and synthesizing GUI code have benefited from ML models trained on large repositories of programs, sourced from popular websites like GitHub (Allamanis et al., 2018). They have also been adopted to reason about and assess programs (Srikant & Aggarwal, 2014; Si et al., 2018), find and fix bugs (Gupta et al., 2017; Pradel & Sen, 2018), detect malware and vulnerabilities in them (Li et al., 2018; Zhou et al., 2019) etc. thus complementing traditional program analysis tools. As these models continue to be adopted for such applications, it is important to understand how robust they are to adversarial attacks. Such attacks can have adverse consequences, particularly in settings such as security (Zhou et al., 2019) and compliance automation (Pedersen, 2010). For example, an attacker could craft changes in malicious programs in a way which forces a model to incorrectly classify them as being benign, or make changes to pass off code which is licensed as open-source in an organization’s proprietary code-base.\nAdversarially perturbing a program should achieve two goals – a trained model should flip its decision when provided with the perturbed version of the program, and second, the perturbation should be imperceivable. Adversarial attacks have mainly been considered in image classification (Goodfellow et al., 2014; Carlini & Wagner, 2017; Madry et al., 2018), where calculated minor changes made to pixels of an image are enough to satisfy the imperceptibility requirement. Such changes escape a human’s attention by making the image look the same as before perturbing it, while modifying the underlying representation enough to flip a classifier’s decision. However, programs demand a stricter imperceptibility requirement – not only should the changes avoid human attention, but the changed program should also importantly functionally behave the same as the unperturbed program.\n1Source code: https://github.com/ALFA-group/adversarial-code-generation\nProgram obfuscations provide the agency to implement one such set of imperceivable changes in programs. Obfuscating computer programs have long been used as a way to avoid attempts at reverse-engineering them. They transform a program in a way that only hampers humans’ comprehension of parts of the program, while retaining its original semantics and functionality. For example, one common obfuscation operation is to rename variables in an attempt to hide the program’s intent from a reader. Renaming a variable sum in the program statement int sum = 0 to int xyz = 0 neither alters how a compiler analyzes this variable nor changes any computations or states in the program; it only hampers our understanding of this variable’s role in the program. Modifying a very small number of such aspects of a program marginally affects how we comprehend it, thus providing a way to produce changes imperceivable to both humans and a compiler. In this work, we view adversarial perturbations to programs as a special case of applying obfuscation transformations to them.\nHaving identified a set of candidate transformations which produce imperceivable changes, a specific subset needs to be chosen in a way which would make the transformed program adversarial. 
Recent attempts (Yefet et al., 2019; Ramakrishnan et al., 2020; Bielik & Vechev, 2020) which came closest to addressing this problem did not offer any rigorous formulation. They recommended using a variety of transformations without presenting any principled approach to selecting an optimal subset of transformations. We present a formulation which when solved provides the exact location to transform as well as a transformation to apply at the location. Figure 1 illustrates this. A randomly selected local-variable (name) when replaced by the name virtualname, which is generated by the stateof-the-art attack generation algorithm for programs\n(Ramakrishnan et al., 2020), is unable to fool a program summarizer (which predicts set item) unless our proposed site optimization is applied. We provide a detailed comparison in Section 2. In our work, we make the following key contributions – • We identify two problems central to defining an adversarial program – identifying the sites in a\nprogram to apply perturbations on, and the specific perturbations to apply on the selected sites. These perturbations are involve replacing existing tokens or inserting new ones. • We provide a general mathematical formulation of a perturbed program that models site locations and the perturbation choice for each location. It is independent of programming languages and the task on which a model is trained, while seamlessly modeling the application of multiple transformations to the program. • We propose a set of first-order optimization algorithms to solve our proposed formulation efficiently, resulting in a differentiable generator for adversarial programs. We further propose a randomized smoothing algorithm to achieve improved optimization performance. • Our approach demonstrates a 1.5x increase in the attack success rate over the state-of-the-art attack generation algorithm (Ramakrishnan et al., 2020) on large datasets of Python and Java programs. • We further show that our formulation provides better robustness against adversarial attacks compared to the state-of-the-art when used in training an ML model." }, { "heading": "2 RELATED WORK", "text": "Due to a large body of literature on adversarial attacks in general, we focus on related works in the domain of computer programs. Wang & Christodorescu (2019), Quiring et al. (2019), Rabin et al. (2020), and Pierazzi et al. (2020) identify obfuscation transformations as potential adversarial examples. They do not, however, find an optimal set of transformations to deceive a downstream model. Liu et al. (2017) provide a stochastic optimization formulation to obfuscate programs optimally by maximizing its impact on an obscurity language model (OLM). However, they do not address the problem of adversarial robustness of ML models of programs, and their formulation is only to find the right sequence of transformations which increases their OLM’s perplexity. They use an MCMC-based search to find the best sequence.\nYefet et al. (2019) propose perturbing programs by replacing local variables, and inserting print statements with replaceable string arguments. They find optimal replacements using a first-order optimization method, similar to Balog et al. (2016) and HotFlip (Ebrahimi et al., 2017). This is\nformed - two replace sites corresponding to local variables b and r , and three insert sites at locations I1, I2, I3. Ω is a vocabulary of tokens which can be used for the transformations. 
(c) This is a perturbed program with the tokens world and set from Ω used to replace tokens b and at location I3. These transformations do not change the original functionality of P , but cause an incorrect prediction delete (d) Examples of two site selection vectors zi, zii selecting different components. zi = 1 for a location i signifies that the ith token in P is selected to be optimally transformed. zi corresponds to the perturbed program in (c).\nan improvement over Zhang et al. (2020), who use the Metropolis-Hastings algorithm to find an optimal replacement for variable names. Bielik & Vechev (2020) propose a robust training strategy which trains a model to abstain from deciding when uncertain if an input program is adversarially perturbed. The transformation space they consider is small, which they search through greedily. Moreover, their solution is designed to reason over a limited context of the program (predicting variable types), and is non-trivial to extend to applications such as program summarization (explored in this work) which requires reasoning over an entire program.\nRamakrishnan et al. (2020) extend the work by Yefet et al. (2019) and is most relevant to what we propose in this work. They experiment with a larger set of transformations and propose a standard min-max formulation to adversarially train robust models. Their inner-maximizer, which generates adversarial programs, models multiple transformations applied to a program in contrast to Yefet et al. (2019). However, they do not propose any principled way to solve the problem of choosing between multiple program transformations. They randomly select transformation operations to apply, and then randomly select locations in the program to apply those transformations on.\nWe instead show that optimizing for locations alone improves the attack performance. Further, we propose a joint optimization problem of finding the optimal location and optimal transformation, only the latter of which Ramakrishnan et al. (2020) (and Yefet et al. (2019)) address in a principled manner. Although formally unpublished at the time of preparing this work, we compare our experiments to Ramakrishnan et al. (2020), the state-of-the-art in evaluating and defending against adversarial attacks on models for programs, and contrast the advantages of our formulation." }, { "heading": "3 PROGRAM OBFUSCATIONS AS ADVERSARIAL PERTURBATIONS", "text": "In this section, we formalize program obfuscation operations, and show how generating adversarial programs can be cast as a constrained combinatorial optimization problem.\nProgram obfuscations. We view obfuscation transformations made to programs as adversarial perturbations which can affect a downstream ML/DL model like a malware classifier or a program summarizer. While a variety of such obfuscation transformations exist for programs in general (see section 2A, Liu et al. (2017)), we consider two broad classes – replace and insert transformations. In replace transformations, existing program constructs are replaced with variants which decrease readability. For example, replacing a variable’s name, a function parameter’s name, or an object field’s name does not affect the semantics of the program in any way. These names in any program exclusively aid human comprehension, and thus serve as three replace transformations. In insert transformations, we insert new statements to the program which are unrelated to the code it is inserted around, thereby obfuscating its original intent. 
For example, including a print statement with an arbitrary string argument does not change the semantics of the program in any way.\nOur goal hence is to introduce a systematic way to transform a program with insert or replace transformations such that a trained model misclassifies a program P that it originally classified correctly.\nSite-selection and Site-perturbation – Towards defining adversarial programs. Before we formally define an adversarial program, we highlight the key factors which need to be considered in our formulation through the example program introduced in Figure 2.\nConsider applying the following two obfuscation transformations on the example program P in Figure 2.a – replacing local variable names (a replace transform), and inserting print statements (an insert transform). The two local variables b and r in P are potential candidates where the replace transform can be applied, while a print statement can potentially be inserted at the three locations I1, I2, I3 (highlighted in Figure 2.b). We notate these choices in a program as sites– locations in a program where a unique transformation can be applied.\nThus, in order to adversarially perturb P , we identify two important questions that need to be addressed. First, which sites in a program should be transformed? Of the n sites in a program, if we are allowed to choose at most k sites, which set of ≤ k sites would have the highest impact on the downstream model’s performance? We identify this as the site-selection problem, where the constraint k is the perturbation strength of an attacker. Second, what tokens should be inserted/replaced at the k selected sites? Once we pick k sites, we still have to determine the best choice of tokens to replace/insert at those sites which would have the highest impact on the downstream model. We refer to this as the site-perturbation problem.\nMathematical formulation. In what follows, we propose a general and rigorous formulation of adversarial programs. LetP denote a benign program which consists of a series of n tokens {Pi}ni=1 in the source code domain. For example, the program in Figure 2.a, when read from top to bottom and left to right, forms a series of n = 12 tokens {def, b, . . . , r, +, 5}. We ignore white spaces and other delimiters when tokenizing. Each Pi ∈ {0, 1}|Ω| here is considered a one-hot vector of length |Ω|, where Ω is a vocabulary of tokens. Let P ′ define a perturbed program (with respect to P) created by solving the site-selection and site-perturbation problems, which use the vocabulary Ω to find an optimal replacement. Since our formulation is agnostic to the type of transformation, perturbation in the remainder of this section refers to both replace and insert transforms. In our work, we use a shared vocabulary Ω to select transforms from both these classes. In practice, we can also assign a unique vocabulary to each transformation we define.\nTo formalize the site-selection problem, we introduce a vector of boolean variables z ∈ {0, 1}n to indicate whether or not a site is selected for perturbation. If zi = 1 then the ith site (namely, Pi) is perturbed. If there exist multiple occurrences of a token in the program, then all such sites are marked 1. For example, in Figure 2.d, if the site corresponding to local variable b is selected, then both indices of its occurrences, z3, z9 are marked as 1 as shown in zi. Moreover, the number of perturbed sites, namely, 1T z ≤ k provides a means of measuring perturbation strength. 
For example, k = 1 is the minimum perturbation possible, where only one site is allowed to be perturbed. To define site-perturbation, we introduce a one-hot vector ui ∈ {0, 1}|Ω| to encode the selection of a token from Ω which would serve as the insert/replace token for a chosen transformation at a chosen site. If the jth entry [ui]j = 1 and zi = 1, then the jth token in Ω is used as the obfuscation transformation applied at the site i (namely, to perturb Pi). We also have the constraint 1Tui = 1, implying that only one perturbation is performed at Pi. Let vector u ∈ {0, 1}n×|Ω| denote n different ui vectors, one for each token i in P . Using the above formulations for site-selection, site-perturbation and perturbation strength, the perturbed program P ′ can then be defined as\nP ′ = (1− z) · P + z · u, where 1T z ≤ k, z ∈ {0, 1}n, 1Tui = 1, ui ∈ {0, 1}|Ω|, ∀i, (1) where · denotes the element-column wise product.\nThe adversarial effect of P ′ is then measured by passing it as input to a downstream ML/DL model θ and seeing if it successfully manages to fool it.\nGenerating a successful adversarial program is then formulated as the optimization problem, minimize\nz,u `attack((1− z) · P + z · u;P,θ)\nsubject to constraints in (1), (2)\nwhere `attack denotes an attack loss. In this work, we specify `attack as the cross-entropy loss on the predicted output evaluated at P ′ in an untargeted setting (namely, without specifying the prediction label targeted by an adversary) (Ramakrishnan et al., 2020). One can also consider other specifications of `attack, e.g., C&W untargeted and targeted attack losses (Carlini & Wagner, 2017)." }, { "heading": "4 ADVERSARIAL PROGRAM GENERATION VIA FIRST-ORDER OPTIMIZATION", "text": "Solving problem (2) is not trivial because of its combinatorial nature (namely, the presence of boolean variables), the presence of a bi-linear objective term (namely, z ·u), as well as the presence of multiple constraints. To address this, we present a projected gradient descent (PGD) based joint optimization solver (JO) and propose alternates which promise better empirical performance.\nPGD as a joint optimization (JO) solver. PGD has been shown to be one of the most effective attack generation methods to fool image classification models (Madry et al., 2018). Prior to applying PGD, we instantiate (2) into a feasible version by relaxing boolean constraints to their convex hulls,\nminimize z,u `attack(z,u) subject to 1T z ≤ k, z ∈ [0, 1]n, 1Tui = 1, ui ∈ [0, 1]|Ω|, ∀i, (3)\nwhere for ease of notation, the attack loss in (2) is denoted by `attack(z,u). The continuous relaxation of binary variables in (3) is a commonly used trick in combinatorial optimization to boost the stability of learning procedures in practice (Boyd et al., 2004). Once the continuous optimization problem (3) is solved, a hard thresholding operation or a randomized sampling method (which regards z and u as probability vectors with elements drawn from a Bernoulli distribution) can be called to map a continuous solution to its discrete domain (Blum & Roli, 2003). We use the randomized sampling method in our experiments.\nThe PGD algorithm is then given by {z(t),u(t)} = {z(t−1),u(t−1)} − α{∇z`attack(z(t−1),u(t−1)),∇u`attack(z(t−1),u(t−1))} (4)\n{z(t),u(t)} = Proj({z(t),u(t)}), (5) where t denotes PGD iterations, z(0) and u(0) are given initial points, α > 0 is a learning rate, ∇z\ndenotes the first-order derivative operation w.r.t. the variable z, and Proj represents the projection operation w.r.t. 
the constraints of (3).\nThe projection step involves solving for z and ui simultaneously in a complex convex problem. See Equation 9 in Appendix A for details.\nA key insight is that the complex projection problem (9) can equivalently be decomposed into a sequence of sub-problems owing to the separability of the constraints w.r.t. z and {ui}. The two sub-problems are –\nminimize z ‖z− z(t)‖22 subject to 1T z ≤ k, z ∈ [0, 1]n, and minimize ui ‖ui − u(t)i ‖ 2 2 subject to 1Tui = 1, ui ∈ [0, 1]|Ω|, ∀i. (6)\nThe above subproblems w.r.t. z and ui can optimally be solved by using a bisection method that finds the root of a scalar equation. We provide details of a closed-form solution and its corresponding proof in Appendix A. We use this decomposition and solutions to design an alternating optimizer, which we discuss next.\nAlternating optimization (AO) for fast attack generation. While JO provides an approach to solve the unified formulation in (2), it suffers from the problem of getting trapped at a poor local optima despite attaining stationarity (Ghadimi et al., 2016). We propose using AO (Bezdek & Hathaway, 2003) which allows the loss landscape to be explored more aggressively, thus leading to better empirical convergence and optimality (see Figure 4a).\nAO solves problem (2) one variable at a time – first, by optimizing the site selection variable z keeping the site perturbation variable u fixed, and then optimizing u keeping z fixed. That is,\nz(t) = arg min 1T z≤k, z∈[0,1]n `attack(z,u (t−1)) and u(t)i = arg min 1Tui=1, ui∈[0,1]|Ω| `attack(z (t),u) ∀i. (7)\nWe can use PGD, as described in (6), to similarly solve each of z and u separately in the two alternating steps. Computationally, AO is expensive than JO by a factor of 2, since we need two iterations to cover all the variables which JO covers in a single iteration. However, in our experiments, we find AO to converge faster. The decoupling in AO also eases implementation, and provides the flexibility to set a different number of iterations for the u-step and the z-step within one iteration of AO. We also remark that the AO setup in (7) can be specified in other forms, e.g. alternating direction method of multipliers (ADMM) (Boyd et al., 2011). However, such methods use an involved alternating scheme to solve problem (2). We defer evaluating these options to future work.\nRandomized smoothing (RS) to improve generating adversarial programs. In our experiments, we noticed that the loss landscape of generating adversarial program is not smooth (Figure 3). This motivated us to explore surrogate loss functions which could smoothen it out. In our work, we employ a convolution-based RS technique (Duchi et al., 2012) to circumvent the optimization difficulty induced by the non-smoothness of the attack loss `attack. We eventually obtain a smoothing loss `smooth:\n`smooth(z,u) = Eξ,τ [`attack(z + µξ,u + µτ )], (8) where ξ and τ are random samples drawn from the uniform distribution within the unit Euclidean\nball, and µ > 0 is a small smoothing parameter (set to 0.01 in our experiments).\nThe rationale behind RS (8) is that the convolution of two functions (smooth probability density function and non-smooth attack loss) is at least as smooth as the smoothest of the two original functions. The advantage of such a formulation is that it is independent of the loss function, downstream model, and the optimization solver chosen for a problem. We evaluate RS on both AO and JO. 
In practice, we consider an empirical Monte Carlo approximation of (8), `smooth(z,u) = ∑m j=1[`attack(z + µξj ,u+µτj)]. We setm = 10 in our experiments to save on computation time. We also find that smoothing the site perturbation variable u contributes the most to improving attack\nperformance. We hence perturb only u to further save computation time." }, { "heading": "5 EXPERIMENTS & RESULTS", "text": "We begin by discussing the following aspects of our experiment setup – the classification task, the dataset and model we evaluate on, and the evaluation metrics we use.\nTask, Transformations, Dataset. We evaluate our formulation of generating optimal adversarial programs on the problem of program summarization, first introduced by Allamanis et al. (2016). Summarizing a function in a program involves predicting its name, which is usually indicative of its intent. We use this benchmark to test whether our adversarially perturbed program, which retains the functionality of the original program, can force a trained summarizer to predict an incorrect function name. We evaluate this on a well maintained dataset of roughly 150K Python programs(Raychev et al., 2016) and 700K Java programs (Alon et al., 2018). They are pre-processed into functions, and each function is provided as input to an ML model. The name of the function is omitted from the input. The ML model predicts a sequence of tokens as the function name. We evaluate our work on six transformations (4 replace and 2 insert transformations); see Appendix B for details on these transformations. The results and analysis that follow pertains to the case when any of these six transformations can be used as a valid perturbation, and the optimization selects which to pick and apply based on the perturbation strength k. This is the same setting employed in the baseline (Ramakrishnan et al., 2020).\nModel. We evaluate a trained SEQ2SEQ model. It takes program tokens as input, and generates a sequence of tokens representing its function name. We note that our formulation is independent of the learning model, and can be evaluated on any model for any task. The SEQ2SEQ model is trained and validated on 90% of the data while tested on the remaining 10%. It is optimized using the cross-entropy loss function.\nCODE2SEQ (Alon et al., 2018) is another model which has been evaluated on the task of program summarization. Its architecture is similar to that of SEQ2SEQ and contains two encoders - one which encodes tokens, while another which encodes AST paths. The model when trained only on tokens performs similar to a model trained on both tokens and paths (Table 3, Alon et al. (2018)). Thus adversarial changes made to tokens, as accommodated by our formulation, should have a high impact on the model’s output. Owing to the similarity in these architectures, and since our\ncomputational bench is in Pytorch while the original CODE2SEQ implementation is in TensorFlow, we defer evaluating the performance of our formulation on CODE2SEQ to future work.\nEvaluation metrics. We report two metrics – Attack Success Rate (ASR) and F1-score. ASR is defined as the percentage of output tokens misclassified by the model on the perturbed input but\ncorrectly predicted on the unperturbed input, i.e. ASR = ∑ i,j 1(θ(x ′ i)6=yij)∑\ni,j 1(θ(xi)=yij) for each token j in the\nexpected output of sample i. Higher the ASR, better the attack. 
Unlike (Ramakrishnan et al., 2020), we evaluate our method on those samples in the test-set which were fully, correctly classified by the model. Evaluating on such fully correctly classified samples provides direct evidence of the adversarial effect of the input perturbations (also the model’s adversarial robustness) by excluding test samples that have originally been misclassified even without any perturbation. We successfully replicated results from (Ramakrishnan et al., 2020) on the F1-score metric they use, and acknowledge the extensive care they have taken to ensure that their results are reproducible. As reported in Table 2 of (Ramakrishnan et al., 2020), a trained SEQ2SEQ model has an F1-score of 34.3 evaluated on the entire dataset. We consider just those samples which were correctly classified. The F1-score corresponding to ‘No attack’ in Table 1 is hence 100. In all, we perturb 2800 programs in Python and 2300 programs in Java which are correctly classified." }, { "heading": "5.1 EXPERIMENTS", "text": "We evaluate our work in three ways – first, we evaluate the overall performance of the three approaches we propose – AO, JO, and their combination with RS, to find the best sites and perturbations for a given program. Second, we evaluate the sensitivity of two parameters which control our optimizers – the number of iterations they are evaluated on, and the perturbation strength (k) of an attacker. Third, we use our formulation to train an adversarially robust SEQ2SEQ model, and evaluate its performance against the attacks we propose.\nOverall attack results. Table 1 summarizes our overall results. The first row corresponds to the samples not being perturbed at all. The ASR as expected is 0. The ‘Random replace’ in row 2 corresponds to both z and u being selected at random. This produces no change in the ASR, suggesting that while obfuscation transformations can potentially deceive ML models, any random transformation will have little effect. It is important to have some principled approach to selecting and applying these transformations.\nRamakrishnan et al. (2020) (referred to as BASELINE in Table 1) evaluated their work in two settings. In the first setting, they pick 1 site at random and optimally perturb it. They refer to this as Q1G. We contrast this by selecting an optimal site through our formulation. We use the same algorithm as theirs to optimally perturb the chosen site i.e. to solve for u. This allows us to ablate the effect of incorporating and solving the site-selection problem in our formulation. In the second related setting, they pick 5 sites at random and optimally perturb them, which they refer to as Q5G. In our setup, Q1G and Q5G are equivalent to setting k = 1 and k = 5 respectively, and picking random sites in z instead of optimal ones. We run AO for 3 iterations, and JO for 10 iterations.\nWe find that our formulation consistently outperforms the baseline. For k = 1, where the attacker can perturb at most 1 site, both AO and JO provide a 3 point improvement in ASR, with\nJO marginally performing better than AO. Increasing k improves the ASR across all methods – the attacker now has multiple sites to transform. For k = 5, where the attacker has at most 5 sites to transform, we find AO to provide a 6 point improvement in ASR over the baseline, outperforming JO by 1.5 points.\nSmoothing the loss function has a marked effect. 
For k = 1, we find smoothing to provide a 10 point increase (∼ 52% improvement) in ASR over the baseline when applied to AO, while JO+RS provides a 4 point increase. Similarly, for k = 5, we find AO+RS to provide a 14 point increase (∼ 38% improvement), while JO+RS provides a 11 point increase, suggesting the utility of smoothing the landscape to aid optimization.\nOverall, we find that accounting for site location in our formulation combined with having a smooth loss function to optimize improves the quality of the generated attacks by nearly 1.5 times over the state-of-the-art attack generation method for programs.\nEffect of solver iterations and perturbation strength k. We evaluate the attack performance (ASR) of our proposed approaches against the number of iterations at k = 5 (Figure 4a). For comparison, we also present the performance of BASELINE, which is not sensitive to the number of iterations (consistent with the empirical finding in (Ramakrishnan et al., 2020)), implying its least optimality. Without smoothing, JO takes nearly 10 iterations to reach its local optimal value, whereas AO achieves it using only 3 iterations but with improved optimality (in terms of higher ASR than JO). This supports our hypothesis that AO allows for a much more aggressive exploration of the loss landscape, proving to empirically outperform JO. With smoothing, we find both AO+RS and JO+RS perform better than AO and JO respectively across iterations. We thus recommend using AO+RS with 1-3 iterations as an attack generator to train models that are robust to such adversarial attacks.\nIn Figure 4b, we plot the performance of the best performing methods as we vary the attacker’s perturbation strength (k). We make two key observations – First, allowing a few sites (< 5) to be perturbed is enough to achieve 80% of the best achievable attack rate. For example, under the AO+RS attack, the ASR is 50 when k = 5 and 60 when k = 20. From an attacker’s perspective, this makes it convenient to effectively attack models of programs\nwithout being discovered. Second, we observe that the best performing methods we propose consistently outperform BASELINE across different k. The performance with BASELINE begins to converge only after k = 10 sites, suggesting the effectiveness of our attack.\nImproved adversarial training under proposed attack. Adversarial training (AT) (Madry et al., 2018) is a min-max optimization based training method to improve a model’s adversarial robustness. In AT, an attack generator is used as the inner maximization oracle to produce perturbed training examples that are then used in the outer minimization for model training. Using AT, we investigate if our proposed attack generation method (AO+RS) yields an improvement in adversarial robustness (over BASELINE) when it is used to adversarially train the SEQ2SEQ model. We evaluate AT\nin three settings – ‘No AT’, corresponding to the regularly trained SEQ2SEQ model (the one used in all the experiments in Table 1), BASELINE - SEQ2SEQ trained under the attack by (Ramakrishnan et al., 2020), and AO+RS - SEQ2SEQ trained under our AO+RS attack. We use three attackers on these models – BASELINE and two of our strongest attacks - AO and AO+RS. The row corresponding to ‘No AT’ is the same as the entries under k = 1 in Table 1. We find AT with BASELINE improves robustness by ∼11 points under AO+RS, our strongest attack. However, training with AO+RS provides an improvement of ∼16 points. 
This suggests AO+RS provides better robustness to models when used as an inner maximizer in an AT setting." }, { "heading": "6 CONCLUSION", "text": "In this paper, we propose a general formulation which mathematically defines an adversarial attack on program source code. We model two key aspects in our formalism – location of the transformation, and the specific choice of transformation. We show that the best attack is generated when both these aspects are optimally chosen. Importantly, we identify that the joint optimization problem we set up which models these two aspects is decomposable via alternating optimization. The nature of decomposition enables us to easily and quickly generate adversarial programs. Moreover, we show that a randomized smoothing strategy can further help the optimizer to find better solutions. Eventually, we conduct extensive experiments from both attack and defense perspectives to demonstrate the improvement of our proposal over the state-of-the-art attack generation method." }, { "heading": "7 ACKNOWLEDGMENT", "text": "This work was partially funded by a grant by MIT Quest for Intelligence, and the MIT-IBM AI lab. We thank David Cox for helpful discussions on this work. We also thank Christopher Laibinis for his constant support with computational resources. This work was partially done during an internship by Shashank Srikant at MIT-IBM AI lab." }, { "heading": "A SOLVING THE PAIR OF PROJECTION SUB-PROBLEMS IN EQ. 6", "text": "The projection step (5) can formally be described as finding the solution of the convex problem minimize\nz,u ‖z− z(t)‖22 +\n∑ i ‖ui − u (t) i ‖22\nsubject to 1T z ≤ k, z ∈ [0, 1]n, 1Tui = 1, ui ∈ [0, 1]|Ω|, ∀i. (9)\nIn Equation 6, we proposed to decompose the projection problem on the combined variables z and ui into the following sub-problems –\nminimize z ‖z− z(t)‖22 subject to 1T z ≤ k, z ∈ [0, 1]n, and minimize ui ‖ui − u(t)i ‖22 subject to 1Tui = 1, ui ∈ [0, 1]|Ω|, ∀i. (10)\nThe following proposition is a closed-form solution which solves them.\nProposition 1 Let z(t+1) and {u(t+1)i } denote solutions of problems given in (6). Their expressions are given by\nz(t+1) = [z(t) − µ1]+, and µ is the root of 1T [z(t) − µ1]+ = 1; (11)\nu (t+1) i =\n{ P[0,1][u (t) i ] if 1 TP[0,1][u (t) i ] ≤ k,\nP[0,1][u (t) i − τi1] if ∃τi > 0 s.t. 1TP[0,1][u (t) i − τi1] = k,\n∀i, (12)\nwhere [·]+ = max{0, ·} denotes the (elementwise) non-negative operation, and P[0,1](·) is the (elementwise) box projection operation, namely, for a scalar x P[0,1](x) = x if x ∈ [0, 1], 0 if x < 0, and 1 if x > 1.\nProof. We first reformulate problems in (6) as problems subject to a single inequality or equality constraint. That is, min1T z≤k ‖z− z(t)‖22 + I[0,1]n(z) and min1Tui=1 ‖ui − u (t) i ‖22 + Iui≥0(ui), where IC(P) denotes an indicator function over the constraint set C and IC(P) = 0 if P ∈ C and∞ otherwise. We then derive Karush–Kuhn–Tucker (KKT) conditions of the above problems for their optimal solutions; see (Parikh & Boyd, 2014; Xu et al., 2019) for details.\nIn (11) and (12), we need to call an internal solver to find the root of a scalar equation, e.g., 1T [z(t)− µ1]+ = 1 in (11). This can efficiently be accomplished using the bisection method in the logarithmic rateO(− log2 ) for the solution of -error tolerance (Boyd et al., 2004). Eventually, the PGD-based JO solver is formed by (4), (5), (11) and (12)." 
}, { "heading": "B PROGRAM TRANSFORMATIONS", "text": "To compare our work against the state-of-the-art (Ramakrishnan et al., 2020), we adopt the transformations introduced in their work and apply our formulation to them. This allows to solve the same setup they try while contrasting the advantages of our formulation.\nThey implement the following transformations -\n• Renaming local variables. This is the most common obfuscation operation, where local variables in a scope are renamed to less meaningful names. We use this to replace it with a name which has an adversarial effect on a downstream model.\n• Renaming function parameters. Similar to renaming variable names, function parameter names are also changeable. We replace names of such parameters with\n• Renaming object fields. A class’ referenced field is renamed in this case. These fields are referenced using self and this in Python and Java respectively. We assume that the class definitions corresponding to these objects are locally defined and can be changed to reflect these new names.\n• Replacing boolean literals A boolean literal (True, False) occurring in the program is replaced with an equivalent expression containing optimizable tokens. This results in a statement of the form <token> == <token> and <token> != <token> for True and False respectively.\n• Inserting print statements. A print with optimizable string arguments are inserted at a location recommended by the site-selection variable. See Figure 2 for an example.\n• Adding dead code. A statement having no consequence to its surrounding code, guarded by an if-condition with an optimizable argument, is added to the program.\nThe last two transformations are insert transforms, which introduce new tokens in the original program, whereas the others are replace transforms, which modify existing tokens.\nThey implement two other transformations – inserting a try-catch block with an optimizable parameter, and unrolling a while loop once. We did not add these to our final set of transforms since they are very similar to adding print statements and dead code, while producing a negligible effect. Adding every insert transformation increases the number of variables to be optimized. Since these two transforms did not impact model performance, we omitted them to keep the number of variables to be optimized bounded." }, { "heading": "C AN ATTACK EXAMPLE", "text": "" }, { "heading": "D ADDITIONAL RESULTS", "text": "D.1 JAVA DATASET\nWe tabulate results of evaluating our formulation on an additional dataset containing Java programs. This dataset was released by Alon et al. (2018) in their work on CODE2SEQ. The transformations were implemented using Spoon (Pawlak et al., 2016). We find the results to be consistent with the results on the Python dataset. AO + RS provides the best attack.\nD.2 FALSE POSITIVE RATE (FPR) UNDER DIFFERENT ATTACKS\nWe evaluated the False Positive Rate (FPR) of the model when provided perturbed programs as inputs. It is important the perturbations made to the programs cause the model to start predicting ground truth tokens instead of incorrect tokens. In principle, our optimization objective strictly picks replacements which degrade the classifier’s performance. As a consequence, we should expect that the model does not end up perturbing the program in a way which leads it closer to the ground truth. We empirically find the FPR consistent with this explanation. 
As seen in Table 5, in all our attacks, the FPR is almost 0, validating that our perturbations do not introduce changes which result in the model predicting the right output.\nD.3 EFFECT OF VARIOUS TRANSFORMATIONS\nWe study the effect of different transformations used in our work on the attack success rate of our attacks. In our analysis, we found that omitting print statements do not affect the ASR of our attacks. This is helpful since for the current task we evaluate (generating program summaries), a print statement likely affects the semantics of the task.\nWe further investigated the effect of variable names and function parameter names on our attacks. The SEQ2SEQ model and the task of generating program summaries can appear to overly rely on the presence of variable and parameter names in the program. To empirically ascertain this, we masked\nout all the variable and parameter names from our training set and trained the model with placeholder tokens. We find the model’s performance to remain similar (row All - vars, function params: 1, Table 6). Likewise, the attack performance also remains similar. We test another condition where we mask all these token names in the test set as well, during inference. We find a decrease in performance (row 4, Table 6) under attack strength k = 1, but the trend observed remains – AO+RS >AO >BASELINE. The ASR is largely unchanged under k = 5." } ]
2,021
null
SP:dc605d174368de20c31edca06ef90fc18fb79faa
[ "This paper focuses on domain generalization, targeting the challenging scenario where the training set might not include different sources; even under the presence of different sources, the problem formulation does not takes into account domain labels. The proposed solution is based on meta-learning, following the path drawn by Li et al. AAAI 2018; the Authors propose to adversarially split the training set in meta-train and meta-validation sets, and then update the current model in a direction that fosters good generalization performance on the meta-test. Results on standard benchmarks are encouraging.", "This paper proposes to unify adversarial training and meta-learning in domain-free generalization where labels of source domains are unavailable. To maximize the domain shift between the subsets of meta-train and meta-val, adversarial training is leveraged to find the worst-case train/val splits. Extensive experiments on benchmark datasets under different settings demonstrate the effectiveness of the proposed method. " ]
Domain generalization is an approach that utilizes several source domains to train the learner to be generalizable to unseen target domain to tackle domain shift issue. It has drawn much attention in machine learning community. This paper aims to learn to generalize well to unseen target domain without relying on the knowledge of the number of source domains and domain labels. We unify adversarial training and meta-learning in a novel proposed Domain-Free Adversarial Splitting (DFAS) framework. In this framework, we model the domain generalization as a learning problem that enforces the learner to be able to generalize well for any train/val subsets splitting of the training dataset. To achieve this goal, we propose a min-max optimization problem which can be solved by an iterative adversarial training process. In each iteration, it adversarially splits the training dataset into train/val subsets to maximize domain shift between them using current learner, and then updates the learner on this splitting to be able to generalize well from train-subset to val-subset using meta-learning approach. Extensive experiments on three benchmark datasets under three different settings on the source and target domains show that our method achieves state-of-the-art results and confirm the effectiveness of our method by ablation study. We also derive a generalization error bound for theoretical understanding of our method.
[]
[ { "authors": [ "Yaser S Abu-Mostafa", "Malik Magdon-Ismail", "Hsuan-Tien Lin" ], "title": "Learning from data, volume 4. AMLBook", "venue": null, "year": 2012 }, { "authors": [ "Yogesh Balaji", "Swami Sankaranarayanan", "Rama Chellappa" ], "title": "Metareg: Towards domain generalization using meta-regularization", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Shai Ben-David", "John Blitzer", "Koby Crammer", "Fernando Pereira" ], "title": "Analysis of representations for domain adaptation", "venue": "In NeurIPS,", "year": 2007 }, { "authors": [ "Shai Ben-David", "John Blitzer", "Koby Crammer", "Alex Kulesza", "Fernando Pereira", "Jennifer Wortman Vaughan" ], "title": "A theory of learning from different", "venue": "domains. ML,", "year": 2010 }, { "authors": [ "Gilles Blanchard", "Gyemin Lee", "Clayton Scott" ], "title": "Generalizing from several related classification tasks to a new unlabeled sample", "venue": "In NeurIPS", "year": 2011 }, { "authors": [ "Fabio M Carlucci", "Antonio D’Innocente", "Silvia Bucci", "Barbara Caputo", "Tatiana Tommasi" ], "title": "Domain generalization by solving jigsaw puzzles", "venue": null, "year": 2019 }, { "authors": [ "Hal Daume III", "Daniel Marcu" ], "title": "Domain adaptation for statistical classifiers", "venue": "JAIR, 26(1):101–126,", "year": 2006 }, { "authors": [ "Antonio D’Innocente", "Barbara Caputo" ], "title": "Domain generalization with domain-specific aggregation modules", "venue": "GCPR,", "year": 2018 }, { "authors": [ "Qi Dou", "Daniel Coelho de Castro", "Konstantinos Kamnitsas", "Ben Glocker" ], "title": "Domain generalization via model-agnostic learning of semantic features", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Yingjun Du", "Jun Xu", "Huan Xiong", "Qiang Qiu", "Xiantong Zhen", "Cees GM Snoek", "Ling Shao" ], "title": "Learning to optimize domain specific normalization for domain generalization", "venue": "In ECCV,", "year": 2020 }, { "authors": [ "Yingjun Du", "Jun Xu", "Huan Xiong", "Qiang Qiu", "Xiantong Zhen", "Cees GM Snoek", "Ling Shao" ], "title": "Learning to learn with variational information bottleneck for domain generalization", "venue": "In ECCV,", "year": 2020 }, { "authors": [ "Yang Fan", "Fei Tian", "Tao Qin", "Xiang-Yang Li", "Tie-Yan Liu" ], "title": "Learning to teach", "venue": null, "year": 2018 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Yaroslav Ganin", "Evgeniya Ustinova", "Hana Ajakan", "Pascal Germain", "Hugo Larochelle", "François Laviolette", "Mario Marchand", "Victor Lempitsky" ], "title": "Domain-adversarial training of neural networks", "venue": null, "year": 2030 }, { "authors": [ "Muhammad Ghifary", "W Bastiaan Kleijn", "Mengjie Zhang", "David Balduzzi" ], "title": "Domain generalization for object recognition with multi-task autoencoders", "venue": "In ICCV,", "year": 2015 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Xiang Gu", "Jian Sun", "Zongben Xu" ], "title": "Spherical space domain adaptation with robust pseudo-label loss", "venue": "In CVPR,", "year": 2020 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": null, "year": 2016 }, { "authors": [ "Judy Hoffman", "Eric Tzeng", 
"Taesung Park", "Jun-Yan Zhu", "Phillip Isola", "Kate Saenko", "Alexei Efros", "Trevor Darrell" ], "title": "CyCADA: Cycle-consistent adversarial domain adaptation", "venue": null, "year": 2018 }, { "authors": [ "Jiayuan Huang", "Arthur Gretton", "Karsten Borgwardt", "Bernhard Schölkopf", "Alex J. Smola" ], "title": "Correcting sample selection bias by unlabeled data", "venue": "NeurIPS", "year": 2007 }, { "authors": [ "Zeyi Huang", "Haohan Wang", "Eric P Xing", "Dong Huang" ], "title": "Self-challenging improves cross-domain generalization", "venue": "In ECCV,", "year": 2020 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "In NeurIPS,", "year": 2012 }, { "authors": [ "Da Li", "Yongxin Yang", "Yi-Zhe Song", "Timothy M Hospedales" ], "title": "Deeper, broader and artier domain generalization", "venue": "In ICCV,", "year": 2017 }, { "authors": [ "Da Li", "Yongxin Yang", "Yi-Zhe Song", "Timothy Hospedales" ], "title": "Learning to generalize: Meta-learning for domain generalization", "venue": "In AAAI,", "year": 2018 }, { "authors": [ "Da Li", "Jianshu Zhang", "Yongxin Yang", "Cong Liu", "Yi-Zhe Song", "Timothy M Hospedales" ], "title": "Episodic training for domain generalization", "venue": "In ICCV,", "year": 2019 }, { "authors": [ "Haoliang Li", "Sinno Jialin Pan", "Shiqi Wang", "Alex C Kot" ], "title": "Domain generalization with adversarial feature learning", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Yiying Li", "Yongxin Yang", "Wei Zhou", "Timothy M Hospedales" ], "title": "Feature-critic networks for heterogeneous domain generalization", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Weiyang Liu", "Yandong Wen", "Zhiding Yu", "Ming Li", "Bhiksha Raj", "Le Song" ], "title": "Sphereface: Deep hypersphere embedding for face recognition", "venue": null, "year": 2017 }, { "authors": [ "Mingsheng Long", "Yue Cao", "Jianmin Wang", "Michael Jordan" ], "title": "Learning transferable features with deep adaptation networks", "venue": "In ICML,", "year": 2015 }, { "authors": [ "Laurens van der Maaten", "Geoffrey Hinton" ], "title": "Visualizing data using t-sne", "venue": "JMLR, 9(Nov):2579–2605,", "year": 2008 }, { "authors": [ "Toshihiko Matsuura", "Tatsuya Harada" ], "title": "Domain generalization using a mixture of multiple latent domains", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "Krikamol Muandet", "David Balduzzi", "Bernhard Schlkopf" ], "title": "Domain generalization via invariant feature representation", "venue": "In NeurIPS,", "year": 2013 }, { "authors": [ "Hyeonseob Nam", "HyunJae Lee", "Jongchan Park", "Wonjun Yoon", "Donggeun Yoo" ], "title": "Reducing domain gap via style-agnostic networks", "venue": "arXiv preprint arXiv:1910.11645,", "year": 2019 }, { "authors": [ "Sinno Jialin Pan", "Qiang Yang" ], "title": "A survey on transfer learning", "venue": "IEEE TKDE,", "year": 2010 }, { "authors": [ "Adam Paszke", "Sam Gross", "Francisco Massa", "Adam Lerer", "James Bradbury", "Gregory Chanan", "Trevor Killeen", "Zeming Lin", "Natalia Gimelshein", "Luca Antiga" ], "title": "Pytorch: An imperative style, high-performance deep learning library", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Vihari Piratla", "Praneeth Netrapalli", "Sunita Sarawagi" ], "title": "Efficient domain 
generalization via common-specific low-rank decomposition", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Fengchun Qiao", "Long Zhao", "Xi Peng" ], "title": "Learning to learn single domain generalization", "venue": "In CVPR,", "year": 2020 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein" ], "title": "Imagenet large scale visual recognition challenge", "venue": null, "year": 2015 }, { "authors": [ "Jongbin Ryu", "Gitaek Kwon", "Ming-Hsuan Yang", "Jongwoo Lim" ], "title": "Generalized convolutional forest networks for domain generalization and visual recognition", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Kuniaki Saito", "Donghyun Kim", "Stan Sclaroff", "Trevor Darrell", "Kate Saenko" ], "title": "Semi-supervised domain adaptation via minimax entropy", "venue": "In ICCV,", "year": 2019 }, { "authors": [ "Shiv Shankar", "Vihari Piratla", "Soumen Chakrabarti", "Siddhartha Chaudhuri", "Preethi Jyothi", "Sunita Sarawagi" ], "title": "Generalizing across domains via cross-gradient training", "venue": null, "year": 2018 }, { "authors": [ "A. Torralba", "A.A. Efros" ], "title": "Unbiased look at dataset bias", "venue": null, "year": 2011 }, { "authors": [ "Hao Wang", "Yitong Wang", "Zheng Zhou", "Xing Ji", "Dihong Gong", "Jingchao Zhou", "Zhifeng Li", "Wei Liu" ], "title": "Cosface: Large margin cosine loss for deep face recognition", "venue": "In CVPR,", "year": 2018 } ]
[ { "heading": null, "text": "Domain generalization is an approach that utilizes several source domains to train the learner to be generalizable to unseen target domain to tackle domain shift issue. It has drawn much attention in machine learning community. This paper aims to learn to generalize well to unseen target domain without relying on the knowledge of the number of source domains and domain labels. We unify adversarial training and meta-learning in a novel proposed Domain-Free Adversarial Splitting (DFAS) framework. In this framework, we model the domain generalization as a learning problem that enforces the learner to be able to generalize well for any train/val subsets splitting of the training dataset. To achieve this goal, we propose a min-max optimization problem which can be solved by an iterative adversarial training process. In each iteration, it adversarially splits the training dataset into train/val subsets to maximize domain shift between them using current learner, and then updates the learner on this splitting to be able to generalize well from train-subset to val-subset using meta-learning approach. Extensive experiments on three benchmark datasets under three different settings on the source and target domains show that our method achieves state-of-the-art results and confirm the effectiveness of our method by ablation study. We also derive a generalization error bound for theoretical understanding of our method." }, { "heading": "1 INTRODUCTION", "text": "Deep learning approach has achieved great success in image recognition (He et al., 2016; Krizhevsky et al., 2012; Simonyan & Zisserman, 2014). However, deep learning methods mostly succeed in the case that the training and test data are sampled from the same distribution (i.e., the i.i.d. assumption). However, this assumption is often violated in real-world applications since the equipments/environments that generate data are often different in training and test datasets. When there exists distribution difference (domain shift (Torralba & Efros, 2011)) between training and test datasets, the performance of trained model, i.e., learner, will significantly degrade.\nTo tackle the domain shift issue, domain adaptation approach (Pan & Yang, 2010; Daume III & Marcu, 2006; Huang et al., 2007) learns a transferable learner from source domain to target domain. Domain adaptation methods align distributions of different domains either in feature space (Long et al., 2015; Ganin et al., 2016) or in raw pixel space (Hoffman et al., 2018), which relies on unlabeled data from target domain at training time. However, in many applications, it is unrealistic to access the unlabeled target data, therefore this prevents us to use domain adaptation approach in this setting, and motivates the research on the learning problem of domain generalization.\nDomain generalization (DG) approach (Blanchard et al., 2011; Muandet et al., 2013) commonly uses several source domains to train a learner that can generalize to an unseen target domain. The underlying assumption is that there exists a latent domain invariant feature space across source domains and unseen target domain. To learn the domain invariant features, (Muandet et al., 2013; Ghifary et al., 2015; Li et al., 2018b) explicitly align distributions of different source domains in feature space. (Balaji et al., 2018; Li et al., 2019b; 2018a; Dou et al., 2019) split source domains into meta-train and meta-test to simulate domain shift and train learner in a meta-learning approach. 
(Shankar et al., 2018; Carlucci et al., 2019; Zhou et al., 2020; Ryu et al., 2020) augment images or features to train learner to enhance generalization capability.\nConventional domain generalization methods assume that the domain labels are available. But in a more realistic scenario, the domain labels may be unknown (Wang et al., 2019). To handle this domain-free setting, Carlucci et al. (2019) combines supervised learning and self-supervised learning to solve jigsaw puzzles of the training images. Matsuura & Harada (2020) divides samples into several latent domains via clustering and trains a domain invariant feature extractor via adversarial training. Huang et al. (2020) discards the dominant activated features, forcing the learner to activate remaining features that correlate with labels. Another line of works (Volpi et al., 2018; Qiao et al., 2020) tackle the single source setting that the training set comprises a single domain, and the train and test data are from different domains.\nIn this work, we focus on a general learning scenario of domain generalization as follows. First, we do not know the domain label of each data and do not assume that there are several domains in the training dataset. Second, we do not assume that the training and test data are from different domains (e.g., styles). However, the previous domain-free DG methods (Matsuura & Harada, 2020) commonly evaluate on the datasets (e.g., PACS) composed of several domains though they do not use domain labels in training.\nIn our domain-free setting, we do not assume and know the domains in the training dataset, we therefore model domain generalization as a learning problem that the learner should be able to generalize well for any splitting of train/val subsets, i.e., synthetic source/target domains, over the training dataset. This explicitly enforces that the trained learner should be generalizable for any possible domain shifts within the training dataset.\nTo achieve this goal, we propose an adversarial splitting model that is a min-max optimization problem, due to the difficulty of enumerating all splittings. In this min-max problem, we adversarially split training dataset to train/val subsets by maximizing the domain shift between them based on the given learner, and then update learner by minimizing the prediction error on val-subset using meta-learning approach given the splitting. By optimizing this min-max problem, we enforce the learner to generalize well even in the worst-case splitting. We also investigate L2-normalization of features in our domain generalization method. It is surprisingly found that L2-normalization can improve performance of learner and mitigate gradient explosion in the meta-learning process of DG. We further theorectically analyze the underlying reasons for this finding. This proposed domain generalization approach is dubbed Domain-Free Adversarial Splitting, i.e., DFAS.\nTo verify the effectiveness of our method, we conduct extensive experiments on benchmark datasets of PACS, Office-Home and CIFAR-10 under different settings with multiple/single source domains. In experiments that the training data are from several source domains, our method achieves state-ofthe-art results on both PACS and Office-Home datasets. We also find that our method significantly outperforms baselines in experiments that the training data are from a single source domain on PACS and CIFAR-10. 
We also confirm the effectiveness of our method by ablation study.\nBased on domain adaptation theory, we also derive an upper bound of the generalization error on unseen target domain. We analyze that the terms in this upper bound are implicitly minimized by our method. This theoretical analysis partially explains the success of our method." }, { "heading": "2 RELATED WORKS", "text": "We summarize and compare with related domain generalization (DG) methods in two perspectives, i.e., DG with domain labels and DG without domain labels.\nDG with domain labels. When the domain labels are available, there are three categories of methods for DG. First, (Muandet et al., 2013; Ghifary et al., 2015; Li et al., 2018b; Piratla et al., 2020) learn domain invariant features by aligning feature distributions or by common/specific feature decomposition. Second, (Li et al., 2019a; Balaji et al., 2018; Li et al., 2019b; 2018a; Dou et al., 2019; Du et al., 2020a;b) are based on meta-learning approach that splits given source domains into meta-train and meta-test domains and trains learner in an episodic training paradigm. Third, (Shankar et al., 2018; Carlucci et al., 2019; Zhou et al., 2020; Wang et al., 2020) augment fake domain data to train learner for enhancing generalization capability of learner. Our method may mostly relate to the above second category of methods. But differently, we consider the DG problem in domain-free setting and adversarially split training dataset to synthesize domain shift in a principled min-max optimization method, instead of using leave-one-domain-out splitting in these methods.\nDG without domain labels. When the domain label is unavailable, to enhance generalization ability of learner, Wang et al. (2019) extracts robust feature representation by projecting out superficial patterns like color and texture. Carlucci et al. (2019) proposes to solve jigsaw puzzles of the training images. Matsuura & Harada (2020) divides samples into several latent domains via clustering and learns domain invariant features via adversarial training of feature extractor and domain discriminator. Huang et al. (2020) discards the dominant activated features, forcing the learner to activate remaining features that correlate with labels. Volpi et al. (2018) and Qiao et al. (2020) propose adversarial data augmentation to tackle the setting that the training set comprises a single domain. In methodology, these methods either explicitly force the learner to extract robust features (Wang et al., 2019; Matsuura & Harada, 2020; Huang et al., 2020) or augment new data to increase training data (Carlucci et al., 2019; Qiao et al., 2020; Volpi et al., 2018). While our method is a novel meta-learning approach for DG by introducing adversarial splitting of training dataset during training, without relying on data/domain augmentation." }, { "heading": "3 METHOD", "text": "In our setting, since we do not assume and know the domains in the training dataset, the training data could be independently sampled from several underlying source domains or just from a single source domain. We denote S = {(xi, yi)}Ni=1 as the training dataset. Our goal is to train the learner with S that can generalize well on an unseen target domain.\nIn the following sections, we introduce details of our proposed model in Sect. 3.1, followed by its optimization method in Sect. 3.2. We also investigate L2-normalization for domain generalization in Sect. 3.3. Theoretical analysis for our method is presented in Sect. 4. 
Experimental results are reported in Sect. 5. Sect. 6 concludes this paper." }, { "heading": "3.1 DOMAIN-FREE ADVERSARIAL SPLITTING MODEL", "text": "As mentioned in Sect. 1, we model DG as a learning problem that enforces the learner to generalize well for any train/val subset splitting of the training dataset. The learner is trained using a meta-learning approach (Finn et al., 2017). To formulate our idea mathematically, we first introduce some notations. We denote $f$ as a function/learner ($f$ could be a deep neural network, e.g., ResNet (He et al., 2016)) that outputs the classification score of the input image, $l$ as the loss such as cross-entropy, and $S_t$ and $S_v$ as the train-subset and val-subset respectively, such that $S = S_t \cup S_v$ and $S_t \cap S_v = \emptyset$. The formulated optimization problem for domain generalization is

$$\min_{w} \frac{1}{|\Gamma_\xi|} \sum_{S_v \in \Gamma_\xi} L(\theta(w); S_v) + R(w) \quad \text{s.t.} \quad \theta(w) = \arg\min_{\theta} L(\theta; S_t, w), \; S_t = S - S_v. \tag{1}$$

In Eq. (1), $\Gamma_\xi = \{S_v : S_v \subset S, |S_v| = \xi\}$ is the set of all possible val-subsets of $S$ with length $\xi$, $S_t = S - S_v$ is the train-subset paired with each $S_v$, $L(\theta(w); S_v) = \frac{1}{|S_v|} \sum_{(x,y) \in S_v} l(f_{\theta(w)}(x), y)$ is the loss on $S_v$, where $\theta(w)$ denotes the parameters of $f$, $L(\theta; S_t, w)$ is $L(\theta; S_t)$ with $\theta$ initialized by $w$, and $R(w)$ is a regularization term. In the optimization model of Eq. (1), the parameter $\theta(w)$ of the learner trained on $S_t$ is treated as a function of the initial parameter $w$. To force the learner trained on $S_t$ to generalize well on $S_v$, we directly minimize the loss $L(\theta(w); S_v)$, dubbed the generalization loss, on the val-subset $S_v$ w.r.t. the parameter $\theta(w)$ trained on $S_t$. Solving Eq. (1) forces the learner to generalize well from any train-subset to the corresponding val-subset.

Since $|\Gamma_\xi|$ may be extremely large, it is infeasible to enumerate all possible train/val splittings. Thus, we propose the following adversarial splitting model instead,

$$\min_{w} \max_{S_v \in \Gamma_\xi} L(\theta(w); S_v) + R(w) \quad \text{s.t.} \quad \theta(w) = \arg\min_{\theta} L(\theta; S_t, w), \; S_t = S - S_v. \tag{2}$$

In the min-max problem of Eq. (2), the train/val ($S_t$/$S_v$) splitting is optimized to maximize the generalization loss, increasing the domain shift between the train and val subsets by finding the hardest splitting for the learner, while $w$ is optimized by minimizing the generalization loss of the learner over this splitting. Solving the adversarial splitting optimization model in Eq. (2) enforces the learner to be generalizable even for the worst-case splitting. We therefore expect the trained learner to be robust to the domain shifts within the training dataset. For the regularization term $R(w)$, we set it to be the training loss on $S_t$ (i.e., $R(w) = L(w; S_t)$), which additionally constrains that the learner with parameter $w$ should be effective on $S_t$ (Li et al., 2018a). The effect of the hyper-parameter $\xi$ will be discussed in Appendix A.2.

In conventional adversarial machine learning, adversarial training is imposed on adversarial samples and the learner to increase the robustness of the learner to adversarial corruption (Goodfellow et al., 2015). In our optimization model of Eq. (2), by contrast, adversarial training is conducted on the data splitting and the learner to force the learner to be robust to the domain shift between train/val subsets. Our model bridges adversarial training and meta-learning. It is a general learning framework for domain generalization and is a complement to adversarial machine learning." }, { "heading": "3.2 OPTIMIZATION", "text": "This section focuses on the optimization of Eq. (2). Since Eq. (2) is a min-max optimization problem, we alternately update $S_v$ and $w$ by fixing the other as known. We also need to consider the inner loop for the optimization of $\theta(w)$ in the bi-level optimization problem of Eq. (2). We next discuss these updating steps in detail; a code sketch of the full update round is given after the learner decomposition in Sect. 3.3. The convergence and computational cost of this algorithm will also be discussed in Appendix A.3 and A.4 respectively.

Inner loop for optimization of $\theta(w)$. We adopt finite steps of gradient descent to approximate the minimizer $\theta(w)$ of the inner objective $L(\theta; S_t)$ with initial value $w$. This approximation technique was introduced in the machine learning community several years ago (Sun & Tappen, 2011; Finn et al., 2017; Fan et al., 2018). For convenience of computation, following (Li et al., 2018a; Dou et al., 2019), we only conduct gradient descent for one step as

$$\theta(w) = w - \alpha \nabla_\theta L(\theta; S_t)|_{\theta=w}, \tag{3}$$

where $\alpha$ is the step size of the inner optimization and its effect will be discussed in Appendix A.2.

Optimizing $w$ with fixed $S_v$. For convenience, we denote $g_w^t = \nabla_\theta L(\theta; S_t)|_{\theta=w}$. Fixing $S_v$ ($S_t$ is then fixed), $w$ can also be updated by gradient descent, i.e.,

$$w = w - \eta \nabla_w \left( L(w - \alpha g_w^t; S_v) + R(w) \right), \tag{4}$$

where $\eta$ is the step size of the outer optimization.

Finding the hardest splitting $S_v$ with fixed $w$. Fixing $w$, to find $S_v \in \Gamma_\xi$ that maximizes $L(w - \alpha g_w^t; S_v)$, we take the first-order Taylor expansion $L(w - \alpha g_w^t; S_v) \approx L(w; S_v) - \alpha \langle g_w^t, g_w^v \rangle$, where $g_w^v = \nabla_\theta L(\theta; S_v)|_{\theta=w}$ and $\langle \cdot, \cdot \rangle$ denotes the inner product. From the definitions of $L$, $g_w^t$ and $g_w^v$, the optimization problem $\max_{S_v \in \Gamma_\xi} \{L(w; S_v) - \alpha \langle g_w^t, g_w^v \rangle\}$ can be written as $\max_{S_v \in \Gamma_\xi} \frac{1}{|S_v|} \sum_{(x,y) \in S_v} \{l(f_w(x), y) - \alpha \langle \nabla_w l(f_w(x), y), g_w^t \rangle\}$. This problem is equivalent to the following splitting formulation:

$$\max_{S_v, A} \sum_{(x,y) \in S_v} l(f_w(x), y) - \alpha \langle \nabla_w l(f_w(x), y), A \rangle \quad \text{s.t.} \quad A = g_w^t, \; S_v \in \Gamma_\xi, \tag{5}$$

where we introduce an auxiliary variable $A$. Eq. (5) can be solved by alternately updating $S_v$ and $A$. Given $A$, we compute and rank the values of $l(f_w(x), y) - \alpha \langle \nabla_w l(f_w(x), y), A \rangle$ for all $(x,y) \in S$ and select the largest $\xi$ samples to constitute $S_v$. Given $S_v$ ($S_t$ is then given), we update $A$ by $A = g_w^t = \frac{1}{|S_t|} \sum_{(x,y) \in S_t} \nabla_w l(f_w(x), y)$. We discuss the details and convergence of this alternating iteration in Appendix C. Since computing the gradient w.r.t. all parameters is time and memory consuming, we only compute the gradient w.r.t. the parameters of the final layer of the learner $f$.

3.3 L2-NORMALIZATION FOR EXTRACTED FEATURE

L2-normalization has been used in face recognition (Liu et al., 2017; Wang et al., 2018) and domain adaptation (Saito et al., 2019; Gu et al., 2020), but is rarely investigated in domain generalization. We investigate L2-normalization in domain generalization in this paper. It is surprisingly found in experiments that L2-normalization not only improves the performance of the learner (see Sect. 5.4), but also mitigates the gradient explosion (see Sect. 5.5) that occurs frequently during the training of meta-learning for DG (Finn et al., 2017; Dou et al., 2019). We next discuss the details of L2-normalization in our method and analyze why L2-normalization mitigates gradient explosion.

Feature L2-normalization. The L2-normalization is used as a component of our learner $f$. Specifically, we decompose $f$ into a feature extractor $f^e$ (e.g., the convolutional layers of ResNet), a transform $f^n$ representing L2-normalization, and a classifier $f^c$, i.e., $f = f^c \circ f^n \circ f^e$.
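To make the above update steps concrete, below is a minimal PyTorch-style sketch of this decomposition together with one alternating round of Eqs. (3)-(5), as referenced in Sect. 3.2. It is an illustration we add under simplifying assumptions, not the authors' released code: the helper names (cos_logits, split_loss, meta_step, hardest_split) are ours; features are treated as precomputed inputs to the final layer, mirroring the final-layer restriction used for the splitting step in Sect. 3.2 (the actual training also updates the extractor); and the defaults s = 7.5, m = 0.2 are the PACS values from Appendix B for the margin softmax of Eq. (6) below.

import torch
import torch.nn.functional as F

def cos_logits(feat, W, y=None, s=7.5, m=0.2):
    # f^c o f^n: L2-normalize features and class weights; cosine scores,
    # with margin m subtracted on the ground-truth class and radius s (cf. Eq. (6)).
    z = F.normalize(feat, dim=1)
    w = F.normalize(W, dim=1)
    logits = z @ w.t()
    if y is not None:
        logits = logits - m * F.one_hot(y, logits.size(1)).float()
    return s * logits

def split_loss(W, feat, y):
    # Cross-entropy on the margin softmax, i.e., L(.; S) for a batch.
    return F.cross_entropy(cos_logits(feat, W, y), y)

def meta_step(W, feat_t, y_t, feat_v, y_v, alpha=1e-5, eta=1e-3):
    # Eq. (3): theta(w) = w - alpha * g_w^t (one-step inner update), then
    # Eq. (4): w <- w - eta * grad_w [ L(theta(w); S_v) + R(w) ], with R(w) = L(w; S_t).
    W = W.detach().requires_grad_(True)
    g_t = torch.autograd.grad(split_loss(W, feat_t, y_t), W, create_graph=True)[0]
    theta = W - alpha * g_t
    outer = split_loss(theta, feat_v, y_v) + split_loss(W, feat_t, y_t)
    return (W - eta * torch.autograd.grad(outer, W)[0]).detach()

def hardest_split(W, feat, y, xi, alpha=1e-5, iters=5):
    # Eq. (5): alternate S_v and A; per-sample score l - alpha * <grad l, A>.
    n = feat.size(0)
    grads, losses = [], torch.zeros(n, device=feat.device)
    for i in range(n):
        Wi = W.detach().requires_grad_(True)
        li = split_loss(Wi, feat[i:i + 1], y[i:i + 1])
        grads.append(torch.autograd.grad(li, Wi)[0])
        losses[i] = li.detach()
    grads = torch.stack(grads)                       # (n, K, d) per-sample gradients
    A = grads[torch.randint(n, (1,))].squeeze(0)     # random initialization of A
    for _ in range(iters):
        scores = losses - alpha * (grads * A).sum(dim=(1, 2))
        val_idx = scores.topk(xi).indices            # S_v: the xi largest scores
        train_mask = torch.ones(n, dtype=torch.bool, device=feat.device)
        train_mask[val_idx] = False
        A = grads[train_mask].mean(dim=0)            # A = g_w^t over S_t
    return val_idx

In the full procedure, hardest_split is only refreshed periodically (once per epoch in Appendix A.4, or every 200 steps in Appendix B), while meta_step runs at every iteration.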
The feature of an input image $x$ extracted by $f^e$ is fed to $f^n$ to output a unit vector $z$, i.e., $z = f^n(f^e(x)) = \frac{f^e(x)}{\|f^e(x)\|}$. The classifier $f^c$ consists of unit weight vectors $W = [w_1, w_2, \cdots, w_K]$, where $K$ is the number of classes and $\|w_k\| = 1, \forall k$. $f^c$ takes $z$ as input and outputs the classification score vector $\sigma_{m,s}(W^T z)$. $\sigma_{m,s}(\cdot)$ is the marginal softmax function defined by

$$[\sigma_{m,s}(W^T z)]_k = \frac{\exp(s(w_k^T z - m \mathbb{I}_{\{k=y\}}))}{\sum_{k'=1}^{K} \exp(s(w_{k'}^T z - m \mathbb{I}_{\{k'=y\}}))}, \quad k = 1, 2, \cdots, K, \tag{6}$$

where $y$ is the label of $x$, $[\cdot]_k$ indicates the $k$-th element, $\mathbb{I}_{\{a\}}$ is the indicator function that returns 1 if $a$ is true and 0 otherwise, and $m$ and $s$ are hyper-parameters indicating the margin and radius respectively.

Analysis of mitigating gradient explosion. We find that L2-normalization mitigates gradient explosion in the training of meta-learning for domain generalization. For the sake of simplicity, we analyze the gradient norm of the loss w.r.t. the parameters of $f^c$ in the meta-learning process of domain generalization, with $f^e$ as a fixed function. Without loss of generality, we consider the case of $K = 2$ (i.e., binary classification), $s = 1$ and $m = 0$. In this case, we have the following proposition.

Proposition 1. Under the above setting, if the input feature of $f^c$ is L2-normalized, the gradient norm of the loss w.r.t. the parameters of $f^c$ in the meta-learning process of DG is bounded.

Sketch of proof. Given a feature $z$, the loss of binary classification is $L(w; z) = -y \log(\sigma(w^T z)) - (1 - y) \log(1 - \sigma(w^T z))$, where $\sigma$ is the sigmoid function. Let $w' = w - \alpha \nabla_w L(w; z)$, then $\nabla_w L(w'; z) = (I - \alpha H) \nabla_{w'} L(w'; z)$, where $H$ is the Hessian matrix. The gradient norm satisfies $\|\nabla_w L(w'; z)\| \le \|I - \alpha H\| \, \|\nabla_{w'} L(w'; z)\| \le (1 + |\alpha| \|H\|) \, \|\nabla_{w'} L(w'; z)\|$. Since $\nabla_{w'} L(w'; z) = (p - y) z$ and $H = p(1-p) z z^T$ where $p = \sigma(w^T z)$, we have $\|H\| = \sup_{u:\|u\|=1} \|Hu\| \le \sup_{u:\|u\|=1} \|z z^T u\| \le \|z\|^2$ and $\|\nabla_{w'} L(w'; z)\| \le \|z\|$. If $\|z\| = 1$, we have $\|\nabla_w L(w'; z)\| \le 1 + |\alpha|$.

According to Proposition 1, L2-normalization can mitigate gradient explosion under the above setting. The analysis of the gradient norm of the loss w.r.t. the parameters of both $f^c$ and $f^e$ in the meta-learning process is much more complex, and is left for future work." }, { "heading": "4 THEORETICAL ANALYSIS", "text": "This section presents a theoretical understanding of our method. We first derive a generalization error bound on the target domain in Theorem 1 for the general setting of meta-learning for DG. Then, based on Theorem 1, we theoretically explain the reason for the success of our method.

Without loss of generality, we consider a binary classification problem. We denote $\mathcal{H}$ as the set of all possible $f$, i.e., $\mathcal{H} = \{f_w : w \in \mathbb{R}^M\}$, where $M$ is the number of parameters. For any $S_v \in \Gamma_\xi$ and $S_t = S - S_v$, we let $\mathcal{H}_{S_t} = \{f_{\theta(w)} : \theta(w) = \arg\min_\theta L(\theta; S_t, w), w \in \mathbb{R}^M\}$. The meta-learning approach for DG is to find a function in $\mathcal{H}_{S_t}$ that minimizes the classification loss on $S_v$. Note that, although the training samples in $S$ may be sampled from several distributions, they can still be seen as being i.i.d. sampled from a mixture of these distributions. We next respectively denote $P = \sum_{d=1}^{D} \beta_d P_d$ as the mixture distribution with $\beta_d$ representing the sampling ratio of the $d$-th source domain, $\epsilon_Q^{\Psi}(f) = \mathbb{E}_{(x,y) \sim Q}[\mathbb{I}_{\{\Psi(f(x)) \neq y\}}]$ as the generalization error on the distribution $Q$ of the unseen target domain, $\hat{\epsilon}_{S_v}^{\Psi}(f) = \frac{1}{|S_v|} \sum_{(x,y) \in S_v} \mathbb{I}_{\{\Psi(f(x)) \neq y\}}$ as the empirical error on $S_v$, $VC(\mathcal{H})$ as the VC-dimension of $\mathcal{H}$, and $\Psi(\cdot)$ as the prediction rule such as the Bayes Optimal Predictor.
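Before stating the main bound, we include a quick numerical illustration of Proposition 1 from Sect. 3.3 (again a sketch we add, not an experiment from the paper; meta_grad_norm is a hypothetical helper following the proposition's setting of binary classification with s = 1 and m = 0):

import torch
import torch.nn.functional as F

def meta_grad_norm(z, alpha=1e-4):
    # ||grad_w L(w'; z)|| with w' = w - alpha * grad_w L(w; z), logistic loss, label y = 1.
    w = torch.zeros(z.numel(), requires_grad=True)
    g = torch.autograd.grad(
        F.binary_cross_entropy_with_logits(w @ z, torch.tensor(1.0)),
        w, create_graph=True)[0]
    w_prime = w - alpha * g
    loss = F.binary_cross_entropy_with_logits(w_prime @ z, torch.tensor(1.0))
    return torch.autograd.grad(loss, w)[0].norm().item()

z = torch.randn(64)
print(meta_grad_norm(z / z.norm()))          # stays below the bound 1 + |alpha| (Proposition 1)
print(meta_grad_norm(100.0 * z / z.norm()))  # a feature with ||z|| = 100 gives a much larger norm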
Based on the domain adaptation theory (Ben-David et al., 2007; 2010) and inspired by the analysis in (Saito et al., 2019), we have the following theorem.

Theorem 1. Let $\gamma$ be a constant and assume $\mathbb{E}_Q[\mathbb{I}_{\{l(f(x),y)>\gamma\}}] \ge \mathbb{E}_P[\mathbb{I}_{\{l(f(x),y)>\gamma\}}]$. Then, given any $S_v \in \Gamma_\xi$ and $S_t = S - S_v$, and for any $\delta \in (0, 1)$, with probability at least $1 - 2\delta$, we have $\forall f \in \mathcal{H}_{S_t}$,

$$\epsilon_Q^{\Psi_l}(f) \le \hat{\epsilon}_{S_v}^{\Psi_l}(f) + B(S_v) + 2\sqrt{\frac{8}{\xi}\left(C_2 + \frac{4}{\delta}\right)} + C_3, \tag{7}$$

where

$$B(S_v) = C_1 - \inf_{f' \in \mathcal{H}_{S_t}} \frac{1}{|S_v|} \sum_{(x,y) \in S_v} \mathbb{I}_{\{l(f'(x),y)>\gamma\}}, \tag{8}$$

$C_1 = \sup_{S'_v \in \Gamma_\xi} \sup_{f' \in \mathcal{H}_{S-S'_v}} \mathbb{E}_Q[\mathbb{I}_{\{l(f'(x),y)>\gamma\}}]$, $C_2 = \sup_{S'_v \in \Gamma_\xi} VC(\mathcal{H}^{\Psi_l}_{S-S'_v}) \log \frac{2e\xi}{VC(\mathcal{H}^{\Psi_l}_{S-S'_v})}$, $C_3 \ge \sup_{S'_v \in \Gamma_\xi} \inf_{f' \in \mathcal{H}_{S-S'_v}} \{\epsilon_P^{\Psi_l}(f') + \epsilon_Q^{\Psi_l}(f')\}$, $\mathcal{H}^{\Psi_l}_{S-S_t} = \{\Psi_l \circ f : f \in \mathcal{H}_{S-S_t}\}$, and $\Psi_l$ is a loss-related indicator defined by

$$\Psi_l(f(x)) = \begin{cases} 1 & \text{if } l(f(x), y) > \gamma, \\ 0 & \text{otherwise}. \end{cases} \tag{9}$$

Proof is given in Appendix D. The assumption of $\mathbb{E}_Q[\mathbb{I}_{\{l(f(x),y)>\gamma\}}] \ge \mathbb{E}_P[\mathbb{I}_{\{l(f(x),y)>\gamma\}}]$ in Theorem 1 is realistic because the data of $Q$ is not accessed at training time, and the learner trained on data of $P$ should have smaller classification loss on $P$ than on $Q$. In Theorem 1, $C_1, C_2, C_3$ are constants w.r.t. $f$. In Eq. (7), the generalization error $\epsilon_Q^{\Psi_l}(f)$ on $Q$ can be bounded by the empirical error $\hat{\epsilon}_{S_v}^{\Psi_l}(f)$ on $S_v$, the term $B(S_v)$ that measures the discrepancy between $P$ and $Q$, and the last two constant terms in Eq. (7).

To obtain lower $\epsilon_Q^{\Psi_l}(f)$, we need to minimize $\hat{\epsilon}_{S_v}^{\Psi_l}(f)$ and $B(S_v)$. Minimizing $B(S_v)$ w.r.t. $S_v$ is equivalent to

$$\max_{S_v \in \Gamma_\xi} \inf_{f \in \mathcal{H}_{S_t}} \frac{1}{|S_v|} \sum_{(x,y) \in S_v} \mathbb{I}_{\{l(f(x),y)>\gamma\}}. \tag{10}$$

Intuitively, Eq. (10) means finding an $S_v \in \Gamma_\xi$ such that the infimum ratio of examples in $S_v$ having loss greater than $\gamma$ is maximized. This min-max problem of Eq. (10) for computing the error bound bears a similar idea to our min-max problem. Our adversarial splitting model in Eq. (2) can implicitly realize the goal of Eq. (10) and meanwhile ensure a lower $\hat{\epsilon}_{S_v}^{\Psi_l}(f)$ for any $S_v$.

The maximization in Eq. (10) corresponds to our adversarial splitting that finds the hardest val-subset $S_v$ for the learner in Eq. (2). The infimum in Eq. (10) corresponds to the minimization of the loss in Eq. (2) on $S_v$ w.r.t. the learner parameterized by $\theta(w)$. Instead of using the indicator function $\mathbb{I}$ in Eq. (10), in our model of Eq. (2), we choose a differentiable classification loss for easier optimization.

Table 1: Results of MSDS experiment on PACS based on ResNet18 and ResNet50.

Backbone  Target  D-SAM  JiGen  MASF  MMLD  MetaReg  RSC   Baseline   DFAS (ours)
ResNet18  A       77.3   79.4   80.3  81.3  83.7     83.4  80.2±0.4   84.2±0.1
ResNet18  C       72.4   75.3   77.2  77.2  77.2     80.3  75.5±0.5   79.5±0.3
ResNet18  P       95.3   96.0   95.0  96.1  95.5     96.0  95.9±0.1   95.8±0.1
ResNet18  S       77.8   71.4   71.7  72.3  70.3     80.9  70.1±0.9   82.1±0.4
ResNet18  Avg     80.7   80.5   81.0  81.8  81.7     85.1  80.4       85.4
ResNet50  A       -      -      82.9  -     87.2     87.9  86.1±0.2   89.1±0.1
ResNet50  C       -      -      80.5  -     79.2     82.2  79.2±0.4   84.6±0.2
ResNet50  P       -      -      95.0  -     97.6     97.9  97.6±0.1   96.8±0.2
ResNet50  S       -      -      72.3  -     70.3     83.5  70.3±0.7   85.6±0.3
ResNet50  Avg     -      -      82.7  -     83.6     87.8  83.3       89.0" }, { "heading": "5 EXPERIMENTS", "text": "We verify the effectiveness of our method in three types of experimental settings: Multi Source with Domain Shift (MSDS), where the training data are from several source domains and there exists domain shift between training and test data; Single Source with Domain Shift (SSDS), where the training data are from a single source domain and there exists domain shift between training and test data; and Same Source and Target Domain (SSTD), where the training and test data are from the same domain. The source codes will be released online.

We conduct experiments on three benchmark datasets.
PACS (Li et al., 2017) contains four domains, art painting (A), cartoon (C), photo (P) and sketch (S), sharing seven classes. Office-Home (Venkateswara et al., 2017), a dataset widely used in domain adaptation and recently utilized in domain generalization, consists of four domains, Art (Ar), Clipart (Cl), Product (Pr) and Real World (Rw), sharing 65 classes. Both of these two datasets are utilized to conduct experiments in the settings of MSDS and SSDS. CIFAR-10 (Krizhevsky et al., 2009) is taken for the experimental setting of SSTD." }, { "heading": "5.1 TYPE I: MULTI SOURCE WITH DOMAIN SHIFT (MSDS)", "text": "In the setting of MSDS, following (Carlucci et al., 2019), we use leave-one-domain-out cross-validation, i.e., training on three domains and testing on the remaining unseen domain, on PACS and Office-Home. Note that the domain labels are not used during training. We adopt ResNet18 and ResNet50 (He et al., 2016) pre-trained on ImageNet (Russakovsky et al., 2015). For each of them, the last fully-connected layer is replaced by a bottleneck layer, and the resulting network is taken as the feature extractor $f^e$. Full implementation details are reported in Appendix B.

We compare our method with several state-of-the-art methods, including D-SAM (D'Innocente & Caputo, 2018), JiGen (Carlucci et al., 2019), MASF (Dou et al., 2019), MMLD (Matsuura & Harada, 2020), MetaReg (Balaji et al., 2018) and RSC (Huang et al., 2020). The results on PACS and Office-Home are reported in Table 1 and Table 2 respectively. Our DFAS achieves state-of-the-art results based on both ResNet18 and ResNet50 on both PACS (85.4%, 89.0%) and Office-Home (64.3%, 70.3%), outperforming RSC by 0.3% and 1.2% on PACS based on ResNet18 and ResNet50 respectively, and by 1.2% on Office-Home based on ResNet18 (these methods do not report experiments on Office-Home using ResNet50). Compared with Baseline, which directly aggregates the three source domains to train the learner with a standard fully-connected layer as the classifier $f^c$, our DFAS improves performance by 5.0% and 5.7% on PACS based on ResNet18 and ResNet50 respectively, and by 2.1% and 2.1% on Office-Home based on ResNet18 and ResNet50 respectively. In Table 1, on PACS, DFAS significantly outperforms Baseline in almost all tasks except when P is taken as the target domain. Notably, in the task where domain S, whose style is extremely different from the other three domains, is the target domain, our DFAS boosts the accuracy of Baseline by 12.0% and 15.3% based on ResNet18 and ResNet50 respectively. This indicates that our method can generalize well when the domain shift is large. Office-Home is challenging for DG since its number of classes is larger than in the other datasets. As shown in Table 2, our DFAS outperforms Baseline stably in almost all tasks on Office-Home. These performance improvements demonstrate the effectiveness of our method in the case that the training data are from multiple source domains and the unseen target domain is different from the source domains." }, { "heading": "5.2 TYPE II: SINGLE SOURCE WITH DOMAIN SHIFT (SSDS)", "text": "We conduct this type of experiment on PACS based on ResNet18. In this experiment, we train the learner on one domain and test on each of the other three domains, resulting in 12 tasks in total (enumerated in the sketch below). Implementation details are shown in Appendix B.
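For concreteness, the following snippet (an illustration we add; the variable names are ours) enumerates the evaluation tasks of the MSDS and SSDS settings on PACS:

domains = ["A", "C", "P", "S"]  # PACS: art painting, cartoon, photo, sketch

# MSDS: train on three domains, test on the held-out one (4 tasks).
msds_tasks = [([d for d in domains if d != tgt], tgt) for tgt in domains]

# SSDS: train on one domain, test on each of the rest (12 tasks).
ssds_tasks = [(src, tgt) for src in domains for tgt in domains if src != tgt]

print(len(msds_tasks), len(ssds_tasks))  # 4 12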
Our method is compared with related methods, including Baseline, which directly trains the learner with a standard fully-connected layer as the classifier $f^c$ on the source domain, JiGen (Carlucci et al., 2019) and SagNet (Nam et al., 2019). Results are reported in Table 3. Our method of DFAS outperforms Baseline and SagNet by 4.2% and 4.7% respectively. We observe that our method outperforms Baseline in 10 of the 12 tasks. The performance boosts are large in the tasks where domain S is set as the source domain. These performance improvements demonstrate the effectiveness of our method in the case that the training data are from a single source domain and the unseen target domain is different from the source domain." }, { "heading": "5.3 TYPE III: SAME SOURCE AND TARGET DOMAIN (SSTD)", "text": "We also apply our DG method to the common recognition task where the training and test data are from the same domain, i.e., SSTD, on the CIFAR-10 dataset. To investigate the effect of the training size, we sample different sizes of training data from the provided training set (i.e., the source domain). Implementation details are in Appendix B. As shown in Table 4, our DFAS outperforms Baseline, JiGen and MMLD by 2.4%, 3.6% and 4.3% respectively on average. The results of JiGen and MMLD are obtained by running their codes on CIFAR-10. We observe that DFAS outperforms Baseline and the compared methods for all different numbers of training data. In general, the performance boost is larger when the number of training data is smaller. This may be because the learner is more likely to overfit when the training size is smaller, and our DFAS is designed to extract better generalizable features." }, { "heading": "5.4 ABLATION STUDY", "text": "To further verify the effectiveness of each component of our method, we conduct additional ablation experiments on the PACS dataset based on ResNet18 in both the MSDS and SSDS settings. The results are reported in Table 5 and Table 6.

In Tables 5 and 6, L2-norm denotes the feature L2-normalization defined in Sect. 3.3. Rand-split means the random splitting strategy, in which we randomly split the train/val subsets at each step of updating the parameters of the learner. Adv-split denotes the adversarial splitting model, in which we update the worst-case splitting by solving the maximization problem in Eq. (5) once per epoch in the training process.

Effectiveness of L2-normalization. In Table 5, L2-norm (82.7%) outperforms Baseline (80.4%) by 2.3% and L2-norm + Adv-split (i.e., DFAS) (85.4%) outperforms Adv-split (83.6%) by 1.8% in the MSDS setting. In Table 6, L2-norm (63.1%) outperforms Baseline (62.4%) by 0.7% and L2-norm + Adv-split (i.e., DFAS) (66.6%) outperforms Adv-split (65.1%) by 1.5% in the SSDS setting. These performance improvements demonstrate that the feature L2-normalization is useful in domain generalization.

Effectiveness of the adversarial splitting model. In Table 5, Adv-split (83.6%) outperforms Baseline (80.4%) by 3.2% and L2-norm + Adv-split (i.e., DFAS) (85.4%) outperforms L2-norm (82.7%) by 2.7% in the MSDS setting. In Table 6, Adv-split (65.1%) outperforms Baseline (62.4%) by 2.7% and L2-norm + Adv-split (i.e., DFAS) (66.6%) outperforms L2-norm (63.1%) by 3.5% in the SSDS setting. These results indicate that our proposed adversarial splitting model is effective.

Effectiveness of adversarial splitting over random splitting. In Table 5, L2-norm + Adv-split (85.4%) outperforms L2-norm + Rand-split (84.3%) by 1.1% and Adv-split (83.6%) outperforms Rand-split (82.8%) by 0.8% in the MSDS setting.
In Table 6, L2-norm + Adv-split (66.6%) outperforms L2-norm + Rand-split (65.4%) by 1.2% and Adv-split (65.1%) outperforms Rand-split (64.1%) by 1.0% in the SSDS setting. These results demonstrate that the adversarial splitting model outperforms the random splitting strategy in different experimental settings.

Due to the space limit, we add more ablation experiments in Appendix A.1 to further compare different splittings, including adversarial splitting, domain-label-based splitting and random splitting.

5.5 MITIGATING GRADIENT EXPLOSION BY L2-NORMALIZATION

To show that L2-normalization can mitigate gradient explosion, we run the same experiments independently 50 times, with and without L2-normalization respectively. We then count the numbers of occurrences of gradient explosion, which are reported in Table 7. From Table 7, we can observe that L2-normalization mitigates gradient explosion." }, { "heading": "6 CONCLUSION", "text": "In this paper, we unify adversarial training and meta-learning in a novel Domain-Free Adversarial Splitting (DFAS) framework to tackle the general domain generalization problem. Extensive experiments show the effectiveness of the proposed method. We are interested in deeper theoretical understanding and more applications of our method in future work." }, { "heading": "A ANALYSIS", "text": "" }, { "heading": "A.1 COMPARISON OF ADVERSARIAL SPLITTING AND DOMAIN-LABEL-BASED SPLITTING", "text": "In this section, we compare our adversarial splitting with the domain-label-based splitting that is commonly used in meta-learning-based DG methods. Due to variations of style, poses, sub-classes, etc., the internal inconsistency within a dataset is complicated. Domain labels partially capture this inconsistency, but cannot cover all of its possible forms. Our adversarial splitting method does not rely on domain labels. It iteratively finds the hardest train/val splitting for the learner to maximize the inconsistency, and trains the learner to generalize well for this hardest splitting, in an adversarial training way. This strategy investigates the possible inconsistency within the training dataset more flexibly, adaptively to the learner, and can potentially enhance the generalization ability of the learner.

We first empirically show in Table 8 that the domain-label-based splitting (denoted as Label-split) is not as hard as our adversarial splitting (Adv-split) for the learner. In Table 8, we report the values of the objective function in Eq. (5) for Adv-split and Label-split, fixing the learner with the network parameters $w_i$ at different epochs (the 1st, 2nd, 5th and 10th) of the training process. A larger value in the table indicates that the splitting is harder for the learner (i.e., the network). It can be observed that the domain-label-based splitting (Label-split) is not as hard as Adv-split for the learner.

We also conduct experiments on PACS in the MSDS setting to fairly compare different splittings, including adversarial splitting (Adv-split), domain-label-based splitting (Label-split) and random splitting (Rand-split). The results are reported in Table 9. Table 9 shows that adversarial splitting outperforms random splitting and domain-label-based splitting when the training data are from multiple domains.

When the training data are from only a single domain, our adversarial splitting also performs well (as in Table 3). However, domain-label-based splitting cannot be used in this setting, since there is no domain label available. A sketch of the hardness score used for the comparison in Table 8 is given below.
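As a concrete illustration of this hardness comparison (a sketch we add; split_objective is a hypothetical helper that reuses the per-sample losses and final-layer gradients of Eq. (5)), any candidate splitting can be scored as follows:

import torch

def split_objective(losses, grads, val_idx, alpha=1e-5):
    # Eq. (5) objective for a given val-subset S_v: sum over S_v of
    # l(f_w(x), y) - alpha * <grad_w l(f_w(x), y), A>, with A = g_w^t on S_t.
    n = losses.numel()
    train_mask = torch.ones(n, dtype=torch.bool)
    train_mask[val_idx] = False
    A = grads[train_mask].mean(dim=0)                 # A = g_w^t over S_t
    inner = (grads[val_idx] * A).sum(dim=tuple(range(1, grads.dim())))
    return (losses[val_idx] - alpha * inner).sum()

# Larger values indicate a splitting that is harder for the fixed learner;
# Adv-split consistently scores higher than Label-split in Table 8.
"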
}, { "heading": "A.2 EFFECT OF HYPER-PARAMETERS", "text": "Effect of hyper-parameter ξ. In Fig. 1(a), we show the performance of our method when varying the hyper-parameter ξ, i.e., length of the val-subset Sv in adversarial splitting of training dataset. The best result is obtained when ξ = |S|2 , and the results are similar when ξ |S| ranges from 0.3 to 0.7.\nEffect of hyper-parameter α. We evaluate the effect of α in MSDS setting on PACS dataset in Fig. 1(b). From Fig. 1(b), the ACC is stable to the values of α in large range of 1e-6 to 1e-4. Small α results in small step-size for parameter updating in meta-learning framework, and limits the benefits from meta-learning and adversarial splitting. Larger α results in larger step-size for gradient descent based network updating, which may fail to decrease the training loss from the optimization perspective.\nEffect of hyper-parameter m. The effect of m is evaluated in MSDS setting on PACS dataset in Fig. 1(c). Fig. 1(c) shows that the result is not sensitive to the value of m." }, { "heading": "A.3 CONVERGENCE", "text": "We testify the convergence of DFAS with errors and losses in different tasks in Fig. 2. In Fig. 2(a) and Fig. 2(b) , we show the classification error curves on target domains (A and Ar respectively) in the setting of MSDS. In Fig. 2(c), we show the training loss of DFAS in task A in MSDS setting. These training curves indicates that DFAS converges in the training process. We also observe that DFAS has better stability than Baseline, in Fig. 2(a) and Fig. 2(b)." }, { "heading": "A.4 COMPUTATIONAL COST OF ADVERSARIAL SPLITTING AND RANDOM SPLITTING", "text": "We compare the computational cost of adversarial splitting and random splitting in this section. Since we only update the worst-case splitting per epoch, instead of at each step of updating parameters, the computational cost is only slightly higher than that of random splitting. To show this, we compare the total training times of the adversarial splitting and random spitting in the same number of steps (20000), as in Table 10.\nFrom Table 10, the training time of Adv-split is only 5.6% (0.33/5.90) higher than Rand-split.\nA.5 VISUALIZATION OF FEATURE SPACE\nWe visualize the feature space learned by our method of DFAS and Baseline (shown in Fig. 3), by t-SNE (Maaten & Hinton, 2008). It appears that DFAS yields better separation of classes and better alignment of distributions of source and unseen target domains, which possibly explains the accuracy improvements achieved by our DFAS.\nB IMPLEMENTATION DETAILS\nFor the setting of MSDS, we use ResNet18 and ResNet50 (He et al., 2016) pre-trained on ImageNet (Russakovsky et al., 2015). For each of them, the last fully-connected layer is replaced by a bottleneck layer, then the corresponding network is taken as feature extractor fe. The dimension of the bottleneck layer is set to be 512 when the backbone is ResNet18 as in (Saito et al., 2019), and 256 when the backbone is ResNet50 as in (Gu et al., 2020). Following (Gu et al., 2020), s is set to 7.5 for PACS and 10.0 for Office-Home. m is set to 0.2 for PACS and 0.1 for Office-Home. ξ is set to |S|2 . SGD with momentum of 0.9 is utilized to update parameters of learner. The learning rate of classifier and bottleneck layer is 10 times of convolutional layers, which is widely adopted in domain adaptation (Long et al., 2015; Ganin et al., 2016). 
Following (Ganin et al., 2016), the learning rate of the convolutional layers is adjusted by $\eta = \frac{0.001}{(1+10p)^{0.75}}$, where $p$ is the optimization progress linearly changing from 0 to 1. The learning rate $\alpha$ of the inner loop optimization is set to $10^{-5}$. The parameters are updated for 20000 steps and the hardest val-subset is updated every 200 steps. The batch size is set to 64. The running mean and running variance of the Batch Normalization (BN) layers are fixed as the pre-trained values on ImageNet during training, which is discussed in (Du et al., 2020a). Due to the memory limit, when implementing experiments based on ResNet50, we adopt the first-order approximation (Finn et al., 2017) that stops the gradient of $g_w^t$ in Eq. (4) to reduce memory and computational cost.

For the setting of SSDS, we conduct the experiment based on ResNet18 on PACS. The implementation details are the same as for MSDS. For the setting of SSTD, we conduct the experiment based on ResNet18 on CIFAR-10. The hyper-parameters $s$ and $m$ are set to 8.0 and 0.2 respectively. Other implementation details are the same as for MSDS, except that, in the BN layers, the running mean and running variance are updated.

We implement experiments using PyTorch (Paszke et al., 2019) on a single NVIDIA Tesla P100 GPU.

C OPTIMIZATION ALGORITHM FOR FINDING THE HARDEST $S_v$" }, { "heading": "C.1 OPTIMIZATION ALGORITHM", "text": "To solve the problem of

$$\max_{S_v, A} \sum_{(x,y) \in S_v} l(f_w(x), y) - \alpha \langle \nabla_w l(f_w(x), y), A \rangle \quad \text{s.t.} \quad A = g_w^t, \; S_v \in \Gamma_\xi, \tag{11}$$

we design an alternating iteration algorithm in Sect. 3.2. Specifically, we initialize $A$ with the gradient of a sample randomly selected from $S$. Then we alternately update $S_v$ and $A$.

Given $A$, $S_v$ is updated by solving

$$\max_{S_v} \sum_{(x,y) \in S_v} l(f_w(x), y) - \alpha \langle \nabla_w l(f_w(x), y), A \rangle \quad \text{s.t.} \quad S_v \subset S, \; |S_v| = \xi, \tag{12}$$

where the constraints are derived from the definition of $\Gamma_\xi$. Equation (12) indicates that the optimal $S_v$ consists of the $\xi$ samples that have the largest values of $l(f_w(x), y) - \alpha \langle \nabla_w l(f_w(x), y), A \rangle$. Thus we compute and rank these values for all $(x,y) \in S$ and select the largest $\xi$ samples to constitute $S_v$.

Given $S_v$ ($S_t$ is then given), we update $A$ to satisfy the constraint $A = g_w^t$ in Eq. (11), i.e.,

$$A = g_w^t = \frac{1}{|S_t|} \sum_{(x,y) \in S_t} \nabla_w l(f_w(x), y). \tag{13}$$" }, { "heading": "C.2 CONVERGENCE IN EXPERIMENTS", "text": "We show empirically the convergence of this alternating iteration algorithm in Fig. 4, with the values of the objective function in Eq. (11). Figure 4 shows that the values of the objective function converge after only a few iterations.

We also check whether the splitting changes once the value of the objective function converges. To do this, we count the ratio of changed sample indexes in $S_v$ at each iteration, as in Table 11. Table 11 shows that the splitting does not change once the value of the objective function converges." }, { "heading": "C.3 TOY EXAMPLE.", "text": "We present a toy example in this section to check whether this algorithm can find the optimal solution. The toy example is a 2-dimensional classification problem, as shown in Fig. 5. Different colors indicate different classes. A fully-connected layer without bias is used as the network (learner). We split the data of the first class (blue points) with our algorithm. The candidate solutions with the corresponding objective function values are given in Table 12.

The solutions in the iteration process of our algorithm are reported in Table 13. The solutions converge to (1,3,4), which is the optimal solution in Table 12.
This indicates that our algorithm can find the optimal splitting for the toy example. We also give the code of this toy example below, and the reader may rerun it to verify the results.

Code of the toy example:

import torch
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs

##### generate and plot the 2-D toy data
np.random.seed(1)
data = make_blobs(n_samples=12, centers=2)
x1 = data[0][data[1] == 0]
x2 = data[0][data[1] == 1]

plt.plot(x1[:, 0], x1[:, 1], '*')
plt.plot(x2[:, 0], x2[:, 1], '*')
t = 0
for x in x1:
    plt.text(x[0], x[1], str(t))
    t += 1
plt.savefig('samples.pdf')

### the learner: a fully-connected layer without bias
model = torch.nn.Linear(2, 2, bias=False).cuda()
torch.nn.init.xavier_uniform_(model.weight)
log = open('log.txt', 'w')
log.write('weight:' + str(model.weight.cpu().data.numpy()) + '\n')

feat = torch.Tensor(data[0]).cuda()
label = torch.LongTensor(data[1]).cuda()
loss_fun = torch.nn.CrossEntropyLoss()

Grads = []
Loss = []
for i in range(len(feat)):
    out = torch.nn.functional.linear(feat[i].view(1, -1), weight=model.weight, bias=None)
    loss_x = loss_fun(out, label[i].view(-1,))
    grad = torch.autograd.grad(loss_x, model.weight)[0]
    Loss.append(loss_x.cpu().data.numpy())
    Grads.append(grad.cpu().data.numpy())

## we split the data of the first class
Loss = np.array(Loss)[data[1] == 0]
Grads = np.array(Grads)[data[1] == 0]
# np.save('toy_Grads.npy', Grads)
# np.save('toy_Loss.npy', Loss)
# Loss = np.load('toy_Loss.npy')[data[1] == 0]
# Grads = np.load('toy_Grads.npy')[data[1] == 0]
alpha = 0.001

def adv_loss(val_index=[0, 1, 2]):
    train_index = np.delete(np.arange(len(Loss)), val_index)
    loss = np.mean(Loss[val_index])
    grad_val = np.mean(Grads[val_index], axis=0)
    grad_train = np.mean(Grads[train_index], axis=0)
    return loss - alpha * np.sum(grad_val * grad_train)

##### brute force searching
Solutions = []
Values = []
# generate solutions
for i in range(len(Loss) - 2):
    for j in range(i + 1, len(Loss) - 1):
        for k in range(j + 1, len(Loss)):
            Solutions.append([i, j, k])
for val_index in Solutions:
    Values.append(adv_loss(val_index))

optimal_idx = np.array(Values).argmax()
optimal_solution = Solutions[optimal_idx]
optimal_value = max(Values)

print("all possible solutions:", Solutions)
print("objective function values of possible solutions:", Values)
print("optimal solution:", optimal_solution, "\nobjective function value of optimal solution:", optimal_value)

log.write(str({"all possible solutions": Solutions,
               "objective function values of possible solutions": Values,
               "optimal solution": optimal_solution,
               "objective function value of optimal solution": optimal_value}))

###### our algorithm
A = Grads[np.random.randint(len(Loss))]
values_tmp = []
solutions_tmp = []
mtr_index = np.random.choice(np.arange(len(Loss)), size=len(Loss) // 2, replace=False)
l = np.mean(Loss - alpha * np.sum(Grads * A.reshape((1, 2, 2)), axis=(1, 2)))
values_tmp.append(l)
solutions_tmp.append(np.delete(np.arange(len(Loss)), mtr_index))
for i in range(5):
    D = np.sum(Grads * A, axis=(1, 2))
    Loss_ = Loss - alpha * D
    idx_sort = np.argsort(Loss_)
    mtr_index = idx_sort[:len(Loss) // 2]
    mte_index = idx_sort[len(Loss) // 2:]
    values_tmp.append(Loss_[mte_index].mean())
    solutions_tmp.append(mte_index)
    A = np.mean(Grads[mtr_index], axis=0)

print("our optimal solution:", solutions_tmp[-1], "\nobjective function value of our optimal solution:", values_tmp[-1])
log.write(str({"our optimal solution": solutions_tmp[-1],
               "objective function value of our optimal solution": values_tmp[-1]}))
print(solutions_tmp, values_tmp)" }, { "heading": "D PROOF OF
THEOREM 1", "text": "We first introduce VC-dimension based generalization bound and domain adaptation theory in Appendix D.1, then present two lemmas in Appendix D.2, and finally give the proof of Theorem 1 in Appendix D.3." }, { "heading": "D.1 PRELIMINARY", "text": "VC-dimension based generalization bound. Theorem A-1. (Abu-Mostafa et al., 2012) Let S be the set of training data i.i.d. sampled for distribution P . For any δ ∈ (0, 1), with probability at least 1− δ, we have ∀h (h : X → {0, 1}) in hypothesis spaceH,\n| P(h)− ̂S(h)| ≤\n√ 8\n|S|\n( V C(H) log 2e |S|\nV C(H) +\n4\nδ\n) . (14)\nwhere P(h) = E(x,y)∼P [I{(h(x)) 6=y}] and ̂S(h) = 1|S| ∑ (x,y)∈S I{(h(x))6=y}." }, { "heading": "Domain adaptation theory.", "text": "Theorem A-2. (Ben-David et al., 2007; 2010) For any h in hypothesis spaceH, we have\nQ(h) ≤ P(h) + 1\n2 dH(P,Q) + λ∗, (15)\nwhere λ∗ ≥ infh′∈H{ P(h′) + Q(h′)} and\ndH(P,Q) = 2 sup h∈H |EP [h = 1]− EQ[h = 1]| (16)\nisH-divergence." }, { "heading": "D.2 LEMMAS", "text": "Lemma A-1. For any Sv ∈ Γξ and St = S − Sv, ∀δ ∈ (0, 1), with probability at least 1 − δ, we have ∀f ∈ HSt ,\n| ΨP(f)− ̂ΨSv (f)| ≤ √√√√ 8 |Sv| ( V C(HΨSt) log 2e |Sv| V C(HΨSt) + 4 δ ) , (17)\nwhere ΨP(f) = E(x,y)∼P [I{Ψ(f(x)) 6=y}] is generalization error on distribution P , ̂ΨSv (f) = 1 |Sv| ∑ (x,y)∈Sv I{Ψ(f(x)) 6=y} is empirical error, H Ψ St\n= {Ψ ◦ f : f ∈ HSt}, V C(HΨSt) is the VC-dimension of HΨSt , and Ψ(·) is the prediction rule such as the Bayes Optimal Predictor, i.e., Ψ(f(x)) = I{f(x)≥ 12}.\nProof:\nFrom the definition of HΨSt , for any f ∈ HSt , there exists a hf ∈ H Ψ St such that hf = Ψ ◦ f . Applying Theorem A-1, with probability at least 1− δ, we have ∀f ∈ HSt ,\n| ΨP(f)− ̂ΨSv (f)| =| P(hf )− ̂Sv (hf )|\n≤ √√√√ 8 |Sv| ( V C(HΨSt) log 2e |Sv| V C(HΨSt) + 4 δ ) .\n(18)\nLemma A-2. For any Sv ∈ Γξ and St = S − Sv, let ΨP(g) = inff∈HSt Ψ P(f) and ̂ Ψ Sv (h) = inff∈HSt ̂ Ψ Sv (f), then ∀δ ∈ (0, 1), with probability at least 1− δ, we have\nΨP(g) ≥ ̂ΨSv (h)− √√√√ 8 |Sv| ( V C(HΨSt) log 2e |Sv| V C(HΨSt) + 4 δ ) . (19)" }, { "heading": "Proof:", "text": "From the definition of g and h, we have ̂ΨSv (g) ≥ ̂ Ψ Sv (h). ∀δ ∈ (0, 1), with probability at least 1− δ, we have\nΨP(g)− ̂ΨSv (h) = Ψ P(g)− ̂ΨSv (g) + ̂ Ψ Sv (g)− ̂ Ψ Sv (h)\n≥ ΨP(g)− ̂ΨSv (g)\n≥− √√√√ 8 |Sv| ( V C(HΨSt) log 2e |Sv| V C(HΨSt) + 4 δ ) .\n(20)\nThus, Eq. (19) holds." }, { "heading": "D.3 PROOF OF THEOREM 1", "text": "" }, { "heading": "Proof:", "text": "We denote byHΨl the hypothesis space such that ∀h ∈ HΨl ,\nh(x) = Ψl(f(x)) = { 1 if l(f(x), y) > γ, 0 otherwise ,\n(21)\nfor f ∈ H. Then\ndHΨl (P,Q) = 2 sup h∈HΨl ∣∣EP [h = 1]− EQ[h = 1]∣∣ = 2 sup\nf∈H ∣∣EP [Ψl(f(x)) = 1]− EQ[Ψl(f(x)) = 1]∣∣ = 2 sup\nf∈H ∣∣∣EP [I{l(f(x),y)>γ}]− EQ[I{l(f(x),y)>γ}]∣∣∣ = 2 sup\nf∈H\n{ EQ[I{l(f(x),y)>γ}]− EP [I{l(f(x),y)>γ}] } ≤ 2 sup\nf∈H EQ[I{l(f(x),y)>γ}]− 2 inf f∈H EP [I{l(f(x),y)>γ}].\n(22)\nIn the fourth equation, we utilize the assumption that EQ[I{l(f(x),y)>γ}] ≥ EP [I{l(f(x),y)>γ}]. Given any Sv ∈ Γξ and St = S − Sv , we replaceH byHSt , then\ndHΨlSt (P,Q) ≤2 sup f∈HSt EQ[I{l(f(x),y)>γ}]− 2 inf f∈HSt EP [I{l(f(x),y)>γ}]\n≤2C1 − 2 inf f∈HSt\nEP [I{l(f(x),y)>γ}] (23)\nwhere C1 = supS′v∈Γξ supf∈HS−S′v EQ[I{l(f(x),y)>γ}]. Applying Theorem A-2, for any f ∈ HSt , we have ΨlQ (f) ≤ Ψl P (f) + C1 − inf\nf ′∈HSt EP [I{l(f ′(x),y)>γ}] + λ∗(Sv), (24)\nwhere λ∗(Sv) ≥ inff ′∈HS−Sv { Ψl P (f ′) + ΨlQ (f ′)}. 
Let C3 = supS′v∈Γξ λ ∗(S′v) ≥ supS′v∈Γξ inff ′∈HS−S′v { ΨlP (f ′) + Ψl Q (f ′)}, we have\nΨlQ (f) ≤ Ψl P (f) + C1 − inf f ′∈HSt EP [I{l(f ′(x),y)>γ}] + C3. (25)\nApplying Lemma A-1 to the first term of right side in Eq. (25), ∀δ ∈ (0, 1), with probability at least 1− δ, we have ∀f ∈ HSt ,\nΨlP (f) ≤ ̂ Ψl Sv (f) + √√√√ 8 |Sv| ( V C(HΨlSt ) log 2e |Sv| V C(HΨlSt ) + 4 δ ) . (26)\nApplying Lemma A-2 to the third term of right side in Eq. (25), ∀δ ∈ (0, 1), with probability at least 1− δ, we have\ninf f ′∈HSt EP [I{l(f ′(x),y)>γ}] ≥ inf f ′∈HSt\n1 |Sv| ∑\n(x,y)∈Sv\nI{l(f ′(x),y)>γ}\n− √√√√ 8 |Sv| ( V C(HΨlSt ) log 2e |Sv| V C(HΨlSt ) + 4 δ ) .\n(27)\nCombining Eq. (25), (26), (27) and thanks to the union bound, for any δ ∈ (0, 1), with probability at least 1− 2δ, we have ∀f ∈ HSt ,\nΨlQ (f) ≤ ̂ Ψl Sv (f) +B(Sv) + 2 √√√√ 8 |Sv| ( V C(HΨlSt ) log 2e |Sv| V C(HΨlSt ) + 4 δ ) + C3, (28)\nwhere B(Sv) = C1 − inff ′∈HSt 1 |Sv| ∑ (x,y)∈Sv I{l(f ′(x),y)>γ}. Using the fact that |Sv| = ξ and let C2 = supS′v∈Γξ V C(H Ψl S−S′v ) log 2eξ V C(HΨl\nS−S′v ) , we have\nΨlQ (f) ≤ ̂ Ψl Sv (f) +B(Sv) + 2\n√ 8\n|Sv|\n( C2 + 4\nδ\n) + C3. (29)" } ]
2020
null
SP:dec287e2fe3b34942440388a7e79031e833dc718
[ "This paper proposes to use an evolutionary search algorithm to search for better loss functions for the classification and regression branch of an object detector. The algorithm starts with 20 primitive mathematical operations. Due to the highly sparse action space, the vanilla evolutionary algorithm would take a long time to converge. Then the authors propose two ways to reduce the search space. First, they filter out loss functions which generates gradients of large magnitude to well-classified samples and do not converge to zero. Second, they construct a very small dataset by sampling only one image randomly from each category and evaluate the loss function on it to quickly filter out bad loss candidates.", "This paper proposes to automatically discover proper loss functions for object detection. It first designs some unit mathematical operations as search space, and then performs evolutionary algorithm to discover well-performed loss functions for the object detection tasks. Different from image classification, one needs to search both classification and localization losses in object detection. To accelerate the search, the paper proposes convergence property verification and model optimization simulation to effectively evaluate the searched loss and reduce the search space." ]
Designing proper loss functions for vision tasks has been a long-standing research direction to advance the capability of existing models. For object detection, the well-established classification and regression loss functions have been carefully designed by considering diverse learning challenges (e.g. class imbalance, hard negative samples, and scale variances). Inspired by the recent progress in network architecture search, it is interesting to explore the possibility of discovering new loss function formulations via directly searching the primitive operation combinations. So that the learned losses not only fit for diverse object detection challenges to alleviate huge human efforts, but also have better alignment with evaluation metric and good mathematical convergence property. Beyond the previous auto-loss works on face recognition and image classification, our work makes the first attempt to discover new loss functions for the challenging object detection from primitive operation levels and finds the searched losses are insightful. We propose an effective convergence-simulation driven evolutionary search algorithm, called CSE-Autoloss, for speeding up the search progress by regularizing the mathematical rationality of loss candidates via two progressive convergence simulation modules: convergence property verification and model optimization simulation. CSE-Autoloss involves the search space (i.e. 21 mathematical operators, 3 constant-type inputs, and 3 variable-type inputs) that cover a wide range of the possible variants of existing losses and discovers best-searched loss function combination within a short time (around 1.5 wall-clock days with 20x speedup in comparison to the vanilla evolutionary algorithm). We conduct extensive evaluations of loss function search on popular detectors and validate the good generalization capability of searched losses across diverse architectures and various datasets. Our experiments show that the best-discovered loss function combinations outperform default combinations (Cross-entropy/Focal loss for classification and L1 loss for regression) by 1.1% and 0.8% in terms of mAP for two-stage and one-stage detectors on COCO respectively. Our searched losses are available at https://github.com/PerdonLiu/CSE-Autoloss.
[ { "affiliations": [], "name": "Peidong Liu" }, { "affiliations": [], "name": "Gengwei Zhang" }, { "affiliations": [], "name": "Bochao Wang" }, { "affiliations": [], "name": "Hang Xu" }, { "affiliations": [], "name": "Xiaodan Liang" }, { "affiliations": [], "name": "Yong Jiang" }, { "affiliations": [], "name": "Zhenguo Li" } ]
[ { "authors": [ "Han Cai", "Chuang Gan", "Tianzhe Wang", "Zhekai Zhang", "Song Han" ], "title": "Once-for-all: Train one network and specialize it for efficient deployment", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Zhaowei Cai", "Nuno Vasconcelos" ], "title": "Cascade r-cnn: Delving into high quality object detection", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Kai Chen", "Jiaqi Wang", "Jiangmiao Pang", "Yuhang Cao", "Yu Xiong", "Xiaoxiao Li", "Shuyang Sun", "Wansen Feng", "Ziwei Liu", "Jiarui Xu" ], "title": "Mmdetection: Open mmlab detection toolbox and benchmark", "venue": "arXiv preprint arXiv:1906.07155,", "year": 2019 }, { "authors": [ "Kean Chen", "Jianguo Li", "Weiyao Lin", "John See", "Ji Wang", "Lingyu Duan", "Zhibo Chen", "Changwei He", "Junni Zou" ], "title": "Towards accurate one-stage object detection with ap-loss", "venue": "In CVPR,", "year": 2019 }, { "authors": [ "M. Everingham", "S.M.A. Eslami", "L. Van Gool", "C.K.I. Williams", "J. Winn", "A. Zisserman" ], "title": "The pascal visual object classes challenge: A retrospective", "venue": "International Journal of Computer Vision,", "year": 2015 }, { "authors": [ "Félix-Antoine Fortin", "François-Michel De Rainville", "Marc-André Gardner", "Marc Parizeau", "Christian Gagné" ], "title": "DEAP: Evolutionary algorithms made easy", "venue": "Journal of Machine Learning Research,", "year": 2012 }, { "authors": [ "Ross Girshick" ], "title": "Fast r-cnn", "venue": "In ICCV, pp. 1440–1448,", "year": 2015 }, { "authors": [ "David E Goldberg", "Kalyanmoy Deb" ], "title": "A comparative analysis of selection schemes used in genetic algorithms", "venue": "In Foundations of genetic algorithms,", "year": 1991 }, { "authors": [ "Santiago Gonzalez", "Risto Miikkulainen" ], "title": "Improved training speed, accuracy, and data utilization through loss function optimization", "venue": "In CEC,", "year": 2020 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In CVPR,", "year": 2016 }, { "authors": [ "Borui Jiang", "Ruixuan Luo", "Jiayuan Mao", "Tete Xiao", "Yuning Jiang" ], "title": "Acquisition of localization confidence for accurate object detection", "venue": "In ECCV,", "year": 2018 }, { "authors": [ "Buyu Li", "Yu Liu", "Xiaogang Wang" ], "title": "Gradient harmonized single-stage detector", "venue": "In AAAI,", "year": 2019 }, { "authors": [ "Chuming Li", "Xin Yuan", "Chen Lin", "Minghao Guo", "Wei Wu", "Junjie Yan", "Wanli Ouyang" ], "title": "Am-lfs: Automl for loss function search", "venue": "In ICCV,", "year": 2019 }, { "authors": [ "Xiang Li", "Wenhai Wang", "Lijun Wu", "Shuo Chen", "Xiaolin Hu", "Jun Li", "Jinhui Tang", "Jian Yang" ], "title": "Generalized focal loss: Learning qualified and distributed bounding boxes for dense object detection", "venue": "NeurIPS,", "year": 2020 }, { "authors": [ "Tsung-Yi Lin", "Michael Maire", "Serge Belongie", "James Hays", "Pietro Perona", "Deva Ramanan", "Piotr Dollár", "C Lawrence Zitnick" ], "title": "Microsoft coco: Common objects in context", "venue": "In ECCV,", "year": 2014 }, { "authors": [ "Tsung-Yi Lin", "Piotr Dollár", "Ross Girshick", "Kaiming He", "Bharath Hariharan", "Serge Belongie" ], "title": "Feature pyramid networks for object detection", "venue": "In CVPR,", "year": 2017 }, { "authors": [ "Tsung-Yi Lin", "Priya Goyal", "Ross Girshick", "Kaiming He", "Piotr Dollár" ], "title": "Focal loss for dense object detection", "venue": "In ICCV,", "year": 2017 }, { 
"authors": [ "Chenxi Liu", "Piotr Dollár", "Kaiming He", "Ross Girshick", "Alan Yuille", "Saining Xie" ], "title": "Are labels necessary for neural architecture search", "venue": "In ECCV,", "year": 2020 }, { "authors": [ "Hanxiao Liu", "Andrew Brock", "Karen Simonyan", "Quoc V Le" ], "title": "Evolving normalization-activation layers", "venue": "arXiv preprint arXiv:2004.02967,", "year": 2020 }, { "authors": [ "Qi Qian", "Lei Chen", "Hao Li", "Rong Jin" ], "title": "Dr loss: Improving object detection by distributional ranking", "venue": "In CVPR,", "year": 2020 }, { "authors": [ "Prajit Ramachandran", "Barret Zoph", "Quoc V Le" ], "title": "Searching for activation functions", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Esteban Real", "Chen Liang", "David R So", "Quoc V Le" ], "title": "Automl-zero: Evolving machine learning algorithms from scratch", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Shaoqing Ren", "Kaiming He", "Ross Girshick", "Jian Sun" ], "title": "Faster r-cnn: Towards real-time object detection with region proposal networks", "venue": "In NeurIPS,", "year": 2015 }, { "authors": [ "Hamid Rezatofighi", "Nathan Tsoi", "JunYoung Gwak", "Amir Sadeghian", "Ian Reid", "Silvio Savarese" ], "title": "Generalized intersection over union: A metric and a loss for bounding box regression", "venue": "In CVPR,", "year": 2019 }, { "authors": [ "Zhi Tian", "Chunhua Shen", "Hao Chen", "Tong He" ], "title": "Fcos: Fully convolutional one-stage object detection", "venue": "In ICCV,", "year": 2019 }, { "authors": [ "Lachlan Tychsen-Smith", "Lars Petersson" ], "title": "Improving object localization with fitness nms and bounded iou loss", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Xiaobo Wang", "Shuo Wang", "Cheng Chi", "Shifeng Zhang", "Tao Mei" ], "title": "Loss function search for face recognition", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Shengkai Wu", "Xiaoping Li", "Xinggang Wang" ], "title": "Iou-aware single-stage object detector for accurate localization", "venue": "Image and Vision Computing,", "year": 2020 }, { "authors": [ "Fisher Yu", "Haofeng Chen", "Xin Wang", "Wenqi Xian", "Yingying Chen", "Fangchen Liu", "Vashisht Madhavan", "Trevor Darrell" ], "title": "Bdd100k: A diverse driving dataset for heterogeneous multitask learning", "venue": "In CVPR,", "year": 2020 }, { "authors": [ "Jiahui Yu", "Yuning Jiang", "Zhangyang Wang", "Zhimin Cao", "Thomas Huang" ], "title": "Unitbox: An advanced object detection network", "venue": "In ACMM,", "year": 2016 }, { "authors": [ "Shifeng Zhang", "Cheng Chi", "Yongqiang Yao", "Zhen Lei", "Stan Z Li" ], "title": "Bridging the gap between anchor-based and anchor-free detection via adaptive training sample selection", "venue": "In CVPR,", "year": 2020 }, { "authors": [ "Zhaohui Zheng", "Ping Wang", "Wei Liu", "Jinze Li", "Rongguang Ye", "Dongwei Ren" ], "title": "Distance-iou loss: Faster and better learning for bounding box regression", "venue": "In AAAI,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "The computer vision community has witnessed substantial progress in object detection in recent years. The advances for the architecture design, e.g. two-stage detectors (Ren et al., 2015; Cai & Vasconcelos, 2018) and one-stage detectors (Lin et al., 2017b; Tian et al., 2019), have remarkably ∗Equal Contribution. Work done when the first author (Peidong Liu) interns at Huawei Noah’s Ark Lab. †Correspondence to: Xiaodan Liang (xdliang328@gmail.com), Yong Jiang (jiangy@sz.tsinghua.edu.cn).\npushed forward the state of the art. The success cannot be separated from the sophisticated design for training objective, i.e. loss function.\nTraditionally, two-stage detectors equip the combination of Cross-entropy loss (CE) and L1 loss/Smooth L1 loss (Girshick, 2015) for bounding box classification and regression respectively. In contrast, one-stage detectors, suffering from the severe positive-negative sample imbalance due to dense sampling of possible object locations, introduce Focal loss (FL) (Lin et al., 2017b) to alleviate the imbalance issue. However, optimizing object detectors with traditional hand-crafted loss functions may lead to sub-optimal solutions due to the limited connection with the evaluation metric (e.g. AP). Therefore, IoU-Net (Jiang et al., 2018) proposes to jointly predict Intersection over Union (IoU) during training. IoU loss series, including IoU loss (Yu et al., 2016), Bounded IoU loss (Tychsen-Smith & Petersson, 2018), Generalized IoU loss (GIoU) (Rezatofighi et al., 2019), Distance IoU loss (DIoU), and Complete IoU loss (CIoU) (Zheng et al., 2020), optimize IoU between predicted and target directly. These works manifest the necessity of developing effective loss functions towards better alignment with evaluation metric for object detection, while they heavily rely on careful design and expertise experience.\nIn this work, we aim to discover novel loss functions for object detection automatically to reduce human burden, inspired by the recent progress in network architecture search (NAS) and automated machine learning (AutoML) (Cai et al., 2019; Liu et al., 2020a). Different from Wang et al. (2020) and Li et al. (2019b) that only search for particular hyper-parameters within the fixed loss formula, we steer towards finding new forms of the loss function. Notably, AutoML-Zero (Real et al., 2020) proposes a framework to construct ML algorithm from simple mathematical operations, which motivates us to design loss functions from primitive mathematical operations with evolutionary algorithm. However, it encounters a severe issue that a slight variation of operations would lead to a huge performance drop, which is attributed to the sparse action space. Therefore, we propose a novel Convergence-Simulation driven Evolutionary search algorithm, named CSE-Autoloss, to alleviate the sparsity issue. Benefit from the flexibility and effectiveness of the search space, as Figure 1 shows, CSE-Autoloss discovers distinct loss formulas with comparable performance with the Crossentropy loss, such as (b) and (c) in the figure. Moreover, the best-searched loss function (d), named CSE-Autoloss-Acls, outperformed CE loss by a large margin. Specifically, to get preferable loss functions, CSE-Autoloss contains a well-designed search space, including 20 primitive mathematical operations, 3 constant-type inputs, and 3 variable-type inputs, which can cover a wide range of existing popular hand-crafted loss functions. 
Besides, to tackle the sparsity issue, CSE-Autoloss puts forward progressive convergence-simulation modules, which verify the evolved loss functions from two aspects, mathematical convergence property and optimization behavior, improving the efficiency of the vanilla evolution algorithm for loss function search without compromising the accuracy of the best-discovered loss. For different types of detectors, CSE-Autoloss is capable of designing appropriate loss functions automatically without onerous human effort.\nIn summary, our main contributions are as follows: 1) We put forward CSE-Autoloss, an end-to-end pipeline, which makes the first attempt to search for insightful loss functions that better align with the evaluation metric for object detection. 2) A well-designed search space, consisting of various primitive operations and inputs, is proposed as the foundation of the search algorithm to explore novel loss functions. 3) To handle the inefficiency caused by the sparsity of the action space, innovative convergence-simulation modules are proposed, which significantly reduce the search overhead while yielding promising discovered loss functions. 4) Extensive evaluation experiments on various detectors and different detection benchmarks, including COCO, VOC, and BDD, demonstrate the effectiveness and generalization of both CSE-Autoloss and the discovered loss functions." }, { "heading": "2 RELATED WORK", "text": "Loss Functions in Object Detection. In object detection frameworks, CE loss dominates the two-stage detectors (Ren et al., 2015; Cai & Vasconcelos, 2018) for classification purposes, while Focal loss (Lin et al., 2017b) and GHM loss (Li et al., 2019a) are widely equipped in one-stage detectors (Lin et al., 2017b; Tian et al., 2019) for solving imbalance issues between positive and negative samples. Chen et al. (2019b); Qian et al. (2020) attempt to handle the sample-imbalance problem from the ranking perspective. However, these works are sensitive to hyper-parameters and hence cannot generalize well. To further improve classification and localization quality, Jiang et al. (2018); Tian et al. (2019); Wu et al. (2020) introduce quality estimation, and GFocal (Li et al., 2020) unifies the quality estimation with classification towards consistent prediction. With regard to regression loss functions, Smooth L1 Loss (Girshick, 2015) was commonly used in past years, until the IoU-based loss series (Yu et al., 2016; Rezatofighi et al., 2019; Zheng et al., 2020) gradually came to dominate the regression branch thanks to its effectiveness in representing bounding box distance and directly optimizing the evaluation metric. However, these works rely on expert experience for loss formula construction, which limits the development of loss function design in object detection.\nAutomated Loss Function Design. By introducing a unified formulation of the loss function, Li et al. (2019b); Wang et al. (2020) raise automated techniques to adjust loss functions for face recognition. However, they only search for specific hyper-parameters in fixed loss formulas, which limits the search space and fails to discover new forms of loss functions. Real et al. (2020); Liu et al. (2020b); Ramachandran et al. (2018) attempt to design formulations from basic mathematical operations to construct building blocks in machine learning such as normalization layers and activation functions. However, these works cannot be directly applied to loss function search since their search spaces and search strategies are specialized.
Gonzalez & Miikkulainen (2020) proposes a framework for searching classification loss functions but the searched loss poorly generalizes to large-scale datasets. Besides, all these works are not suitable for the challenging object detection task due to the sparsity of the action space and the heavy evaluation cost for training object detectors. Instead, in this work, we make the first attempt to search for loss function formulations directly on the largescale detection dataset via an effective pipeline with novel convergence-simulation modules." }, { "heading": "3 METHOD", "text": "In this section, we present the CSE-Autoloss pipeline for discovering novel loss functions towards aligning the evaluation metric for object detection. We first introduce the well-designed search space in Section 3.1. The detailed CSE-Autoloss pipeline is elaborated in Section 3.2." }, { "heading": "3.1 SEARCH SPACE DESIGN", "text": "Input Nodes. In object detection, default classification losses (i.e. CE loss and FL loss), take prediction and label as inputs. Inspired by GFocal loss (Li et al., 2020), to better motivate the loss to align with the evaluation metric (i.e. AP), we introduce IoU between ground truth and prediction into the loss formula, where prediction, label, IoU are notated as x, y, w respectively. For the regression\nbranch, to cover the mainstream IoU loss series (i.e. IoU and GIoU), we take the intersection i, union u, and enclosing area e between the predicted and target box as input tensors. Besides, 3 constant-type inputs (i.e. 1, 2, 3) are brought in to improve the flexibility of the search space.\nAs mentioned above, for consistent prediction of localization quality and classification score between training and testing, we simply introduce IoU into the CE and FL loss to get two nuanced loss variants, namely CEI and FLI respectively:\nCEI(x, y, w) = −dot(wy, log(softmax(x))), FLI(x′, y′, w) = −w[y′(1− σ(x′)) log σ(x′) + (1− y′)σ(x′) log(1− σ(x′))],\nwhere x, y ∈ Rc+1, x′ ∈ R, c is the number of class. x and x′ are the prediction, y is a one-hot vector of ground truth class, y′ ∈ {0, 1} is the binary target, and w ∈ R is the IoU value between ground truth and prediction. As Table 4 shows, CEI and FLI outperform CE and FL by a small margin in terms of AP, which verifies the effectiveness of introducing IoU into the classification branch. Note that we apply CEI and FLI loss as the predefined initial loss LI in the search experiment for two-stage detectors and one-stage detectors, respectively.\nPrimitive Operators. The primitive operators, including element-wise and aggregation functions that enable interactions between tensors with different dimensions, are listed as below:\n• Unary functions: −x, ex, log(x), |x|, √ x, softmax(x), softplus(x), σ(x), gd(x), alf(x),\nerf(x), erfc(x), tanh(x), relu(x), sin(x), cos(x)\n• Binary functions: x1 + x2, x1 − x2, x1 × x2, x1x2+ , dot(x1, x2)\nis a small value to avoid zero division, softplus(x) = ln(1 + ex), σ(x) = 11+e−x is the sigmoid function. To enlarge the search space, more S-type curves and their variants∗ are included, i.e. gd(x) = 2 arctan(tanh(x2 )), alf(x) = x√ 1+x2 , erf(x) = 2√ π ∫ x 0 e−t 2\ndt is the error function, erfc(x) = 1 − erf(x). For binary operations, broadcasting is introduced to ensure the assembling of inputs with different dimensions. dot(x1, x2) is the class-wise dot product of two matrix x1, x2 ∈ RB×C , where B and C indicate batch size and number of categories respectively. 
Loss Function as a Computation Graph. As Figure 1 illustrates, we represent the loss function as a directed acyclic graph (DAG) computation graph that transforms the input nodes into the scalar output o (i.e. the loss value) through multiple primitive operators in between.\nSparsity of the Action Space. We perform a random search for the Faster R-CNN classification branch on COCO and find only one acceptable loss in every 10^5 candidates, which indicates the sparsity of the action space and the challenge of searching for loss function formulations." }, { "heading": "3.2 CSE-AUTOLOSS PIPELINE", "text": "We propose a novel CSE-Autoloss pipeline with progressive convergence-simulation modules, consisting of two perspectives: 1) the convergence property verification module verifies the mathematical properties of the loss candidates; 2) the model optimization simulation module estimates the optimization quality to further filter out divergent, training-unfriendly, and poorly performing loss functions. An overview of CSE-Autoloss is illustrated in Figure 2 and the details are shown in Algorithm 1. Our CSE-Autoloss achieves more than a 20x speedup in comparison to the vanilla evolutionary algorithm." }, { "heading": "3.2.1 EVOLUTIONARY ALGORITHM", "text": "Here we give a short overview of the vanilla evolutionary algorithm, which follows traditional tournament selection (Goldberg & Deb, 1991). We first generate N loss functions from a predefined initial loss LI as the first generation, where a loss function corresponds to an individual in the population. These individuals then perform cross-over randomly, pair by pair, with probability p1 and undergo mutation independently with probability p2. We evaluate all the individuals on the proxy task and select the top-P of them to serve as parents for the next generation. The above process is repeated for E generations to obtain the best-searched loss.\nThe bottleneck of the evolutionary process lies in the enormous number of loss evaluations on the proxy set due to the highly sparse action space, with each evaluation costing approximately half an hour. Estimating the quality of individuals at an early stage is therefore crucial for efficiency. We first apply a computational validness check to the population: invalid values (i.e. NaN, +∞, −∞) are not allowed. This simple verification accelerates the search process by 10x, but a great number of divergent, training-unfriendly, and poorly performing loss functions still remain, which require a great amount of computation.
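Below is a minimal sketch of the vanilla tournament-style evolution loop described in Section 3.2.1; the function arguments (fitness, mutate, crossover) are stand-ins for the proxy-task evaluation and operator edits, and the default hyper-parameter values are illustrative, not the paper's.

import random

def evolve(initial_loss, fitness, mutate, crossover,
           N=100, P=10, E=5, p1=0.5, p2=0.1):
    """Vanilla tournament-style evolution loop sketched from Section 3.2.1.
    `fitness` stands in for the proxy-task evaluation (the expensive step)."""
    population = [mutate(initial_loss) for _ in range(N)]
    for _ in range(E):
        # Pairwise cross-over with probability p1.
        random.shuffle(population)
        for i in range(0, N - 1, 2):
            if random.random() < p1:
                population[i], population[i + 1] = crossover(population[i],
                                                             population[i + 1])
        # Independent mutation with probability p2.
        population = [mutate(ind) if random.random() < p2 else ind
                      for ind in population]
        # Evaluate on the proxy task and keep the top-P as parents.
        parents = sorted(population, key=fitness, reverse=True)[:P]
        # Parents spawn N offspring for the next generation.
        population = [mutate(random.choice(parents)) for _ in range(N)]
    return max(parents, key=fitness)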
" }, { "heading": "3.2.2 CONVERGENCE PROPERTY VERIFICATION", "text": "As stated above, the vanilla evolution search suffers from inefficiency due to the high sparsity of the action space and its unawareness of the mathematical properties of losses. Therefore, we introduce the convergence property verification module into the evolutionary algorithm, which filters out non-monotonic or non-convergent loss candidates within a short time.\nClassification Loss Function. We analyze the properties of the existing popular classification loss function (i.e. CE loss). For simplicity, we only consider the binary classification scenario, i.e. the binary Cross-entropy loss (BCE):\nBCE(x) = −ln(1/(1 + e^(−x))), ∂BCE(x)/∂x = −1 + 1/(1 + e^(−x)),\nwhere x indicates the positive class score. Further analysis of BCE suggests that a binary classification loss should satisfy basic and general mathematical properties to guarantee validness and convergence in challenging object detection. We summarize these properties as follows:\n• Monotonicity. The loss value should monotonically decrease w.r.t. the positive class score and increase w.r.t. the negative class score, because a large positive score or a small negative score implies a well-performing classifier, which should incur a small loss penalty.\n• Convergence. When the positive class score tends to +∞, the loss gradient magnitude should converge to zero to ensure model convergence.\nRegression Loss Function. For the regression branch, the consistency between the loss value and the distance between the predicted and target bounding boxes, named distance-loss consistency, should be ensured. Distance-loss consistency requires that the loss value change consistently with the distance: the loss value increases when the prediction moves away from the ground truth." }, { "heading": "3.2.3 MODEL OPTIMIZATION SIMULATION", "text": "Although the convergence property verification module filters out non-monotonic or non-convergent losses, it does not guarantee that the loss candidates are suitable for optimization. The sparse action space demands additional techniques to ensure optimization quality. To address this issue, we propose the model optimization simulation module, which inspects the optimization capability of the loss.\nSpecifically, we train and test the loss candidates on a small verification dataset Dverify, constructed by randomly sampling only one image from each category of a benchmark dataset like COCO, to estimate the optimization quality. We then use the AP performance on Dverify of the top loss in the previous generation as a threshold to filter out divergent, training-unfriendly, and poorly performing loss candidates.\nAlgorithm 1 CSE-Autoloss algorithm. Input: Search Space S, Number of Generations E, Population Size N, Number of Parents P, Verification Set Dverify, Proxy Training Set Dtrain, Proxy Validation Set Dval, Initial Predefined Loss Function LI, Number of Losses to Evaluate on Verification Set K\n1: Evolve N loss functions from LI to generate the loss function population 2: for e ← 1, E do 3: Check loss function properties with the convergence-simulation modules 4: Train and test on Dverify and keep the top-K loss candidates 5: Evaluate on Dtrain and Dval, and keep the top-P losses as parents 6: Generate N offspring from the parents for the next generation 7: end for\nOutput: Best-discovered loss function Lbest
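The convergence property verification above can be approximated numerically; here is a small sketch that screens a binary classification loss candidate for validness, monotonicity, and a vanishing gradient at large positive scores. The grid and tolerance are illustrative choices, not the paper's settings.

import numpy as np

def passes_convergence_checks(loss_fn, lo=-10.0, hi=10.0, n=2001, tol=1e-3):
    """Numerically screen a binary classification loss candidate using the
    two properties of Section 3.2.2. `loss_fn` maps the positive class
    score x to a loss value."""
    x = np.linspace(lo, hi, n)
    vals = loss_fn(x)
    # Validness: reject NaN / +inf / -inf outputs outright.
    if not np.all(np.isfinite(vals)):
        return False
    grad = np.gradient(vals, x)
    # Monotonicity: loss must decrease as the positive score grows.
    if np.any(grad > tol):
        return False
    # Convergence: gradient magnitude must vanish for large positive scores.
    return abs(grad[-1]) < tol

bce = lambda x: np.log1p(np.exp(-x))   # BCE passes both checks
bad = lambda x: -x                     # gradient never vanishes: rejected
print(passes_convergence_checks(bce), passes_convergence_checks(bad))  # True False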
" }, { "heading": "4 EXPERIMENTS", "text": "Datasets We conduct the loss search on the large-scale object detection dataset COCO (Lin et al., 2014) and further evaluate the best-searched loss combinations on datasets with different distributions and domains, i.e. PASCAL VOC (VOC) (Everingham et al., 2015) and Berkeley Deep Drive (BDD) (Yu et al., 2020), to verify the generalization of the searched loss functions across datasets. COCO is a common object detection dataset with 80 object categories, containing 118K images for training and 5K minival images for validation. In the search experiments, we randomly sample 10k images from the training set for validation purposes. VOC contains 20 object categories. We use the union of VOC 2007 trainval and VOC 2012 trainval for training and VOC 2007 test for validation, and report mAP using IoU at 0.5. BDD is an autonomous driving dataset with 10 object classes, in which 70k images are for training and 10k images are for validation.\nExperimental Setup We apply Faster R-CNN and FCOS as the representative two-stage and one-stage detectors, respectively, in the loss search experiments on COCO. We apply ResNet-50 (He et al., 2016) and a Feature Pyramid Network (Lin et al., 2017a) as the feature extractor. For FCOS, we employ common tricks such as normalization on the bounding box, centerness on regression, and center sampling. In addition, we replace the centerness target with the IoU score for FCOS and ATSS, instead of the original design, to better utilize the IoU information, which brings a slight AP improvement. Note that the loss weights are kept at the MMDetection defaults (Chen et al., 2019a), except that the regression weight for ATSS is set to 1 for the searched loss combinations. To be consistent with the authors' implementations, we use 4 GPUs with 4 images/GPU for FCOS and 8 GPUs with 2 images/GPU for Faster R-CNN. As the proxy task for loss function evaluation, we train on the whole COCO benchmark for only one epoch to trade off performance and efficiency. Our object detection and evolutionary algorithm code is based on MMDetection (Chen et al., 2019a) and DEAP (Fortin et al., 2012), respectively.\nIn the search experiments, regression and classification loss functions are searched independently. For the regression loss search, we take GIoU loss (Rezatofighi et al., 2019) as the predefined initial loss LI to generate the first loss function population, with CE and FL serving as the fixed classification losses for Faster R-CNN and FCOS, respectively. For the classification branch search, the CEI and FLI losses serve as LI, with GIoU serving as the fixed regression loss." }, { "heading": "4.1 RESULTS ON TWO-STAGE AND ONE-STAGE DETECTORS", "text": "We name the best loss combinations searched with Faster R-CNN R50 (Ren et al., 2015) and FCOS R50 (Tian et al., 2019) CSE-Autoloss-A and CSE-Autoloss-B respectively, with the subscripts cls and reg indicating the classification and regression branches. The searched formulations are as follows:\nCSE-Autoloss-Acls(x, y, w) = −dot((1 + sin(w))y, log(softmax(x))),\nCSE-Autoloss-Areg(i, u, e) = (1 − i/u) + (1 − (i + 2)/e),\nCSE-Autoloss-Bcls(x, y, w) = −[wy(1 + erf(σ(1 − y))) log σ(x) + (gd(x) − wy)(σ(x) − wy) log(1 − σ(x))],\nCSE-Autoloss-Breg(i, u, e) = (3eu + 12e + 3i + 3u + 18) / (−3eu + iu + u² − 15e + 5i + 5u).
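For concreteness, the following is a NumPy sketch of the CSE-Autoloss-A combination as written above; the batch reduction, epsilon guards, and the grouping (i + 2)/e in the regression formula follow our reading of the original layout and should be treated as assumptions rather than the authors' exact implementation.

import numpy as np

def cse_autoloss_a_cls(x, y, w):
    """Searched classification loss: -dot((1 + sin(w)) * y, log softmax(x)).
    x: (B, C) logits, y: (B, C) one-hot labels, w: (B, 1) IoU with the GT box."""
    z = x - x.max(axis=-1, keepdims=True)               # stable softmax
    log_softmax = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    per_sample = -((1.0 + np.sin(w)) * y * log_softmax).sum(axis=-1)
    return per_sample.mean()                            # batch reduction is assumed

def cse_autoloss_a_reg(i, u, e, eps=1e-7):
    """Searched regression loss: (1 - i/u) + (1 - (i + 2)/e), where i, u, e are
    the intersection, union, and enclosing area of predicted and target boxes."""
    per_box = (1.0 - i / (u + eps)) + (1.0 - (i + 2.0) / (e + eps))
    return per_box.mean()

# Toy usage: 2 boxes / 3 classes.
x = np.array([[2.0, 0.2, -0.5], [0.1, 1.2, 0.3]])
y = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
w = np.array([[0.8], [0.5]])
print(cse_autoloss_a_cls(x, y, w), cse_autoloss_a_reg(
    np.array([3.0, 1.0]), np.array([5.0, 4.0]), np.array([6.0, 8.0])))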
The quantitative results for the gains brought by the searched losses, under the same hyper-parameters, for multiple popular two-stage and one-stage detectors are shown in Table 1. Results on Faster R-CNN R50 and FCOS R50 indicate the generalization of CSE-Autoloss across detectors, and the best-searched loss combinations are capable of stimulating the potential of detectors by a large margin. Taking Faster R-CNN R50 as an example, CSE-Autoloss-A outperforms the baseline by 1.1% AP, a notable improvement given that two years of expert effort in hand-crafted loss design yielded only a 0.2% AP improvement from Bounded IoU loss to GIoU loss.\nTo verify that the searched losses generalize to different detectors, we apply CSE-Autoloss-A to other two-stage models (i.e. Faster R-CNN R101, Cascade R-CNN R50 (Cai & Vasconcelos, 2018), and Mask R-CNN R50 (He et al., 2017)), and CSE-Autoloss-B to ATSS (Zhang et al., 2020). Note that we only replace the classification and regression branches of Mask R-CNN with CSE-Autoloss-A. Results in Table 1 show the consistent gains brought by the effective searched losses without additional overhead.\nWe further conduct best-searched loss evaluation experiments for Faster R-CNN R50 on VOC and BDD to validate the loss transferability across datasets. Results are displayed in Table 3 and indicate that the best-searched loss combination enables the detectors to converge well on datasets with different object distributions and domains." }, { "heading": "4.2 ABLATION STUDY", "text": "Effectiveness of Convergence-Simulation Modules. The efficiency improvement from the progressive convergence-simulation modules when searching for the Faster R-CNN classification branch is shown in Table 2. With our proposed modules, CSE-Autoloss filters out 99% of the loss candidates with bad mathematical properties or poor performance, which largely increases efficiency without compromising the accuracy of the best-discovered loss.\nIndividual Loss Contribution. As shown in Table 6 and Table 7, our searched losses consistently match or outperform the existing popular losses for Faster R-CNN and FCOS. Figure 3 illustrates the convergence behaviors of popular loss combinations and our CSE-Autoloss-A.\nSearch Algorithm. We compare the efficiency of different search algorithms, including random search, vanilla evolution search, and CSE-Autoloss. Table 5 shows that evolution search is much better than random search, but due to the high sparsity of the action space, the vanilla evolution search requires hundreds of wall-clock hours on a server with 8 GPUs. CSE-Autoloss speeds up the search process by 20x thanks to the effective convergence-simulation modules and discovers well-performing loss combinations in around 1.5 wall-clock days.\nSearch Complexity of Different Branches. In our search space, the input nodes of the regression branch are scalars, while the classification branch contains vectors. Besides, the optimization of the classification branch is more sensitive and difficult than that of regression, leading to sparse valid individuals. Therefore, the evolution population required for classification is 10^4, while only 10^3 is needed for regression. The total number of evaluated losses and the search overhead for the classification branch are much larger than those for regression, as Table 5 shows.\nEvolution Behavior Analysis. To validate the effectiveness of CSE-Autoloss, we further compare the behaviors of CSE-Autoloss and the vanilla evolution search with FCOS. The vanilla evolution search evaluates around 1000 loss candidates in each generation, compared to only about 10 individuals for CSE-Autoloss. The loss individuals are scattered in Figure 4. Note that for clear visualization we only plot the top-100 well-performing losses in each generation of the vanilla evolutionary search. The vanilla evolution search is hard to converge because a discrete replacement of an operation can break a loss function. With our convergence-simulation modules, the efficiency is largely improved." }, { "heading": "5 CONCLUSION", "text": "In this work, we propose a convergence-simulation driven evolutionary search pipeline, named CSE-Autoloss, for loss function search in object detection.
It significantly speeds up the search process by regularizing the mathematical properties and optimization quality of loss candidates, and it discovers well-performing loss functions efficiently without compromising the accuracy of the best-discovered loss. We conduct search experiments on both two-stage and one-stage detectors and further empirically validate the best-searched loss functions on different architectures across datasets, which shows the effectiveness and generalization of both CSE-Autoloss and the best-discovered loss functions. We hope our work provides insights for researchers in this field to design novel loss functions for object detection and more effective frameworks for automated loss function search." } ]
2021
LOSS FUNCTION DISCOVERY FOR OBJECT DETECTION VIA CONVERGENCE-SIMULATION DRIVEN SEARCH
SP:c1297729b15bbaece175e784cd5f7db7f395fede
[ "This paper presents a meta-active learning approach to obtain an LSTM-based embedding of a dynamic system and use a chance-constrained (probabilistic safe) optimization to find optimal control configuration via applying mixed-inter linear programming (MILP) on the learned embeddings of the dynamic system. The main idea is to learn a Q-function as an acquisition function to describe the future information gain, which is the percent decrease in the error of the objective (e.g. model error) via a meta-learning strategy in which the agent interacts with the environment via distribution of altered dynamics. ", "In this work, a meta active learning approach is proposed to learn the hidden dynamics of control systems, where safety is also a major concern. The main idea lies in doing meta-learning with Q-learning, meanwhile selecting safe actions by solving a mixed-integer linear programming problem. The performance of the proposed approach is verified from real datasets of deep brain stimulation by outperforming existing baselines for a significant gap in both accuracy and computational complexity." ]
Learning to control a safety-critical system with latent dynamics (e.g. for deep brain stimulation) requires judiciously taking calculated risks to gain information. We present a probabilistically-safe, meta-active learning approach to efficiently learn system dynamics and optimal configurations. The key to our approach is a novel integration of meta-learning and chance-constrained optimization in which we 1) meta-learn an LSTM-based embedding of the active learning sample history, 2) encode a deep learning-based acquisition function with this embedding into a mixed-integer linear program (MILP), and 3) solve the MILP to find the optimal action trajectory, trading off the predicted information gain from the acquisition function and the likelihood of safe control. We set a new state-of-the-art in active learning to control a high-dimensional system with latent dynamics, achieving a 46% increase in information gain and a 20% speedup in computation time. We then outperform baseline methods in learning the optimal parameter settings for deep brain stimulation in rats to enhance the rats’ performance on a cognitive task while safely avoiding unwanted side effects (i.e., triggering seizures).
[]
[ { "authors": [ "Marcin Andrychowicz", "Misha Denil", "Sergio Gómez Colmenarejo", "Matthew W Hoffman" ], "title": "Learning to learn by gradient descent by gradient descent", "venue": "(Conference on Neural Information Processing Systems (Conference on Neural Information Processing Systems", "year": 2016 }, { "authors": [ "Omer Ashmaig", "Mark Connolly", "Robert E. Gross", "Babak Mahmoudi" ], "title": "Bayesian Optimization of Asynchronous Distributed Microelectrode Theta Stimulation and Spatial Memory", "venue": "Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society,", "year": 2018 }, { "authors": [ "Philip Bachman", "Alessandro Sordoni", "Adam Trischler" ], "title": "Learning Algorithms for Active Learning", "venue": null, "year": 2016 }, { "authors": [ "Suneel Belkhale", "Rachel Li", "Gregory Kahn", "Rowan McAllister", "Roberto Calandra", "Sergey Levine" ], "title": "Model-Based Meta-Reinforcement Learning for Flight with Suspended Payloads", "venue": null, "year": 2020 }, { "authors": [ "Robert Burbidge", "Jem J. Rowland", "Ross D. King" ], "title": "Active Learning for Regression Based on Query by Committee. Intelligent Data Engineering and Automated Learning - IDEAL", "venue": null, "year": 2007 }, { "authors": [ "Wenbin Cai", "Muhan Zhang", "Ya Zhang" ], "title": "Batch mode active learning for regression with expected model change", "venue": "IEEE Transactions on Neural Networks and Learning Systems,", "year": 2017 }, { "authors": [ "Ignasi Clavera", "Anusha Nagabandi", "Ronald S Fearing", "Pieter Abbeel", "Sergey Levine", "Chelsea Finn" ], "title": "Learning to Adapt : Meta-Learning for Model-Based Control", "venue": null, "year": 2009 }, { "authors": [ "B Efron" ], "title": "Bootstrap Methods: Another Look at the Jackknife Author(s): B", "venue": "Efron Source : The Annals of Statistics", "year": 1979 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks", "venue": null, "year": 2017 }, { "authors": [ "Chelsea Finn", "Kelvin Xu", "Sergey Levine" ], "title": "Probabilistic Model-Agnostic Meta-Learning", "venue": "(Conference on Neural Information Processing Systems (NeurIPS)),", "year": 2018 }, { "authors": [ "Jurgen Franke", "Michael A Nuemann" ], "title": "Bootstrapping Neural Networks", "venue": "Neural Computation,", "year": 1929 }, { "authors": [ "Michael Ganger", "Ethan Duryea", "Wei Hu" ], "title": "Double sarsa and double expected sarsa with shallow and deep learning", "venue": "Journal of Data Analysis and Information Processing,", "year": 2016 }, { "authors": [ "Yonatan Geifman", "Ran El-Yaniv" ], "title": "Deep Active Learning with a Neural Architecture Search. (NeurIPS), 2018", "venue": "URL http://arxiv.org/abs/1811.07579", "year": 2018 }, { "authors": [ "Felix A Gers", "Jürgen Schmidhuber", "Fred Cummins" ], "title": "Learning to forget: Continual prediction with lstm", "venue": null, "year": 1999 }, { "authors": [ "Martina Hasenjager", "Helge Ritter" ], "title": "Active Learning with Local Models 1 Introduction", "venue": "Neural Processing Letters, pp", "year": 1998 }, { "authors": [ "Trevor Hastie", "Robert Tibshirani", "Jerome Friedman" ], "title": "Model Assessment and Selection. 
In The Elements of Statistical Learning: Data Mining, Inference, and Prediction", "venue": null, "year": 2017 }, { "authors": [ "Timothy Hospedales", "Antreas Antoniou", "Paul Micaelli", "Amos Storkey" ], "title": "Meta-Learning in Neural Networks: A Survey", "venue": "pp. 1–23,", "year": 2020 }, { "authors": [ "Andreas Kirsch", "Joost van Amersfoort", "Yarin Gal" ], "title": "BatchBALD: Efficient and Diverse Batch Acquisition for Deep Bayesian Active Learning", "venue": "(NeurIPS),", "year": 2019 }, { "authors": [ "Ksenia Konyushkova", "Raphael Sznitman", "Pascal Fua" ], "title": "Learning Active Learning from Data", "venue": "Conference on Neural Information Processing Systems (NIPS),", "year": 2017 }, { "authors": [ "Ion Muslea", "Steven Minton", "Craig A. Knoblock" ], "title": "Active learning with multiple views", "venue": "Journal of Artificial Intelligence Research,", "year": 2006 }, { "authors": [ "Anusha Nagabandi", "Ignasi Clavera", "Simin Liu", "Ronald S Fearing", "Pieter Abbeel", "Sergey Levine", "Chelsea Finn" ], "title": "Learning to Adapt in Dynamic, Real-World Environments Through Meta-Reinforcement Learning", "venue": null, "year": 2019 }, { "authors": [ "Jeffrey A. Ouellette" ], "title": "Flight Dynamics and Maneuver Loads on a Commercial Aircraft with Discrete Source Damage", "venue": "Master's Thesis, Aerospace Engineering,", "year": 2010 }, { "authors": [ "Kunkun Pang", "Mingzhi Dong", "Yang Wu", "Timothy Hospedales" ], "title": "Meta-Learning Transferable Active Learning Policies by Deep Reinforcement Learning", "venue": null, "year": 2018 }, { "authors": [ "Sotirios Posporelis", "Anthony S David", "Keyoumars Ashkan", "Paul Shotbolt" ], "title": "Deep brain stimulation of the memory circuit: improving cognition in alzheimer's disease", "venue": "Journal of Alzheimer's Disease,", "year": 2018 }, { "authors": [ "Alexander Schrijver" ], "title": "Theory of linear and integer programming", "venue": null, "year": 1998 }, { "authors": [ "Mariah L Schrum", "Matthew C Gombolay" ], "title": "When Your Robot Breaks: Active Learning During Plant Failure", "venue": "IEEE Robotics and Automation Letters,", "year": 2020 }, { "authors": [ "Yanan Sui", "Alkis Gotovos", "Joel W. Burdick", "Andreas Krause" ], "title": "Safe exploration for optimization with Gaussian processes", "venue": "32nd International Conference on Machine Learning, ICML 2015,", "year": 2015 }, { "authors": [ "Yanan Sui", "Joel Burdick", "Yisong Yue" ], "title": "Stagewise safe bayesian optimization with gaussian processes", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Richard S. Sutton", "Andrew G. Barto" ], "title": "Reinforcement Learning: An Introduction", "venue": "The MIT Press, second edition, URL http://incompleteideas.net/book/the-book-2nd.
html", "year": 2018 }, { "authors": [ "Matteo Turchetta", "Felix Berkenkamp", "Andreas Krause" ], "title": "Safe exploration in finite Markov decision processes with Gaussian processes", "venue": "Advances in Neural Information Processing Systems, (Nips): 4312–4320,", "year": 2016 }, { "authors": [ "JX Wang", "Z Kurth-Nelson", "D Turumala", "H Soyer" ], "title": "Learning to reinforcement learn", "venue": null, "year": 2016 }, { "authors": [ "Li Wang", "Evangelos A Theodorou", "Magnus Egerstedt" ], "title": "Safe Learning of Quadrotor Dynamics Using Barrier Certificates", "venue": "IEEE International Conference on Robotics and Automation (ICRA),", "year": 2018 }, { "authors": [ "Zi Wang", "Beomjoon Kim", "Leslie Pack Kaelbling" ], "title": "Regret bounds for meta Bayesian optimization with an unknown Gaussian process prior", "venue": "Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Eric John Watkiss" ], "title": "Flight dynamics of an unmanned aerial vehicle", "venue": "URL https://calhoun. nps.edu/handle/10945/28222", "year": 1994 }, { "authors": [ "Y. Zhang", "C.C. de Visser", "Q.P. Chu" ], "title": "Aircraft Damage Identification and Classification for Database-Driven Online Flight-Envelope Prediction", "venue": "Journal of Guidance, Control, and Dynamics,", "year": 2017 }, { "authors": [ "Christoph Zimmer", "Mona Meister", "Duy Nguyen-Tuong" ], "title": "Safe active learning for time-series modeling with Gaussian processes", "venue": "Advances in Neural Information Processing Systems,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Safe and efficient control of a novel systems with latent dynamics is an important objective in domains from healthcare to robotics. In healthcare, deep brain stimulation devices implanted in the brain can improve memory deficits in patients with Alzheimers (Posporelis et al., 2018) and responsive neurostimulators (RNS) can counter epileptiform activity to mitigate seizures. Yet, the surgeon’s trial-and-error process of finding effective RNS parameters for each patient is time-consuming and risky, with poor device settings possibly damaging the brain.\nResearchers studying active learning and Bayesian optimization have sought to develop algorithms to efficiently and safely learn a systems’ dynamics, e.g. learning a brain’s dynamics for RNS configuration (Ashmaig et al., 2018; Sui et al., 2018). However, because these algorithms fail to scale up to higher-dimensional state-action spaces, researchers utilize only simple voltage and frequency controls rather than all 32 channels of the RNS waveform (Ashmaig et al., 2018). Similarly, tasks in robotics, e.g. learning the dynamics of novel robotic systems (e.g., an autopilot learning to fly a damaged aircraft), require active learning methods that succeed in higher-dimensional domains.\nIn this paper, we develop a probabilistically-safe, meta-active learning approach to tackle these challenging tasks to efficiently learn system dynamics and optimal configurations. We draw inspiration from recent contributions in meta-learning (Finn et al., 2017; Nagabandi et al., 2019; Wang et al., 2016; Andrychowicz et al., 2016) that seek to leverage a distribution over training tasks to optimize the parameters of a neural network for efficient, online adaptation. Researchers have previously investigated meta-learning for active learning, e.g. learning a Bayesian prior over a Gaussian Process (Wang et al., 2018b) for learning an acquisition function. However, these approaches do not consider the important problem of safely and actively learning to control a system with altered dynamics, which is a requirement for safety-critical robotic applications. Furthermore, as we show in Section 5, on challenging control tasks for healthcare and robotics, the performance of prior active learning approaches (Kirsch et al., 2019; Hastie et al., 2017) leaves much to be desired.\nWe seek to overcome these key limitations of prior work by harnessing the power of meta-learning for active learning in a chance-constrained optimization framework for safe, online adaptation by encoding a learned representation of sample history. Instead of hand-engineering an acquisition\nfunction for our specific domains, our approach employs a data-drive, meta-learning approach, which results in better performance than prior approaches, as shown in Section 5. Furthermore, our approach has the unique ability to impose analytical safety constraints over a sample trajectory.\nContributions – We develop a probabilistically safe, meta-learning approach for active learning (”meta-active learning”) that sets a new state-of-the-art. Our acquisition function (i.e., the function that predicts the expected information gain of a data point) is meta-learned offline, allowing the policy to benefit from past experience and provide a more robust measure of the value of an action. 
The key to our approach is a novel interweaving of our deep, meta-learned acquisition function as a Long Short-Term Memory network (Gers et al., 1999) (LSTM) within a chance-constrained, mixed-integer linear program (MILP) (Schrijver, 1998). By encoding the LSTM's linear, piece-wise output layers into the MILP, we directly optimize an action trajectory that best ensures the safety of the system while also maximizing the information learned about the system. In this paper, we describe our novel architecture, which uniquely combines the power of a learned acquisition function with chance-constrained optimization, and evaluate its performance against state-of-the-art baselines in several relevant domains. To the best of our knowledge, this is the only architecture that meta-learns an acquisition function for optimization tasks and is capable of embedding this acquisition function in a chance-constrained linear program to guarantee a minimum level of safe operation.\nThe contributions of this paper are as follows: 1. Meta-active learning for autonomously synthesizing an acquisition function to efficiently infer altered or unknown system dynamics and optimize system parameters. 2. Probabilistically-safe control combined with an active-learning framework through the integration of our deep learning-based acquisition function and integer linear programming. 3. State-of-the-art results for safe, active learning. We achieve a 46% increase in information gain in a high-dimensional environment of controlling a damaged aircraft, and we achieve a 58% increase in information gain in our deep brain stimulation domain against our baselines." }, { "heading": "2 PRELIMINARIES", "text": "In this section, we review the foundations of our work in active, meta-, and reinforcement learning.\nActive Learning – Labelled training data is often difficult to obtain due either to tight time constraints or a lack of expert resources. Active learning attempts to address this problem by utilizing an "acquisition function" to quantify the amount of information an unlabelled training sample, x ∈ DU = ⟨x_i⟩_{i=1}^n, would provide a base learner, T̂ψ, if that sample were given a label, y, and added to a labeled dataset, DL = ⟨x_j, y_j⟩_{j=1}^m, i.e., DL ← DL ∪ ⟨x, y⟩. The active learning algorithm queries its acquisition function, H(DU, DL, T̂ψ), to select which x ∈ DU should be labeled and added to DL; then, a label is queried (e.g., by taking an action in an environment and observing the effect) for x, and the new labeled sample is added to DL (Muslea et al., 2006; Pang et al., 2018).\nMeta-Learning – Meta-learning approaches attempt to learn a method to quickly adapt to new tasks online. In contrast to active learning, meta-learning attempts to learn a skill or learning method, e.g. learning an active learning function, which can be transferred to novel tasks or scenarios. These tasks or skills are trained offline, and a common assumption is that the tasks selected at test time are drawn from the same distribution used for training (Hospedales et al., 2020).\nReinforcement Learning and Q-Learning – A Markov decision process (MDP) is a stochastic control process for decision making and can be defined by the 5-tuple ⟨X, U, T, R, γ⟩. X represents the set of states and U the set of actions. T : X × U × X′ → [0, 1] is the transition function that returns the probability of transitioning to state x′ from state x when applying action u. R : X × U → R is a reward function that maps a state and action to a reward, and γ weights the discounting of future rewards. Reinforcement learning seeks to synthesize a policy, π : X → U, mapping states to actions to maximize the future expected reward. When π is the optimal policy, π∗, the following Bellman condition holds: Q^{π∗}(x, u) := E_{x′∼T}[R(x, u) + γQ^{π∗}(x′, π∗(x′))] (Sutton & Barto, 2018).
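As a point of reference for the active learning setting just defined, here is a minimal sketch of a pool-based active learning loop in Python; the acquisition function H, learner interface, and labeling oracle are illustrative stand-ins, not the paper's components.

# A minimal sketch of the pool-based active learning loop defined above;
# H, learner, and query_label are illustrative stand-ins.
def active_learning_loop(D_U, D_L, learner, H, query_label, budget):
    for _ in range(budget):
        learner.fit(D_L)                              # retrain base learner T̂ψ
        # Pick the unlabeled sample the acquisition function values most.
        x_star = max(D_U, key=lambda x: H(x, D_U, D_L, learner))
        y_star = query_label(x_star)                  # e.g. act and observe effect
        D_U.remove(x_star)
        D_L.append((x_star, y_star))                  # D_L <- D_L ∪ ⟨x, y⟩
    return learner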
" }, { "heading": "2.1 PROBLEM SET-UP", "text": "Our work is at the unique nexus of active learning, meta-learning, and deep reinforcement learning, with the objective of learning the Q-function as an acquisition function that describes the expected future information gained when taking action u in state x, given a set of previously experienced states and actions. We define information gain as the percent decrease in the error of the objective (e.g., decrease in model error); a formal definition is provided in the Appendix. The Q-function is trained via a meta-learning strategy in which an agent interacts with environments sampled from a distribution of different scenarios to distill the most effective acquisition function for active learning of system dynamics. In our context, the state, X, of the active learning system is given by X = ⟨DU, DL, T̂ψ⟩, where DU consists of all possible state-action pairs, DL is the set of state-action pairs that the agent has already experienced, and T̂ψ is a neural network function approximator of the transition dynamics, parameterized by ψ and updated online as samples are collected. We note that this state, X, is distinct from the state, x, of the robotic (or other) system we are controlling. The reward, R(i), is proportional to the reduction of the mean squared error (MSE) loss of T̂ψ at time step i." }, { "heading": "3 SAFE META-LEARNING ARCHITECTURE", "text": "Several key components are vital for learning about an unknown system in a timely manner. First, an encoded representation of the context of the new dynamics is important for determining where exploration should be focused and which actions may be beneficial for gaining information about the unknown dynamics. Second, a range of prior experiences in active learning should be leveraged to best inform which actions elicit the most information in a novel context within a distribution of tasks. We seek to develop a framework with these key attributes to enable sample-efficient and computationally lightweight active learning. An overview of our system is shown in Fig. 1." }, { "heading": "3.1 META-LEARNING ALGORITHM", "text": "To infer the Q-function for an action (i.e. the acquisition function), we meta-learn over a distribution of altered dynamics as described in Algorithm 1. For each episode, we sample from this distribution of altered dynamics and limit each episode to M time steps, tuned to collect enough data to accurately learn our approximate dynamics, T̂ψ, a neural network with parameters ψ. We utilize Q-learning to infer the Q-function, which is represented by a DQN. In our training scheme, we search over the action space via a MILP, as described in Section 3.2, and select the action set, U^(t:T), which maximizes the Q-function while satisfying safety constraints.\nThe acquisition Q-function, Qθ, described in Eq. 1, is trained via Deep Q-Learning (Ganger et al., 2016) with target network Qφ. During training, our Deep Q-Learning framework is augmented to account for safety. The learned acquisition function, Qθ, is utilized by our policy (Eq. 1), which is solved via a safety-constrained MILP solver.
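A compact sketch of the Deep Q-Learning update used to train the acquisition function follows (cf. lines 13–17 of Algorithm 1 below); the PyTorch usage is a stand-in, and the network and optimizer definitions are assumed to exist elsewhere.

import torch

def q_update(q_theta, q_phi, optimizer, batch, gamma=0.99, tau=0.01):
    """One acquisition-function update: regress Q_theta toward the Bellman
    target y = R + gamma * Q_phi(pi(Z'), Z'), then softly track the target
    network (phi <- tau*theta + (1 - tau)*phi). All shapes are illustrative."""
    Z, U, R, Z_next, U_next = batch       # embeddings, actions, rewards
    with torch.no_grad():
        # U_next stands in for pi(Z'), the MILP-selected safe action set.
        y = R + gamma * q_phi(Z_next, U_next)
    loss = torch.nn.functional.mse_loss(q_theta(Z, U), y)
    optimizer.zero_grad()
    loss.backward()                        # Bellman residual backprops through
    optimizer.step()                       # the LSTM encoding layers
    with torch.no_grad():
        for p_t, p in zip(q_phi.parameters(), q_theta.parameters()):
            p_t.mul_(1.0 - tau).add_(tau * p)   # Polyak averaging
    return loss.item()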
The reward, R(t), for taking a set of actions in a given state is defined as the percent decrease in the MSE error of the model, T̂ψ. This Bellman loss is backpropagated through the Q-value linear and ReLU output layers and through the LSTM encoding layers. π(Z^(t+1)) is the set of actions, U^(t+1), determined by maximizing Eq. 1, which we describe in Section 3.2. The dynamics model, T̂ψ, is retrained with each new set of state-action pairs.\nAlgorithm 1 Meta-learning for training 1: Randomly initialize Qθ and Qφ with weights θ = φ 2: Initialize replay buffer, D 3: for episode = 1 to N do 4: Initialize T̂ψ based on the meta-learning distribution 5: Collect a small initial set of state-action pairs, U(0), X(0) 6: Train T̂ψ on the initial set 7: for t = 1 to M do 8: Forward pass on the encoder to obtain Z^(t) 9: Select U(t) from Eq. 1 10: Execute actions U(t) + N with exploration noise N; observe states X(t+1) 11: Retrain T̂ψ on X(0:t), U(0:t) and observe reward R(t) 12: D ← D ∪ ⟨U(t), X(t), X(t+1), R(t)⟩ 13: Sample a batch of transitions from D 14: Perform a forward pass through the encoder to obtain Z^(t+1) 15: Calculate y(t) = R(t) + γQφ(π(Z^(t+1)), Z^(t+1)) 16: Update Qθ according to (y(t) − Qθ(U^(t), Z^(t))) 17: Qφ ← τQθ + (1 − τ)Qφ 18: end for 19: end for\nAlgorithm 2 Meta-learning for testing 1: Draw a test example from the distribution 2: Initialize T̂ψ 3: for t = 1 to M do 4: Perform a forward pass through the encoder to get Z^(t) 5: Select actions, U(t), according to Eq. 1 6: Execute actions, U(t); observe states X(t+1) and reward R(t) 7: Retrain T̂ψ on X(0:t+1), U(0:t+1) 8: end for" }, { "heading": "3.2 MIXED-INTEGER LINEAR PROGRAM", "text": "Our objective (Eq. 1) is to maximize both the probability of the system remaining in a safe configuration and the information gained along the finite trajectory horizon from [t, t + T):\nU^(t:t+T)* = argmax_{U^(t:t+T)} λ1[Qφ(U^(t:t+T), Z) − Q̄θ(·, Z)] + Pr{‖x^(t+T) − x^r‖_1 ≤ r}   (1)\nQφ(U^(t:t+T), Z) describes the expected information gained along the trajectory when the set of actions U^(t:t+T) is taken in the context of the embedding Z, and Q̄θ(·, Z) is the expected Q-value in state Z, which we discuss further in Section 3.3. λ1 is a hyper-parameter that can be tuned to adjust the tradeoff between safety and information gain. Pr{‖x^(t+T) − x^r‖_1 ≤ r} is the probability of remaining within the hyper-ellipsoid of safety. We derive a linearization of this term, based on an assumption of Gaussian dynamics, which can then be solved via MILP.\nDefinition of Safety - We define a hyper-ellipsoid of safety, parameterized by the safe state x^r and radius r, encompassing all known safe system states. In the case of an aircraft, x^r would be straight and level flight, and the hyper-ellipsoid would include deviations considered safe. Additionally, we assume that our model error comes from a Gaussian distribution with known mean and variance. This assumption allows us to impose safety constraints that can either be enforced with a probability of one, or relaxed to increase the potential for information gain.
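A small numerical sketch of the Gaussian chance-constraint idea used to linearize the safety term follows (cf. Eq. 2 below): with Gaussian model error, a 1 − ε guarantee reduces to a deterministic margin via the inverse normal CDF. The per-dimension check and the state propagation here are simplified stand-ins for the paper's full derivation.

import numpy as np
from scipy.stats import norm

def safe_with_prob(x_pred_mean, x_pred_std, x_r, r, eps=0.05):
    """Check Pr{ |x^(t+T) - x^r| <= r } >= 1 - eps per state dimension,
    assuming Gaussian prediction error. x_pred_mean/std: predicted terminal
    state mean and std per dimension; x_r, r: safe state and box radii.
    A simplified per-dimension stand-in for the linearized constraint."""
    k = norm.ppf(1.0 - eps)              # Phi^{-1}(1 - eps), e.g. ~1.645
    # Worst-case deviation at confidence 1 - eps must stay inside the box.
    margin = np.abs(x_pred_mean - x_r) + k * x_pred_std
    return np.all(margin <= r)

# Toy usage: 3-dimensional state around the safe point.
print(safe_with_prob(np.array([0.1, -0.2, 0.0]), np.array([0.05, 0.1, 0.02]),
                     np.zeros(3), np.array([0.5, 0.5, 0.1]), eps=0.05))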
By assuming that our model error originates from a Gaussian distribution, we can linearize the probability constraint described in Eq. 9 for inclusion in the MILP (see Appendix for the full derivation). Here, Φ^(−1) is the inverse cumulative distribution function of the standard normal distribution and 1 − ε_d denotes the probability level. σ represents the uncertainty in the dynamics network parameters, d indexes the d-th row, and j indexes the j-th column. Δ_d^(t:t+T) = x^r_d − [βx^(t)]_d, where β defines the evolution of a state (see Appendix A.4.1 for more detail). We compute the uncertainty, σ, of our network via bootstrapping (Efron, 1979; Franke & Neumann, 1998), which we describe further in the Appendix. We guarantee a minimum probability of safety by requiring the confidence level, 1 − ε_d, to be at or above our minimum desired probability of safety. This procedure ensures that our algorithm must select a safety constraint that meets or exceeds this minimum safety requirement. Note: the hyper-ellipse, when linearized, is a hyper-rectangle.\nPr{‖x^(t+T) − x^r‖_1 ≤ r} → ‖Φ^(−1)(1 − ε_d)√(Σ_j σ²_{d,j} x_j^(t)² + Σ_j σ²_{d,j} U_j^(t:t+T)² + Γ_t U^(t:t+T)²) − Δ_d^(t:t+T)‖_1 < r_d, ∀d   (2)\nOur policy takes a set of information-rich actions at time t, potentially leaving the d-dimensional hyper-ellipsoid of safety, while guaranteeing with probability 1 − ε that the system will return to a safe state after T time steps. Thus, our T-step propagation allows the system to deviate from the safe region, if desired, to focus on actively learning, so long as it can return to the safe region with high confidence." }, { "heading": "3.3 AN LSTM Q-FUNCTION AS A LINEAR PROGRAM", "text": "We leverage an LSTM as a function approximator for Q-learning. To blend our Q-function with the MILP, we pass the LSTM output embedding, which serves as a learned representation of the sample history, through a fully connected layer with ReLU activations.¹ This design enables us to backpropagate the Bellman residual through the output layers encoded in the MILP all the way to the LSTM inputs. Thus we can leverage the power of an LSTM and mathematical programming in a seamless framework.² Given our meta-learned acquisition function and chance-constrained optimization framework, we can now perform probabilistically-safe active learning. Algorithm 2 describes how we perform our online, safe, and active learning. Intuitively, our algorithm initializes a new dynamics model to represent the unknown or altered dynamics, and we iteratively sample information-rich, safe actions via our MILP policy, update our dynamics model, and repeat.\n¹We include the mean and standard deviation of previously collected states and actions, as we find the first and second moments of the data to be helpful additional features. ²Details on the hyperparameter settings, including learning rates and layer sizes, can be found in the Appendix, along with the derivation of the linearization of the Q-function for inclusion in the linear program.
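For reference, the standard "big-M" trick for encoding a ReLU unit as mixed-integer linear constraints is sketched below in LaTeX; this is the textbook encoding of piecewise-linear networks into a MILP, and the paper's exact formulation (in its appendix) may differ in detail.

% Big-M encoding of z = ReLU(a) = max(a, 0) for a bounded pre-activation
% a = w^T h + b with |a| <= M, using one binary indicator delta:
z \ge a, \qquad z \ge 0, \qquad z \le a + M(1 - \delta), \qquad z \le M\delta, \qquad \delta \in \{0, 1\}.
% If delta = 1 the unit is "active" (z = a >= 0); if delta = 0 it is
% "inactive" (z = 0 >= a). Stacking these constraints layer by layer lets
% the MILP reason exactly about the network's piecewise-linear output.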
" }, { "heading": "3.3 AN LSTM Q-FUNCTION AS A LINEAR PROGRAM", "text": "We leverage an LSTM as a function approximator for Q-learning. To blend our Q-function with the MILP, we pass the LSTM output embedding, which serves as a learned representation of the sample history, through a fully connected layer with ReLU activations.1 This design enables us to backpropagate the Bellman residual through the output layers encoded in the MILP all the way to the LSTM inputs. Thus, we can leverage the power of an LSTM and mathematical programming in a seamless framework.2 Given our meta-learned acquisition function and chance-constrained optimization framework, we can now perform probabilistically-safe active learning. Algorithm 2 describes how we perform our online, safe, and active learning. Intuitively, our algorithm initializes a new dynamics model to represent the unknown or altered dynamics, and we iteratively sample information-rich, safe actions via our MILP policy, update our dynamics model, and repeat." }, { "heading": "4 EXPERIMENTAL EVALUATION", "text": "We design two experiments to validate our approach's ability to safely and actively learn altered or unknown dynamics in real time. We compare our approach to several baselines and demonstrate that it is superior in terms of both information gain and computation time. More details on the experimental domains can be found in the Appendix." }, { "heading": "4.1 ASYNCHRONOUS DISTRIBUTED MICROELECTRODE THETA", "text": "Asynchronous distributed microelectrode theta stimulation (ADMETS) is a cutting-edge deep brain stimulation approach to treating seizure conditions that cannot be controlled via pharmacological methods. In ADMETS, a neuro-stimulator is implanted in the brain to deliver continuous electrical pulses to reduce seizures. However, there is no clear mapping from parameter values to reduction in seizures that applies to all patients, as the optimal parameter settings can depend on the placement of the device, the anatomy of an individual's brain, and other confounding factors. Further, a latent subset of parameters can cause negative side-effects.

1We include the mean and standard deviation of previously collected states and actions, as we find the first and second moments of the data to be helpful additional features.
2Details on the hyperparameter settings, including learning rates and layer sizes, can be found in the Appendix, along with the derivation for the linearization of the Q-function for inclusion in the linear program.

As a surrogate task for evaluating how well an algorithm can find the optimal parameters of an ADMETS device for in vivo epilepsy therapy, researchers have proposed leveraging experimental data collected from parameter sweeps of ADMETS devices in rats. In keeping with Ashmaig et al. (2018), we create simulation environments for six rats where, at each ADMETS parameter setting, the cognitive function of a rat was tested (i.e., the rat's ability to recall where a treat was located in a maze), as measured by a "memory score." The safe, active learning task is to determine the ADMETS parameters (i.e., signal amplitude) in the simulation environments that maximize each rat's memory scores without causing unwanted side effects (e.g., seizures), which can arise when the memory score drops below zero. The reward signal utilized by our meta-learner is the percent decrease in error between the predicted and actual optimal parameters." }, { "heading": "4.2 HIGH-DIMENSIONAL DOMAIN (RECOVERING DYNAMICS OF A DAMAGED AIRCRAFT)", "text": "Active learning algorithms can be ineffective in high-dimensional domains. As such, we seek to stress-test our algorithms in just such a domain: learning the nonlinear dynamics of a damaged aircraft online before the system enters an unrecoverable configuration (e.g., a spin or a crash). We base our simulation on theoretical damage models from prior work describing the full equations of motion (Watkiss, 1994; Zhang et al., 2017; Ouellette, 2010) within the Flightgear virtual environment. The objective of this domain is to learn the altered dynamics that result from the damage while maintaining safe flight. We consider safe flight to be our designated safe state, $\vec{x}_r$. The aircraft takes an information-rich action, potentially resulting in a deviation outside of the $d$-dimensional hyper-ellipsoid of safety. The next action is constrained to guarantee that the plane returns to the hyper-ellipsoid of safety with probability $1-\epsilon$ via action $u^{(t+1)}$." }, { "heading": "4.3 BASELINE COMPARISONS", "text": "We evaluate against the following baselines in active learning and Bayesian optimization. The acquisition function baselines, Epistemic Uncertainty (Hastie et al., 2017) and Maximizing Diversity (Schrum & Gombolay, 2020), are embedded in our safety framework, therefore providing a head-to-head comparison between our meta-learned acquisition function and these active learning heuristics.

• Epistemic Uncertainty (Hastie et al., 2017) - This active learning metric selects the action that maximizes uncertainty.
• Maximizing Diversity (Schrum & Gombolay, 2020) - This acquisition function selects actions which maximize the difference between previously seen states and actions.
• Bayesian Optimization (BaO) (Ashmaig et al., 2018) - This algorithm was developed for the ADMETS domain (Section 4.1) and is based upon a Gaussian Process model.
• Meta Bayesian Optimization (Meta BO) (Wang et al., 2018a) - This approach meta-learns a Gaussian process prior offline over previously sampled data.
• Learning Active Learning (LAL) (Konyushkova et al., 2017) - This approach meta-learns an acquisition function leveraging hand-engineered features.

We benchmark against each of the above algorithms in both the ADMETS and high-dimensional domains. We do not compare to MAML (Finn et al., 2017) or Nagabandi et al. (2019), as these algorithms have no notion of safety or active learning." }, { "heading": "5 RESULTS", "text": "We empirically validate that our meta-active learning approach outperforms baselines across both the ADMETS and high-dimensional domains, in terms of its ability to actively learn latent parameters and its ability to safely perform these tasks. Even though BaO, Meta BO, and LAL, in contrast to our approach, do not consider safety, we are able to outperform those baselines across all metrics.

Active Learning – Results from both the ADMETS and the high-dimensional domains empirically validate that our algorithm more efficiently and safely learns the optimal control parameters (Fig. 4) and system dynamics (Fig. 5). In Figs. 4b-5b, we report the mean (standard deviation) for each measure and baseline where possible. In the ADMETS domain, our model selects an action which results in a 58% higher information gain than BaO and an 87% higher information gain than Meta BO on average. When compared to the state-of-the-art active learning baselines, our method performs 41% better than Maximizing Diversity and 98% better than Uncertainty in terms of information gain.

In the high-dimensional domain, we achieve a 49% improvement over Hastie et al. (2017) and a 46% improvement over both Schrum & Gombolay (2020) and Konyushkova et al. (2017). We point out that our approach outperforms the active learning heuristics (i.e., Maximizing Diversity and Epistemic Uncertainty) when they are run inside our chance-constrained framework for a direct comparison. This result validates that meta-learning an acquisition function is a necessary and beneficial component of our framework.

Computation Time – We demonstrate that our method also outperforms previous work in terms of computation time. Across both domains, our approach not only achieves a more efficient reduction in model error and improvement in information gain; we are also faster than all baselines in the more challenging, high-dimensional environment (Fig. 5c). In the lower-dimensional ADMETS environment (Fig. 4c), BaO has a slight advantage in computation time, but our algorithm trades this time for a 58% higher information gain over BaO. Additionally, we are 68x faster than LAL and 61x faster than Meta BO, the two other meta-learning approaches we benchmark against.

Safety – In the high-dimensional domain, we empirically validate that, for the information results we report in Fig. 5, we can achieve an 87% probability that the aircraft will return to the safe region (Fig. 5a). As shown in Fig. 5b, only Maximizing Diversity (85% safe) and Epistemic Uncertainty (78% safe) allow for safety constraints to be imposed. Our algorithm thus outperforms the baselines in safety as well as information gain. When we maximize for safety (i.e., λ = 0), our algorithm is able to achieve a 99.9% safe return to the hyper-ellipsoid, even without a ground-truth dynamics model.

In the ADMETS domain, we find that our algorithm achieves a 2.3% higher guarantee of safety compared to Maximizing Diversity.
Our baseline Uncertainty achieves an equivalent safety guarantee to our algorithm, yet our algorithm achieves a 98% greater information gain, placing it at the Pareto front in terms of safety and information gain. We outperform LAL, BaO, and Meta BO in information gain and computation time under our safety constraints, even though these algorithms are unconstrained.

We show in Fig. 6 the results of our algorithm in both domains when maximizing only safety and removing the active learning component from our objective. Without the meta-learned active learning function, our algorithm does not explore as efficiently and therefore achieves a lower model accuracy at each time step, demonstrating that our meta-learned acquisition function is an important component of our architecture." }, { "heading": "5.1 DISCUSSION", "text": "Through our empirical investigation, we have demonstrated that our meta-learned acquisition function, operating within a chance-constrained optimization framework, outperforms prior work in active learning, meta-learning, and Bayesian optimization (Hastie et al., 2017; Schrum & Gombolay, 2020; Ashmaig et al., 2018; Konyushkova et al., 2017; Wang et al., 2018a). Specifically, we are able to simultaneously achieve an improvement in information gain via increased sample efficiency and a decrease in computation time. We achieve a 46% increase in information gain while still achieving a 20% speedup in computation compared to active learning baselines, and a 60x faster computation time compared to our meta-learning baseline. Our novel deep learning architecture demonstrates a unique ability to combine the power of feature learning across time-series data within an LSTM neural network and the utility of deep Q-learning, embedded in a chance-constrained mathematical optimization that explicitly trades off safety and active learning. Our approach allows for minimum safety guarantees while also maximizing the information gained based on a learned representation of the sample history." }, { "heading": "6 RELATED WORK", "text": "Our work lies at the crossroads of active learning, meta-learning, and safe learning. We discuss the contributions of our work and why our approach is novel in light of prior work.

Active Learning - Active learning acquisition functions provide heuristics for selecting the candidate unlabeled training sample that, if the label were known, would provide the most information to the model being learned (Burbidge et al., 2007; Hasenjager & Ritter, 1998; Cai et al., 2017; Hastie et al., 2017). For example, in Hastie et al. (2017), the action is selected that the learner is least certain about. In work by Ashmaig et al. (2018), the authors utilize the acquisition function Expected Improvement (EI) to balance exploration versus exploitation to determine the optimal stimulation parameters in ADMETS.

Prior literature has also investigated on-the-fly active learning and meta-active learning (Bachman et al., 2016; Konyushkova et al., 2017). Konyushkova et al. (2017) describe the algorithm Learning Active Learning (LAL). The authors present a meta-learning method for learning an acquisition function in which a regressor is trained to predict the reduction in model error of candidate samples via hand-engineered features. Wang et al. (2018a) alternatively consider a Gaussian-process-based method to meta-train an acquisition function on a distribution of tasks.
They show that their method is capable of extracting structural properties of the objective function for improved data-efficiency. Work by Geifman & El-Yaniv (2018) attempts to actively learn the neural network architecture that is most appropriate for a given task, e.g., active learning. Pang et al. (2018) additionally proposed a method to learn an acquisition function that generalizes to a variety of classification tasks. Yet, this work has only been demonstrated for classification.

Meta-Learning for Dynamics - Prior work has attempted to address the problem of learning altered dynamics via meta-learning (Clavera et al., 2009). Belkhale et al. (2020) investigate a meta-learning approach to learn the altered dynamics of an aircraft carrying a payload; the authors train a neural network on prior data to predict environmental and task factors to inform how to adapt to new payloads. Finn et al. (2018) present a meta-learning approach to quickly learning a control policy. In this approach, a distribution over prior model parameters that are most conducive to learning the new dynamics is meta-learned offline. While this approach provides fast policies for learning new dynamics, it does not explicitly reason about sample efficiency or safety.

Safe Learning - Prior work has investigated safe learning in the context of safe Bayesian optimization and safe reinforcement learning. For example, Sui et al. (2015) develop the algorithm SafeOpt, which balances exploration and exploitation to learn an unknown function; however, this approach makes significant limiting assumptions about the underlying nature of the task. Turchetta et al. (2016) address the problem of safely exploring an MDP by defining an a priori unknown safety constraint that is updated during exploration, and Zimmer et al. (2018) utilize a Gaussian process for safely learning time-series data. However, these approaches do not incorporate knowledge from prior data to increase sample efficiency, limiting their ability to choose the optimal action. Schrum & Gombolay (2020) attempt to overcome this problem by employing a novel acquisition function, Maximizing Diversity, which is utilized to quickly learn altered system dynamics in a chance-constrained framework. Yet, the hand-engineered acquisition function limits the capabilities of this approach." }, { "heading": "7 CONCLUSION", "text": "In this paper, we demonstrate a state-of-the-art meta-learning approach to active learning for control. By encoding the context of the dynamics via an LSTM and learning a Q-function that predicts the expected information gain of taking a given action, which we encode into a mixed-integer optimization framework, our framework is able to efficiently and safely learn the nature of the altered dynamics. We compare our approach to baseline acquisition functions and demonstrate that ours is superior in both computation time and information gain, achieving a 46% increase in information gain while still achieving a 20% speedup in computation time compared to state-of-the-art acquisition functions, and more than a 58% higher information gain compared to Bayesian approaches." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 DETAILS ON DOMAINS", "text": "Here we describe in more detail the input and output spaces of the ADMETS and high-dimensional domains, as well as other details relevant to implementation."
}, { "heading": "A.1.1 AIRCRAFT", "text": "Our simulation consists of ten states, i.e., forward velocity ($u$), vertical velocity ($w$), pitch rate ($q$), pitch angle ($\theta$), sideslip angle ($\beta$), roll rate ($p$), yaw rate ($r$), roll angle ($\psi$), yaw angle ($\phi$), and altitude ($Z$). The control inputs are elevator ($\Delta_e$), thrust ($\Delta_t$), aileron ($\Delta_a$), and rudder ($\Delta_r$). Our sampling rate is 20 Hz." }, { "heading": "A.1.2 ADMETS", "text": "The input parameter space is one-dimensional and consists of the voltage amplitude ($va$). The output space is a score quantifying memory, referred to as the discrimination score ($ds$). The objective of this domain is to maximize the discrimination area, which is defined as $da := ds \cdot va$. More detail on this domain and the data on which the simulation is based can be found in Ashmaig et al. (2018)." }, { "heading": "A.2 DEFINITION OF INFORMATION GAIN", "text": "We define information gain, $I$, at time step $t$ as the percent decrease in the error of the objective, as described in Eq. 3. $e^{(t)}$ is the error of the objective at time step $t$. In the high-dimensional domain, $e^{(t)}$ is defined as the mean squared error of the model, $e^{(t)} = \frac{1}{N}\sum_{i=1}^{N}(x^{(i)} - \hat{x}^{(i)})^2$, where $x$ is the ground-truth state and $\hat{x}$ is the state predicted by the model. In the ADMETS domain, $e^{(t)}$ is defined as the L1 norm of the difference between the predicted and ground-truth optimal parameters ($e^{(t)} = \|da - \hat{da}\|_1$). During offline training, the ground truth can be obtained from the known model.

$$I^{(t+1)} = \frac{e^{(t)} - e^{(t+1)}}{e^{(t)}} \quad (3)$$

" }, { "heading": "A.3 BASELINE ACQUISITION FUNCTIONS", "text": "" }, { "heading": "A.3.1 MAXIMIZING DIVERSITY", "text": "We compare our method to the acquisition function, Maximizing Diversity, presented by Schrum & Gombolay (2020). Here, $u^{(i)}$ and $x^{(i)}$ are actions and states that the robot has previously experienced, and $\hat{T}_\psi$ is the current dynamics model.

$$u^* = \arg\max_{u \in U} \sum_{i=1}^{N} \left\| u - u^{(i)} \right\|_1 + \beta \left\| \hat{T}_\psi(x, u) - x^{(i)} \right\|_1 \quad (4)$$

" }, { "heading": "A.3.2 MAXIMIZING UNCERTAINTY", "text": "This active learning metric, described by Hastie et al. (2017), quantifies the uncertainty in the output of the model for each training example, as described in Eq. 5. Here, $\hat{T}_z(u)$ is the $z$th dynamics model and $\bar{T}$ is the average across the $Z$ models.

$$u^* = \arg\max_{u \in U} \frac{1}{Z} \sum_{z=1}^{Z} \left\| \bar{T} - \hat{T}_z(u) \right\|_1 \quad (5)$$

" }, { "heading": "A.4 ADDITIONAL DETAILS ON MIXED-INTEGER LINEAR PROGRAM FORMULATION", "text": "In this section, we provide additional details on the linearization of our probability constraints and objective function for integration into a linear programming formulation." }, { "heading": "A.4.1 DYNAMICS MODEL REPRESENTATION", "text": "In practice, in the high-dimensional domain we find that $\hat{T}_\psi$ can be represented as a single-layer perceptron (i.e., linear regression), which advantageously is computationally efficient in domains that require fast computation times. We adopt a multi-layer perceptron with ReLU activations in the ADMETS domain. Our inferred dynamics therefore evolve according to $\beta$ and $\Gamma$ as in Eq. 6. $A$ describes the evolution of a state with no input, $B$ the change in the state due to an action at time $t$, $\vec{u}^{(t)}$, and $I$ is the identity matrix.

$$\vec{X}^{(t+1:t+T)} = \beta \vec{x}^{(t)} + \Gamma \vec{U}^{(t:t+T)} \quad (6)$$

$$\beta = \begin{bmatrix} A^2 \\ A^3 \\ \vdots \\ A^T \end{bmatrix} \quad (7)$$

$$\Gamma = \begin{bmatrix} AB & B & 0 & 0 & \cdots & 0 \\ A^2B & AB & B & 0 & \cdots & 0 \\ \vdots & & & \ddots & & \vdots \\ A^{T-1}B & A^{T-2}B & A^{T-3}B & \cdots & \cdots & B \end{bmatrix} \quad (8)$$
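A small sketch of how the stacked matrices beta and Gamma of Eqs. 6-8 can be assembled from A and B is given below; the block layout follows the equations above (first blocks A^2 and AB, respectively), and all dimensions and dynamics are illustrative.

```python
import numpy as np

def build_beta_gamma(A, B, T):
    # beta stacks A^2 ... A^T; Gamma is block-triangular with rows
    # [A^i B, A^(i-1) B, ..., B, 0, ..., 0] as in Eq. 8.
    n, m = B.shape
    beta = np.vstack([np.linalg.matrix_power(A, k) for k in range(2, T + 1)])
    Gamma = np.zeros(((T - 1) * n, T * m))
    for i in range(T - 1):
        for j in range(i + 2):  # actions u^(t) ... u^(t+i+1) contribute
            Gamma[i * n:(i + 1) * n, j * m:(j + 1) * m] = (
                np.linalg.matrix_power(A, i + 1 - j) @ B)
    return beta, Gamma

rng = np.random.default_rng(1)
n, m, T = 3, 2, 4
A = 0.9 * np.eye(n) + 0.01 * rng.random((n, n))
B = rng.random((n, m))
beta, Gamma = build_beta_gamma(A, B, T)
x_t, U = rng.random(n), rng.random(T * m)
X_future = beta @ x_t + Gamma @ U  # Eq. 6: stacked predicted states
```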
" }, { "heading": "A.4.2 LINEARIZATION OF PROBABILITY CONSTRAINTS", "text": "To include our safety constraints in a mixed-integer linear programming formulation, we remove the non-linearities via conservative assumptions and other techniques. Our safety constraints are defined in Eq. 9, and the dynamics evolve according to Equations 6-8. $d$ indexes the $d$th row and $j$ the columns.

$$\left\| \Phi^{-1}(1-\epsilon_d)\sqrt{\sum_j \sigma^2_{d,j} x^{(t)2}_j + \sum_j \sigma^2_{d,j} U^{(t:t+T)2}_j} + \Gamma \vec{U}^{(t:t+T)} - \Delta^{(t:t+T)}_d \right\|_1 < r_d \quad (9)$$

The following conservative assumption is made to linearize the sum of squares in Eq. 9:

$$0 \le \sqrt{\sum_j \sigma^2_{d,j} x^2_j} \le \sum_j \sigma_{d,j} |x_j| \quad (10)$$

We utilize the binary decision variable, $\delta_{p,d} \in \{0, 1\}$, as a probability selector variable to linearize the absolute value in Eq. 9. $M$ represents a large positive number and $\bar{\Gamma}$ is the point estimate of the dynamics. This linearization technique, combined with the conservative assumption in Eq. 10, results in the following linear equations (Eqs. 11-13), which can be integrated into a mixed-integer linear programming formulation. $E$ is the set of "probability levels", e.g., $E = \{0.75, 0.8, ...\}$, where $\min E$ defines the minimum enforced probability of safety.

$$-M\delta_{p,d} - \Phi^{-1}(1-p) \sum_j \sigma_{d,j} \tilde{U}^{(t:t+T)}_j - \bar{\Gamma}\vec{U}^{(t:t+T)} < r_d + \Delta^{(t:t+T)}_d + \sum_j \sigma_{d,j} |x^{(t)}_j| \quad (11)$$

$$-M\delta_{p,d} + \Phi^{-1}(1-p) \sum_j \sigma_{d,j} \tilde{U}^{(t:t+T)}_j + \bar{\Gamma}\vec{U}^{(t:t+T)} < r_d - \Delta^{(t:t+T)}_d - \sum_j \sigma_{d,j} |x^{(t)}_j| \quad (12)$$

$$\sum_{p \in E} \delta_{p,d} = |E| - 1, \;\forall d \in D \quad (13)$$

A.4.3 VARIANCE ESTIMATION

We compute the uncertainty of our network via bootstrapping. We follow the method proposed in Efron (2020) and randomly redraw bootstrap training samples with replacement from our set of training data. This technique has been verified by Franke & Neumann (1998) to be an effective method for approximating the uncertainty of neural networks. The components of $\sigma$ are calculated according to Eq. 14. Here, $\bar{x}$ is the average of the bootstrapped networks, $x^{(b)}_{d,j}$ is a single bootstrapped network, and $B$ is the number of bootstrapped networks.

$$\sigma_{d,j} = \sqrt{\frac{\sum_{b=1}^{B} (\bar{x}_{d,j} - x^{(b)}_{d,j})^2}{B - 1}} \quad (14)$$

" }, { "heading": "A.4.4 LINEARIZATION OF Q-FUNCTION", "text": "Our Q-function includes a non-linear ReLU activation function, which is linearized to be included in the mixed-integer linear programming formulation. Equations 15-17 define the equations for a neural network with ReLU activations. $\xi = [[\vec{U}^{(t:T)}]^\top, [\vec{Z}]^\top]$ is the input to the Q-function, and $\omega^{(l)}_{j,i}$ is the connection between neurons $j$ and $i$ between layers $l$ and $l+1$.

$$Q(\vec{U}^{(t:T)}, \vec{Z}) = \sum_j \omega^{(2)}_{j,d} o^{(2)}_j, \;\forall d \in D \quad (15)$$

$$o^{(2)}_i = \sum_j \omega^{(1)}_{j,i} o^{(1)}_j \mathbb{1}\big(o^{(1)}_j \ge 0\big), \;\forall i \quad (16)$$

$$o^{(1)}_i = \sum_j \omega^{(0)}_{j,i} \xi^{(t)}_j \quad (17)$$

This formulation is linearized in Eqs. 18-20. $k_i \in \{0, 1\}$ is a binary indicator variable and $M$ represents a large positive number.

$$Mk_i - M + \sum_j \omega^{(0)}_{j,i} \xi^{(t)}_j \le 0 \le Mk_i + \sum_j \omega^{(0)}_{j,i} \xi^{(t)}_j \quad (18)$$

$$\sum_j \omega^{(0)}_{j,i} \xi^{(t)}_j - M \le o^{(1)}_i \le \sum_j \omega^{(0)}_{j,i} \xi^{(t)}_j + Mk_i \quad (19)$$

$$M - Mk_i \ge o^{(1)}_i \ge 0, \;\forall i \quad (20)$$
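For intuition, the sketch below encodes a single ReLU neuron with a standard big-M indicator formulation, in the spirit of Eqs. 18-20, using the PuLP modeling library; the weights, input, and big-M value are hypothetical and this is not the paper's actual Q-function encoding.

```python
from pulp import LpProblem, LpVariable, LpMaximize, LpBinary, lpSum, value

# Big-M encoding of o = ReLU(w . xi) for one neuron; M must upper-bound
# |w . xi| over the feasible inputs for the encoding to be valid.
w, xi_val, M = [1.0, -2.0, 0.5], [1.0, 0.3, -0.4], 100.0

prob = LpProblem("relu_big_m", LpMaximize)
xi = [LpVariable(f"xi_{j}") for j in range(3)]
o = LpVariable("o", lowBound=0)        # post-activation output
k = LpVariable("k", cat=LpBinary)      # k = 1 <=> neuron active (pre >= 0)
prob += o                              # arbitrary objective for the demo
for j in range(3):
    prob += xi[j] == xi_val[j]         # pin the input for this example
pre = lpSum(w[j] * xi[j] for j in range(3))
prob += o >= pre                       # with the next line: o = pre if k = 1
prob += o <= pre + M * (1 - k)
prob += o <= M * k                     # k = 0 forces o = 0 (and pre <= 0)
prob.solve()
print(value(o), max(0.0, sum(a * b for a, b in zip(w, xi_val))))  # 0.2 0.2
```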
" }, { "heading": "A.4.5 LINEARIZATION OF "RESETTING" TERM", "text": "In practice, we find that taking a final, "resetting" action at time $t+T$ by adding $z_3 = \|\vec{x}^{(T)} - \vec{x}_r\|$ to minimize the distance between the aircraft's state and $\vec{x}_r$ helps to ensure that the aircraft does not loiter along the boundary of safe operation, where a random perturbation could result in failure of the aircraft. We linearize the resetting term in our objective function, i.e., the difference between our designated safe state, $\vec{x}_r$, and the final state, $\vec{x}^{(T)}$, by maximizing $-(z^+ + z^-)$ subject to the constraint in Eq. 21. $z^+$ and $z^-$ are both positive continuous variables.

$$\vec{x}^{(T)} - \vec{x}_r = z^+ - z^- \quad (21)$$

$$z^+, z^- > 0 \quad (22)$$

" }, { "heading": "A.4.6 LINEARIZED OBJECTIVE", "text": "The resultant linearized objective is defined in Eq. 23. In the ADMETS domain, $\lambda_3$ is set to 0.

$$\vec{U}^{(t:T)*} = \arg\max_{\vec{U}^{(t:T)} \in \,\mathcal{U}^{(t:T)}} \left( \lambda_1 \sum_{j=1}^{H} \left( \omega^{(2)}_{j,d} \cdot o^{(2)}_j - \bar{Q}_\theta(\cdot, \vec{Z}) \right) + \lambda_2 \sum_{p \in E} (1 - \delta_{p,d})\, p - \lambda_3 (z^-_d + z^+_d) \right) \quad (23)$$

" }, { "heading": "B SAFETY", "text": "" }, { "heading": "B.0.1 ADMETS DOMAIN", "text": "The baselines used in the ADMETS domain did not have built-in safety guarantees. Therefore, when comparing against these baselines, we removed the safety constraints in our algorithm to make the comparison fair." }, { "heading": "B.0.2 HIGH-DIMENSIONAL DOMAIN", "text": "Safety constraints are necessary in the high-dimensional domain because, without these constraints, the aircraft would quickly go into an unrecoverable configuration. The epistemic uncertainty (Hastie et al., 2017) and diversity (Schrum & Gombolay, 2020) baselines can be integrated into our safety framework by linearizing the acquisition functions. Therefore, in our comparison of our algorithm versus the baselines, we enforced the safety constraints.

LAL (Konyushkova et al., 2017), however, is not an inherently safe method of active learning. To fairly compare this baseline to our method in the high-dimensional domain, we simulate the possible actions that can be selected by LAL and discard those that are not safe (i.e., the actions that take the aircraft out of the cylinder of safety). Therefore, LAL can only select an action considered safe by our definition of safety." }, { "heading": "B.1 SENSITIVITY ANALYSIS", "text": "To robustly evaluate our method compared to the baselines, we vary the hyperparameters of the Maximizing Diversity approach of Schrum & Gombolay (2020) and of our meta-learned function, to show that our function is robust and still superior despite changes in hyperparameters. The results of this hyperparameter sweep are shown in Table 1. The hyperparameter we vary for Maximizing Diversity is the number of previous training samples that we compare to. The information gain increases as the number of samples increases, up to a point at which the selected sample tends to converge to the mean of the previously collected samples, causing the information gain to fall. For our method, we vary the number of hidden neurons in our Q-function as the hyperparameter of interest, as it governs the trade-off between computational speed and function approximation power.

In our approach, the addition of a hidden neuron adds an additional integer variable, resulting in an increase in computation time, as demonstrated in Table 1. However, an addition of 25 neurons only increases the computation time by 3.7% while providing a 50% increase in information gain. In comparison, Schrum & Gombolay (2020)'s approach yields a 4% increase in information gain while trading off a 6% loss in computation time. Likewise, increasing the number of bootstrapped models in Hastie et al. (2017)'s approach results in a 35% increase in information gain and a 37.5% increase in computation time. Therefore, with our approach we are able to gain more information without large increases in computation time." }, { "heading": "B.2 ADDITIONAL RESULTS", "text": "We present results for each damage condition in the high-dimensional domain, comparing our approach to the baseline acquisition functions diversity (Schrum & Gombolay, 2020) and epistemic uncertainty (Hastie et al., 2017). Our approach outperforms the baselines for each damage condition." }, { "heading": "B.3 HYPERPARAMETERS", "text": "The hyperparameters employed for each domain are listed in Table 3.
We show the learning rates for each domain, which were determined empirically to be effective values. The sizes of the LSTM hidden layer and of each of the two layers of the Q-function are also presented. $\tau$ is the soft-update coefficient for updating the target network." } ]
2020
null
SP:a27b9b91520ec4d7e1cabce40411ff8a10dea9c8
[ "This work uses the idea of variational inference to map categorical data to continuous space affording the use of normalizing flows. Authors use several ideas to increase their framework's applicability--factorized distribution assumption, use of multi-scale architecture for step-generation, and permutation invariant components—achieving favorable results on several problems. The approach seems to be especially useful when data is non-sequential.", "The paper considers the problem of modeling discrete distributions with normalizing flows. Authors propose a novel framework “Categorical Normalizing Flows”, i.e CNF. By jointly modeling a mapping to continuous latent space, and the likelihood of flows CNF solves some of the bottlenecks in current algorithms. With experiments on some synthetic domains, and benchmarking tasks like Zinc250, the authors empirically demonstrate that CNF-based algorithms perform competitively and often improve significantly on related approaches like Latent NF, discrete flows." ]
Despite their popularity, to date, the application of normalizing flows on categorical data stays limited. The current practice of using dequantization to map discrete data to a continuous space is inapplicable as categorical data has no intrinsic order. Instead, categorical data have complex and latent relations that must be inferred, like the synonymy between words. In this paper, we investigate Categorical Normalizing Flows, that is, normalizing flows for categorical data. By casting the encoding of categorical data in continuous space as a variational inference problem, we jointly optimize the continuous representation and the model likelihood. Using a factorized decoder, we introduce an inductive bias to model any interactions in the normalizing flow. As a consequence, we do not only simplify the optimization compared to having a joint decoder, but also make it possible to scale up to a large number of categories, which is currently impossible with discrete normalizing flows. Based on Categorical Normalizing Flows, we propose GraphCNF, a permutation-invariant generative model on graphs. GraphCNF implements a three-step approach, modeling the nodes, edges, and adjacency matrix stepwise to increase efficiency. On molecule generation, GraphCNF outperforms both one-shot and autoregressive flow-based state-of-the-art.
[ { "affiliations": [], "name": "Phillip Lippe" }, { "affiliations": [], "name": "Efstratios Gavves" } ]
[ { "authors": [ "Yuri Burda", "Roger Grosse", "Ruslan Salakhutdinov." ], "title": "Importance weighted autoencoders", "venue": "4th International Conference on Learning Representations, ICLR 2016 - Conference Track Proceedings, pages 1–14.", "year": 2016 }, { "authors": [ "Laurent Dinh", "Jascha Sohl-Dickstein", "Samy Bengio." ], "title": "Density estimation using Real NVP", "venue": "5th International Conference on Learning Representations, ICLR 2017, Toulon, France.", "year": 2017 }, { "authors": [ "Dheeru Dua", "Casey Graff" ], "title": "2019. UCI Machine Learning Repository [http://archive.ics.uci.edu/ml", "venue": null, "year": 2019 }, { "authors": [ "Dan Hendrycks", "Kevin Gimpel." ], "title": "Gaussian Error Linear Units (GELUs)", "venue": "arXiv preprint arXiv:1606.08415v3.", "year": 2016 }, { "authors": [ "Jonathan Ho", "Xi Chen", "Aravind Srinivas", "Yan Duan", "Pieter Abbeel." ], "title": "Flow++: Improving Flow-Based Generative Models with Variational Dequantization and Architecture Design", "venue": "Proceedings of the 36th International Conference on Machine Learning, volume 97, pages 2722–2730, Long Beach, California, USA. PMLR.", "year": 2019 }, { "authors": [ "Emiel Hoogeboom", "Taco S. Cohen", "Jakub M. Tomczak." ], "title": "Learning Discrete Distributions by Dequantization", "venue": "arXiv preprint arXiv:2001.11235v1.", "year": 2020 }, { "authors": [ "Emiel Hoogeboom", "Jorn W.T. Peters", "Rianne van den Berg", "Max Welling." ], "title": "Integer Discrete Flows and Lossless Compression", "venue": "Advances in Neural Information Processing Systems 32, pages 12134–12144, Vancouver, BC, Canada.", "year": 2019 }, { "authors": [ "John J Irwin", "Teague Sterling", "Michael M Mysinger", "Erin S Bolstad", "Ryan G Coleman." ], "title": "ZINC: A Free Tool to Discover Chemistry for Biology", "venue": "Journal of Chemical Information and Modeling, 52(7):1757–1768.", "year": 2012 }, { "authors": [ "Wengong Jin", "Regina Barzilay", "Tbmmi Jaakkola." ], "title": "Junction tree variational autoencoder for molecular graph generation", "venue": "35th International Conference on Machine Learning, ICML 2018, 5:3632–3648.", "year": 2018 }, { "authors": [ "Sungwon Kim", "Sang-Gil Lee", "Jongyoon Song", "Jaehyeon Kim", "Sungroh Yoon." ], "title": "FloWaveNet : A Generative Flow for Raw Audio", "venue": "Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 3370–3378, Long Beach, California, USA. PMLR.", "year": 2019 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba." ], "title": "Adam: A Method for Stochastic Optimization", "venue": "International Conference on Learning Representations (ICLR) 2015, San Diego, CA, USA.", "year": 2015 }, { "authors": [ "Diederik P. Kingma", "Prafulla Dhariwal." ], "title": "Glow: Generative Flow with Invertible 1x1 Convolutions", "venue": "Advances in Neural Information Processing Systems, volume 31, pages 10215–10224. Curran Associates, Inc.", "year": 2018 }, { "authors": [ "Diederik P. Kingma", "Tim Salimans", "Rafal Jozefowicz", "Xi Chen", "Ilya Sutskever", "Max Welling." ], "title": "Improved variational inference with inverse autoregressive flow", "venue": "Advances in Neural Information Processing Systems, 29:4743–4751.", "year": 2016 }, { "authors": [ "Diederik P Kingma", "Max Welling." 
], "title": "Auto-Encoding Variational Bayes", "venue": "2nd International Conference on Learning Representations, ICLR, Banff, AB, Canada.", "year": 2014 }, { "authors": [ "Ivan Kobyzev", "Simon Prince", "Marcus A. Brubaker." ], "title": "Normalizing Flows: Introduction and Ideas", "venue": "arXiv preprint arXiv:1908.09257v1.", "year": 2019 }, { "authors": [ "Henrique Lemos", "Marcelo Prates", "Pedro Avelar", "Luis Lamb." ], "title": "Graph colouring meets deep learning: Effective graph neural network models for combinatorial problems", "venue": "Proceedings - International Conference on Tools with Artificial Intelligence, ICTAI, volume 2019-Novem, pages 879–885. IEEE Computer Society.", "year": 2019 }, { "authors": [ "Yujia Li", "Oriol Vinyals", "Chris Dyer", "Razvan Pascanu", "Peter Battaglia." ], "title": "Learning deep generative models of graphs", "venue": "arXiv preprint arXiv:1803.03324.", "year": 2018 }, { "authors": [ "Renjie Liao", "Yujia Li", "Yang Song", "Shenlong Wang", "Will Hamilton", "David K Duvenaud", "Raquel Urtasun", "Richard Zemel." ], "title": "Efficient Graph Generation with Graph Recurrent Attention Networks", "venue": "H Wallach, H Larochelle, A Beygelzimer, F D’Alché-Buc, E Fox, and R Garnett, editors, Advances in Neural Information Processing Systems 32, pages 4255–4265. Curran Associates, Inc.", "year": 2019 }, { "authors": [ "Jenny Liu", "Aviral Kumar", "Jimmy Ba", "Jamie Kiros", "Kevin Swersky." ], "title": "Graph Normalizing Flows", "venue": "Advances in Neural Information Processing Systems, pages 13556–13566.", "year": 2019 }, { "authors": [ "Liyuan Liu", "Haoming Jiang", "Pengcheng He", "Weizhu Chen", "Xiaodong Liu", "Jianfeng Gao", "Jiawei Han." ], "title": "On the Variance of the Adaptive Learning Rate and Beyond", "venue": "International Conference on Learning Representations.", "year": 2020 }, { "authors": [ "Qi Liu", "Miltiadis Allamanis", "Marc Brockschmidt", "Alexander Gaunt." ], "title": "Constrained Graph Variational Autoencoders for Molecule Design", "venue": "S Bengio, H Wallach, H Larochelle, K Grauman, N Cesa-Bianchi, and R Garnett, editors, Advances in Neural Information Processing Systems 31, pages 7795–7804. Curran Associates, Inc.", "year": 2018 }, { "authors": [ "Tengfei Ma", "Jie Chen", "Cao Xiao." ], "title": "Constrained Generation of Semantically Valid Graphs via Regularizing Variational Autoencoders", "venue": "S Bengio, H Wallach, H Larochelle, K Grauman, N Cesa-Bianchi, and R Garnett, editors, Advances in Neural Information Processing Systems 31, pages 7113–7124. Curran Associates, Inc.", "year": 2018 }, { "authors": [ "Kaushalya Madhawa", "Katushiko Ishiguro", "Kosuke Nakago", "Motoki Abe." ], "title": "GraphNVP: An Invertible Flow Model for Generating Molecular Graphs", "venue": "arXiv preprint arXiv:1905.11600v1.", "year": 2019 }, { "authors": [ "Matt Mahoney." ], "title": "Large text compression benchmark", "venue": "Benchmark published at http://mattmahoney.net/dc/text.html.", "year": 2011 }, { "authors": [ "Mitch Marcus", "Grace Kim", "Mary Ann Marcinkiewicz", "Robert MacIntyre", "Ann Bies", "Mark Ferguson", "Karen Katz", "Britta Schasberger." ], "title": "The Penn Treebank: Annotating Predicate Argument Structure", "venue": "Proceedings of the Workshop on Human Language Technology, pages 114–119, Plainsboro, NJ. Association for Computational Linguistics.", "year": 1994 }, { "authors": [ "Stephen Merity", "Caiming Xiong", "James Bradbury", "Richard Socher." 
], "title": "Pointer Sentinel Mixture Models", "venue": "5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.", "year": 2017 }, { "authors": [ "Tomáš Mikolov", "Ilya Sutskever", "Anoop Deoras", "Hai-Son Le", "Stefan Kombrink", "Jan Cernocky." ], "title": "Subword language modeling with neural networks", "venue": "preprint (http://www.fit.vutbr.cz/imikolov/rnnlm/char. pdf), 8.", "year": 2012 }, { "authors": [ "Adam Paszke", "Sam Gross", "Francisco Massa", "Adam Lerer", "James Bradbury", "Gregory Chanan", "Trevor Killeen", "Zeming Lin", "Natalia Gimelshein", "Luca Antiga", "Alban Desmaison", "Andreas Kopf", "Edward Yang", "Zachary DeVito", "Martin Raison", "Alykhan Tejani", "Sasank Chilamkurthy", "Benoit Steiner", "Lu Fang", "Junjie Bai", "Soumith Chintala." ], "title": "PyTorch: An Imperative Style, High-Performance Deep Learning Library", "venue": "H Wallach, H Larochelle, A Beygelzimer, F d’ Alché-Buc, E Fox, and R Garnett, editors, Advances in Neural", "year": 2019 }, { "authors": [ "Jeffrey Pennington", "Richard Socher", "Christopher Manning." ], "title": "Glove: Global Vectors for Word Representation", "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, Doha, Qatar. Association for Computational Linguistics.", "year": 2014 }, { "authors": [ "Daniil Polykovskiy", "Alexander Zhebrak", "Benjamin Sanchez-Lengeling", "Sergey Golovanov", "Oktai Tatanov", "Stanislav Belyaev", "Rauf Kurbanov", "Aleksey Artamonov", "Vladimir Aladinskiy", "Mark Veselov", "Artur Kadurin", "Simon Johansson", "Hongming Chen", "Sergey Nikolenko", "Alan Aspuru-Guzik", "Alex Zhavoronkov." ], "title": "Molecular Sets (MOSES): A Benchmarking Platform for Molecular Generation Models", "venue": "arXiv preprint arXiv:1811.12823v3, pages 1–17.", "year": 2018 }, { "authors": [ "Ryan Prenger", "Rafael Valle", "Bryan Catanzaro." ], "title": "Waveglow: A flow-based generative network for speech synthesis", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 3617–3621. IEEE.", "year": 2019 }, { "authors": [ "Afshin Rahimi", "Trevor Cohn", "Timothy Baldwin." ], "title": "Semi-supervised User Geolocation via Graph Convolutional Networks", "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2009–2019, Melbourne, Australia. Association for Computational Linguistics.", "year": 2018 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed." ], "title": "Variational Inference with Normalizing Flows", "venue": "Proceedings of the 32nd International Conference on Machine Learning, volume 37, Lille, France.", "year": 2015 }, { "authors": [ "Michael Schlichtkrull", "Thomas N Kipf", "Peter Bloem", "Rianne van den Berg", "Ivan Titov", "Max Welling." ], "title": "Modeling Relational Data with Graph Convolutional Networks", "venue": "The Semantic Web, pages 593–607, Cham. Springer International Publishing.", "year": 2018 }, { "authors": [ "Chence Shi", "Minkai Xu", "Zhaocheng Zhu", "Weinan Zhang", "Ming Zhang", "Jian Tang." ], "title": "GraphAF: a Flow-based Autoregressive Model for Molecular Graph Generation", "venue": "International Conference on Learning Representations.", "year": 2020 }, { "authors": [ "Martin Simonovsky", "Nikos Komodakis." 
], "title": "GraphVAE: Towards Generation of Small Graphs Using Variational Autoencoders", "venue": "International Conference on Artificial Neural Networks, volume abs/1802.0, pages 412–422.", "year": 2018 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov." ], "title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting", "venue": "Journal of Machine Learning Research, 15:1929–1958.", "year": 2014 }, { "authors": [ "Esteban Tabak", "Eric Vanden Eijnden." ], "title": "Density estimation by dual ascent of the log-likelihood", "venue": "Communications in Mathematical Sciences, 8(1):217–233.", "year": 2010 }, { "authors": [ "Lucas Theis", "Aäron Van Den Oord", "Matthias Bethge." ], "title": "A note on the evaluation of generative models", "venue": "4th International Conference on Learning Representations, ICLR 2016 - Conference Track Proceedings.", "year": 2016 }, { "authors": [ "Jakub M. Tomczak", "Max Welling." ], "title": "Improving Variational Auto-Encoders using Householder Flow", "venue": "arXiv preprint arXiv:1611.09630v4.", "year": 2017 }, { "authors": [ "Dustin Tran", "Keyon Vafa", "Kumar Krishna Agrawal", "Laurent Dinh", "Ben Poole." ], "title": "Discrete Flows: Invertible Generative Models of Discrete Data", "venue": "Advances in Neural Information Processing Systems, pages 14692–14701.", "year": 2019 }, { "authors": [ "Benigno Uria", "Iain Murray", "Hugo Larochellehugo." ], "title": "RNADE: The real-valued neural autoregressive density-estimator", "venue": "Advances in Neural Information Processing Systems, volume 26, pages 2175–2183.", "year": 2013 }, { "authors": [ "Rianne Van Den Berg", "Leonard Hasenclever", "Jakub M. Tomczak", "Max Welling." ], "title": "Sylvester normalizing flows for variational inference", "venue": "34th Conference on Uncertainty in Artificial Intelligence 2018, UAI 2018, 1:393–402.", "year": 2018 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin." ], "title": "Attention is All you Need", "venue": "I Guyon, U V Luxburg, S Bengio, H Wallach, R Fergus, S Vishwanathan, and R Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc.", "year": 2017 }, { "authors": [ "Petar Veličković", "Guillem Cucurull", "Arantxa Casanova", "Adriana Romero", "Pietro Liò", "Yoshua Bengio." ], "title": "Graph Attention Networks", "venue": "International Conference on Learning Representations.", "year": 2018 }, { "authors": [ "Jiaxuan You", "Rex Ying", "Xiang Ren", "William Hamilton", "Jure Leskovec." ], "title": "GraphRNN: Generating Realistic Graphs with Deep Auto-regressive Models", "venue": "Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 5708–5717, Stockholmsmässan, Stockholm Sweden. PMLR.", "year": 2018 }, { "authors": [ "Jie Zhou", "Ganqu Cui", "Zhengyan Zhang", "Cheng Yang", "Zhiyuan Liu", "Lifeng Wang", "Changcheng Li", "Maosong Sun." ], "title": "Graph Neural Networks: A Review of Methods and Applications", "venue": "arXiv preprint arXiv:1812.08434.", "year": 2018 }, { "authors": [ "Zachary M. Ziegler", "Alexander M. Rush." ], "title": "Latent Normalizing Flows for Discrete Sequences", "venue": "Proceedings of the 36th International Conference on Machine Learning, volume 97, pages 7673–7682, Long Beach, California, USA. 
PMLR.", "year": 2019 }, { "authors": [ "Tran" ], "title": "2019) for their coupling layers and implemented it in PyTorch (Paszke et al., 2019). We use a discrete prior over the set elements which is jointly optimized with the flow. However, we experienced significant optimization issues due to the straight-through gradient estimator in the Gumbel Softmax. Across this paper, we experiment with the two optimizers Adam (Kingma and Ba, 2015) and RAdam", "venue": null, "year": 2015 }, { "authors": [ "Liu" ], "title": "2020), and experienced RAdam to work slightly better. The learning rate decay is applied every update and leads to exponential decay. However, we did not observe the choice of this hyperparameter to be crucial. Table 6: Hyperparameter overview for the set modeling experiments", "venue": null, "year": 2020 }, { "authors": [ "Ziegler", "Rush" ], "title": "2019) and split the dataset into sentences of a maximum length of 288. Furthermore, instead of an end-of-sentence token, the length is passed to the model and encoded by an external discrete prior which is created based on the sentence lengths in the training dataset. Text8 contains about 100M characters and has a vocabulary size of K = 27", "venue": null, "year": 2019 }, { "authors": [ "Mikolov" ], "title": "and split the dataset into 90M characters for training, and 5M characters each for validation and testing. We train and test the models on a sequence length of 256. In contrast to the previous two datasets, we use Wikitext103 as a word-level language dataset. First, we create a vocabulary and limit it to the most frequent 10,000 words in the training corpus. We thereby use pre-trained Glove (Pennington et al., 2014) embeddings to represent the words", "venue": null, "year": 2012 } ]
[ { "heading": "1 INTRODUCTION", "text": "Normalizing Flows have been popular for tasks with continuous data like image modeling (Dinh et al., 2017; Kingma and Dhariwal, 2018; Ho et al., 2019) and speech generation (Kim et al., 2019; Prenger et al., 2019) by providing efficient parallel sampling and exact density evaluation. The concept that normalizing flows rely on is the rule of change of variables, a continuous transformation designed for continuous data. However, there exist many data types typically encoded as discrete, categorical variables, like language and graphs, where normalizing flows are not straightforward to apply.\nTo address this, it has recently been proposed to discretize the transformations inside normalizing flows to act directly on discrete data. Unfortunately, these discrete transformations have shown to be limited in terms of the vocabulary size and layer depth due to gradient approximations (Hoogeboom et al., 2019; Tran et al., 2019). For the specific case of discrete but ordinal data, like images where integers represent quantized values, a popular strategy is to add a small amount of noise to each value (Dinh et al., 2017; Ho et al., 2019). It is unnatural, however, to apply such dequantization techniques for the general case of categorical data, where values represent categories with no intrinsic order. Treating these categories as integers for dequantization biases the data to a non-existing order, and makes the modeling task significantly harder. Besides, relations between categories are often multi-dimensional, for example, word meanings, which cannot be represented with dequantization.\nIn this paper, we investigate normalizing flows for the general case of categorical data. To account for discontinuity, we propose continuous encodings in which different categories correspond to unique, non-overlapping and thus close-to-deterministic volumes in a continuous latent space. Instead of pre-specifying the non-overlapping volumes per category, we resort to variational inference to jointly learn those and model the likelihood by a normalizing flow at the same time. This work is not the first to propose variational inference with normalizing flows, mostly considered for improving the flexibility of the approximate posterior (Kingma et al., 2016; Rezende and Mohamed, 2015; Van Den Berg et al., 2018). Different from previous works, we use variational inference to learn\na continuous representation z of the discrete categorical data x to a normalizing flow. A similar idea has been investigated in (Ziegler and Rush, 2019), who use a variational autoencoder structure with the normalizing flow being the prior. As both their decoder and normalizing flow model (complex) dependencies between categorical variables, (Ziegler and Rush, 2019) rely on intricate yet sensitive learning schedules for balancing the likelihood terms. Instead, we propose to separate the representation and relation modeling by factorizing the decoder both over the categorical variable x and the conditioning latent z. This forces the encoder and decoder to focus only on the mapping from categorical data to continuous encodings, and not model any interactions. By inserting this inductive bias, we move all complexity into the flow. We call this approach Categorical Normalizing Flows (CNF).\nCategorical Normalizing Flows can be applied to any task involving categorical variables, but we primarily focus on modeling graphs. 
Current state-of-the-art approaches often rely on autoregressive models (Li et al., 2018; Shi et al., 2020; You et al., 2018) that view graphs as sequences, although there exists no intrinsic order of the nodes. In contrast, normalizing flows can perform generation in parallel, making a definition of order unnecessary. By treating both nodes and edges as categorical variables, we employ our variational inference encoding and propose GraphCNF. GraphCNF is a novel permutation-invariant normalizing flow on graph generation which assigns equal likelihood to any ordering of nodes. Meanwhile, GraphCNF efficiently encodes the node attributes, edge attributes, and graph structure in three consecutive steps. As shown in the experiments, the improved encoding and flow architecture allows GraphCNF to significantly outperform both the autoregressive and parallel flow-based state-of-the-art. Further, we show that Categorical Normalizing Flows can be used in problems with regular categorical variables, like modeling natural language or sets.

Our contributions are summarized as follows. Firstly, we propose Categorical Normalizing Flows using variational inference with a factorized decoder to move all complexity into the prior and scale up to a large number of categories. Secondly, starting from Categorical Normalizing Flows, we propose GraphCNF, a permutation-invariant normalizing flow on graph generation. On molecule generation, GraphCNF sets a new state-of-the-art for flow-based methods, outperforming one-shot and autoregressive baselines. Finally, we show that simple mixture models for encoding distributions are accurate, efficient, and generalize across a multitude of setups, including sets, language, and graphs." }, { "heading": "2 CATEGORICAL NORMALIZING FLOWS", "text": "" }, { "heading": "2.1 NORMALIZING FLOWS ON CONTINUOUS DATA", "text": "A normalizing flow (Rezende and Mohamed, 2015; Tabak and Vanden Eijnden, 2010) is a generative model that models a probability distribution $p(z^{(0)})$ by applying a sequence of invertible, smooth mappings $f_1, ..., f_K: \mathbb{R}^d \to \mathbb{R}^d$. Using the rule of change of variables, the likelihood of the input $z^{(0)}$ is determined as follows:

$$p(z^{(0)}) = p(z^{(K)}) \cdot \prod_{k=1}^{K} \left| \det \frac{\partial f_k(z^{(k-1)})}{\partial z^{(k-1)}} \right| \quad (1)$$

where $z^{(k)} = f_k(z^{(k-1)})$, and $p(z^{(K)})$ represents a prior distribution. This calculation requires computing the Jacobian of the mappings $f_1, ..., f_K$, which is expensive for arbitrary functions. Thus, the mappings are often designed to allow efficient computation of their determinants. One such mapping is the coupling layer proposed by Dinh et al. (2017), which has been shown to work well with neural networks. For a detailed introduction to normalizing flows, we refer the reader to Kobyzev et al. (2019).
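As a minimal illustration of Eq. 1 in log-space, the sketch below evaluates the exact log-likelihood of a single affine coupling layer (Dinh et al., 2017) under a standard normal prior; the network and dimensions are illustrative choices, not a full flow.

```python
import torch
import torch.nn as nn

d = 4  # even dimensionality; the coupling layer splits it in half
net = nn.Sequential(nn.Linear(d // 2, 32), nn.ReLU(), nn.Linear(32, d))

def coupling_forward(z):
    z_a, z_b = z.chunk(2, dim=-1)
    s, t = net(z_a).chunk(2, dim=-1)   # scale and translation from z_a
    z_b = z_b * torch.exp(s) + t       # transform only the second half
    log_det = s.sum(dim=-1)            # log |det Jacobian| of this layer
    return torch.cat([z_a, z_b], dim=-1), log_det

z0 = torch.randn(8, d)
z1, log_det = coupling_forward(z0)
prior = torch.distributions.Normal(0.0, 1.0)
# Eq. 1 in log-space: log p(z0) = log p(z1) + log |det df/dz|.
log_p = prior.log_prob(z1).sum(dim=-1) + log_det
```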
" }, { "heading": "2.2 NORMALIZING FLOWS ON CATEGORICAL DATA", "text": "We define $x = \{x_1, ..., x_S\}$ to be a multivariate, categorical random variable, where each element $x_i$ is itself a categorical variable of $K$ categories with no intrinsic order. For instance, $x$ could be a sentence with $x_i$ being the words. Our goal is to learn the joint probability mass function, $P_{model}(x)$, via a normalizing flow. Specifically, as normalizing flows constitute a class of continuous transformations, we aim to learn a continuous latent space in which each categorical choice of a variable $x_i$ maps to a stochastic continuous variable $z_i \in \mathbb{R}^d$ whose distribution we learn.

Compared to variational autoencoders (Kingma and Welling, 2014) and latent normalizing flows (Ziegler and Rush, 2019), we want to ensure that all modeling complexity is solely in the prior, and keep a lossless reconstruction from latent space. To implement this, we simplify the decoder by factorizing it over latent variables: $p(x|z) = \prod_i p(x_i|z_i)$. Factorizing the conditional likelihood means that we enforce independence between the categorical variables $x_i$ given their learned continuous encodings $z_i$. Therefore, any interaction between the categorical variables $x$ must be learned inside the normalizing flow. If, in this setup, the encoding distributions of multiple categories were to overlap, the prior would be limited in the dependencies over $x_1, ..., x_S$ it can model, as it cannot clearly distinguish between all categories. Therefore, the encoder $q(z|x)$ is optimized to provide suitable representations of the categorical variables to the flow while separating the different categories in latent space. Meanwhile, the decoder is incentivized to be deterministic, i.e., to precisely reconstruct $x$ from $z$, in order to minimize the overlap of categories. Overall, our objective becomes:

$$\mathbb{E}_{x \sim P_{data}}\left[\log P_{model}(x)\right] \ge \mathbb{E}_{x \sim P_{data}}\mathbb{E}_{z \sim q(\cdot|x)}\left[ \log \frac{p_{model}(z) \prod_i p(x_i|z_i)}{q(z|x)} \right] \quad (2)$$

We refer to this framework as Categorical Normalizing Flows. In contrast to dequantization, the continuous encoding $z$ is not bounded by the domain of the encoding distribution. Instead, the partitioning is jointly learned with the model likelihood. Furthermore, we can freely choose the dimensionality of the continuous variables, $z_i$, to fit the number of categories and their relations.

Modeling the encoder - The encoder $q(z|x)$ and decoder $p(x_i|z_i)$ can be implemented in several ways. The first and main setup we consider is to encode each category by a logistic distribution with a learned mean and scaling. Therefore, our encoding distribution $q(z_i)$ is a mixture of $K$ logistics, one per category. With $g$ denoting the logistic, the encoder becomes $q(z|x) = \prod_{i=1}^{S} g(z_i|\mu(x_i), \sigma(x_i))$. In this setup, the decoder likelihood can actually be derived from the encoder by applying Bayes' rule: $p(x_i|z_i) = \frac{\tilde{p}(x_i) q(z_i|x_i)}{\sum_{\hat{x}} \tilde{p}(\hat{x}) q(z_i|\hat{x})}$, with $\tilde{p}(x_i)$ being a prior over categories. Hence, we do not need to learn a separate decoder, but can calculate the likelihood based on the encoder's parameters. The objective in Equation 2 simplifies to the following:

$$\mathbb{E}_{x \sim P_{data}}\left[\log P_{model}(x)\right] \ge \mathbb{E}_{x \sim P_{data}}\mathbb{E}_{z \sim q(\cdot|x)}\left[ \log \left( p_{model}(z) \prod_{i=1}^{S} \frac{\tilde{p}(x_i)}{\sum_{\hat{x}} \tilde{p}(\hat{x}) q(z_i|\hat{x})} \right) \right] \quad (3)$$

Note that the term $q(z_i|x_i)$ in the numerator of $p(x_i|z_i)$ cancels out with the denominator in Equation 2. Given that the encoder and decoder share their parameters, we remove any possible mismatch between $p(x_i|z_i)$ and $q(x_i|z_i)$. This allows changes in the encoding distribution to be directly propagated to the decoder, and further moves the focus of the training to the prior. Besides, the mixture encoding introduces a dependency of the true posterior $p(z|x)$ on the approximate posterior $q(z|x)$, which potentially tightens the variational gap compared to a separately learned decoder. During testing, we can use importance sampling (Burda et al., 2016) to further reduce the gap. Details on the posterior dependency in the variational gap, and on the training and test steps, can be found in Appendix A.1.
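The mixture-of-logistics encoder and its Bayes decoder can be written compactly. The sketch below assumes a uniform category prior p̃(x), so the decoder posterior reduces to a softmax over the per-category logistic log-densities; all parameters are randomly initialized for illustration.

```python
import torch
import torch.nn.functional as F

K, d, S = 9, 2, 5                      # categories, encoding dim, variables
mu = torch.randn(K, d)                 # learned mean per category
log_s = torch.zeros(K, d)              # learned log-scale per category

def logistic_log_prob(z, mu, log_s):
    # log g(z | mu, s) of a factorized logistic, summed over dimensions;
    # broadcasting yields one value per (variable, category) pair.
    u = (z.unsqueeze(-2) - mu) / log_s.exp()
    return (-u - log_s - 2 * F.softplus(-u)).sum(dim=-1)

x = torch.randint(K, (S,))             # S categorical variables
# Encode: reparameterized sample from the logistic of category x_i.
eps = torch.rand(S, d).clamp(1e-6, 1 - 1e-6)
z = mu[x] + log_s[x].exp() * (torch.log(eps) - torch.log(1 - eps))
# Decode via Bayes (Eq. 3): with a uniform prior, the prior cancels.
log_posterior = logistic_log_prob(z, mu, log_s).log_softmax(dim=-1)
print(log_posterior.argmax(dim=-1), x)  # reconstruction vs. original
```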
The mixture model is simple and efficient, but might be limited in the distributions it can express. To test whether greater encoding flexibility is needed, we experiment with adding flows conditioned on the categories, which transform each logistic into a more complex distribution. We refer to this approach as linear flows. Taking a step further, we can also represent the encoder $q(z|x)$ with a flow across categorical variables, similar to variational dequantization (Ho et al., 2019). Experiments presented in Section 5 show, however, that a simple mixture of logistics usually suffices." }, { "heading": "3 GRAPH GENERATION WITH CATEGORICAL NORMALIZING FLOWS", "text": "A graph $G = (V, E)$ is defined by a set of nodes $V$ and a set of edges $E$ representing connections between nodes. When modeling a graph, we must take into account the node and edge attributes, often represented by categorical data, as well as the overall graph structure. Moreover, nodes and edges are better viewed as sets and not sequences, since any permutation represents the same graph and should be assigned the same likelihood.

We propose GraphCNF, a normalizing flow for graph generation, which is invariant to the order of nodes by generating all nodes and edges at once. Given a graph $G$, we model each node and edge as a separate categorical variable, where the categories correspond to their discrete attributes. To represent the graph structure, i.e., between which pairs of nodes an edge does or does not exist, we add an extra category to the edges representing the missing or virtual edges. Hence, to model an arbitrary graph, we consider an edge variable for every possible tuple of nodes.
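To make this edge representation concrete, the following sketch extracts one categorical variable per node pair from a small, made-up adjacency matrix, reserving category 0 for virtual (non-existing) edges:

```python
import torch

num_nodes, C = 4, 3                    # nodes and number of real edge types
adj = torch.tensor([[0, 1, 0, 2],
                    [1, 0, 3, 0],
                    [0, 3, 0, 0],
                    [2, 0, 0, 0]])     # 0 = virtual edge, 1..C = edge type

# One categorical variable per unordered node pair (upper triangle).
rows, cols = torch.triu_indices(num_nodes, num_nodes, offset=1)
edge_vars = adj[rows, cols]
print(edge_vars)                       # tensor([1, 0, 2, 3, 0, 0])
```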
To apply normalizing flows on the node and edge categorical variables, we map them into continuous latent space using Categorical Normalizing Flows. Subsequent coupling layers map those representations to a continuous prior distribution. Thereby, GraphCNF uses two crucial design choices for graph modeling: (1) we perform the generation stepwise, by first encoding the nodes, then the edges, and finally the adjacency matrix, for improved efficiency, and (2) we introduce an inductive bias such that the model assigns equal likelihood to any ordering of the nodes." }, { "heading": "3.1 THREE-STEP GENERATION", "text": "Modeling all edges, including the virtual ones, requires a significant number of latent variables and is computationally expensive. However, normalizing flows have been shown to benefit from splitting off latent variables at earlier layers, which also increases efficiency (Dinh et al., 2017; Kingma and Dhariwal, 2018). Thus, we propose to add the node types, edge attributes, and graph structure stepwise to the latent space, as visualized in Figure 1. In the first step, we encode the nodes into continuous latent space, $z_0^{(V)}$, using Categorical Normalizing Flows. On those, we apply a group of coupling layers, $f_1$, which additionally use the adjacency matrix and the edge attributes, denoted by $E_{attr}$, as input. Thus, we can summarize the first step as:

$$z_1^{(V)} = f_1\left(z_0^{(V)}; E, E_{attr}\right) \quad (4)$$

The second step incorporates the edge attributes, $E_{attr}$, into latent space. Hence, all edges of the graph except the virtual edges are encoded into latent variables, $z_0^{(E_{attr})}$, representing their attribute. The following coupling layers, denoted by $f_2$, transform both the node and edge attribute variables:

$$z_2^{(V)}, z_1^{(E_{attr})} = f_2\left(z_1^{(V)}, z_0^{(E_{attr})}; E\right) \quad (5)$$

Finally, we add the virtual edges to the latent variable model as $z_0^{(E^*)}$. Thereby, we need to slightly adjust our encoding from Categorical Normalizing Flows, as we consider the virtual edges as an additional category of the edges. While the other categories are already encoded by $z_1^{(E_{attr})}$, we add a separate encoding distribution for the virtual edges, for which we use a simple logistic. Meanwhile, the decoder needs to be applied on all edges, as we need to distinguish the continuous representations of virtual and non-virtual edges. Overall, the mapping can be summarized as:

$$z_3^{(V)}, z_1^{(E)} = f_3\left(z_2^{(V)}, z_0^{(E)}\right) \quad \text{where} \quad z_0^{(E)} = \left[z_1^{(E_{attr})}, z_0^{(E^*)}\right] \quad (6)$$

where the latent variables $z_3^{(V)}$ and $z_1^{(E)}$ are trained to follow a prior distribution. During sampling, we first invert $f_3$ and determine the general graph structure. Next, we invert $f_2$ and reconstruct the edge attributes. Finally, we apply the inverse of $f_1$ and determine the node types." }, { "heading": "3.2 PERMUTATION-INVARIANT GRAPH MODELING", "text": "To achieve permutation invariance for the likelihood estimate, the transformations of the coupling layers need to be independent of the node order. This includes both the split of variables that will be transformed and the network model that predicts the transformation parameters. We ensure the first aspect by applying a channel masking strategy (Dinh et al., 2017), where the split is performed over the latent dimensions for each node and edge separately, making it independent of the node order. For the second aspect, we leverage the graph structure in the coupling networks and apply graph neural networks. In the first step of GraphCNF, $f_1$, we use a Relational GCN (Schlichtkrull et al., 2018), which incorporates the categorical edge attributes into the layer. For the second and third steps, we need a graph network that supports the modeling of both node and edge features. We implement this by alternating between updates of the edge and the node features. Specifically, given node features $v^t$ and edge features $e^t$ at layer $t$, we update those as follows:

$$v^{t+1} = f_{node}(v^t; e^t), \quad e^{t+1} = f_{edge}(e^t; v^{t+1}) \quad (7)$$

We call this network Edge-GNN, and we compare different implementations of $f_{node}$ and $f_{edge}$ in Appendix B. Using both design choices, GraphCNF models a permutation-invariant distribution.
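A minimal sketch of one alternating Edge-GNN update (Eq. 7) is given below; the concrete choices for f_node and f_edge (mean aggregation over incident edges plus small MLPs) are illustrative and not necessarily among the implementations compared in Appendix B.

```python
import torch
import torch.nn as nn

d = 16
f_node = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU(), nn.Linear(d, d))
f_edge = nn.Sequential(nn.Linear(3 * d, d), nn.ReLU(), nn.Linear(d, d))

def edge_gnn_layer(v, e):
    # v: (N, d) node features; e: (N, N, d) edge features.
    n = v.size(0)
    msg = e.mean(dim=1)                          # aggregate incident edges
    v_new = f_node(torch.cat([v, msg], dim=-1))  # v^{t+1} = f_node(v^t; e^t)
    pair = torch.cat([v_new.unsqueeze(1).expand(-1, n, -1),
                      v_new.unsqueeze(0).expand(n, -1, -1),
                      e], dim=-1)
    e_new = f_edge(pair)                         # e^{t+1} = f_edge(e^t; v^{t+1})
    return v_new, e_new

v, e = torch.randn(5, d), torch.randn(5, 5, d)
v, e = edge_gnn_layer(v, e)
```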
(2019) proposed to use additive coupling layers with rounding operators to ensure discrete output. Tran et al. (2019) discretize the output by a Gumbel-Softmax approximating an argmax operator; thereby, the coupling layers resemble a reversible shift operator. While both approaches achieved competitive results, the gradient approximations required by the discrete operations have been shown to introduce new challenges, such as limiting the number of layers or the distribution size.

Latent NF Several works have investigated the application of normalizing flows in variational auto-encoders (VAEs) (Kingma and Welling, 2014) for increasing the flexibility of the approximate posterior (Kingma et al., 2016; Van Den Berg et al., 2018; Tomczak and Welling, 2017). However, VAEs model a lower bound of the true likelihood. To minimize this gap, Ziegler and Rush (2019) proposed Latent Normalizing Flows, which move the main model complexity into the prior by using normalizing flows. In contrast to CNFs, Latent NFs have a joint decoder over the latent space, $p(x|z)$, which allows the modeling of interactions between variables in the decoder. Thus, instead of an inductive bias that pushes all complexity into the normalizing flow, Latent NFs rely on a loss schedule that weights the decoder loss much higher. This pushes the decoder to be deterministic but can lead to unstable optimization due to neglecting the flow's likelihood. Further, experiments on sequence tasks show Latent NFs to be competitive but still outperformed by an LSTM baseline, as we observed in our experiments.

Graph modeling The first generative models for graphs were autoregressive (Liao et al., 2019; You et al., 2018), generating nodes and edges in sequential order. While being memory-efficient, they are slow in sampling and assume an order over the set of nodes. The first application of normalizing flows to graph generation was introduced by Liu et al. (2019), where a flow modeled the node representations of a pretrained autoencoder. The recent works GraphNVP (Madhawa et al., 2019) and GraphAF (Shi et al., 2020) proposed normalizing flows for molecule generation. GraphNVP consists of two separate flows, one for modeling the adjacency matrix and a second for modeling the node types. Although allowing parallel generation, the model is sensitive to the node order due to its masking strategy and the feature networks in the coupling layers. GraphAF is an autoregressive normalizing flow sampling nodes and edges sequentially while allowing parallel training. However, both flows use standard uniform dequantization to represent the node and edge categories. VAEs have also been proposed for latent-based graph generation (Simonovsky and Komodakis, 2018; Ma et al., 2018; Liu et al., 2018; Jin et al., 2018). Although those models can be permutation-invariant, they model a lower bound and do not provide a lossless reconstruction from latent space." }, { "heading": "5 EXPERIMENTAL RESULTS", "text": "We start our experiments by evaluating GraphCNF on two benchmarks for graph generation, namely molecule generation and graph coloring. Further, to test generality, we evaluate CNFs on other categorical problems, specifically language and set modeling. For the normalizing flows, we use a sequence of logistic mixture coupling layers (Ho et al., 2019) mapping a mixture of logistic distributions back into a single mode. Before each coupling layer, we include an activation normalization layer and an invertible 1x1 convolution (Kingma and Dhariwal, 2018).
For reproducibility, we provide all hyperparameter details in Appendix D, and make our code publicly available.1" }, { "heading": "5.1 MOLECULE GENERATION", "text": "Modeling and generating graphs is crucial in biology and chemistry for applications such as drug discovery, where molecule generation has emerged as a common benchmark (Jin et al., 2018; Shi et al., 2020). In a molecule graph, the nodes are atoms and the edges represent bonds between atoms, both represented by categorical features. Using a dataset of existing molecules, the goal is to learn a distribution of valid molecules, as not all possible combinations of atoms and bonds are valid. We perform experiments on the Zinc250k (Irwin et al., 2012) dataset, which consists of 250,000 drug-like molecules. The molecules contain up to 38 atoms of 9 different types, with three different bond types possible between the atoms. For comparability, we follow the preprocessing of Shi et al. (2020).

We compare GraphCNF to baselines that consider molecules as a graph and not as a text representation. As VAE-based approaches, we consider R-VAE (Ma et al., 2018) and Junction-Tree VAE (JT-VAE) (Jin et al., 2018). R-VAE is a one-shot generation model using regularization to ensure semantic validity. JT-VAE represents a molecule as a junction tree of sub-graphs that are obtained from the training dataset. We also compare our model to GraphNVP (Madhawa et al., 2019) and GraphAF (Shi et al., 2020). The models are evaluated by sampling 10,000 examples and measuring the proportion of valid molecules. We also report the proportion of unique molecules and of novel samples that are not in the training dataset. These metrics prevent models from memorizing a small subset of graphs. Finally, the reconstruction rate describes whether graphs can be accurately decoded from latent space. Normalizing flows naturally score 100% due to their invertible mapping, and we achieve the same with our encoding despite having no such guarantee.

Table 1 shows that GraphCNF generates almost twice as many valid molecules as other one-shot approaches, while uniqueness and novelty stay at almost 100%. Even the autoregressive normalizing flow, GraphAF, is outperformed by GraphCNF by 15%. However, the rules for generating valid molecules can be enforced in autoregressive models by masking out invalid outputs. This has been the case for JT-VAE, which was trained with those manual rules and thus achieves a validity of 100%. Nevertheless, we are mainly interested in the model's capability to learn the rules by itself while not being specific to any application. While GraphNVP and GraphAF sample with a lower standard deviation from the prior to increase validity, we explicitly sample from the original prior to underline that our model covers the whole latent space well. Surprisingly, we found that most invalid graphs consist of two or more sub-graphs that are valid in isolation, as shown in Figure 2c. This can happen because one-shot models have no guidance for generating a single connected graph. By taking the largest sub-graph of these predictions, we obtain a validity ratio of 96.35%, meaning that our model generates almost solely valid molecules without any manually encoded rules.

1Code available here: https://github.com/phlippe/CategoricalNF
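As a concrete reading of the evaluation above, the three sampling metrics and the largest-sub-graph post-processing can be sketched with RDKit over SMILES strings as follows. This is our own sketch; the exact metric definitions used by the cited baselines may differ slightly.

```python
from rdkit import Chem

def sample_metrics(sampled_smiles, train_smiles):
    """Validity, uniqueness and novelty over a list of sampled SMILES strings."""
    valid = [s for s in sampled_smiles if Chem.MolFromSmiles(s) is not None]
    unique = set(valid)
    novel = unique - set(train_smiles)
    return {
        "validity": len(valid) / len(sampled_smiles),
        "uniqueness": len(unique) / max(len(valid), 1),
        "novelty": len(novel) / max(len(unique), 1),
    }

def largest_subgraph(smiles):
    """Keep only the largest connected fragment of a possibly disconnected sample."""
    mol = Chem.MolFromSmiles(smiles, sanitize=False)
    if mol is None:
        return None
    frags = Chem.GetMolFrags(mol, asMols=True, sanitizeFrags=False)
    return Chem.MolToSmiles(max(frags, key=lambda m: m.GetNumAtoms()))
```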
We also evaluated our model on the Moses (Polykovskiy et al., 2018) dataset and achieved similar scores, as shown in Appendix C.

5.2 GRAPH COLORING

Graph coloring is a well-known combinatorial problem (Bondy et al., 1976) where each node of a graph G is assigned one of K colors. However, two adjacent nodes cannot have the same color (see Figure 3). Modeling the distribution of valid color assignments to arbitrary graphs is NP-complete. To train models on such a distribution, we generate a dataset of valid graph colorings for randomly sampled graphs. To further investigate the effect of complexity, we create two dataset versions, one with graphs of size 10 ≤ |V| ≤ 20 and another with 25 ≤ |V| ≤ 50, as larger graphs are commonly harder to solve. For graph coloring, we rely on GraphCNF and compare it to a variational autoencoder and an autoregressive model generating one node at a time. As no edges are being modeled here, we only use the first step of GraphCNF's three-step generation. For all models, we apply the same Graph Attention network (Veličković et al., 2018). As autoregressive models require a manually prescribed node order, we compare the following: a random ordering per graph; largest_first, which is inspired by heuristics of automated theorem provers that start from the nodes with the most connections; and smallest_first, where we reverse the order of the previous heuristic. We evaluate the models by measuring the likelihood of color assignments to unseen test graphs in bits per node. Secondly, we sample one color assignment per model for each test graph and report the proportion of valid colorings.

The results in Table 2 show that the node ordering indeed has a significant effect on the autoregressive model's performance. While the smallest_first ordering leads to only 32% valid solutions on the large dataset, reversing the order simplifies the task for the model such that it generates more than twice as many valid color assignments. In contrast, GraphCNF is invariant to the order of nodes. Despite generating all nodes in parallel, it outperforms all node orderings on the small dataset, while being close to the best ordering on the larger dataset. This invariance property is especially beneficial in tasks where an optimal order of nodes is not known, such as molecule generation. Although having more parameters, sampling with GraphCNF is also considerably faster than with the autoregressive models. The sampling time can further be improved by replacing the logistic mixture coupling layers with affine ones. Due to the lower complexity, we see a slight decrease in validity and bits per dimension, but can verify that logistic mixture couplings are not crucial for CNFs." }, { "heading": "5.3 LANGUAGE MODELING", "text": "To compare CNF with Latent NF, we test both models on language modeling. We experiment with two popular character-level datasets, Penn Treebank (Marcus et al., 1994) and text8 (Mahoney, 2011), with vocabulary sizes of K = 51 and K = 27, respectively. We also test a word-level dataset, Wikitext103 (Merity et al., 2017), with K = 10,000 categories, which Discrete NF cannot handle due to its gradient approximations (Tran et al., 2019). We follow the setup of Ziegler and Rush (2019) for the Penn Treebank and train on sequences of 256 tokens for the other two datasets.
Both Latent NF and CNF apply a single mixture coupling layer that is autoregressive across time and latent dimensions; the two models differ only in the encoding/decoding strategy. The LSTM applied in the coupling layers is shown as an additional RNN baseline in Figure 4. Categorical Normalizing Flows perform on par with their autoregressive baseline, only slightly underperforming on Wikitext103 due to using a single flow layer. Latent NF, however, performs considerably worse on text8 and Wikitext103 due to non-deterministic decoding and the higher fluctuation in the loss that we experienced during training (see Appendix A.4.2 for visualizations). This shows the importance of a factorized likelihood and underlines the two benefits of CNFs. Firstly, CNFs are more stable and simpler to train, as no loss scheduling is required and the likelihood is mainly modeled in the flow prior. Secondly, the encoding of CNFs is much more efficient than that of Latent NFs. We conclude that CNFs are more widely applicable, while Latent NFs might be the better choice if prior knowledge can explicitly help in the decoder." }, { "heading": "5.4 SET MODELING", "text": "Finally, we present experiments on sets of categorical variables, for which we create two toy datasets with known data likelihood: set shuffling and set summation. The goal is to assign high likelihood only to those possible sets that occur in the dataset, which shows how accurately our flows can model an arbitrary, discrete dataset. In set shuffling, we model a set of N categorical variables, each taking one out of N categories. Each category has to appear exactly once, which leads to N! possible" }, { "heading": "Model Set shuffling Set summation", "text": "assignments that need to be modeled. In set summation, we again consider a set of size N with N categories, but those categories represent the actual integers 1, 2, ..., N and have to sum to an arbitrary number, L. In contrast to set shuffling, the data is ordinal, which we initially expected to help dequantization methods. For both experiments, we set N = 16 and L = 42.

In Table 3, we compare CNFs to variational dequantization (Ho et al., 2019), Latent Normalizing Flows (Ziegler and Rush, 2019) and Discrete Normalizing Flows (Tran et al., 2019). The results show that CNFs achieve nearly optimal performance. Although we model a lower bound in continuous space, our flows can indeed model discrete distributions precisely. Interestingly, representing the categories by a simple mixture model is sufficient for achieving these results. We observe the same trend in domains with more complex relations between categories, such as graphs and language modeling, presumably because both the coupling layers and the prior distribution also rest upon logistic distributions. Variational dequantization performs worse on the shuffling dataset, while on set summation with ordinal data, the gap to the optimum is smaller. The same holds for Discrete NFs, although it is worth noting that, unlike CNFs, optimizing Discrete NFs had issues due to their gradient approximations. Latent Normalizing Flows with the joint decoder achieve similar performance to CNF, which can be attributed to close-to-deterministic decoding. When looking at the encoding space (see Appendix A.4.1 for a visualization), we see that Latent NF has indeed learned a mixture model as well. Hence, the added complexity is not needed on this simple dataset."
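As a quick sanity check on the "optimal" reference rows of Table 3 (the values are derived in Appendix D.1 below, including the 6.3·10^10 count of permuted valid sets), both likelihoods can be reproduced in a few lines of Python:

```python
import math

# Set shuffling: 16! equally likely permutations of 16 distinct categories.
print(math.log2(math.factorial(16)) / 16)  # ≈ 2.77 bits per element

# Set summation: ~6.3e10 valid permuted sets, as stated in Appendix D.1.
print(math.log2(6.3e10) / 16)              # ≈ 2.24 bits per element
```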
}, { "heading": "6 CONCLUSION", "text": "We present Categorical Normalizing Flows which learn a categorical, discrete distribution by jointly optimizing the representation of categorical data in continuous latent space and the model likelihood of a normalizing flow. Thereby, we apply variational inference with a factorized posterior to maintain almost unique decoding while allowing flexible encoding distributions. We find that a plain mixture model is sufficient for modeling discrete distributions accurately while providing an efficient way for encoding and decoding categorical data. Compared to a joint posterior, CNFs are more stable, efficient, and have an inductive bias to move all modeling complexity into the prior. Furthermore, GraphCNF, a normalizing flow on graph modeling based on CNFs, outperforms autoregressive and one-shot approaches on molecule generation and graph coloring while being invariant to the node order. This emphasizes the potential of normalizing flows on categorical tasks, especially for such with non-sequential data." }, { "heading": "ACKNOWLEDGEMENTS", "text": "We thank SURFsara for the support in using the Lisa Compute Cluster." }, { "heading": "A.1 MIXTURE OF LOGISTICS", "text": "The mixture model represents each category by an independent logistic distribution in continuous latent space, as visualized in Figure 5. Specifically, the encoder distribution q(z|x), with x being the categorical input and z the continuous latent representation, can be written as:\nq(z|x) = N∏ i=1 g(zi|µ(xi), σ(xi)) (8)\ng(v|µ, σ) = d∏\nj=1\nexp(− j) (1 + exp(− j))2 where j = vj − µj σj\n(9)\ng represent the logistic distribution, and d the dimensionality of the continuous latent space per category. Both parameters µ and σ are learnable parameter, which can be implemented via a simple table lookup. For decoding the discrete categorical data from continuous space, the true posterior is calculated by applying the Bayes rule:\np(xi|zi) = p̃(xi)q(zi|xi)∑ x̂ p̃(x̂)q(zi|x̂)\n(10)\nwhere the prior over categories, p̃(xi), is calculated based on the category frequencies in the training dataset. Although the posterior models a distribution over categories, the distribution is strongly peaked for most continuous points in the latent space as the probability steeply decreases the further a point is away from a specific mode. Furthermore, the distribution is trained to minimize the posterior entropy which pushes the posterior to be deterministic for commonly sampled continuous points. Hence, the posterior partitions the latent space into fragments in which all continuous points are assigned to one discrete category. The borders between the fragments, where the posterior is not close to deterministic, are small and very rarely sampled by the encoder distribution. We visualized the partitioning for an example of three categories in Figure 5.\nNotably, the posterior can also be learned by a second, small linear network. While this possibly introduces a difference between encoder and decoder, we experienced it to vanish quickly over training iterations and did not observe any significant difference compared to using the Bayes posterior besides a slower training in the very early stages of training. Additionally, we were able\nto achieve very low reconstruction errors in two dimensions for most discrete distributions of ≤ 50 categories. 
Nevertheless, a higher dimensionality of the latent space is not only crucial for a large number of categories, such as word-level vocabularies, but can also be beneficial for more complex problems. Still, using an even higher dimensionality rarely caused any problems or significantly decreased performance. Presumably, the flow learns to ignore latent dimensions if those are not needed for modeling the discrete distribution. To summarize, the dimensionality of the latent space should be considered an important but robust hyperparameter, which can be tuned in an early stage of the hyperparameter search.

In the very first training iterations, it can happen that the mixtures of multiple categories sit at the exact same spot and have trouble clearly separating. This can easily be resolved by either weighting the reconstruction loss slightly higher for the first ~500 iterations or initializing the means of the mixtures with a higher variance. Once the mixtures are separated, the model has no incentive to group them together again, as it has started to learn the underlying discrete distribution, which results in a considerably higher likelihood than a plain uniform distribution." }, { "heading": "A.1.1 KL DIVERGENCE", "text": "The objective for learning categorical normalizing flows with a mixture encoding, shown in Equation 3, constitutes a lower bound. The variational gap between the objective and the evidence $\log P_{model}$ is given by the KL divergence between the approximate posterior $q(z|x)$ and the true posterior $p(z|x)$: $D_{KL}(q(z|x) \| p(z|x))$. The advantage of using the mixture encoding is that by replacing the decoder with a Bayes formulation of the encoder, as in Equation 10, we introduce a dependency between $q(z|x)$ and $p(z|x)$. Specifically, we can rewrite the true posterior as:

$p(z|x) = \frac{p(x|z)\, p(z)}{p(x)}$ (11)

$= \frac{\prod_i p(x_i|z_i)\, p(z)}{p(x)}$ (12)

$= \left(\prod_i \frac{q(z_i|x_i)\, p(x_i)}{\sum_{\hat{x}} q(z_i|\hat{x})\, p(\hat{x})}\right) \frac{p(z)}{p(x)}$ (13)

$= \prod_i q(z_i|x_i) \cdot \frac{\prod_i p(x_i)}{p(x)} \cdot \frac{p(z)}{\prod_i q(z_i)}$ (14)

$= q(z|x) \cdot \frac{p(z)}{p(x)} \cdot \prod_i \frac{p(x_i)}{q(z_i)}$ (15)

Intuitively, this makes it easier for the model to tighten the gap, because a change in $q(z|x)$ entails a change in $p(z|x)$. In experiments, we observe the difference through a considerably faster optimization in the first training steps. However, once the decoder is close to deterministic, both approaches, the Bayes formulation and the separate network, reach a similar variational gap, as $p(x|z)$ approaches 1 and drops out of Equation 11.

Note that this variational gap is very similar to that of variational dequantization for integers, which is used in most SOTA image modeling architectures. The difference to dequantization is that there the decoder $p(x_i|z_i)$ has been manually fixed, yet still represents the Bayes formulation of the encoder $q(z|x)$ (the latent variable $z$ represents $x + u$ here, i.e., the original integers plus a random variable $u$ between 0 and 1)." }, { "heading": "A.1.2 TRAINING AND TESTING", "text": "Below, we lay out the specifics of training and testing a categorical normalizing flow with the logistic mixture encoding. Training and testing are almost identical to normalizing flows trained on image modeling, except for the loss calculation and encoding. Algorithm 1 shows the training procedure. First of all, we determine the prior $p(x_i)$ over categories, which can be done by counting the occurrences of each category and dividing by the total count.
The difference between the prior probabilities on the training and test sets is usually negligible, and the training set is commonly larger and hence provides a better overall data statistic. After this, training starts by iterating over batches of the training set $D$. For encoding, we can make use of the reparameterization trick and simply shift and scale samples of a standard logistic distribution. The loss is the lower bound of Equation 3.

Algorithm 1 Training procedure for the logistic mixture encoding in CNFs
1: Calculate prior probabilities $p(x_i)$ on the training dataset;
2: for $x \in D$ do
3: Sample a random logistic variable $z'$: $z' \sim \text{LogisticDist}(0, 1)$;
4: Reparameterization trick for encoding: $z_i = z'_i \cdot \sigma(x_i) + \mu(x_i)$;
5: Negative log-likelihood calculation $\mathcal{L} = -\log\left(p_{model}(z) \prod_{i=1}^{S} \frac{\tilde{p}(x_i)}{\sum_{\hat{x}} \tilde{p}(\hat{x})\, q(z_i|\hat{x})}\right)$ (Eq. 3);
6: Minimize loss $\mathcal{L}$ by updating the parameters of $p_{model}(z)$ and $q(z_i|x_i)$;
7: end for

During testing, we make use of importance sampling to tighten the gap, given that $\log \mathbb{E}_x[p(x)] \ge \mathbb{E}_x\left[\log \frac{1}{N}\sum_{n=1}^{N} p(x_n)\right] \ge \mathbb{E}_x[\log p(x)]$ (Burda et al., 2016). This is again a standard technique for evaluating normalizing flows on images, and can slightly improve the bits per dimension score. In our experiments, however, we did not observe a significant difference between N = 1 and N = 1000. The bits per dimension score is calculated by taking the base-2 logarithm of the likelihood and dividing it by the number of dimensions/elements in categorical space (denoted by $S$).

Algorithm 2 Test procedure for the logistic mixture encoding in CNFs. We use N = 1000 in our experiments, although the difference between N = 1 and N = 1000 was marginal in most cases.
1: for $x \in D$ do
2: for $n = 1, ..., N$ do
3: Encode $x$ as continuous variable $z$: $z \sim q(z|x)$;
4: Determine likelihood $\mathcal{L}_n = p_{model}(z) \prod_i \frac{\tilde{p}(x_i)}{\sum_{\hat{x}} \tilde{p}(\hat{x})\, q(z_i|\hat{x})}$;
5: end for
6: Determine bits per dimension score: $-\log_2\left[\frac{1}{N}\sum_{n=1}^{N} \mathcal{L}_n\right] / S$;
7: end for" }, { "heading": "A.1.3 EXAMPLE ENCODING ON MOLECULE GENERATION", "text": "An example of a trained model encoding is shown in Figure 6. Here, we visualize the encoding of the edge attributes in GraphCNF trained on molecule generation. In this setup, we have 3 categories, representing the single, double, and triple bond. While the single-bond category is clearly the dominant one due to its higher prior probability, we did not observe any specific trends in the position or scale of the distributions across trained models.

Visualizations on graph coloring show behavior similar to Figure 5, because all three categories have the same prior probability. Other encoding distributions, such as the node types (atoms), cannot be visualized as easily because their dimensionality is higher than 2. We tried applying dimensionality reduction techniques to those, but found that they do not capture the distribution shape well." }, { "heading": "A.2 LINEAR FLOWS", "text": "The flexibility of the mixture model can be increased by applying normalizing flows to each mixture that depend on the discrete category. We refer to this approach as linear flows, as the flows are applied to each categorical input variable independently. We visualize possible encoding distributions with linear flows in Figure 7. Formally, we can write the distribution as:

$q(z|x) = \prod_{i=1}^{N} q(z_i|x_i)$ (16)

$q\big(z^{(K)} \mid x_i\big) = g\big(z^{(0)}\big) \cdot \prod_{k=1}^{K} \left|\det \frac{\partial f_k(z^{(k-1)}; x_i)}{\partial z^{(k-1)}}\right|$ where $z_i = z^{(K)}$ (17)

where $f_1, ..., f_K$ are invertible, smooth mappings.
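A minimal sketch of one class-conditional coupling step $f_k(z^{(k-1)}; x_i)$ from Eq. (17), written in PyTorch. Names and layer sizes are illustrative, and the activation normalization and invertible 1x1 convolutions interleaved in the actual model are omitted here.

```python
import torch
import torch.nn as nn

class ConditionalAffineCoupling(nn.Module):
    """Affine coupling whose scale and shift are conditioned on the category x_i."""
    def __init__(self, latent_dim, num_categories, hidden=64):
        super().__init__()
        self.half = latent_dim // 2                 # channel mask over latent dims
        self.embed = nn.Embedding(num_categories, hidden)
        self.net = nn.Sequential(
            nn.Linear(self.half + hidden, hidden), nn.GELU(),
            nn.Linear(hidden, 2 * (latent_dim - self.half)),
        )

    def forward(self, z, x):
        z1, z2 = z[..., :self.half], z[..., self.half:]
        s, t = self.net(torch.cat([z1, self.embed(x)], dim=-1)).chunk(2, dim=-1)
        s = torch.tanh(s)                           # keep the log-scale bounded
        z2 = z2 * s.exp() + t
        return torch.cat([z1, z2], dim=-1), s.sum(dim=-1)  # output, log|det|
```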
In particular, we use here again a sequence of coupling layers with activation normalization and invertible 1x1 convolutions (Kingma and Dhariwal, 2018). Both the activation normalization and coupling use the category xi as additional external input to determine their transformation parameters by a neural network. The class-conditional transformations could also be implemented by storing K parameter sets for the coupling layer neural networks, which is however inefficient for a larger number of categories. Furthermore, in coupling layers, we apply a channel mask that splits zi over latent dimensionality d into two equally sized parts, of which one is transformed using the other as input.\nSimilarly to the mixture model, we can calculate the true posterior p(xi|zi) using the Bayes rule. Thereby, we sample from the flow for xi and need to inverse the flows for all other categories. Note that as the inverse of the flow also needs to be differentiable in this situation, we apply affine coupling layers instead of logistic mixture layers. However, this gets computationally expensive for more than 20 categories, and thus we used a single-layer linear network as posterior in these situations. The partitions of the latent space that can be learned by the encoding distribution are much more flexible, as illustrated in Figure 7.\nWe experimented with increasing sizes of linear flows but noticed that the encoding distribution usually fell back to rotated logistic distributions. The fact that the added complexity and flexibility by the flows are not being used further supports our observation that mixture models are indeed sufficient for representing categorical data well in normalizing flows.\nA.3 VARIATIONAL ENCODING\nThe third encoding distribution we experimented with is inspired by variational dequantization (Ho et al., 2019) and models q(z|x) by one flow across all categorical variables. Still, the posterior, p(xi|zi), is applied per categorical variable independently to maintain unique decoding and partitioning of the latent space. The normalizing flow again consists of a sequence of logistic mixture coupling layers with activation normalization and invertible 1x1 convolutions. The inner feature network of the coupling layers depends on the task the normalizing flow is applied on. Hence, for sets, we used a transformer architecture, while for the graph experiments, we used a GNN. On the language modeling task, we used a Bi-LSTM model to generate the transformation parameters. All those networks use the discrete, categorical data x as additional input.\nAs the true posterior cannot be found for this distribution, we apply a two-layer linear network to determine p(xi|zi). While the reconstruction error was again very low, we again experienced that the model mainly relied on a logistic mixture model, even if we initialize it differently beforehand. Variational dequantization is presumably important for images as every pixel value has its own independent Gaussian noise signal. This noise can be nicely modeled by flexible dequantization distributions which need to be complex enough to capture the true mean and variance of this Gaussian noise. In categorical distributions, however, we do not have such noise signals and therefore seem not to benefit from variational encodings." 
}, { "heading": "A.4 LATENT NORMALIZING FLOW", "text": "" }, { "heading": "A.4.1 ENCODING VISUALIZATION", "text": "0.0 0.2 0.4 0.6 0.8 1.0 0.0\n0.2\n0.4\n0.6\n0.8\n1.0\nC-0 C-1\nC-2 C-3\nC-4 C-5\nC-6 C-7\nC-8 C-9\nC-10 C-11\nC-12 C-13\nC-14 C-15\nFigure 8: Visualization of the encoding distribution q(z|x) of latent NF on set summation.\nIn this section, we show visualizations of the learned encoding distribution of Latent NF (Ziegler and Rush, 2019) on the task of set summation. As shown in Figure 8, the encoder learned a clear mixture distribution although not being restricted too. This shows that the possible complexity in the decoder is not being used. The figure was generated by sampling 5, 000 examples and merging them into one plot. Each latent variable zi is two dimensional, hence our two-dimensional plot. The\ncolors represent different categories. For readability, each color is normalized independently (i.e. highest value 1, lowest 0) as otherwise, the colors of the stretched mixtures would be too low. For visualizations of the latent space in language modeling, we refer the reader to Ziegler and Rush (2019)." }, { "heading": "A.4.2 LOSS COMPARISON", "text": "In the following, we compare LatentNF and CNF on their training behavior and loss fluctuation. Figure 9 visualizes the loss curve for both models on the task of language modeling, trained on the text8 dataset (Mahoney, 2011). Note that both models use the same hyperparameters and model except that the reconstruction loss is weighted 10 times higher in the first 5k iterations, and decays exponentially over consecutive iterations. This is why the initial loss of LatentNF is higher, but the reconstruction loss becomes close to zero after 3k iterations in the example of Figure 9. Interestingly, we experience high fluctuations of the loss with LatentNF during the first iterations. Although Figure 9 shows only a single run, similar fluctuations have occurred in all trained LatentNF models but at different training iterations. Hence, we decided to plot a single loss curve instead of the average of multiple. The fluctuations can most likely be explained by the need for a strong loss scheduling. At the beginning of the training, the decoder loss is weighted significantly higher than the prior component. Thus, the backpropagated gradients mostly focus on the decoder, which can peak for rare categories/occurrences in the dataset. These peaks cause the encoding distribution to change abruptly, and the prior has to adapt to these changes within the next iterations. However, this can again lead to a high loss when the transformations in the prior do not fit to the new encoding, and thus map them to points of low likelihood. Thus, we see a loss peak across a couple of iterations until the model balances itself out again.\nSimilarly, when training on Wikitext103 (Merity et al., 2017), we experience even more frequent peaks in the loss, as Wikitext103 with 10k categories contains even more rare occurrences (see Figure 10). Reducing the learning rate did not show to improve the stability while considerably increasing the training time. When reducing the weight of the decoder, the model was not able to optimize as well as before and usually reached bits per dimension scores of 10-20.\nB IMPLEMENTATION DETAILS OF GRAPHCNF\nIn this section, we describe further implementation details of GraphCNF. We detail the implementation of the Edge-GNN model used in the coupling layers of GraphCNF, and discuss how we encode graphs of different sizes." 
}, { "heading": "B.1 EDGE GRAPH NEURAL NETWORK", "text": "GraphCNF implements a three-step generation approach, for which the second and third step also models latent variables for edges. Hence, in the coupling layers, we need a graph neural network\nwhich supports both node and edge features. We implement this by alternating between updates of the edge and the node features. Specifically, given node features vt and edge features et at layer t, we update those as follows:\nvt+1 = fnode(v t; et) (18)\net+1 = fedge(e t;vt+1) (19)\nThe update functions, fnode and fedge, are both common GNN layers with slight adjustments to allow a communication between nodes and edges. Before detailing the update layers, it should be noted that we use Highway GNNs (Rahimi et al., 2018) which apply a gating mechanism. Specifically, the updates for the nodes are determined by:\nvt+1 = vt · T ( ṽt+1 ) +H ( ṽt+1 ) · ( 1− T ( ṽt+1 )) (20)\nwhere ṽt+1 is the output of the GNN layer. H and T represent single linear layer networks where T has a consecutive sigmoid activation to limit the outputs between 0 and 1. The edge updates are applied in the similar manner. We experienced that such a gated update functions helps the gradient flow through the layers back to the input. This is important for normalizing flows as coupling layers or transformations in general strongly depend on previous transformations. Hence, we apply the same gating mechanism in the first step of GraphCNF, f1.\nNext, we detail the GNN layers to obtain ẽt+1 and ṽt+1. The edge update layer fedge resembles a graph convolutional layer (Zhou et al., 2018), and can be specified as follows:\nẽt+1ij = g ( W tee t ij +W t vv t i +W t vv t j ) (21)\nwhere e·ij represents the features of the edge between node i and j. g stands for a GELU (Hendrycks and Gimpel, 2016) non-linear activation. Using more complex transformations did not show to significantly improve the performance of GraphCNF.\nTo update the node representations, we took inspiration of the transformer architecture (Vaswani et al., 2017) and use a modified multi-head attention layer. In particular, a linear transformation maps each node to a key, query and value vector:\nKvi , Qvi , Vvi = WKv t i ,WQv t i ,WV v t i (22)\nThe attention value is usually computed based on the dot product between two nodes. However, as we explicitly have features for the edge between the two nodes, we use those to control the attention mechanism. Hence, we have an additional weight matrix u to map the edge features to an attention bias:\nâij = QviK T vi/ √ d+ et+1ij u T (23)\nwhere d represents the hidden dimensionality of the features. Finally, we also add a edge-based value vector to allow a full communication from edges to nodes. Overall, the updates node features are\ncalculated by:\naij = exp (âij)∑ m exp (âim) , (24)\nṽt+1i = ∑ j aij · [ Vvj +Wee t+1 ij ] (25)\nAlternatively to transformers, we also experimented with Graph Attention Networks (Veličković et al., 2018). However, those showed slightly worse results which is why we used the transformer-based layer.\nIn step 2, the (binary) adjacency matrix is given such that each node has a limited number of neighbors. A full transformer-based architecture as above is then not necessary anymore as every atom has usually between 1 and 3 neighbors. Especially the node-to-node dot product is expensive to perform. Hence, we experimented with a node update layer where the attention is purely based on the edge features in step 2. 
We found both to work equally well while the second is computationally more efficient." }, { "heading": "B.2 ENCODING GRAPH SIZE", "text": "The number of nodes N varies across graphs in the dataset, and hence a generative model needs to be flexible regarding N . To encode the number of nodes, we use a similar approach as Ziegler and Rush (2019) for sequences and add a simple prior over N . The prior is parameterized based on the graph size frequency in the training set. Alternatively, to integrate the number of nodes in the latent space, we could add virtual nodes to the model, similar to virtual edges. Every graph in the training dataset would be filled up to the maximum number of nodes (38 for Zinc250k (Irwin et al., 2012)) by adding such virtual nodes. Meanwhile, during sampling, we remove virtual nodes if the model generates such. GraphNVP (Madhawa et al., 2019) uses such an encoding as their coupling layers did not support flexible graph sizes. However, in experiments, we obtained similar performance with both size encodings while the external prior is computationally more efficient and therefore used in this paper." }, { "heading": "C ADDITIONAL RESULTS ON MOLECULE GENERATION", "text": "In this section, we present additional results on the molecule generation task. Table 4 shows the results of our model on the Zinc250k (Irwin et al., 2012) dataset including the likelihood on the test set in bits per node. We calculate this metric by summing the log-likelihood of all latent variables, both nodes, and edges, and divide by the number of nodes. Although the number of edges scales with O(N2), a higher proportion of those are virtual and did not have a significant contribution to the likelihood. Thus, bits per node constitutes a good metric for comparing the likelihood of molecules of varying size. Additionally, we also report the standard deviation for all metrics over 4 independent runs. For this, we initialized the random number generator with the seeds 42, 43, 44, and 45 before creating the model. The specific validity values we obtained are 80.74%, 81.16%, 85.3% and 86.44% (in no particular order). It should be noted that the standard deviation among those models is considerably high. This is because the models in molecule generation are trained on maximizing the likelihood of the training dataset and not explicitly on generating valid molecules. We experienced that among over seeds, models that perform better in terms of likelihood do not necessarily perform better in terms of validity.\nWe also evaluated GraphCNF on the Moses (Polykovskiy et al., 2018) molecule dataset. Moses contains 1.9 million molecules with up to 30 heavy atoms of 7 different types. Again, we follow the preprocessing of Shi et al. (2020) and represent molecules in kekulized form in which hydrogen is removed. The results can be found in Table 5 and show that we achieve very similar scores to the experiments on Zinc250k. Compared to the normalizing flow baseline GraphAF, GraphCNF generates considerably more valid atoms while being parallel in generation in contrast to GraphAF being autoregressive. JT-VAE uses manually encoded rules for generating valid molecules only such that the validity rate is 100%. Overall, the experiment on Moses validates that GraphCNF is not specialized on a single dataset but can improve on current flow-based graph models across datasets.\nFinally, we show 12 randomly sampled molecules from our model in Figure 11. 
In general, GraphCNF is able to generate a very diverse set of molecules with a variety of atom types. This qualitative analysis supports the previous quantitative results of obtaining close to 100% uniqueness on 10k samples.

Besides generating multiple sub-graphs, the most common failure cases we have found are single invalid edges in large molecules, as shown in four examples in Figure 12." }, { "heading": "D EXPERIMENTAL SETTINGS", "text": "In this section, we detail the hyperparameter settings and datasets for all experiments. All experiments have been implemented using the deep learning framework PyTorch (Paszke et al., 2019). The experiments for graph coloring and molecule generation have been executed on a single NVIDIA TitanRTX GPU. The average training time was between 1 and 2 days. The set and language experiments have been executed on a single NVIDIA GTX1080Ti in 4 to 16 hours. All experiments have been repeated with at least 3 different random seeds." }, { "heading": "D.1 SET MODELING", "text": "Dataset details We use two toy datasets, set shuffling and set summation, to simulate a discrete distribution over sets in our experiments. Note that we do not have a classical train/val/test split, but instead train and test the models on samples from the same discrete distribution. This is because we want to verify whether a categorical normalizing flow and the other baselines can model an arbitrary discrete distribution. The special property of sets is that permuting the elements of a set still represents the same set. However, a generative model still has to learn all possible permutations. While an autoregressive model considers those permutations to be different data points, a permutation-invariant model such as a Categorical Normalizing Flow contains an inductive bias to assign the same likelihood to any permutation.

In set shuffling, we only have one set to model, which is the following (with categories $C_1$ to $C_{16}$):

$\{C_1, C_2, C_3, C_4, C_5, C_6, C_7, C_8, C_9, C_{10}, C_{11}, C_{12}, C_{13}, C_{14}, C_{15}, C_{16}\}$

This set has 16! possible permutations and is therefore challenging to model. The optimal likelihood in bits per element is calculated as $\log_2(16!)/16 \approx 2.77$. The set summation dataset consists of 2,200 valid sets for N = 16 and L = 42. An example of a valid set is: $\{1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 6, 8\}$. For readability, the set is sorted by ascending values, although any permutation of the elements represents the exact same set. Taking into account all possible permutations of the sets in the dataset, we obtain an optimal likelihood of $\log_2(6.3 \cdot 10^{10})/16 \approx 2.24$ (a code sketch of this enumeration is given below). The values for the sequence length N and sum L were chosen such that the task is challenging enough to show the differences between Categorical Normalizing Flows and the baselines, but not so challenging as to require unnecessarily long training times and model complexities.

Hyperparameter details Table 6 shows an overview of the hyperparameters per model applied to set modeling. We use the notation "{val1, val2, ...}" to show the different values we tried during the hyperparameter search. Thereby, the underlined value denotes the hyperparameter value with the best performance, which was finally used to generate the results in Table 3.

The number of encoding coupling layers in Categorical Normalizing Flows depends on the encoding distribution used. The mixture model uses no additional coupling layers, while for the linear flows, we apply 4 affine coupling layers using an external input for the discrete category.
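Returning to the set-summation construction above: the valid sets (up to permutation) can be enumerated exhaustively. Below is a small sketch of such an enumeration, listing every non-decreasing sequence of length N over {1, ..., N} that sums to L; the function and argument names are ours.

```python
def set_summation_dataset(n=16, categories=16, total=42):
    """Enumerate all sorted sets of size n over {1..categories} with the given sum."""
    results = []

    def extend(prefix, slots, budget):
        if slots == 0:
            if budget == 0:
                results.append(tuple(prefix))
            return
        lowest = prefix[-1] if prefix else 1
        for value in range(lowest, categories + 1):
            rest = budget - value
            # Prune: remaining slots must each take a value in [value, categories].
            if rest < (slots - 1) * value or rest > (slots - 1) * categories:
                continue
            extend(prefix + [value], slots - 1, rest)

    extend([], n, total)
    return results

print(len(set_summation_dataset()))  # number of valid sets up to permutation
```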
For the variational encoding distribution q(z|x), we use 4 mixture coupling layers across the all latent variables z with external input for x. A larger dimensionality of the latent space per element showed to be beneficial for all encoding distributions. Note that due to a dimensionality larger than 1 per\nelement, we can apply the channel mask instead of a chess mask and maintain permutation invariance compared to the baselines.\nIn variational dequantization and Discrete NF, we sort the categories randomly for set shuffling (the distribution is invariant to the category order/assignment) and in ascending order for set summation. In Discrete NF, we followed the published code from Tran et al. (2019) for their coupling layers and implemented it in PyTorch (Paszke et al., 2019). We use a discrete prior over the set elements which is jointly optimized with the flow. However, we experienced significant optimization issues due to the straight-through gradient estimator in the Gumbel Softmax.\nAcross this paper, we experiment with the two optimizers Adam (Kingma and Ba, 2015) and RAdam (Liu et al., 2020), and experienced RAdam to work slightly better. The learning rate decay is applied every update and leads to exponential decay. However, we did not observe the choice of this hyperparameter to be crucial." }, { "heading": "D.2 GRAPH COLORING", "text": "Dataset details In our experiments, we focus on the 3-color problem meaning that a graph has to be colored using K = 3 colors. We generate the datasets by randomly sampling a graph and using an SAT solver2 for finding one valid coloring assignment. In case no solution can be found, we discard the graph and sample a new graph. We further ensure that every graph cannot be colored by less than 3 colors to exclude too simple graphs. For creating the graphs, we take inspiration from Lemos et al. (2019) and first uniformly sample the number of nodes between 10 ≤ |V | ≤ 20 for the small dataset, and 25 ≤ |V | ≤ 50 for the large dataset. Next, we sample a value p between 0.1 and 0.3 which represents the probability of having an edge between a random pair of nodes. Thus, p controls how dense a graph is, and we aim to have both dense and sparse graphs in our dataset. Finally, for each pair of nodes, we sample from a Bernoulli distribution with probability p of adding an edge between the two nodes or not. Finally, we check whether each node has at least one connection and that all nodes can be reached from any other node. This ensures that we have one connected graph and not multiple sub-graphs. Overall, we create a train/val/test size of 192k/24k/24k for the small dataset, and 450k/20k/30k for the large graphs. We visualize examples of the datasets in Figure 13.\nDuring training, we randomly permute the colors of a graph (e.g. red becomes blue, blue becomes green, green becomes red) as any permutation is a valid color assignment. When we sample a color assignment from our models, we explicitly use a temperature value of 1.0. For the autoregressive\n2We have used the following solver from the OR-Tools library in python: https://developers.google.com/optimization/cp/cp_solver\nmodel and the VAE, this means that we sample from the softmax output. A common alternative is to take the argmax, which corresponds to a temperature value of 0.0. However, we stick to the original distribution because we want to test whether the models capture the full discrete distribution of valid color assignments and not only the most likely solution. 
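Referring back to the dataset construction at the start of Appendix D.2, the random-graph sampling described there can be sketched as follows; the SAT-based 3-coloring step via OR-Tools is left out, and the function names are ours.

```python
import random

def sample_candidate_graph(min_nodes=10, max_nodes=20):
    """Sample one random graph per the procedure above; returns None if the
    graph has isolated nodes or is not a single connected component."""
    n = random.randint(min_nodes, max_nodes)
    p = random.uniform(0.1, 0.3)  # edge probability controls graph density
    adjacency = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:  # Bernoulli(p) per node pair
                adjacency[i].add(j)
                adjacency[j].add(i)
    # Reject graphs with isolated nodes.
    if not all(adjacency.values()):
        return None
    # Reject graphs that are not one connected component.
    seen, stack = {0}, [0]
    while stack:
        for w in adjacency[stack.pop()] - seen:
            seen.add(w)
            stack.append(w)
    return adjacency if len(seen) == n else None
```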
For the normalizing flow, a temperature of 1.0 corresponds to sampling from the prior distribution as it was used during training.\nHyperparameter details Table 7 shows an overview of the used hyperparameters. If “/” is used in the table, the first parameter refers to the hyperparameter value used on a small dataset and the second for the larger dataset. The activation function used within the graph neural networks is GELU (Hendrycks and Gimpel, 2016). Interestingly we experience that a larger latent space dimensionality is crucial for larger graphs despite having the same number of categories as the small dataset. This shows that having an encoding being flexible in the number of dimensions can be further important for datasets where complex relations between categorical variables need to be modeled. Increasing the number of dimensions on the small dataset did not show any significant differences in performance. The number of mixtures in the mixture coupling layers is in general beneficial to be large. However, this can also increase the sampling time. In the case of sampling time being crucial, the number of mixtures can be decreased for the tradeoff of slightly worse performance.\nThe input to the autoregressive model is the graph with the color assignment at time step T where each category including unassigned nodes is represented by an embedding vector. We experiment with an increasing number of hidden layers. While more layers are especially important for sub-optimal node ordering, the performance does not significantly improve for more than 5 layers. As the sampling time also increases linearly with the number of layers, we use 5 hidden layers for the models.\nFor the variational autoencoder, we encode each node by a latent vector of size 4. As VAEs have shown to benefit from slowly adding the KL divergence between prior and posterior to the loss, we experiment with a scheduler where the slope is based on a sigmoid and stretched over 10k iterations. We apply a 5 layer graph attention network for both the encoder and decoder. Increasing the number of layers did not show a significant gain while making the loss scheduling more difficult, which is why we stuck with 5 layers.\nDetailed results Table 8 shows the standard deviation of the results reported in Table 2. Each model was run with 3 different seeds." }, { "heading": "Hyperparameters GraphCNF Variational AE Autoregressive", "text": "" }, { "heading": "D.3 MOLECULE GENERATION", "text": "Dataset details The Zinc250k (Irwin et al., 2012) dataset we use contains 239k molecules of which we use 214k molecules for training, 8k for validation, and 17k for testing. We follow the preprocessing of Shi et al. (2020) and represent molecules in kekulized form in which hydrogen is removed. This leaves the molecules with up to 38 heavy atoms, with a mean and median size of about 23. The smallest graph consists of 8 nodes. Thereby, Zinc250k considers molecule with 8 different atom types where the distribution is significantly imbalanced. The most common atom is carbon with 73% of all nodes in the dataset. Besides oxygen (10%) and nitrogen (12%), the rest of the atoms occur in less than 2% of all nodes, with the rarest atom being Bromine (0.002%). Between those atoms, the dataset contains 3 different bonds or edge types, namely single, double and triple covalent bonds describing how many electrons are shared among the atoms. In over 90% of all node pairs there exist no bond. 
In 7% of the cases, the atoms are connected with a single connection, 2.4% with a double, and 0.02% with a triple connection. A similar imbalance is present in the Moses dataset and is based on the properties of molecules. Nevertheless, we experienced that GraphCNF was able to generate a similar distribution, where adding the third stage (adding virtual edges later) considerably helped to stabilize the edge imbalance.\nHyperparameter details We summarize our hyperparameters in Table 9. Generally, a higher latent dimensionality is beneficial for representing nodes/atoms, similar to the graph coloring task. However, we experienced that a lower dimensionality for edges is slightly better, presumably because the flow already has a significant amount of latent variables for edges. Many edges, especially the virtual ones, do not contain much information. Besides, a deeper flow showed to gain better results offering more complex transformations. However, in contrast to the graph coloring model, GraphCNF on\nmolecule generation requires a considerable amount of memory as we have to model a feature vector per edge. Nevertheless, we did not experience any issues due to the limited batch size of 96, and during testing, we could scale up the batch size easily to more than 128 on an NVIDIA GTX 1080Ti for both datasets." }, { "heading": "D.4 LANGUAGE MODELING", "text": "Dataset details The three datasets we use for language modeling are the Penn Treebank (Marcus et al., 1994), text8 and Wikitext103 (Merity et al., 2017). The Penn Treebank with a preprocessing of Mikolov et al. (2012) consists of approximately 5M characters and has a vocabulary size of K = 51. We follow the setup of Ziegler and Rush (2019) and split the dataset into sentences of a maximum length of 288. Furthermore, instead of an end-of-sentence token, the length is passed to the model and encoded by an external discrete prior which is created based on the sentence lengths in the training dataset.\nText8 contains about 100M characters and has a vocabulary size of K = 27. We again follow the preprocessing of Mikolov et al. (2012) and split the dataset into 90M characters for training, and 5M characters each for validation and testing. We train and test the models on a sequence length of 256.\nIn contrast to the previous two datasets, we use Wikitext103 as a word-level language dataset. First, we create a vocabulary and limit it to the most frequent 10,000 words in the training corpus. We thereby use pre-trained Glove (Pennington et al., 2014) embeddings to represent the words in the baseline LSTM networks and to determine the logistic mixture parameters in the encoding distribution of Categorical Normalizing Flows. Due to this calculation of the mixture parameters, we use a small linear network as a decoder. The linear network consists of three linear layers of hidden size 512 with GELU (Hendrycks and Gimpel, 2016) activation and output size of 10,000 (the vocabulary size). Similarly to text8, we train and test the models on an input sequence length of 256.\nHyperparameter details The hyperparameters for the language modeling experiments are summarized in Table 10. We apply the same hyperparameters for the flow and baseline if applicable. The best latent dimensionality for character-level is 3, although larger dimensionality showed to gain similar performance. For the word-level dataset, it is beneficial to increase the latent dimensionality to 10. However, note that 10 is still significantly smaller than the Glove vector size of 300. 
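A sketch of how the word-level mixture parameters could be derived from frozen GloVe vectors, as described above for Wikitext103; the layer sizes and names are illustrative, not taken from the released code.

```python
import torch
import torch.nn as nn

class GloveMixtureEncoder(nn.Module):
    """Derive per-word logistic parameters (mu, log_sigma) from GloVe vectors."""
    def __init__(self, glove_vectors, latent_dim=10):
        super().__init__()
        # glove_vectors: (vocab_size, 300) pre-trained embedding matrix, kept frozen.
        self.glove = nn.Embedding.from_pretrained(glove_vectors, freeze=True)
        self.to_params = nn.Linear(glove_vectors.shape[1], 2 * latent_dim)

    def forward(self, words):
        mu, log_sigma = self.to_params(self.glove(words)).chunk(2, dim=-1)
        u = torch.rand_like(mu).clamp(1e-6, 1 - 1e-6)
        z = (u.log() - (1 - u).log()) * log_sigma.exp() + mu  # logistic sample
        return z
```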
As Penn Treebank has a limited training dataset on which LSTM networks easily overfit, we use a dropout (Srivastava et al., 2014) of 0.3 throughout the models and dropout an input token with a chance of 0.1. The other datasets seemed to benefit slightly by a small input dropout to prevent overfitting at later stages of the training.\nDetailed results Table 11 shows the detailed numbers and standard deviations of the results reported in Figure 4. Each model was run with 3 different seeds based on which the standard deviation has been calculated." }, { "heading": "Model Likelihood in bpd", "text": "" }, { "heading": "D.5 REAL-WORLD EXPERIMENTS", "text": "" } ]
2021
Categorical Normalizing Flows via Continuous Transformations
SP:6c4659d71144bea924d9e77ee2be1bd6d11cf7f0
[ "The paper extends over hypergraph convolutional networks (HCN) by adding a temporal evolution module in order to solve prediction tasks in a dynamic environment. The main part of the paper is the description of the proposed system. It is composed of a HCN for computing node embeddings at each time step and a LSTM as the temporal module. Experimental results are provided for dynamic prediction tasks over stock datasets.", "This paper proposes a method called DyHCN for learning dynamic hypergraph convolutional networks where the hypergraph structure is allowed to evolve over time. The interactions within each hyper edge, that between nodes, as well as related are used to learn the hypergaph embedding. The evolution of the centroid nodes is then modelled using LSTM. DyHCN gives better modelling accuracy as compared to some existing ones." ]
Hypergraph Convolutional Network (HCN) has become a default choice for capturing high-order relations among nodes, i.e., encoding the structure of a hypergraph. However, existing HCN models ignore the dynamic evolution of hypergraphs in real-world scenarios, i.e., nodes and hyperedges in a hypergraph change dynamically over time. To capture the evolution of high-order relations and facilitate relevant analytic tasks, we formulate the dynamic hypergraph and devise Dynamic Hypergraph Convolutional Networks (DyHCN). In general, DyHCN consists of a Hypergraph Convolution (HC) module to encode the hypergraph structure at a time point and a Temporal Evolution (TE) module to capture the evolution of the relations. The HC is delicately designed with inner attention and outer attention, which adaptively aggregate node features into hyperedges and estimate the importance of each hyperedge connected to the centroid node, respectively. Extensive experiments on the Tiigo and Stocktwits datasets show that DyHCN achieves superior performance over existing methods, which demonstrates the effectiveness of the HC and TE modules in capturing the properties of dynamic hypergraphs.
[]
[ { "authors": [ "T-H Hubert Chan", "Anand Louis", "Zhihao Gavin Tang", "Chenzi Zhang" ], "title": "Spectral properties of hypergraph laplacian and approximation algorithms", "venue": "Journal of the ACM (JACM),", "year": 2018 }, { "authors": [ "Deli Chen", "Yanyan Zou", "Keiko Harimoto", "Ruihan Bao", "Xuancheng Ren", "Xu Sun" ], "title": "Incorporating fine-grained events in stock movement prediction", "venue": null, "year": 1910 }, { "authors": [ "Michaël Defferrard", "Xavier Bresson", "Pierre Vandergheynst" ], "title": "Convolutional neural networks on graphs with fast localized spectral filtering", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Fuli Feng", "Xiangnan He", "Xiang Wang", "Cheng Luo", "Yiqun Liu", "Tat-Seng Chua" ], "title": "Temporal relational ranking for stock prediction", "venue": "ACM Transactions on Information Systems (TOIS),", "year": 2019 }, { "authors": [ "Yifan Feng", "Haoxuan You", "Zizhao Zhang", "Rongrong Ji", "Yue Gao" ], "title": "Hypergraph neural networks", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Justin Gilmer", "Samuel S Schoenholz", "Patrick F Riley", "Oriol Vinyals", "George E Dahl" ], "title": "Neural message passing for quantum chemistry", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Will Hamilton", "Zhitao Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Sheng-Hsun Hsu", "JJ Po-An Hsieh", "Ting-Chih Chih", "Kuei-Chu Hsu" ], "title": "A two-stage architecture for stock price forecasting by integrating self-organizing map and support vector regression", "venue": "Expert Systems with Applications,", "year": 2009 }, { "authors": [ "Ziniu Hu", "Weiqing Liu", "Jiang Bian", "Xuanzhe Liu", "Tie-Yan Liu" ], "title": "Listening to chaotic whispers: A deep learning framework for news-oriented stock trend prediction", "venue": "In Proceedings of the eleventh ACM international conference on web search and data mining,", "year": 2018 }, { "authors": [ "Jianwen Jiang", "Yuxuan Wei", "Yifan Feng", "Jingxuan Cao", "Yue Gao" ], "title": "Dynamic hypergraph neural networks", "venue": "In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI),", "year": 2019 }, { "authors": [ "Thomas N Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "arXiv preprint arXiv:1609.02907,", "year": 2016 }, { "authors": [ "Pan Li", "Olgica Milenkovic" ], "title": "Inhomogeneous hypergraph clustering with applications", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Pan Li", "Olgica Milenkovic" ], "title": "Submodular hypergraphs: p-laplacians, cheeger inequalities and spectral clustering", "venue": "arXiv preprint arXiv:1803.03833,", "year": 2018 }, { "authors": [ "Jeffrey Pennington", "Richard Socher", "Christopher D Manning" ], "title": "Glove: Global vectors for word representation", "venue": "In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP),", "year": 2014 }, { "authors": [ "Franco Scarselli", "Marco Gori", "Ah Chung Tsoi", "Markus Hagenbuchner", "Gabriele Monfardini" ], "title": "The graph neural network model", "venue": "IEEE Transactions on Neural 
Networks,", "year": 2008 }, { "authors": [ "Michael Schlichtkrull", "Thomas N Kipf", "Peter Bloem", "Rianne Van Den Berg", "Ivan Titov", "Max Welling" ], "title": "Modeling relational data with graph convolutional networks", "venue": "In European Semantic Web Conference,", "year": 2018 }, { "authors": [ "Joakim Skarding", "Bogdan Gabrys", "Katarzyna Musial" ], "title": "Foundations and modelling of dynamic networks using dynamic graph neural networks: A survey", "venue": "arXiv preprint arXiv:2005.07496,", "year": 2020 }, { "authors": [ "Naganand Yadati", "Madhav Nimishakavi", "Prateek Yadav", "Vikram Nitin", "Anand Louis", "Partha Talukdar" ], "title": "Hypergcn: A new method for training graph convolutional networks on hypergraphs", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Rex Ying", "Ruining He", "Kaifeng Chen", "Pong Eksombatchai", "William L Hamilton", "Jure Leskovec" ], "title": "Graph convolutional neural networks for web-scale recommender systems", "venue": "In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2018 }, { "authors": [ "Zhihong Zhang", "Lu Bai", "Yuanheng Liang", "Edwin Hancock" ], "title": "Joint hypergraph learning and sparse regression for feature selection", "venue": "Pattern Recognition,", "year": 2017 }, { "authors": [ "Zizhao Zhang", "Haojie Lin", "Yue Gao", "KLISS BNRist" ], "title": "Dynamic hypergraph structure learning", "venue": "In IJCAI,", "year": 2018 }, { "authors": [ "Dengyong Zhou", "Jiayuan Huang", "Bernhard Schölkopf" ], "title": "Learning with hypergraphs: Clustering, classification, and embedding", "venue": "In Advances in neural information processing systems,", "year": 2007 }, { "authors": [ "Xiaofeng Zhu", "Yonghua Zhu", "Shichao Zhang", "Rongyao Hu", "Wei He" ], "title": "Adaptive hypergraph learning for unsupervised feature selection", "venue": "In IJCAI,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Graph Convolutional Network (GCN) Scarselli et al. (2008) extends deep neural networks to process graph data, which encodes the relations between nodes via propagating node features over the graph structure. GCN has become a promising solution in a wide spectral of graph analytic tasks, such as relation detection Schlichtkrull et al. (2018) and recommendation Ying et al. (2018). An emergent direction of GCN research is extending the graph covolution operations to hypergraphs, i.e., hypergraph convolutional networks Zhu et al. (2017); Zhou et al. (2007); Zhang et al. (2017); Feng et al. (2019b); Yadati et al. (2019), where high-order node relations are represented as hyperedges (one hyperedge can connect multiple nodes). For instance, in a hypergraph of stocks, an financial event relevant to several stocks is represented as a hyperedge. While a surge of attention paid on hypergraph convolutional networks, most of them discard the dynamic property of hypergraphs in real-world applications, e.g., new hyperedges (i.e., events) emerge in the hypergraph of stocks (see Fig. 1), where the evolution of the hypergraph is crucial for the analytic tasks (e.g., stock price prediction). Aiming to bridge the gap, this work explore the central theme of dynamic hypergraph and the corresponding GCN.\nFormally, a hypergraph with n nodes and m hyperedges is represented as G = (V ,E,A,H,X) where V and E denote the set of nodes and hyperedges respectively; A ∈ Rn×m is an incidence matrix with binary value indicating the connectedness of nodes; H ∈ Rm×c and X ∈ Rn×d are features represent the hyperedges and nodes respectively. In order to account for the evolution, we first extend the concept of static hypergraph to dynamic hypergraph, which has two different formulations when treating the time as continuous value or discrete value. 1) Discrete-time formulation. A straightforward solution is to treat a time window with length of T (e.g., T days) as a sequence of time-steps and get a snapshot at each time-step. In this way, a dynamic hypergraph is defined as GD = [G1, · · · ,Gt, · · · ,GT ]T where Gt is a hypergraph dumped at time-step t. 2) Continuous formulation. By treating time as a continuous variable, the dynamic hypergraph can be defined as GC = (G0,U) where G0 is the initial status (a hypergraph) and U = {(pt, v t, at)|t <= T} is a streaming of updates. pt denotes the target variable (e.g., a row of X) changed at time t ; v t denotes the latest value of the target variable, at denotes the action of change, including add, delete,\nupdate. It should be noted that both formulations have pros and cons, e.g., the discrete-time formulation is more friendly to existing analytic techniques on static hypergraph such as HCN while the continuous-time formulation records the accurate time of changes. This work focuses on the discrete-time formulation and makes the first attempt to extend HCN to dynamic hypergraph.\nA big challenge of capturing spatial-temporal dependency in a dynamic hypergraph is that it is tough to extract the features of those changing nodes or hyperedges in a unified manner for the sake of varied scales of nodes and hyperedges. Besides, how to absorb their dynamic properties is very important for various application tasks. Towards this end, we need to design the proper convolution operations on dynamic hypergraph. 
A big challenge in capturing the spatial-temporal dependency of a dynamic hypergraph is that it is hard to extract the features of changing nodes or hyperedges in a unified manner, due to the varied scales of nodes and hyperedges. Besides, absorbing their dynamic properties is important for the downstream application tasks. Towards this end, we need to design proper convolution operations on dynamic hypergraphs. There are two key challenges: 1) at each time step, since there are various relations between hyperedges and nodes, it is important to update the node features by considering the various relations within the hyperedges; 2) because the node features change dynamically, modeling the temporal dependency requires extracting the corresponding temporal features.

In this work, we propose the framework of Dynamic Hypergraph Convolutional Networks (DyHCN) to tackle these challenges, which has two modules: a Hypergraph Convolution (HC) module and a Temporal Evolution (TE) module. In a dynamic hypergraph, the set of hyperedges at each time step includes different hyperedge embeddings, and each hyperedge contains a different number of nodes. We exploit three submodules to update a node’s embedding in HC: inner attention, outer attention, and embedding update. First, inner attention transforms the node features within a hyperedge into a node-hyperedge feature; then outer attention uses an attention mechanism to estimate the importance of each hyperedge and outputs the importance weights; finally, we update the node’s embedding by aggregating the node-hyperedge, hyperedge, and node features with the weight of each hyperedge. Given the node embeddings over time, the TE module extracts temporal features and makes a prediction. Extensive experimental results on two real-world datasets validate the superior performance of DyHCN over existing baselines, which demonstrates the effectiveness of DyHCN on dynamic hypergraphs.

The rest of the paper is organized as follows. Section 2 introduces preliminary knowledge about GCN and hypergraph convolutional networks. Section 3 explains the proposed DyHCN method. Section 4 reviews related work on GCNs for graphs and hypergraphs. Applications and experimental results are presented in Section 5. Finally, we conclude this work in Section 6." }, { "heading": "2 PRELIMINARY", "text": "Graph Convolutional Network Given a graph $G = (V, E)$ with $N$ nodes $v_i \in V$, edges $(v_i, v_j) \in E$, an adjacency matrix $A \in \mathbb{R}^{N \times N}$, a degree matrix $D_{ii} = \sum_j A_{ij}$, and an input signal $x$, Kipf & Welling (2016) consider spectral convolutions on graphs with a filter $g_\theta = \mathrm{diag}(\theta)$ in the Fourier domain, $g_\theta \star x = U g_\theta U^\top x$, where $U$ is the matrix of eigenvectors of the normalized graph Laplacian $L = I_N - D^{-1/2} A D^{-1/2} = U \Lambda U^\top$, with $\Lambda$ a diagonal matrix of eigenvalues and $U^\top x$ the graph Fourier transform of $x$. In order to reduce the computational complexity, $g_\theta$ is approximated with Chebyshev polynomials $T_k(x) = 2x T_{k-1}(x) - T_{k-2}(x)$ Defferrard et al. (2016), which can be formulated as $g_\theta \approx \sum_{k=0}^{K} \theta_k T_k(\hat{\Lambda})$, where $\hat{\Lambda} = \frac{2}{\lambda_{max}} \Lambda - I$, $\lambda_{max}$ denotes the largest eigenvalue of the Laplacian matrix, and $\theta_k$ denotes the Chebyshev coefficients. Kipf & Welling (2016) showed that the GCN can be simplified to $K = 1$ and $\lambda_{max} \approx 2$, which is the state-of-the-art form of GCN.

Hypergraph Convolutional Network A hypergraph can be formulated as $G = (V, E, W)$, where $V$ is a set of vertices, $E$ is a set of hyperedges, and $W$ is a diagonal matrix which denotes the weight of each hyperedge. The adjacency matrix of hypergraph $G$ can be denoted by $H \in \mathbb{R}^{|V| \times |E|}$. The degree of a node is $d(v) = \sum_{e \in E} w(e) h(v, e)$ and the degree of an edge is $\delta(e) = \sum_{v \in V} h(v, e)$. $D_e$ and $D_v$ denote the diagonal matrices of edge degrees and node degrees, respectively. The spectral convolution of $x$ and filter $g$ can be formulated as $g \star x = \Phi((\Phi^\top g) \odot (\Phi^\top x)) = \Phi g(\Lambda) \Phi^\top x$, where $\odot$ denotes element-wise multiplication and $g(\Lambda)$ is a function of the Fourier coefficients Feng et al. (2019b). As in the simplification of GCN, the convolution operation can be simplified to $g \star x \approx \theta D_v^{-1/2} H W D_e^{-1} H^\top D_v^{-1/2} x$. A minimal implementation sketch of this simplified hypergraph convolution is given below."
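The following is a minimal PyTorch sketch (our illustration, not the authors' code) of one simplified hypergraph convolution layer; `H` is the incidence matrix and `w` the optional hyperedge weights from the formula above:

```python
import torch
import torch.nn as nn

class SimplifiedHypergraphConv(nn.Module):
    """One hypergraph convolution layer following
    g * x ~= Theta D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2} X (Feng et al., 2019b)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.theta = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, X, H, w=None):
        # X: (n, in_dim) node features; H: (n, m) incidence matrix;
        # w: (m,) hyperedge weights (all ones if None).
        n, m = H.shape
        w = torch.ones(m, device=H.device) if w is None else w
        Dv = (H * w).sum(dim=1).clamp(min=1e-6)    # node degrees d(v)
        De = H.sum(dim=0).clamp(min=1e-6)          # edge degrees delta(e)
        Dv_isqrt = Dv.pow(-0.5)
        Xp = Dv_isqrt.unsqueeze(1) * X             # D_v^{-1/2} X
        msg = H.t() @ Xp                           # gather nodes into hyperedges
        msg = (w / De).unsqueeze(1) * msg          # apply W D_e^{-1}
        out = Dv_isqrt.unsqueeze(1) * (H @ msg)    # scatter back to nodes
        return self.theta(out)                     # learnable filter Theta
```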
}, { "heading": "3 DYNAMIC HYPERGRAPH CONVOLUTIONAL NETWORKS", "text": "" }, { "heading": "3.1 FORMULATION OF DYNAMIC HYPERGRAPH", "text": "A dynamic hypergraph can be formulated in two ways: as a discrete-time or a continuous-time dynamic hypergraph. The discrete-time approach views a dynamic hypergraph as a collection of static hypergraph snapshots over time, while the continuous counterpart extracts fine-grained temporal information on nodes and hyperedges, which characterizes the dynamic evolution of the hypergraph.

Discrete-time Dynamic Hypergraph A discrete-time dynamic hypergraph can be formulated as $G_D = (V_t, E_t, A_t, H_t, X_t)$, where $X_t = [x_1^t, x_2^t, \cdots, x_n^t]^\top \in \mathbb{R}^{n \times d}$ and $H_t = [h_1^t, h_2^t, \cdots, h_m^t]^\top \in \mathbb{R}^{m \times c}$; $x_i^t$ $(i = 1, 2, \cdots, n)$ denotes the feature of the $i$-th node and $h_j^t$ $(j = 1, 2, \cdots, m)$ denotes the feature of the $j$-th hyperedge; and $m$ and $n$ are the numbers of hyperedges and nodes in hypergraph $G_t$ (the hypergraph at time step $t$). $A_t \in \mathbb{R}^{n \times m}$ is an incidence matrix whose binary values indicate the connectedness of nodes in hypergraph $G_t$. $V_t$ is the set of nodes and $E_t$ is the set of hyperedges. $C_e^t = [u_1^t, u_2^t, \cdots, u_{k_e^t}^t]^\top \in \mathbb{R}^{k_e^t \times d}$ and $D_u^t = [e_1^t, e_2^t, \cdots, e_{k_u^t}^t]^\top \in \mathbb{R}^{k_u^t \times c}$ denote the set of nodes contained in hyperedge $e$ and the set of hyperedges containing node $u$ at time step $t$, respectively. Note that $k_e^t$ and $k_u^t$ are the number of nodes in hyperedge $e$ and the number of hyperedges containing node $u$ at time $t$, respectively. As the representations evolve over time, we capture the spatial dependency with hypergraph convolutional networks and model the temporal dependency with a sequence model (an LSTM; see Section 3.3).

Continuous-time Dynamic Hypergraph A continuous-time dynamic hypergraph can be defined as $G_C = (G_0, U)$, where $G_0$ is the initial status (a hypergraph) and $U = \{(p_t, v_t, a_t) \mid t \le T\}$ is a stream of updates. $p_t$ denotes the target variable (e.g., a row of $X$) changed at time $t$; $v_t$ denotes the latest value of the target variable; and $a_t$ denotes the action of change: add, delete, or update. Since a static hypergraph model can be extended to dynamic hypergraphs by applying it to each snapshot and then aggregating the results, and the distinction between an evolving and a temporal network is less important Skarding et al. (2020), we adopt the discrete-time dynamic hypergraph to build the DyHCN model in our work.

DyHCN DyHCN is composed of two modules: hypergraph convolution (HC) and temporal evolution (TE). The HC module is designed to aggregate features among nodes and hyperedges with attention mechanisms and to update the embeddings of centroid nodes. The TE module captures the dynamic changes of temporal features. The framework of DyHCN is illustrated in Fig. 2." }, { "heading": "3.2 HYPERGRAPH CONVOLUTION", "text": "Hypergraph convolution consists of three submodules: inner attention, outer attention, and embedding update. In particular, inner attention aggregates node features into hyperedges, outer attention uses an attention mechanism to determine the importance of each hyperedge, and the embedding update submodule aggregates node-hyperedge features, hyperedge features, and node features to update the centroid node embedding with the weight of each hyperedge.

Inner attention The inner attention, shown in the left panel of Fig. 3, aggregates node embeddings into node-hyperedge features using a self-attention mechanism.
With a multi-layer perceptron (MLP) we obtain a weight score for each node. For a specific node $x_i^t$ at time step $t$, the input of inner attention is $C_e^t = [u_1^t, u_2^t, \cdots, u_{k_e^t}^t]^\top \in \mathbb{R}^{k_e^t \times d}$, and the output node-hyperedge embedding $d^t$ is the weighted sum of the node features, formulated as:

$\omega^t = \mathrm{softmax}(C_e^t w_e + b_e)$, (1)

$d^t = \sum_{j=0}^{k_e^t} \omega_j^t u_j^t$, (2)

where $w_e \in \mathbb{R}^{d \times 1}$ and $b_e \in \mathbb{R}^{k_e^t \times 1}$ are trainable parameters, $\omega^t \in \mathbb{R}^{k_e^t \times 1}$ is the weight of the nodes in the hyperedge, $d^t \in \mathbb{R}^{1 \times d}$ denotes the node-hyperedge features, $k_e^t$ denotes the number of nodes in the hyperedge, and $d$ is the node feature dimension.

Outer attention Since multiple hyperedges relate to the centroid node, and the importance of each hyperedge is different, we propose an outer attention submodule to determine the weight of each hyperedge. The right panel of Fig. 3 shows the outer attention submodule, which calculates the weight of each hyperedge based on the hyperedge features. For a specific node $x_i^t$, the input of outer attention is $D_u^t = [e_1^t, e_2^t, \cdots, e_{k_u^t}^t]^\top \in \mathbb{R}^{k_u^t \times c}$, the set of hyperedges containing vertex $x_i^t$, and the output is $\omega_h^t$, the weight of each hyperedge at time step $t$:

$r_u^t = \mathrm{sigmoid}(D_u^t w_u + b_u)$, (3)

$\omega_h^t = \mathrm{softmax}(r_u^t)$, (4)

where $w_u \in \mathbb{R}^{c \times 1}$ and $b_u \in \mathbb{R}^{k_u^t \times 1}$ are trainable parameters, $\omega_h^t \in \mathbb{R}^{k_u^t \times 1}$ is the weight vector over hyperedges, $k_u^t$ is the number of hyperedges containing vertex $u$ at time step $t$, and $c$ is the hyperedge feature dimension.

Embedding Update With the outputs of inner attention and outer attention, we update the centroid node embedding $s_i^t$ by aggregating the node’s input features $x_i^t$, the node-hyperedge features $d^t$, and the hyperedge features $h_i^t$ with the hyperedge weights $\omega_h^t$. We explore three aggregation methods. 1) Concatenated features: we concatenate the node-hyperedge features and hyperedge features directly with a tanh activation, $q^t = \tanh([d^t : h_i^t]) \in \mathbb{R}^{1 \times (d+c)}$. 2) Dot-product features: we multiply the node-hyperedge features with the hyperedge features element-wise to model the interaction between the two kinds of features, $q^t = \tanh(d^t \odot h_i^t) \in \mathbb{R}^{1 \times d}$ (by setting $d = c$), where $\odot$ denotes the element-wise product. 3) MLP features: we concatenate the node-hyperedge features with the hyperedge features and apply an MLP to aggregate them, $q^t = \tanh([d^t : h_i^t] W_c + b_c) \in \mathbb{R}^{1 \times d}$, where $W_c \in \mathbb{R}^{(d+c) \times d}$ and $b_c \in \mathbb{R}^{1 \times d}$ are trainable parameters. Note that $q^t$ only stands for the aggregated features of one hyperedge, so for the $k_u^t$ hyperedges we obtain a feature matrix $Q_i^t = [q_0^t, q_1^t, \cdots, q_{k_u^t}^t]^\top$, which captures the influence from the nodes and each hyperedge.

Considering the weight of each hyperedge $\omega_h^t$, we first calculate the weighted sum of the aggregated features $Q_i^t$ to measure the influence from all hyperedges and related nodes, and then update the specific node embedding $s_i^t$ from the input feature $x_i^t$ and the influence embedding:

$z_i^t = \mathrm{sum}(\omega_h^t \cdot Q_i^t)$, (5)

$s_i^t = \tanh([x_i^t : z_i^t] W_h + b_h)$, (6)

where $z_i^t \in \mathbb{R}^{1 \times d}$ is the weighted aggregated feature, and $W_h \in \mathbb{R}^{2d \times d}$ and $b_h \in \mathbb{R}^{1 \times d}$ are trainable parameters. A compact sketch of the whole HC module appears below."
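For concreteness, here is a minimal PyTorch sketch of the HC module (Eqs. 1-6, using the MLP aggregation) for a single centroid node at one time step; the module and argument names are our assumptions, and the published implementation may batch these computations differently:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HypergraphConvolution(nn.Module):
    """Sketch of the HC module (Eqs. 1-6) for one centroid node at one time step.
    C_list holds the member-node features of each incident hyperedge,
    D_u the features of those hyperedges, x_i the centroid node feature."""
    def __init__(self, d, c):
        super().__init__()
        self.w_e = nn.Linear(d, 1)       # inner attention (Eq. 1)
        self.w_u = nn.Linear(c, 1)       # outer attention (Eq. 3)
        self.w_c = nn.Linear(d + c, d)   # MLP aggregation of [d^t : h^t]
        self.w_h = nn.Linear(2 * d, d)   # embedding update (Eq. 6)

    def forward(self, x_i, C_list, D_u):
        # x_i: (d,); C_list: list of (k_e, d) tensors; D_u: (k_u, c)
        q = []
        for C_e, h_e in zip(C_list, D_u):
            w = F.softmax(self.w_e(C_e).squeeze(-1), dim=0)        # Eq. 1
            d_t = (w.unsqueeze(1) * C_e).sum(dim=0)                # Eq. 2
            q.append(torch.tanh(self.w_c(torch.cat([d_t, h_e]))))  # MLP features
        Q = torch.stack(q)                                          # (k_u, d)
        w_h = F.softmax(torch.sigmoid(self.w_u(D_u)).squeeze(-1), dim=0)  # Eqs. 3-4
        z = (w_h.unsqueeze(1) * Q).sum(dim=0)                       # Eq. 5
        return torch.tanh(self.w_h(torch.cat([x_i, z])))            # Eq. 6
```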
}, { "heading": "3.3 TEMPORAL EVOLUTION", "text": "The centroid node embeddings extracted by HC are independent across time steps; we obtain an embedding for each centroid node $i$ at every step, i.e., $S_i = [s_i^0, s_i^1, \cdots, s_i^t]^\top$. We adopt a temporal evolution module to extract the temporal information. The TE module utilizes an LSTM to extract temporal features, which can then be used for classification or regression tasks:

$O_i = \mathrm{LSTM}(S_i)$, (7)

$\hat{y}_i = (\tanh(O_i W_o + b_o)) W_y + b_y$, (8)

where $O_i \in \mathbb{R}^{1 \times dim}$ is the temporal feature extracted by the LSTM and $dim$ is the hidden dimension of the LSTM. $W_o \in \mathbb{R}^{dim \times k}$, $b_o \in \mathbb{R}^{1 \times k}$, $W_y \in \mathbb{R}^{k \times l}$, and $b_y \in \mathbb{R}^{1 \times l}$ are trainable parameters, $k$ is the hidden dimension of the MLP, and $l$ is the final output size, which is determined by the specific task. A minimal sketch of the TE module follows."
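A minimal PyTorch sketch of the TE module (Eqs. 7-8) is given here; using the last LSTM output as $O_i$ is our assumption, since the paper does not specify how the output sequence is reduced:

```python
import torch
import torch.nn as nn

class TemporalEvolution(nn.Module):
    """Sketch of the TE module (Eqs. 7-8): an LSTM over per-step node
    embeddings S_i followed by a two-layer prediction head."""
    def __init__(self, d, dim, k, l):
        super().__init__()
        self.lstm = nn.LSTM(input_size=d, hidden_size=dim, batch_first=True)
        self.w_o = nn.Linear(dim, k)
        self.w_y = nn.Linear(k, l)

    def forward(self, S):
        # S: (batch, T, d) sequence of node embeddings from the HC module
        out, _ = self.lstm(S)
        O = out[:, -1, :]                          # temporal feature O_i (Eq. 7)
        return self.w_y(torch.tanh(self.w_o(O)))   # prediction y_hat (Eq. 8)
```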
}, { "heading": "4 RELATED WORK", "text": "GCN on regular graphs Existing graph-based learning solutions fall into two directions: spectral methods and spatial methods. Spectral graph convolution transforms features into the spectral domain using the graph Laplacian eigenvectors and then derives node or graph features by spectral convolution. However, since the computational cost of the Laplacian factorization is expensive, Defferrard et al. (2016) introduced Chebyshev polynomials to approximate the Laplacian eigenvectors, and Kipf & Welling (2016) further simplified the process with a localized first-order approximation of spectral graph convolutions. On the other side, spatial methods generate node embeddings by aggregating neighborhood features. GraphSAGE generates embeddings by sampling and aggregating features from a node’s local neighborhood Hamilton et al. (2017). GAT leverages a self-attention mechanism, weighting different nodes in a neighborhood to generate node embeddings Veličković et al. (2017). Graph-based learning limits the relationships to pairwise ones; however, in many applications the relations between objects are of higher order and cannot be formulated by a graph structure.

GCN on hypergraphs To evaluate the higher-order relations between nodes, Zhou et al. (2007) introduced the first hypergraph learning method, where multiple nodes share the same hyperedge. In the direction of spectral methods, considering that different subsets of nodes in the same hyperedge may have different structural importance, Li & Milenkovic (2017) proposed an inhomogeneous hypergraph partitioning model that assigns different costs to different hyperedge cuts. Li & Milenkovic (2018) defined the notion of p-Laplacians, which constitute the basis of new spectral hypergraph clustering methods. Chan et al. (2018) considered a stochastic diffusion process and introduced a new hypergraph Laplacian operator generalizing the Laplacian matrix of graphs. Yadati et al. (2019) simplified hyperedges into simple edges with mediators and demonstrated the effectiveness through detailed experiments. On the spatial side, Gilmer et al. (2017) proposed the Message Passing Neural Network (MPNN) framework, which learns a message-passing algorithm and aggregates features for node representations. Feng et al. (2019b) introduced the first hypergraph deep learning method, the hypergraph neural network (HGNN). However, most existing works focus on static hypergraph structures and make little effort to optimize the hypergraph structure during the learning process. DHSL Zhang et al. (2018) is the first dynamic hypergraph structure learning method, which optimizes the label projection matrix and the hypergraph structure itself simultaneously, but DHSL fails to exploit high-order relations among features Jiang et al. (2019). DHGCN Jiang et al. (2019) proposed a stacked-layer framework that evaluates the dynamic hypergraph with KNN to build dynamic hyperedges. However, the input of DHGCN is fixed, which means the relations among nodes are fixed and the hypergraph structure is only updated at each layer. In the real world, the relations between nodes may be connected temporarily, and existing models cannot process temporary or changing connections among different nodes." }, { "heading": "5 EXPERIMENTS", "text": "The DyHCN model can be applied to various tasks that can be formulated as dynamic hypergraphs. In this work, we apply DyHCN to news and social-comment datasets for stock price prediction." }, { "heading": "5.1 EXPERIMENTAL SETTING", "text": "Tiingo dataset 1. The dataset covers the news content, the stocks mentioned in each news item, and the release time of the news. On a specific trading day there are various news items, and each news item may mention different numbers of stocks, so we construct a hypergraph with news items as hyperedges and stocks as nodes. We construct the dynamic hypergraph based on news crawled from June 22, 2016 to June 23, 2019, a total of 756 trading days, with one hypergraph per trading day. Inspired by Chen et al. (2019), we adopt the Finance Event Dictionary (TFED) to extract fine-grained events, and pick out the 91 most active stocks on the market for price prediction.

Stocktwits dataset The Stocktwits dataset is a stock comment dataset crawled from the Stocktwits website 2. The dataset covers the comment content, the stocks mentioned in each comment, and the comment time. On a specific trading day, we construct a hypergraph with comments as hyperedges and stocks as nodes. We pick out the 91 stocks with the highest market value in different industries on the S&P 500 for price prediction, and collect data from Aug. 7, 2014 to Aug. 20, 2018, a total of 1015 trading days, with one hypergraph per trading day. The details of the datasets are shown in Table 1.

Having constructed the dynamic hypergraph, we assign the node features with hidden embeddings of price and volume extracted by an LSTM, and the hyperedge features with text embeddings represented by GloVe Pennington et al. (2014). The feature dimension of hyperedges and nodes is set to 50. The training, validation, and testing sets are separated as in Table 1. To measure the results of the different models for prediction, we use three evaluation metrics: mean squared error (MSE), mean absolute error (MAE), and mean absolute percentage error (MAPE); a reference sketch of these metrics is given at the end of this subsection.

1https://www.tiingo.com 2https://stocktwits.com

[Fragment of the results table (Table 2), DyHCN row: 0.0873 0.2533 0.0160 (Tiingo); 0.0732 0.1660 0.0118 (Stocktwits).]

Baselines To evaluate our proposed DyHCN model, we compare its results with traditional time-series, NLP-based, graph-based, and hypergraph-based models: 1) DA-RNN Hsu et al. (2009), one of the state-of-the-art models for time-series prediction; 2) HAN Hu et al. (2018), a representative NLP model for stock price prediction; 3) RSR Feng et al. (2019a), the state-of-the-art graph-based model for price prediction; 4) DHGCN Jiang et al. (2019), a hypergraph-based model for prediction. Because RSR and DHGCN are designed for static graphs/hypergraphs, we apply RSR and DHGCN to daily price prediction. To compare with the baseline models, we use DyHCN with stacked HC layers and the TE module for price prediction, and add a dropout layer with a dropout rate of 0.5 before the TE module. We set the learning rate to 0.005 and train for 1000 epochs."
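The three metrics are standard; as a reference, a minimal sketch (our own, including an `eps` guard for MAPE, which the paper does not mention) is:

```python
import torch

def regression_metrics(y_hat, y, eps=1e-8):
    """MSE, MAE and MAPE as used to evaluate price prediction;
    eps guards against division by zero in MAPE (our assumption)."""
    mse = torch.mean((y_hat - y) ** 2)
    mae = torch.mean(torch.abs(y_hat - y))
    mape = torch.mean(torch.abs((y_hat - y) / (y + eps)))
    return mse.item(), mae.item(), mape.item()
```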
}, { "heading": "5.2 RESULTS AND ANALYSIS", "text": "We report the performance of all methods in Table 2, Table 3, and Fig. 4. From Table 2, we have the following observations:

1) Compared with DA-RNN, the MAE and MAPE scores of HAN decrease by 21.56% and 36.96%, from 0.1869 and 0.4786 to 0.1466 and 0.3017, while the MSE increases by 17.34% from 0.0467 to 0.0548, indicating that extra features such as the Tiingo or Stocktwits comment data are useful for stock price prediction.

2) The MAE, MAPE, and MSE results of RSR outperform HAN, decreasing by 35.47%, 13.95%, and 66.61% respectively, indicating that considering the relations between different stocks improves the prediction results.

3) Comparing the graph-based model RSR with the hypergraph-based model DHGCN, there is no significant difference in performance. However, on the Tiingo dataset the results of RSR decrease by 3.86%, 5.74%, and 15.28% compared with DHGCN, while on the Stocktwits comment dataset the model shows the opposite behaviour: the three metrics of RSR increase by 5.68%, 4.94%, and 1.28% compared with DHGCN. This shows that the performance of the RSR and DHGCN models is not stable across datasets.

4) The results of DyHCN outperform both RSR and DHGCN. On the Tiingo dataset, the MAE decreases by 7.72% and 11.28%, the MAPE decreases by 2.43% and 8.02%, and the MSE decreases by 12.57% and 25.93% compared with RSR and DHGCN, respectively. On the Stocktwits comment dataset, the MAE decreases by 18.03% and 13.37%, the MAPE decreases by 11.23% and 6.85%, and the MSE decreases by 25.32% and 24.36% compared with RSR and DHGCN, respectively. The results show that on both the Tiingo and Stocktwits comment datasets the performance of DyHCN remains stable, and with the consideration of dynamic information its performance is better than that of static graph/hypergraph-based models.

5) Comparing DyHCN with DA-RNN, HAN, RSR, and DHGCN, the average losses (over MAE, MAPE, and MSE) decrease by 55.37%, 42.43%, 7.57%, and 15.08% respectively on the Tiingo dataset, and by 71.55%, 61.29%, 18.19%, and 14.86% respectively on the Stocktwits comment dataset.

To test the stability and scalability of the model, we evaluate the different feature aggregation methods. Fig. 4 shows the performance of the different feature aggregation methods with hidden sizes of 16, 32, 64, and 128 in the TE module. Comparing the results, the MLP aggregation method remains stable and outperforms the concatenation and dot-product aggregation methods. Besides, comparing the results across different hidden sizes, the performance shows no significant difference, indicating that the hidden size of the LSTM is not a major factor in model prediction.

In addition to the comparison with the baselines above, we also evaluate the effectiveness of the submodules, including inner attention, outer attention, HC, and TE, as shown in Table 3. We use DyHCN without inner attention, DyHCN without outer attention, and DyHCN without TE to evaluate the effectiveness of the inner attention, outer attention, and TE modules, respectively. We also use the HGNN model Feng et al. (2019b), which aggregates node features on a static hypergraph, to replace the HC module, thereby evaluating the effectiveness of HC. The results show that, on both datasets, the performance of DyHCN is better than DyHCN without the inner attention, outer attention, or TE modules. The inner and outer attention modules determine the importance of each message and pass the corresponding information to the centroid, which makes the prediction more accurate.
The TE module considers the impact of previous information and extracts temporal features from the embedding sequence, which works better than using individual time steps. In addition, the performance of HGNN Feng et al. (2019b) with the TE module is also worse than DyHCN, even though HGNN works well on static hypergraphs, indicating that DyHCN is more suitable for dynamic hypergraph tasks." }, { "heading": "6 CONCLUSION", "text": "In this paper, we proposed the framework of dynamic hypergraph convolutional networks (DyHCN), which consists of a hypergraph convolution (HC) module and a temporal evolution (TE) module. The HC module is carefully designed, equipping inner attention and outer attention, which adaptively aggregate node features into hyperedges and estimate the importance of each hyperedge connected to the centroid node, respectively; the centroid node embeddings are then updated by aggregating the various related features. The TE module captures the long-range temporal information of dynamic hypergraph features. Based on the two modules, DyHCN captures the dynamic relations between different nodes with the dynamic weights of hyperedges at different time steps. DyHCN can be used for various tasks that can be formulated as dynamic hypergraphs, and extensive experiments on the newly constructed Tiingo and Stocktwits comment datasets show that our proposed DyHCN outperforms state-of-the-art models in modeling dynamic high-order relations." } ]
2020
null
SP:5542cb8de7d232cde44071f0612827309298e98b
[ "The paper explores the problem of visual question answering from another perspective. Similar to VQA, a system is provided with a scene and a question. However, the difference is that the question needs to be answered from a viewpoint different from the one provided. Hence, the system needs to perform “mental rotation”. The paper creates a new dataset called CLEVR Mental Rotation Tests which is based on the prior CLEVR dataset. The paper also studies the efficacy of various supervised and self-supervised models on the proposed dataset.", "The paper studies visual question answering focusing on answering questions in a reference image of a different viewpoint. They propose a new dataset CLEVR-MRT drawing motivation from the well-known visual reasoning dataset CLEVR to illustrate the idea in which they have full control of the changes of viewpoints in an image. They then propose to use a volumetric encoder to represent 3D image features of an image via either 2D-to-3D projection or a contrastive-based encoder and further adapt an existing method (FiLM) to handle 3D tensors. Experiments on the CLEVR-MRT show that the use of the 2D features and 3D features of an image is complementary to each other." ]
Different types of mental rotation tests have been used extensively in psychology to understand human visual reasoning and perception. Understanding what an object or visual scene would look like from another viewpoint is a challenging problem that is made even harder if it must be performed from a single image. 3D computer vision has a long history of examining related problems. However, often what one is most interested in is the answer to a relatively simple question posed in another visual frame of reference, as opposed to creating a full 3D reconstruction. Mental rotation tests can also manifest as consequential questions in the real world, such as: does the pedestrian that I see, see the car that I am driving? We explore a controlled setting whereby questions are posed about the properties of a scene as it would be observed from another viewpoint. To do this we have created a new version of the CLEVR VQA problem setup and dataset that we call CLEVR Mental Rotation Tests, or CLEVR-MRT, where the goal is to answer questions about the original CLEVR viewpoint given a single image obtained from a different viewpoint of the same scene. Using CLEVR Mental Rotation Tests we examine standard state-of-the-art methods, show how they fall short, then explore novel neural architectures that involve inferring representations encoded as feature volumes describing a scene. Our new methods use rigid transformations of feature volumes conditioned on the viewpoint camera. We examine the efficacy of different model variants through a rigorous ablation study. Furthermore, we examine the use of contrastive learning to infer a volumetric encoder in a self-supervised manner and find that this approach yields the best results of our study using CLEVR-MRT.
[]
[ { "authors": [ "Philip Bachman", "R Devon Hjelm", "William Buchwalter" ], "title": "Learning representations by maximizing mutual information across views", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Dzmitry Bahdanau", "Harm de Vries", "Timothy J O’Donnell", "Shikhar Murty", "Philippe Beaudoin", "Yoshua Bengio", "Aaron Courville" ], "title": "Closure: Assessing systematic generalization of clevr models", "venue": null, "year": 1912 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "arXiv preprint arXiv:2002.05709,", "year": 2020 }, { "authors": [ "SM Ali Eslami", "Danilo Jimenez Rezende", "Frederic Besse", "Fabio Viola", "Ari S Morcos", "Marta Garnelo", "Avraham Ruderman", "Andrei A Rusu", "Ivo Danihelka", "Karol Gregor" ], "title": "Neural scene representation and rendering", "venue": null, "year": 2018 }, { "authors": [ "Yasutaka Furukawa", "Carlos Hernández" ], "title": "Multi-view stereo: A tutorial", "venue": "Foundations and Trends R", "year": 2021 }, { "authors": [ "Adam W Harley", "Shrinidhi K Lakshmikanth", "Fangyu Li", "Xian Zhou", "Hsiao-Yu Fish Tung", "Katerina Fragkiadaki" ], "title": "Learning from unlabelled videos using contrastive predictive neural 3d mapping", "venue": null, "year": 1906 }, { "authors": [ "Kaiming He", "Haoqi Fan", "Yuxin Wu", "Saining Xie", "Ross Girshick" ], "title": "Momentum contrast for unsupervised visual representation learning", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Drew A. Hudson", "Christopher D. Manning" ], "title": "Compositional attention networks for machine", "venue": "reasoning. CoRR,", "year": 2018 }, { "authors": [ "Max Jaderberg", "Karen Simonyan", "Andrew Zisserman" ], "title": "Spatial transformer networks. 
In Advances in neural information processing", "venue": null, "year": 2017 }, { "authors": [ "Justin Johnson", "Bharath Hariharan", "Laurens van der Maaten", "Li Fei-Fei", "C Lawrence Zitnick", "Ross Girshick" ], "title": "Clevr: A diagnostic dataset for compositional language and elementary visual reasoning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Hiroharu Kato", "Yoshitaka Ushiku", "Tatsuya Harada" ], "title": "Neural 3d mesh renderer", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Satwik Kottur", "José MF Moura", "Devi Parikh", "Dhruv Batra", "Marcus Rohrbach" ], "title": "Clevr-dialog: A diagnostic dataset for multi-round reasoning in visual dialog", "venue": null, "year": 1903 }, { "authors": [ "Stephen Lombardi", "Tomas Simon", "Jason Saragih", "Gabriel Schwartz", "Andreas Lehrmann", "Yaser Sheikh" ], "title": "Neural volumes: Learning dynamic renderable volumes from images", "venue": "ACM Transactions on Graphics (TOG),", "year": 2019 }, { "authors": [ "Ben Mildenhall", "Pratul P Srinivasan", "Matthew Tancik", "Jonathan T Barron", "Ravi Ramamoorthi", "Ren Ng" ], "title": "Nerf: Representing scenes as neural radiance fields for view synthesis", "venue": null, "year": 2003 }, { "authors": [ "Thu Nguyen-Phuoc", "Chuan Li", "Stephen Balaban", "Yong-Liang Yang" ], "title": "Rendernet: A deep convolutional network for differentiable rendering from 3d shapes", "venue": "CoRR, abs/1806.06575,", "year": 2018 }, { "authors": [ "Thu Nguyen-Phuoc", "Chuan Li", "Lucas Theis", "Christian Richardt", "Yong-Liang Yang" ], "title": "Hologan: Unsupervised learning of 3d representations from natural images", "venue": "CoRR, abs/1904.01326,", "year": 2019 }, { "authors": [ "Aaron van den Oord", "Yazhe Li", "Oriol Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": "arXiv preprint arXiv:1807.03748,", "year": 2018 }, { "authors": [ "Dong Huk Park", "Trevor Darrell", "Anna Rohrbach" ], "title": "Robust change captioning", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2019 }, { "authors": [ "Ethan Perez", "Florian Strub", "Harm de Vries", "Vincent Dumoulin", "Aaron C. Courville" ], "title": "Film: Visual reasoning with a general conditioning", "venue": "layer. CoRR,", "year": 2017 }, { "authors": [ "Charles R Qi", "Hao Su", "Kaichun Mo", "Leonidas J Guibas" ], "title": "Pointnet: Deep learning on point sets for 3d classification and segmentation", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Yue Qiu", "Yutaka Satoh", "Ryota Suzuki", "Kenji Iwata", "Hirokatsu Kataoka" ], "title": "Multi-view visual question answering with active viewpoint selection", "venue": null, "year": 2020 }, { "authors": [ "Sai Rajeswar", "Fahim Mannan", "Florian Golemo", "Jérôme Parent-Lévesque", "David Vazquez", "Derek Nowrouzezahrai", "Aaron Courville" ], "title": "Pix2shape: Towards unsupervised learning of 3d scenes from images using a view-based representation", "venue": "International Journal of Computer Vision,", "year": 2020 }, { "authors": [ "Roger N Shepard", "Jacqueline Metzler" ], "title": "Mental rotation of three-dimensional", "venue": "objects. 
Science,", "year": 1971 }, { "authors": [ "Vincent Sitzmann", "Justus Thies", "Felix Heide", "Matthias Nießner", "Gordon Wetzstein", "Michael Zollhöfer" ], "title": "Deepvoxels: Learning persistent 3d feature embeddings", "venue": "CoRR, abs/1812.01024,", "year": 2018 }, { "authors": [ "Justus Thies", "Michael Zollhöfer", "Matthias Nießner" ], "title": "Deferred neural rendering: Image synthesis using neural textures", "venue": "ACM Transactions on Graphics (TOG),", "year": 2019 }, { "authors": [ "Yonglong Tian", "Dilip Krishnan", "Phillip Isola" ], "title": "Contrastive multiview coding", "venue": "arXiv preprint arXiv:1906.05849,", "year": 2019 }, { "authors": [ "Nanyang Wang", "Yinda Zhang", "Zhuwen Li", "Yanwei Fu", "Wei Liu", "Yu-Gang Jiang" ], "title": "Pixel2mesh: Generating 3d mesh models from single rgb images", "venue": null, "year": 2018 }, { "authors": [ "Jiajun Wu", "Chengkai Zhang", "Tianfan Xue", "Bill Freeman", "Josh Tenenbaum" ], "title": "Learning a probabilistic latent space of object shapes via 3d generative-adversarial modeling", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Shunyu Yao", "Tzu-Ming Harry Hsu", "Jun-Yan Zhu", "Jiajun Wu", "Antonio Torralba", "William T. Freeman", "Joshua B. Tenenbaum" ], "title": "3d-aware scene manipulation via inverse graphics", "venue": "CoRR, abs/1808.09351,", "year": 2018 }, { "authors": [ "Kexin Yi", "Chuang Gan", "Yunzhu Li", "Pushmeet Kohli", "Jiajun Wu", "Antonio Torralba", "Joshua B Tenenbaum" ], "title": "Clevrer: Collision events for video representation and reasoning", "venue": null, "year": 1910 } ]
[ { "heading": "1 INTRODUCTION", "text": "Psychologists have employed mental rotation tests for decades (Shepard & Metzler, 1971) as a powerful tool for devising how the human mind interprets and (internally) manipulates three dimensional representations of the world. Instead of using these test to probe the human capacity for mental 3D manipulation, we are interested here in: a) understanding the ability of modern deep neural architectures to perform mental rotation tasks, and b) building architectures better suited to 3D inference and understanding.\nRecent applications of concepts from 3D graphics to deep learning, and vice versa, have led to promising results. We are similarly interested in leveraging models of 3D image formation from the graphics and vision communities to augment neural network architectures with inductive biases that improve their ability to reason about the real world. Here we measure the effectiveness of adding such biases, confirming their ability to improve the performance of neural models on mental rotation tasks. Concepts from inverse graphics can be used to guide the construction of neural architectures designed to perform tasks related to the reverse of the traditional image synthesis processes: namely, taking 2D image input and inferring 3D information about the scene. For instance, 3D reconstruction in computer vision (Furukawa & Hernández, 2015) can be realized with neural-based approaches that output voxel (Wu et al., 2016; Nguyen-Phuoc et al., 2019), mesh (Wang et al., 2018), or point cloud (Qi et al., 2017) representations of the underlying 3D scene geometry. Such inverse graphics methods range from fully-differentiable graphics pipelines (Kato et al., 2018) to implicit neural-based approaches with learnable modules designed to mimic the structure of certain components of the forward graphics pipeline (Yao et al., 2018; Thies et al., 2019). While inverse rendering is potentially\nan interesting and useful goal in itself, many computer vision systems could benefit from neural architectures that demonstrate good performance for more targeted mental rotation tasks.\nIn our work here we are interested in exploring neural “mental rotation” by adapting a well known standard benchmark for visual question-and-answering (VQA) through answering questions with respect to another viewpoint. We use the the Compositional Language and Elementary Visual Reasoning (CLEVR) Diagnostic Dataset (Johnson et al., 2017) as the starting point for our work. While we focus on this well known benchmark, many analogous questions of practical interest exist. For example, given the camera viewpoint of a (blind) person crossing the road, can we infer if each of the drivers of the cars at an intersection can see this blind person crossing the street? As humans, we are endowed with the ability to reason about scenes and imagine them from different viewpoints, even if we have only seen them from one perspective. As noted by others, it therefore seems intuitive that we should encourage the same capabilities in deep neural networks (Harley et al., 2019). In order to answer such questions effectively, some sort of representation encoding 3D information seems necessary to permit inferences to be drawn due to a change in the orientation and position of the viewpoint camera. However, humans clearly do not have access to error signals obtained through re-rendering scenes, but are able to perform such tasks. 
To explore these problems in a controlled setting, we adapt the original CLEVR setup, in which a VQA model is trained to answer different types of questions about a scene consisting of various types and colours of objects. While images from this dataset are generated through the rendering of randomly generated 3D scenes, the three-dimensional structure of the scene is never fully exploited because the viewpoint camera never changes. We call our problem formulation and dataset CLEVR-MRT, as it is a new Mental Rotation Test version of the CLEVR problem setup. In CLEVR-MRT, alternative views of a scene are rendered and used as the input to a perception pipeline that must then answer a question that was posed with respect to another (the original CLEVR) viewpoint.1 This gives rise to a more difficult task, in which the VQA model must learn how to map from its current viewpoint to the viewpoint that is required to answer the question. This can be seen in Figure 1(b). Figure 1(a) depicts a real-world situation and analogy where the answers to similar types of questions may help different types of systems make consequential decisions, e.g. intelligent intersections, cars, robots, or navigation assistants for the blind. The fact that MRTs are a classical tool of psychology, together with the link to these practical applications, motivated us to create the controlled setting of CLEVR-MRT depicted in Figure 1(b).

(a) (Left) A view of a street corner. (Middle) A CLEVR-like representation of the scene with abstractions of buildings, cars and pedestrians. (Right) The same virtual scene from another viewpoint, where questions concerning the relative positions of objects after a mental rotation could be of significant practical interest.

(b) Random views of an example scene in CLEVR-MRT. The center image is the ‘canonical’ view, which is the unseen point of view for which questions must be answered using only one of the other views as input.

Figure 1: (a) A real-world example where the ability to perform mental rotations can be of practical utility. (b) Images from the CLEVR-MRT dataset.

Using our new mental rotation task definition and our CLEVR-MRT dataset, we examine a number of new inverse-graphics-inspired neural architectures. We examine models that use the FILM (Feature-wise Linear Modulation) technique (Perez et al., 2017) for VQA, which delivers competitive performance using contemporary state-of-the-art convolutional network techniques. We observe that such methods fall short in this more challenging MRT VQA setting. This motivates us to create new architectures that involve inferring a latent feature volume that we subject to rigid 3D transformations (rotations and translations), in a manner that has been examined in 3D generative modelling techniques such as spatial transformers (Jaderberg et al., 2015) as well as HoloGAN (Nguyen-Phuoc et al., 2019). This can be done either by adapting a pre-trained 2D encoder network, i.e. an ImageNet-based feature extractor as in Section 2.2.1, or by training our proposed encoder, obtained through contrastive learning as in Section 2.2.3. In the case of the latter model, we leverage the InfoNCE loss (Oord et al., 2018) to minimise the distance between different views of the same scene in metric space, and conversely the opposite for views of different scenes altogether.

1Dataset and code will be available at https://github.com/anonymouscat2434/clevr-mrt
However, rather than simply using a stochastic (2D) data augmentation policy to create positive pairs (e.g. random crops, resizes, and pixel perturbations), we leverage the fact that we have access to many views of each scene at training time, which can be seen as a data augmentation policy operating in 3D. This in turn can be leveraged to learn an encoder that maps 2D views to a 3D latent space without assuming any extra guidance such as camera extrinsics." }, { "heading": "2 METHODS", "text": "We begin by describing simple and strong baseline methods, as well as upper-bound estimates used to evaluate the performance of different techniques on this dataset. We then present our new approach to learning 3D features and two different ways to address this task." }, { "heading": "2.1 FILM BASELINES", "text": "The architecture we use is very similar to that proposed by FILM (Perez et al., 2017), in which a ResNet-101 classifier pre-trained on ImageNet extracts features from the input images, which are then fed to a succession of FILM-modulated residual blocks using the hidden-state output of the GRU. As a sanity check (to ensure our models are adequately parameterised), the simplest baseline to run is one where the images in the dataset are filtered to contain only canonical views. In this setting we would expect the highest validation performance, since the given viewpoint is the same as the canonical viewpoint (the viewpoint with respect to which the question must be answered). The second and third baselines use the full dataset, with and without conditioning on the viewpoint camera via FILM, respectively. This is illustrated in Figure 2, where the viewpoint camera is also embedded before being concatenated to the question embedding and passed through the subsequent FILM blocks. Note that in the case of the canonical baseline, the viewpoint and the canonical view are the same, so no camera conditioning is necessary.

If we let $S$ denote a scene consisting of all of its camera views (images) $X$, the camera $c$, the question $q$, and its corresponding answer $y$, we can summarise this as follows:

$S = (X, q, c, y) \sim D$ (sample a scene); $x \sim X$ (sample a random view); $h := \mathrm{ResNet}(x)$; $e_{cam} := \mathrm{embed}^{(film)}_{\phi}(c)$; $e_{gru} := \mathrm{GRU}_{\phi}(q)$; $\tilde{y} := \mathrm{FILM}_{\phi}(h, [e_{gru}, e_{cam}])$; $\ell_{cls} := \ell(y, \tilde{y})$, (1)

where the encoder (a ResNet) is frozen and we do not update its parameters during training. A minimal sketch of the FiLM-style conditioning used here is given below."
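As a reference for the FiLM conditioning used throughout, here is a minimal PyTorch sketch of one FiLM-modulated residual block (our illustration; the paper's exact block layout, e.g. normalisation placement, may differ). The conditioning vector `e = [e_gru, e_cam]` predicts per-channel gains and biases applied to the convolutional feature maps:

```python
import torch
import torch.nn as nn

class FiLMBlock(nn.Module):
    """Sketch of a FiLM-modulated residual block (Perez et al., 2017)."""
    def __init__(self, channels, cond_dim):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.film = nn.Linear(cond_dim, 2 * channels)  # predicts (gamma, beta)

    def forward(self, h, e):
        # h: (B, C, H, W) image features; e: (B, cond_dim) conditioning vector
        gamma, beta = self.film(e).chunk(2, dim=1)
        out = self.conv(h)
        out = gamma[..., None, None] * out + beta[..., None, None]
        return h + torch.relu(out)  # residual connection
```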
}, { "heading": "2.2.1 PROJECTING 2D FEATURES INTO 3D FEATURES", "text": "To exploit the power of pre-trained representations, we start with a pre-trained ResNet encoder and transform its stack of feature maps through an additional set of 2D convolution blocks using the ‘post-processor’ shown in Figure 3, right before reshaping the 3D tensor into 4D. In other words, we learn a module which maps from a stack of feature maps h to a stack of feature cubes h′. Since the post-processor is a learnable module through which the FILM part of the pipeline is able to backpropagate through, it can be seen as learning an appropriate set of transforms that construct 3D feature volumes h′. Through back-propagation it learns to perform well when manipulated with camera controllable FILM operations either as is or, more interestingly, when also subjected to rigid 3D transformations as we will see shortly in Section 2.2.2." }, { "heading": "2.2.2 3D CAMERA CONTROLLABLE FILM", "text": "In lieu of conditioning the camera with FILM (as seen in Figure 2 with embed(film)φ ), we can also condition on it to output translation and rotation parameters (θ̃x, θ̃y, θ̃z, t̃x, t̃y, t̃z) which are then used to construct a 3D rotation and translation matrix Θ. Therefore, we can write out the 3D FILM pipeline as:\nh′ := postprocψ(encoderφ(x))\n(θ̃x, θ̃y, θ̃z, t̃x, t̃y, t̃z) := embed (rot) φ (c)\nh′rot := transform(h ′;P (θ̃x, θ̃y, θ̃z, t̃x, t̃y, t̃z))\nỹ := FILMφ(h′rot, [GRUφ(q)]) `cls := `(y, ỹ),\n(2)\nwhere we now minimise `cls with respect to our usual parameters φ and the post-processor parameters ψ, and P (·) is a function that produces a rigid transform matrix from its parameters (via Euler angles). This is illustrated in Figure 4. Note that in this figure we have illustrated that we can still perform camera conditioning via embed(film)φ , i.e. we can use the camera to do rigid transforms, modulate the\nGRU embedding, or both. For the sake of brevity, Equation 2 only shows the case where we use the camera for rigid transforms via embed(rot)ψ . Also note that we cannot directly use the raw camera parameters c = (θx,θy,θz, tx, ty, tz) to construct the rigid transform Θ is because these are relative to world coordinates. Finally, in the next section (Section 2.2.3), we will show that we can replace the encoder and its postprocessor (Figure 3) with a contrastive encoder trained from scratch, without the use of a postprocessor since it directly outputs a latent volume." }, { "heading": "2.2.3 LEARNING 3D CONTRASTIVE ENCODERS", "text": "In Section 2.2.1 the encoder proposed was an adaptation of a pre-trained ImageNet classifier backbone to output latent volumes. Here we propose the training of an encoder from scratch in an unsupervised manner, via the use of contrastive learning as demonstrated in Chen et al. (2020). Conceptually it is simple: given two random views of the same scene x1 and x2, the goal of the encoder ench is to infer a latent volume from each of those views h1 and h2. To enforce the notion of similarity between these volumes (and conversely the opposite for an x2 that is not from the same scene as x1), both these volumes are reduced down to dense codes (through an extra encoder encz) z1 and z2, where the contrastive loss is applied. This is shown in Figure 5. Unlike Chen et al. (2020) however, our main goal here is to infer latent volumes for downstream tasks, rather than latent codes.\nLet us denote X(1) and X(2) to be minibatches of images, with subscripts for individual examples in the minibatch (e.g. 
}, { "heading": "2.2.3 LEARNING 3D CONTRASTIVE ENCODERS", "text": "In Section 2.2.1 the proposed encoder was an adaptation of a pre-trained ImageNet classifier backbone to output latent volumes. Here we propose training an encoder from scratch in an unsupervised manner, via the use of contrastive learning as demonstrated in Chen et al. (2020). Conceptually it is simple: given two random views of the same scene, $x_1$ and $x_2$, the goal of the encoder $\mathrm{enc}_h$ is to infer a latent volume from each of those views, $h_1$ and $h_2$. To enforce the notion of similarity between these volumes (and conversely the opposite for an $x_2$ that is not from the same scene as $x_1$), both volumes are reduced to dense codes $z_1$ and $z_2$ (through an extra encoder $\mathrm{enc}_z$), to which the contrastive loss is applied. This is shown in Figure 5. Unlike Chen et al. (2020), however, our main goal here is to infer latent volumes for downstream tasks, rather than latent codes.

Let $X^{(1)}$ and $X^{(2)}$ denote minibatches of images, with subscripts indexing individual examples in the minibatch (e.g. $X^{(1)}_j$). We will assume that $(X^{(1)}_i, X^{(2)}_j)$ correspond to the same scene if $i = j$, and otherwise to different scenes. The InfoNCE loss (Oord et al., 2018) is defined as:

$H^{(1)} = \mathrm{enc}_h(T(X^{(1)}))$, $H^{(2)} = \mathrm{enc}_h(T(X^{(2)}))$, $Z^{(1)} = \mathrm{enc}_z(H^{(1)})$, $Z^{(2)} = \mathrm{enc}_z(H^{(2)})$,

$\mathcal{L}_{NCE} = \frac{1}{n} \sum_{i=1}^{n} \ell^{(i)}_{NCE}$, where $\ell^{(i)}_{NCE} = -\log \frac{\exp(\mathrm{sim}(Z^{(1)}_i, Z^{(2)}_i)/\tau)}{\sum_{k=1}^{n} \exp(\mathrm{sim}(Z^{(1)}_i, Z^{(2)}_k)/\tau)}$, (3)

where $T(\cdot)$ is a stochastic data augmentation operator (e.g. random crops, flips, colour perturbations) which operates on a per-example basis in the batch. This loss also contains a temperature term $\tau$, which is a hyperparameter to optimise (in practice, we found $\tau = 0.1$ to produce the lowest softmax loss). Since a large number of negative examples is needed to learn good features, we train this encoder on an 8-GPU setup with a combined batch size of 2048. Note that in Chen et al. (2020) the contrastive loss is enforced between stochastic 2D data augmentations of the same image, i.e. they contrast $T(X^{(1)})$ and $T(X^{(2)})$ with $X^{(1)} = X^{(2)}$. We refer to this as 2D-only data augmentation, since the contrastive learner never contrasts two different views of a scene. In the case of Equation 3, since $X^{(1)} \ne X^{(2)}$, we call this 2D + 3D data augmentation, and if $T$ is the identity function then we refer to it as 3D-only data augmentation. When training of this encoder has converged, we freeze it and use it in place of the pre-trained ImageNet encoder and postprocessor originally shown in Figure 3, within the same pipeline described in Figure 4 and Equation 2. A minimal sketch of the InfoNCE computation is given below."
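For reference, here is a minimal PyTorch sketch of Equation 3 (our illustration, using cosine similarity for sim(·, ·)):

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.1):
    """InfoNCE loss of Eq. 3: z1[i] and z2[i] are codes of two views of the
    same scene; all z2[k] with k != i act as negatives for z1[i]."""
    z1 = F.normalize(z1, dim=1)          # cosine similarity as sim(., .)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau           # (n, n) pairwise similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)  # -log softmax at the diagonal
```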
}, { "heading": "3 RELATED WORK", "text": "Several extensions of the CLEVR dataset exist, though mainly NLP-based extensions such as evaluating systematic generalisation (Bahdanau et al., 2019), adding dialogue (Kottur et al., 2019), and robust captioning of changes between scenes (Park et al., 2019). In terms of visual extensions, Yi et al. (2019) proposed a temporal version of CLEVR which looks at VQA in the context of causal and counterfactual reasoning. Concurrent to our work, a version of CLEVR has recently been proposed (Qiu et al., 2020) in the context of reinforcement learning, where an agent is trained to perform viewpoint selection on a scene in order to answer the question, with each scene containing a large occluder object in the center to accentuate occlusions. The main difference, however, is that our dataset decouples the camera viewpoint from the viewpoint from which the question must be answered. Furthermore, that dataset has relatively limited question and scene variability (for instance, focusing on only two types of questions and the same occluding object in the center). Lastly, in our work we do not assume the VQA model is an agent able to vary its active viewpoint to better answer the question; instead, our model must learn to ‘imagine’ what the same scene would look like from another perspective, conditioned only on a single view.

Using rigid transforms to infer latent 3D volumes was loosely inspired by HoloGAN (Nguyen-Phuoc et al., 2019), which uses a GAN to map a randomly sampled noise vector to a 3D latent volume before rendering it with a neural renderer. Several works (Nguyen-Phuoc et al., 2018; Sitzmann et al., 2018; Lombardi et al., 2019) condition on images and camera poses to learn voxels that represent the input, with options to re-render from different camera poses. These methods, however, assume that the camera poses are known, and are evaluated in single-scene settings. Conversely, our dataset consists of tens of thousands of scenes, which makes any re-rendering task (i.e. autoencoding) significantly more difficult due to the need to reconstruct all scenes well, all while having a larger computational footprint than decoder-less approaches. Neural rendering is also not limited to voxel representations; other ways of encoding 3D data can be used, such as point clouds (Qi et al., 2017), meshes (Wang et al., 2018; Kato et al., 2018), surfels (Rajeswar et al., 2020), latent codes (Eslami et al., 2018), or even the weights of a neural network (Mildenhall et al., 2020).

Other VQA models have been proposed; e.g., MAC (Hudson & Manning, 2018) proposes a memory- and attention-based reasoning architecture for more interpretable VQA. While this could in principle be modified to leverage 3D volumes, FILM serves as a simpler architectural choice for analysis. As for learning encoders, contrastive learning is currently state-of-the-art in learning self-supervised representations that are competitive with their supervised variants (Oord et al., 2018; Bachman et al., 2019; Chen et al., 2020; He et al., 2020). Tian et al. (2019) explored contrastive learning of scenes, though the multiview aspect in that setting referred to different sensory views rather than camera views. Harley et al. (2019) explored the use of contrastive learning on 2.5D video (i.e. RGB + depth) to predict novel views, with the goal of learning 3D object detectors in a semi-supervised manner.

The CLEVR dataset (Johnson et al., 2017) is a VQA dataset consisting of a range of synthetic 3D shapes laid out on a canvas. The dataset contains a range of questions designed to test various aspects of visual reasoning, such as counting (e.g. ‘how many red cubes are in this scene?’), spatial relationships (e.g. ‘what colour is the cylinder to the left of the big brown cube?’), and comparisons (e.g. ‘are there an equal number of blue objects as red ones?’). In recent years, however, proposed techniques have performed extraordinarily well on the dataset (Perez et al., 2017; Hudson & Manning, 2018), which inspired us to explore VQA in more difficult contexts. The original CLEVR dataset provided one image for each scene. CLEVR-MRT contains 20 images generated for each scene, holding the altitude constant and sampling over the azimuthal angle (a sketch of such viewpoint sampling is given below). To ensure that the model would not have any clues as to how the view had been rotated, we replaced the asymmetrical ‘photo backdrop’ canvas of the CLEVR dataset with a large plane and centered overhead lighting. To focus on questions with viewpoint-dependent answers, we filtered the set of questions to only include those containing spatial relationships (e.g. ‘is X to the right of Y’). Of the original 90 question templates, only 44 contain spatial relationships. In total, the training + validation split consists of 45,600 scenes, each containing roughly 10 questions, for a total of 455,549 questions. 5% of these scenes were set aside for validation. For the test set, 10,000 scenes were generated with roughly 5 questions each, for a total of 49,670 questions. Figure 1(b) shows an example of one of these scenes."
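As an illustration of this viewpoint generation, a minimal sketch follows (our own; the camera radius is an assumption, as the dataset's actual rendering script is not reproduced here):

```python
import math
import random

def sample_viewpoints(n_views=20, radius=10.0, elevation_deg=30.0):
    """Sample n_views camera positions on a circle of constant altitude:
    fixed elevation, azimuth drawn uniformly over [0, 2*pi)."""
    elev = math.radians(elevation_deg)
    views = []
    for _ in range(n_views):
        azim = random.uniform(0.0, 2.0 * math.pi)
        x = radius * math.cos(elev) * math.cos(azim)
        y = radius * math.cos(elev) * math.sin(azim)
        z = radius * math.sin(elev)
        views.append((x, y, z))
    return views
```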
We then select the best-performing configuration and run repeats of it with varying seeds (3-6, depending on the overall variance), for a total of N runs. The validation set accuracy reported is the mean over these N runs, and similarly for the test set. It is worth noting that for our 3D FILM experiments, some runs appeared to hit undesirable local minima, exhibiting much lower validation accuracies; we conjecture this is due to a 'domain mismatch' between an encoder initially pre-trained for ImageNet classification and our CLEVR dataset. (This appears to be supported by the fact that these outliers do not exist when we use our pre-trained contrastive encoder, in Table 2.) To deal with these outliers, we instead compute the mean/stdev over only the top three runs out of the N = 6 we originally trained.
Our results for the FILM baselines (Section 2.1) and for the 2D-to-3D projections (Section 2.2.1) are shown in Table 1. What we find surprising is that the 2D baseline without camera conditioning is able to achieve a decent accuracy of roughly 70%. On closer inspection, the misclassifications do not appear to be related to how far away the viewpoint camera is from the canonical view, with misclassified points being distributed more or less evenly around the scene. Given that the question-only baseline ('GRU-only') achieves an accuracy significantly greater than that of the majority class baseline (≈ 25% versus ≈ 50%), it is likely exploiting statistical regularities between the question itself and the scene. If we add camera conditioning via FILM (that is, appending the camera embedding to the GRU's embedding), then we achieve a much greater accuracy of 83.68 ± 1.21. Furthermore, our results demonstrate the efficacy of using rigid transforms with 3D volumes, achieving the highest accuracy of 92.80 ± 0.30 on the test set. If we take the same experiment and freeze the postprocessor (denoted by the small † symbol), then we achieve a much lower accuracy of 69%. This is to be expected, considering that any camera information that is forward-propagated will contribute gradients back to the postprocessing parameters in the backward propagation, effectively giving the postprocessor supervision in the form of camera extrinsics. (However, we will soon show just as good performance for our contrastive encoder experiments in Table 2, which used neither camera extrinsics in the pre-training phase nor a postprocessor in the FILM stage.) Finally, the last row of the table shows that if one uses the camera for both rigid transforms and the embedding, the mean test accuracy is roughly the same as for the rigid-transform-only variant (90.86 ± 0.87 vs 89.68 ± 1.34). This appears to suggest that simply performing rigid rotations of the volume is by itself sufficient for good performance.
In Table 2 we perform an ablation on the type of data augmentation used during the contrastive pre-training stage (described in Section 2.2.3) and find that 3D data augmentation is essential for the encoder to distinguish whether a pair of views comes from the same scene or not, as shown in the 'NCE accuracy' column. However, both 2D and 3D data augmentation are necessary for the FILM task to yield the best results, as seen in the last row. Similar to Table 1, utilising the viewpoint camera for rigid transforms produces the best results, with 86.01 ± 0.69% test accuracy. While the best result of Table 1 does have a much higher variance (86.60 ± 7.57), we consider our contrastive 2D + 3D experiment to yield our best result due to its almost negligible variance.
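As an illustration of the rigid-transform conditioning used in these experiments, here is a minimal PyTorch-style sketch of rotating a latent volume by the viewpoint azimuth; the function name, the choice of rotation axis, and the use of `affine_grid`/`grid_sample` are our own assumptions rather than the exact implementation:

```python
import torch
import torch.nn.functional as F

def rotate_volume(vol, azimuth_rad):
    """Rigidly rotate a latent volume about the vertical axis.

    vol: (B, C, D, H, W) latent feature volume
    azimuth_rad: (B,) rotation angles (e.g. query azimuth relative to canonical)
    """
    cos, sin = torch.cos(azimuth_rad), torch.sin(azimuth_rad)
    zero, one = torch.zeros_like(cos), torch.ones_like(cos)
    # per-example 3x4 affine matrices: rotation in the ground plane, no translation
    theta = torch.stack([
        torch.stack([cos, -sin, zero, zero], dim=1),
        torch.stack([sin,  cos, zero, zero], dim=1),
        torch.stack([zero, zero, one, zero], dim=1),
    ], dim=1)                                              # (B, 3, 4)
    grid = F.affine_grid(theta, vol.shape, align_corners=False)
    return F.grid_sample(vol, grid, align_corners=False)   # trilinear resampling
```

In the FILM pipeline, such a rotation would be applied to the encoder's volume before the FILM-ed residual blocks process it.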
Our results show that our best-performing formulation (either 2D-to-3D or contrastive) performs on average only 8% below the canonical baseline's 94%, which can be seen as a rough upper bound on generalisation performance. While we obtained promising results, this may not leave much room for improvement on top of our methods, so we identified some ways in which the dataset could be made more difficult. One is removing the viewpoint camera coordinates and instead placing a sprite in the scene showing where the canonical camera is. The model then has an additional inference task to perform: inferring the 3D position of the canonical viewpoint from a 2D marker. Another idea is to allow some variation in the elevation of the viewpoint camera. While this can accentuate the effects of occlusion (if the camera is allowed to go lower than its default elevation), it also provides for a more grounded dataset, since occlusions are commonplace in real-world data. We examine the latter here, generating a version of CLEVR-MRT where the camera elevation is allowed to vary, and with both small and large objects present in the scene (small objects were not present in the original CLEVR-MRT dataset). Concretely, the default elevation in the original dataset was 30°, whereas now it is randomly sampled from a Uniform(20, 30) distribution (in degrees). An example scene of this dataset is shown in Figure 6.
See Table 3 below. Please note that hyperparameter tuning for these experiments is still ongoing, with particular emphasis placed on improving the 3D FILM with camera embedding (third row) as well as the canonical upper bound (first row).

Table 3: Select experiments from Table 1, but trained on the variable-elevation version of CLEVR-MRT. For all experiments shown in this table, the mean/stdev is computed over the top three runs (out of six in total).

Method | 3D? | camera (embed) | camera (rotation) | valid acc. (%) | test acc. (%)
Upper bound (canon. views only) | ✗ | ✗ | ✗ | 90.00 ± 0.23 | 89.37 ± 0.19
2D FILM (Sec 2.1, Fig 2) | ✗ | ✗ | ✗ | 67.26 ± 0.78 | –
2D FILM (Sec 2.1, Fig 2) | ✗ | ✓ | ✗ | 79.69 ± 2.05 | 79.14 ± 2.35
3D FILM, projection (Sec 2.2.1, Fig 4) | ✓ | ✓ | ✗ | 65.49 ± 1.46 | 65.10 ± 1.67
3D FILM, projection (Sec 2.2.1, Fig 4) | ✓ | ✗ | ✓ | 86.92 ± 2.00 | 86.89 ± 2.04
3D FILM, projection (Sec 2.2.1, Fig 4) | ✓ | ✓ | ✓ | 89.98 ± 0.59 | 89.91 ± 0.73" }, { "heading": "5 CONCLUSIONS AND BROADER IMPACTS", "text": "We address the problem of answering a question from a single image, posed in a reference frame different from that of the viewer. We illustrate the difficulties here in a controlled setting, proposing a new CLEVR dataset and exploring a 3D FILM-based architecture that operates directly on latent feature volumes (using camera conditioning via FILM or via direct rigid transforms on the volume). We propose two techniques to train volumetric encoders: with 2D-to-3D projection of ImageNet features, or using a self-supervised contrastive approach. In the latter case, we showed that the use of combined 2D+3D data augmentation was crucial to learning a volumetric encoder, without the use of camera extrinsics or a postprocessor.
Through rigorous ablations, we demonstrated that performing 3D FILM was the most effective approach for CLEVR-MRT, especially when the volume can be subjected to rigid transformations in order to answer the question.
Endowing intelligent embodied systems with the ability to answer questions regarding properties of a 3D visual scene with respect to the perspective of another agent could make such systems safer. In the case of autonomous vehicles, better control decisions could eventually be made. If such systems are adversarial in nature, negative outcomes could arise for the adversaries of such systems." } ]
2020
VISUAL QUESTION ANSWERING FROM ANOTHER PERSPECTIVE: CLEVR MENTAL ROTATION TESTS
SP:8a6de840ca758da3973655a8a478e83f8edde474
[ "This paper studies how over-parameterization plays a role in GAN training. Theoretically, it shows that a GAN with over-parameterized 1-layer neural network generator and a linear discriminator can converge to global saddle point via stochastic optimisation. Similar results are obtained for nonlinear generators and discriminators under some conditions. It also provides empirical results to support its findings. ", "This paper studies the effect of model over-parametrization in GANs. While there is a lot of work on this in the supervised learning setting of classification/regression there is not much in the GAN framework where the minimax objective function complicates such an analysis. This paper considers two types of training of the GAN model, one with the simultaneous gradient descent ascent and one where the discriminator is trained to optimality for every generator update. It provides global convergence results under both algorithms in the case of a generator network with one hidden layer that is large enough and a linear discriminator." ]
A broad class of unsupervised deep learning methods such as Generative Adversarial Networks (GANs) involves the training of overparameterized models where the number of parameters of the model exceeds a certain threshold. Indeed, most successful GANs used in practice are trained using overparameterized generator and discriminator networks, both in terms of depth and width. A large body of work in supervised learning has shown the importance of model overparameterization in the convergence of gradient descent (GD) to globally optimal solutions. In contrast, the unsupervised setting and GANs in particular involve non-convex concave min-max optimization problems that are often trained using Gradient Descent/Ascent (GDA). The role and benefits of model overparameterization in the convergence of GDA to a global saddle point in non-convex concave problems are far less understood. In this work, we present a comprehensive analysis of the importance of model overparameterization in GANs, both theoretically and empirically. We theoretically show that in an overparameterized GAN model with a 1-layer neural network generator and a linear discriminator, GDA converges to a global saddle point of the underlying non-convex concave min-max problem. To the best of our knowledge, this is the first result for global convergence of GDA in such settings. Our theory is based on a more general result that holds for a broader class of nonlinear generators and discriminators that obey certain assumptions (including deeper generators and random feature discriminators). Our theory utilizes and builds upon a novel connection with the convergence analysis of linear time-varying dynamical systems, which may have broader implications for understanding the convergence behavior of GDA for non-convex concave problems involving overparameterized models. We also empirically study the role of model overparameterization in GANs using several large-scale experiments on the CIFAR-10 and Celeb-A datasets. Our experiments show that overparameterization improves the quality of generated samples across various model architectures and datasets. Remarkably, we observe that overparameterization leads to faster and more stable convergence behavior of GDA across the board.
[ { "affiliations": [], "name": "Yogesh Balaji" }, { "affiliations": [], "name": "Mohammadmahdi Sajedi" }, { "affiliations": [], "name": "Neha Mukund Kalibhat" }, { "affiliations": [], "name": "Mucong Ding" }, { "affiliations": [], "name": "Dominik Stöger" }, { "affiliations": [], "name": "Mahdi Soltanolkotabi" }, { "affiliations": [], "name": "Soheil Feizi" } ]
[ { "authors": [ "Leonard Adolphs", "Hadi Daneshmand", "Aurelien Lucchi", "Thomas Hofmann" ], "title": "Local saddle point optimization: A curvature exploitation approach", "venue": "In The 22nd International Conference on Artificial Intelligence and Statistics,", "year": 2019 }, { "authors": [ "Zeyuan Allen-Zhu", "Yuanzhi Li", "Zhao Song" ], "title": "A convergence theory for deep learning via overparameterization", "venue": "In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Wangpeng An", "Haoqian Wang", "Qingyun Sun", "Jun Xu", "Qionghai Dai", "Lei Zhang" ], "title": "A pid controller approach for stochastic optimization of deep networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Martin Arjovsky", "Soumith Chintala", "Léon Bottou" ], "title": "Wasserstein generative adversarial networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Sanjeev Arora", "Rong Ge", "Yingyu Liang", "Tengyu Ma", "Yi Zhang" ], "title": "Generalization and equilibrium in generative adversarial nets (gans)", "venue": "arXiv preprint arXiv:1703.00573,", "year": 2017 }, { "authors": [ "Kenneth J Arrow", "Leonid Hurwicz", "Hirofumi Uzawa" ], "title": "Studies in linear and non-linear programming", "venue": null, "year": 1958 }, { "authors": [ "Hugo Berard", "Gauthier Gidel", "Amjad Almahairi", "Pascal Vincent", "Simon LacosteJulien" ], "title": "A closer look at the optimization landscapes of generative adversarial networks", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Andrew Brock", "Jeff Donahue", "Karen Simonyan" ], "title": "Large scale GAN training for high fidelity natural image synthesis", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Lenaic Chizat", "Edouard Oyallon", "Francis Bach" ], "title": "On lazy training in differentiable programming", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Aidan Clark", "Jeff Donahue", "Karen Simonyan" ], "title": "Adversarial video generation on complex datasets", "venue": "arXiv preprint arXiv:1907.06571,", "year": 2019 }, { "authors": [ "Constantinos Daskalakis", "Stratis Skoulakis", "Manolis Zampetakis" ], "title": "The complexity of constrained min-max optimization", "venue": "arXiv preprint arXiv:2009.09623,", "year": 2020 }, { "authors": [ "Ishan Deshpande", "Ziyu Zhang", "Alexander G Schwing" ], "title": "Generative modeling using the sliced wasserstein distance", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Laurent Dinh", "Jascha Sohl-Dickstein", "Samy Bengio" ], "title": "Density estimation using real nvp", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Simon S. 
Du", "Xiyu Zhai", "Barnabas Poczos", "Aarti Singh" ], "title": "Gradient descent provably optimizes over-parameterized neural networks", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Farzan Farnia", "Asuman Ozdaglar" ], "title": "Gans may have no nash equilibria", "venue": "arXiv preprint arXiv:2002.09124,", "year": 2020 }, { "authors": [ "Soheil Feizi", "Farzan Farnia", "Tony Ginart", "David Tse" ], "title": "Understanding GANs: the LQG setting", "venue": "arXiv preprint arXiv:1710.10793,", "year": 2018 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "Advances in Neural Information Processing Systems", "year": 2014 }, { "authors": [ "Ishaan Gulrajani", "Faruk Ahmed", "Martin Arjovsky", "Vincent Dumoulin", "Aaron C Courville" ], "title": "Improved training of wasserstein gans", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Ishaan Gulrajani", "Colin Raffel", "Luke Metz" ], "title": "Towards GAN benchmarks which require generalization", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Shaul Gutman" ], "title": "Uncertain dynamical systems–a lyapunov min-max approach", "venue": "IEEE Transactions on Automatic Control,", "year": 1979 }, { "authors": [ "Martin Heusel", "Hubert Ramsauer", "Thomas Unterthiner", "Bernhard Nessler", "Sepp Hochreiter" ], "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "venue": "In Advances in neural information processing systems", "year": 2017 }, { "authors": [ "Tero Karras", "Samuli Laine", "Timo Aila" ], "title": "A style-based generator architecture for generative adversarial networks", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Diederik P. Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "In Yoshua Bengio and Yann LeCun (eds.), 2nd International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Michel Ledoux" ], "title": "The concentration of measure phenomenon. 
Number 89 in Mathematical Surveys and Monographs", "venue": "American Mathematical Soc.,", "year": 2001 }, { "authors": [ "Qi Lei", "Jason D Lee", "Alexandros G Dimakis", "Constantinos Daskalakis" ], "title": "Sgd learns one-layer networks in wgans", "venue": "arXiv preprint arXiv:1910.07030,", "year": 2019 }, { "authors": [ "Chun-Liang Li", "Wei-Cheng Chang", "Yu Cheng", "Yiming Yang", "Barnabas Poczos" ], "title": "Mmd gan: Towards deeper understanding of moment matching network", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Lars Mescheder", "Sebastian Nowozin", "Andreas Geiger" ], "title": "The numerics of GANs", "venue": "In Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Lars Mescheder", "Andreas Geiger", "Sebastian Nowozin" ], "title": "Which training methods for gans do actually converge", "venue": "arXiv preprint arXiv:1801.04406,", "year": 2018 }, { "authors": [ "Vaishnavh Nagarajan", "J Zico Kolter" ], "title": "Gradient descent GAN optimization is locally stable", "venue": "In Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Samet Oymak", "Mahdi Soltanolkotabi" ], "title": "Overparameterized nonlinear learning: Gradient descent takes the shortest path", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Samet Oymak", "Mahdi Soltanolkotabi" ], "title": "Towards moderate overparameterization: global convergence guarantees for training shallow neural networks", "venue": "IEEE Journal on Selected Areas in Information Theory,", "year": 2020 }, { "authors": [ "Samet Oymak", "Zalan Fabian", "Mingchen Li", "Mahdi Soltanolkotabi" ], "title": "Generalization guarantees for neural networks via harnessing the low-rank structure of the jacobian", "venue": "arXiv preprint arXiv:1906.05392,", "year": 2019 }, { "authors": [ "Alec Radford", "Luke Metz", "Soumith Chintala" ], "title": "Unsupervised representation learning with deep convolutional generative adversarial networks", "venue": "In Yoshua Bengio and Yann LeCun (eds.), 4th International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Wilson J Rugh" ], "title": "Linear system theory", "venue": "Prentice-Hall, Inc.,", "year": 1996 }, { "authors": [ "Mahdi Soltanolkotabi" ], "title": "Structured signal recovery from quadratic measurements: Breaking sample complexity barriers via nonconvex optimization", "venue": "IEEE Transactions on Information Theory,", "year": 2019 }, { "authors": [ "Mahdi Soltanolkotabi", "Adel Javanmard", "Jason D Lee" ], "title": "Theoretical insights into the optimization landscape of over-parameterized shallow neural networks", "venue": "IEEE Transactions on Information Theory,", "year": 2018 }, { "authors": [ "Dave Van Veen", "Ajil Jalal", "Mahdi Soltanolkotabi", "Eric Price", "Sriram Vishwanath", "Alexandros G Dimakis" ], "title": "Compressed sensing with deep image prior and learned regularization", "venue": "arXiv preprint arXiv:1806.06438,", "year": 2018 }, { "authors": [ "John Von Neumann", "Oskar Morgenstern" ], "title": "Theory of games and economic behavior", "venue": "Princeton university press,", "year": 1953 }, { "authors": [ "Kun Xu", "Chongxuan Li", "Huanshu Wei", "Jun Zhu", "Bo Zhang" ], "title": "Understanding and stabilizing gans’ training dynamics with control theory", "venue": null, "year": 1909 }, { "authors": [ "Difan Zou", "Quanquan Gu" ], "title": "An improved analysis of training 
over-parameterized deep neural networks", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 } ]
[ { "heading": null, "text": "A broad class of unsupervised deep learning methods such as Generative Adversarial Networks (GANs) involve training of overparameterized models where the number of parameters of the model exceeds a certain threshold. Indeed, most successful GANs used in practice are trained using overparameterized generator and discriminator networks, both in terms of depth and width. A large body of work in supervised learning have shown the importance of model overparameterization in the convergence of the gradient descent (GD) to globally optimal solutions. In contrast, the unsupervised setting and GANs in particular involve non-convex concave mini-max optimization problems that are often trained using Gradient Descent/Ascent (GDA). The role and benefits of model overparameterization in the convergence of GDA to a global saddle point in non-convex concave problems is far less understood. In this work, we present a comprehensive analysis of the importance of model overparameterization in GANs both theoretically and empirically. We theoretically show that in an overparameterized GAN model with a 1-layer neural network generator and a linear discriminator, GDA converges to a global saddle point of the underlying non-convex concave min-max problem. To the best of our knowledge, this is the first result for global convergence of GDA in such settings. Our theory is based on a more general result that holds for a broader class of nonlinear generators and discriminators that obey certain assumptions (including deeper generators and random feature discriminators). Our theory utilizes and builds upon a novel connection with the convergence analysis of linear timevarying dynamical systems which may have broader implications for understanding the convergence behavior of GDA for non-convex concave problems involving overparameterized models. We also empirically study the role of model overparameterization in GANs using several large-scale experiments on CIFAR-10 and Celeb-A datasets. Our experiments show that overparameterization improves the quality of generated samples across various model architectures and datasets. Remarkably, we observe that overparameterization leads to faster and more stable convergence behavior of GDA across the board." }, { "heading": "1 INTRODUCTION", "text": "In recent years, we have witnessed tremendous progress in deep generative modeling with some state-of-the-art models capable of generating photo-realistic images of objects and scenes (Brock et al., 2019; Karras et al., 2019; Clark et al., 2019). Three prominent classes of deep generative models include GANs (Goodfellow et al., 2014), VAEs (Kingma & Welling, 2014) and normalizing flows (Dinh et al., 2017). Of these, GANs remain a popular choice for data synthesis especially in the image domain. GANs are based on a two player min-max game between a generator network that generates samples from a distribution, and a critic (discriminator) network that discriminates real distribution from the generated one. The networks are optimized using Gradient Descent/Ascent (GDA) to reach a saddle-point of the min-max optimization problem.\n∗First two authors contributed equally. Correspondence to yogesh@cs.umd.edu, sajedi@usc.edu\nOne of the key factors that has contributed to the successful training of GANs is model overparameterization, defined based on the model parameters count. 
By increasing the complexity of discriminator and generator networks, both in depth and width, recent papers show that GANs can achieve photo-realistic image and video synthesis (Brock et al., 2019; Clark et al., 2019; Karras et al., 2019). While these works empirically demonstrate some benefits of overparameterization, there is a lack of rigorous study explaining this phenomenon. In this work, we attempt to provide a comprehensive understanding of the role of overparameterization in GANs, both theoretically and empirically. We note that while overparameterization is a key factor in training successful GANs, other factors such as generator and discriminator architectures, regularization functions, and model hyperparameters have to be taken into account as well to improve the performance of GANs.
Recently, there has been a large body of work in supervised learning (e.g. regression or classification problems) studying the importance of model overparameterization in gradient descent (GD)'s convergence to globally optimal solutions (Soltanolkotabi et al., 2018; Allen-Zhu et al., 2019; Du et al., 2019; Oymak & Soltanolkotabi, 2019; Zou & Gu, 2019; Oymak et al., 2019). A key observation in these works is that, under some conditions, overparameterized models experience lazy training (Chizat et al., 2019), where the optimal model parameters computed by GD remain close to a randomly initialized model. Thus, using a linear approximation of the model in the parameter space, one can show the global convergence of GD in such minimization problems.
In contrast, training GANs often involves solving a non-convex concave min-max optimization problem that fundamentally differs from a single minimization problem of classification/regression. The key question is whether overparameterized GANs also experience lazy training, in the sense that overparameterized generator and discriminator networks remain sufficiently close to their initializations. This may then lead to a general theory of global convergence of GDA for such overparameterized non-convex concave min-max problems.
In this paper we first theoretically study the role of overparameterization for a GAN model with a 1-hidden-layer generator and a linear discriminator. We study two optimization procedures to solve this problem: (i) using a conventional training procedure in GANs based on GDA, in which the generator and discriminator networks perform simultaneous steps of gradient descent to optimize their respective models, and (ii) using GD to optimize the generator's parameters for the optimal discriminator. The latter case corresponds to taking a sufficiently large number of gradient ascent steps to update the discriminator's parameters for each GD step of the generator. In both cases, our results show that in an overparameterized regime, the GAN optimization converges to a global solution. To the best of our knowledge, this is the first result showing the global convergence of GDA in such settings. While in our results we focus on one-hidden-layer generators and linear discriminators, our theory is based on analyzing a general class of min-max optimization problems which can be used to study a much broader class of generators and discriminators, potentially including deep generators and deep random feature-based discriminators. A key component of our analysis is a novel connection to exponential stability of non-symmetric time-varying dynamical systems in control theory, which may have broader implications for the theoretical analysis of GAN training.
Ideas from control theory have also been used for understanding and improving the training dynamics of GANs in (Xu et al., 2019; An et al., 2018).
Having analyzed overparameterized GANs for relatively simple models, we next provide a comprehensive empirical study of this problem for practical GANs such as DCGAN (Radford et al., 2016) and ResNet GAN (Gulrajani et al., 2017) trained on the CIFAR-10 and Celeb-A datasets. For example, the benefit of overparameterization in training DCGANs on CIFAR-10 is illustrated in Figure 1. We have three key observations: (i) as the model becomes more overparameterized (e.g. using wider networks), the training FID scores, which measure the training error, decrease. This phenomenon has been observed in other studies as well (Brock et al., 2019). (ii) Overparameterization does not hurt the test FID scores (i.e. the generalization gap remains small). This improved test-time performance can also be seen qualitatively in the center panel of Figure 1, where overparameterized models produce samples of improved quality. (iii) Remarkably, overparameterized GANs, with a lot of parameters to optimize over, have significantly improved convergence behavior of GDA, both in terms of rate and stability, compared to small GAN models (see the right panel of Figure 1).
In summary, in this paper
• We provide the first theoretical guarantee of simultaneous GDA's global convergence for an overparameterized GAN with a one-hidden-layer neural network generator and a linear discriminator (Theorem 2.1).
• By establishing connections with linear time-varying dynamical systems, we provide a theoretical framework to analyze simultaneous GDA's global convergence for a general overparameterized GAN (including deeper generators and random feature discriminators), under some general conditions (Theorems 2.3 and A.4).
• We provide a comprehensive empirical study of the role of model overparameterization in GANs using several large-scale experiments on CIFAR-10 and Celeb-A datasets. We observe that overparameterization improves GANs' training error, generalization error, and sample quality, as well as the convergence rate and stability of GDA." }, { "heading": "2 THEORETICAL RESULTS", "text": "" }, { "heading": "2.1 PROBLEM FORMULATION", "text": "Given $n$ data points $x_1, x_2, \ldots, x_n \in \mathbb{R}^m$, the goal of GAN training is to find a generator that can mimic sampling from the same distribution as the training data. More specifically, the goal is to find a generator mapping $G_\theta(z) : \mathbb{R}^d \to \mathbb{R}^m$, parameterized by $\theta \in \mathbb{R}^p$, so that $G_\theta(z_1), G_\theta(z_2), \ldots, G_\theta(z_n)$, with $z_1, z_2, \ldots, z_n$ generated i.i.d. according to $\mathcal{N}(0, I_d)$, has a similar empirical distribution to $x_1, x_2, \ldots, x_n$ (in general, the number of observed and generated samples can be different; however, in practical GAN implementations the batch sizes of observed and generated samples are usually the same, so for simplicity we make this assumption in our setup). To measure the discrepancy between the data points and the GAN outputs, one typically uses a discriminator mapping $D_{\tilde\theta} : \mathbb{R}^m \to \mathbb{R}$ parameterized by $\tilde\theta \in \mathbb{R}^{\tilde p}$. The overall training approach takes the form of the following min-max optimization problem, which minimizes the worst-case discrepancy detected by the discriminator:
\[
\min_{\theta}\ \max_{\tilde\theta}\ \ \frac{1}{n}\sum_{i=1}^{n} D_{\tilde\theta}(x_i) - \frac{1}{n}\sum_{i=1}^{n} D_{\tilde\theta}(G_\theta(z_i)) + R(\tilde\theta). \qquad (1)
\]
Here, $R(\tilde\theta)$ is a regularizer that typically ensures the discriminator is Lipschitz. This formulation mimics the popular Wasserstein GAN (Arjovsky et al., 2017) (or IPM GAN) formulations. This optimization problem is typically solved by running Gradient Descent Ascent (GDA) on the minimization/maximization variables.
The generator and discriminator mappings $G$ and $D$ used in practice are often deep neural networks.
Thus, the min-max optimization problem above is highly nonlinear and non-convex concave. Saddle point optimization is a classical and fundamental problem in game theory (Von Neumann & Morgenstern, 1953) and control (Gutman, 1979). However, most of the classical results apply to the convex-concave case (Arrow et al., 1958), while the saddle point optimization of GANs is often non-convex concave. If GDA converges to the global (local) saddle points, we say it is globally (locally) stable. For a general min-max optimization problem, however, GDA can be trapped in a loop or even diverge. Except in some special cases (e.g. (Feizi et al., 2018) for a quadratic GAN formulation or (Lei et al., 2019) for the under-parametrized setup where the generator is a one-layer network), GDA is not globally stable for GANs in general (Nagarajan & Kolter, 2017; Mescheder et al., 2018; Adolphs et al., 2019; Mescheder et al., 2017; Daskalakis et al., 2020).
None of these works, however, study the role of model overparameterization in the global/local convergence (stability) of GDA. In particular, it has been empirically observed (as we also demonstrate in this paper) that when the generator/discriminator contain a large number of parameters (i.e. are sufficiently overparameterized), GDA does indeed find (near) globally optimal solutions. In this section we wish to demystify this phenomenon from a theoretical perspective." }, { "heading": "2.2 DEFINITION OF MODEL OVERPARAMETERIZATION", "text": "In this paper, we use overparameterization in the context of the model parameter count. Informally speaking, overparameterized models have a large number of parameters; that is, we assume that the number of model parameters is sufficiently large. In the specific problem setups of Section 2, we precisely compute thresholds that the number of model parameters must exceed in order to observe nice convergence properties of GDA. Note that the definition of overparameterization based on the model parameter count is related to, but distinct from, the complexity of the hypothesis class. For instance, in our empirical studies, when we say we overparameterize a neural network, we fix the number of layers in the neural network and increase the hidden dimensions. Our definition does not include the case where the number of layers also increases, which forms a different hypothesis class." }, { "heading": "2.3 RESULTS FOR ONE-HIDDEN LAYER GENERATORS AND RANDOM DISCRIMINATORS", "text": "In this section, we discuss our main results on the convergence of gradient-based algorithms when training GANs in the overparameterized regime. We focus on the case where the generator takes the form of a single hidden-layer ReLU network with $d$ inputs, $k$ hidden units, and $m$ outputs. Specifically, $G(z) = V \cdot \mathrm{ReLU}(Wz)$, with $W \in \mathbb{R}^{k\times d}$ and $V \in \mathbb{R}^{m\times k}$ denoting the input-to-hidden and hidden-to-output weights. We also consider a linear discriminator of the form $D(x) = d^T x$ with an $\ell_2$ regularizer on the weights, i.e. $R(d) = -\|d\|_{\ell_2}^2/2$. The overall min-max optimization problem (equation 1) takes the form
\[
\min_{W\in\mathbb{R}^{k\times d}}\ \max_{d\in\mathbb{R}^m}\ L(W, d) := \Big\langle d,\ \frac{1}{n}\sum_{i=1}^{n}\big(x_i - V\,\mathrm{ReLU}(Wz_i)\big)\Big\rangle - \frac{\|d\|_{\ell_2}^2}{2}. \qquad (2)
\]
Note that we initialize $V$ at random and keep it fixed throughout training.
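For concreteness, a minimal NumPy sketch of this model and of the objective in equation 2 is given below; the function and variable names are our own illustrative choices, not part of the analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
m, d, n = 8, 16, 128                     # output dim, latent dim, number of samples
Z = rng.normal(size=(n, d))              # latent inputs z_1, ..., z_n ~ N(0, I_d)
X = rng.normal(1.0, 1.0, size=(n, m))    # "real" samples x_1, ..., x_n

def generator(W, V, Z):
    """G(z) = V . ReLU(W z), applied row-wise to a batch of latent vectors."""
    return np.maximum(Z @ W.T, 0.0) @ V.T              # (n, m)

def gan_loss(W, V, disc):
    """L(W, d) = <d, (1/n) sum_i (x_i - G(z_i))> - ||d||^2 / 2, as in equation 2."""
    residual = (X - generator(W, V, Z)).mean(axis=0)   # (m,)
    return float(disc @ residual - 0.5 * disc @ disc)
```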
The common approach to solving the above optimization problem is to run a Gradient Descent Ascent (GDA) algorithm. At iteration $t$, GDA takes the form
\[
d_{t+1} = d_t + \mu\nabla_d L(W_t, d_t), \qquad W_{t+1} = W_t - \eta\nabla_W L(W_t, d_t). \qquad (3)
\]
Next, we establish the global convergence of GDA for an overparameterized model. Note that a global saddle point $(W^*, d^*)$ is defined by
\[
L(W^*, d) \le L(W^*, d^*) \le L(W, d^*)
\]
for all feasible $W$ and $d$. If these inequalities hold in a local neighborhood, $(W^*, d^*)$ is called a local saddle point.

Theorem 2.1 Let $x_1, x_2, \ldots, x_n \in \mathbb{R}^m$ be $n$ training data points with mean $\bar{x} := \frac{1}{n}\sum_{i=1}^{n} x_i$. Consider the GAN model with a linear discriminator of the form $D(x) = d^T x$ parameterized by $d \in \mathbb{R}^m$ and a one-hidden-layer neural network generator of the form $G(z) = V\phi(Wz)$ parameterized by $W \in \mathbb{R}^{k\times d}$, with $V \in \mathbb{R}^{m\times k}$ a fixed matrix generated at random with i.i.d. $\mathcal{N}(0, \sigma_v^2)$ entries. Also assume the input data to the generator $\{z_i\}_{i=1}^{n}$ are generated i.i.d. according to $\mathcal{N}(0, \sigma_z^2 I_d)$. Furthermore, assume the generator weights at initialization $W_0 \in \mathbb{R}^{k\times d}$ are generated i.i.d. according to $\mathcal{N}(0, \sigma_w^2)$, and assume the standard deviations above obey $\sigma_v\sigma_w\sigma_z \ge \|\bar{x}\|_{\ell_2}/(m\,d^{5/2}\log^{3/2} d)$. Then, as long as
\[
k \ge C \cdot m\,d^4\log^3(d)
\]
with $C$ a fixed constant, running the GDA updates of equation 3 starting from the random $W_0$ above and $d_0 = 0$ (the zero initialization of $d$ is merely done for simplicity; a similar result can be derived for an arbitrary initialization of the discriminator's parameters with minor modifications, see Theorem 2.3 for such a result), with step sizes obeying $0 < \mu \le 1$ and $\eta = \frac{\bar\eta\,\mu}{324\,k\,\frac{d + \frac{n-1}{\pi}}{n}\,\sigma_v^2\sigma_z^2}$ with $\bar\eta \le 1$, satisfies
\[
\Big\|\frac{1}{n}\sum_{i=1}^{n} V\,\mathrm{ReLU}(W_\tau z_i) - \bar{x}\Big\|_{\ell_2} \le 5\left(1 - 10^{-5}\,\bar\eta\mu\right)^{\tau}\Big\|\frac{1}{n}\sum_{i=1}^{n} V\,\mathrm{ReLU}(W_0 z_i) - \bar{x}\Big\|_{\ell_2}. \qquad (4)
\]
This holds with probability at least $1 - (n+5)e^{-\frac{m}{1500}} - 5k\,e^{-c_1 n} - (2k+2)e^{-\frac{d}{216}} - n\,e^{-c_2\,m d^3\log^2(d)}$, where $c_1, c_2$ are fixed numerical constants.

To better understand the implications of the above theorem, note that the objective of equation 2 can be simplified by solving the inner maximization in closed form, so that the min-max problem in equation 2 is equivalent to the following single minimization problem:
\[
\min_{W} L(W) := \frac{1}{2}\Big\|\frac{1}{n}\sum_{i=1}^{n} V\,\mathrm{ReLU}(Wz_i) - \bar{x}\Big\|_{\ell_2}^2, \qquad (5)
\]
which has a global optimum of zero. As a result, equation 4 in Theorem 2.1 guarantees that running simultaneous GDA updates achieves this global optimum. This holds as long as the generator network is sufficiently overparameterized, in the sense that the number of hidden nodes is polynomially large in its output dimension $m$ and input dimension $d$. Interestingly, the rate of convergence guaranteed by this result is geometric, guaranteeing fast GDA convergence to the global optimum. To the best of our knowledge, this is the first result that establishes the global convergence of simultaneous GDA for an overparameterized GAN model.

While the result proved above shows the global convergence of GDA for a GAN with a 1-hidden-layer generator and a linear discriminator, for a general GAN model local saddle points may not even exist, and GDA may converge to approximate local saddle points (Berard et al., 2020; Farnia & Ozdaglar, 2020). For a general min-max problem, (Daskalakis et al., 2020) has recently shown that approximate local saddle points exist under some general conditions on the Lipschitzness of the objective function. Understanding GDA dynamics for a general GAN remains an important open problem. Our result in Theorem 2.1 is a first and important step towards that.
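Continuing the sketch above, the simultaneous GDA updates of equation 3 can be written as follows (the step sizes are illustrative and not the exact values required by Theorem 2.1):

```python
def gda(W, V, mu=0.5, eta=1e-4, iters=2000):
    """Simultaneous GDA of equation 3 on L(W, d); returns the final iterates."""
    disc = np.zeros(m)                                   # d_0 = 0, as in Theorem 2.1
    for _ in range(iters):
        residual = (X - generator(W, V, Z)).mean(axis=0)
        disc = disc + mu * (residual - disc)             # ascent: grad_d L = residual - d
        # descent: grad_W L = -(1/n) sum_i diag(1{W z_i > 0}) (V^T d) z_i^T
        mask = (Z @ W.T > 0).astype(float)               # (n, k) ReLU derivatives
        grad_W = -(mask * (V.T @ disc)).T @ Z / len(Z)   # (k, d)
        W = W - eta * grad_W
    return W, disc
```

With the hidden width $k$ sufficiently large relative to $m$, the residual norm $\|\frac{1}{n}\sum_i V\,\mathrm{ReLU}(Wz_i) - \bar{x}\|_{\ell_2}$ is expected to decay geometrically, mirroring equation 4.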
We acknowledge that the GAN formulation considered in equation 2 is much simpler than the GANs used in practice. Specifically, since the discriminator is linear, this GAN can be viewed as a moment-matching GAN (Li et al., 2017), pushing the first moments of the input and generative distributions towards each other. Alternatively, this GAN formulation can be viewed as one instance of the Sliced Wasserstein GAN (Deshpande et al., 2018). Although the maximization over the discriminator's parameters is concave, the minimization over the generator's parameters is still non-convex due to the use of a neural-net generator. Thus, the overall optimization problem is a non-trivial non-convex concave min-max problem. From that perspective, our result in Theorem 2.1 partially explains the role of model overparameterization in GDA's convergence for GANs.
Given the closed form in equation 5, one may wonder what would happen if we run gradient descent on this minimization objective directly, that is, gradient descent updates of the form $W_{\tau+1} = W_\tau - \eta\nabla L(W_\tau)$ with $L(W)$ given by equation 5. This is equivalent to GDA, but instead of running one gradient ascent iteration for the maximization step we run infinitely many. Interestingly, in some successful GAN implementations (Gulrajani et al., 2017), more updates on the discriminator's parameters are often run per generator update. This is the subject of the next result.

Theorem 2.2 Consider the setup of Theorem 2.1. Then, as long as
\[
k \ge C \cdot m\,d^4\log^3(d)
\]
with $C$ a fixed numerical constant, running GD updates of the form $W_{\tau+1} = W_\tau - \eta\nabla L(W_\tau)$ on the loss given in equation 5 with step size $\eta = \frac{2\bar\eta}{243\,k\,\frac{d + \frac{n-1}{\pi}}{n}\,\sigma_v^2\sigma_z^2}$, with $\bar\eta \le 1$, satisfies
\[
\Big\|\frac{1}{n}\sum_{i=1}^{n} V\,\mathrm{ReLU}(W_\tau z_i) - \bar{x}\Big\|_{\ell_2} \le \left(1 - 4\times 10^{-6}\,\bar\eta\right)^{\tau}\Big\|\frac{1}{n}\sum_{i=1}^{n} V\,\mathrm{ReLU}(W_0 z_i) - \bar{x}\Big\|_{\ell_2}. \qquad (6)
\]
This holds with probability at least $1 - (n+5)e^{-\frac{m}{1500}} - 5k\,e^{-c_1 n} - (2k+2)e^{-\frac{d}{216}} - n\,e^{-c_2\,m d^3\log^2(d)}$, with $c_1, c_2$ fixed numerical constants.

This theorem states that if we solve the max part of equation 2 in closed form and run GD on the loss of equation 5 with enough overparameterization, the loss will decrease at a geometric rate to zero. This result again holds when the model is sufficiently overparameterized. The proof of Theorem 2.2 relies on a result from (Oymak & Soltanolkotabi, 2020), which was developed in the framework of supervised learning. Also note that the amount of overparameterization required in Theorems 2.1 and 2.2 is the same." }, { "heading": "2.4 CAN THE ANALYSIS BE EXTENDED TO MORE GENERAL GANS?", "text": "In the previous section, we focused on the implications of our results for a one-hidden-layer generator and a linear discriminator. However, as will become clear in the proofs, our theoretical results are based on analyzing the convergence behavior of GDA on the more general min-max problem
\[
\min_{\theta\in\mathbb{R}^p}\ \max_{d\in\mathbb{R}^m}\ h(\theta, d) := \langle d, f(\theta) - y\rangle - \frac{\|d\|_{\ell_2}^2}{2}, \qquad (7)
\]
where $f : \mathbb{R}^p \to \mathbb{R}^m$ denotes a general nonlinear mapping.

Theorem 2.3 (Informal version of Theorem A.4) Consider a general nonlinear mapping $f : \mathbb{R}^p \to \mathbb{R}^m$ with the singular values of its Jacobian around initialization obeying certain assumptions (most notably $\sigma_{\min}(\mathcal{J}(\theta_0)) \ge \alpha$).
Then, running the GDA iterations
\[
d_{t+1} = d_t + \mu\nabla_d h(\theta_t, d_t), \qquad \theta_{t+1} = \theta_t - \eta\nabla_\theta h(\theta_t, d_t) \qquad (8)
\]
with sufficiently small step sizes $\eta$ and $\mu$ obeys
\[
\|f(\theta_t) - y\|_{\ell_2} \le \gamma\left(1 - \frac{\eta\alpha^2}{2}\right)^{t}\sqrt{\|f(\theta_0) - y\|_{\ell_2}^2 + \|d_0\|_{\ell_2}^2}.
\]
Note that, similarly to the previous sections, one can solve the maximization problem in equation 7 in closed form, so that equation 7 is equivalent to the minimization problem
\[
\min_{\theta\in\mathbb{R}^p} L(\theta) := \frac{1}{2}\|f(\theta) - y\|_{\ell_2}^2, \qquad (9)
\]
with global optimum equal to zero. Theorem 2.3 ensures that GDA converges with a fast geometric rate to this global optimum. This holds as soon as the model $f(\theta)$ is sufficiently overparameterized, which is quantitatively captured via the minimum singular value assumption on the Jacobian at initialization ($\sigma_{\min}(\mathcal{J}(\theta_0)) \ge \alpha$, which can only hold when $m \le p$). This general result can thus be used to provide theoretical guarantees for a much more general class of generators and discriminators. To be more specific, consider a deep GAN model where the generator $G_\theta$ is a deep neural network with parameters $\theta$ and the discriminator is a deep random feature model of the form $D_d(x) = d^T\psi(x)$, parameterized by $d$, with $\psi : \mathbb{R}^d \to \mathbb{R}^m$ a deep neural network with random weights. Then the min-max training optimization problem in equation 1 with regularizer $R(d) = -\|d\|_{\ell_2}^2/2$ is a special instance of equation 7 with
\[
f(\theta) := \frac{1}{n}\sum_{i=1}^{n}\psi(G_\theta(z_i)) \quad \text{and} \quad y := \frac{1}{n}\sum_{i=1}^{n}\psi(x_i).
\]
Therefore, the above result can in principle be used to rigorously analyze the global convergence of GDA for an overparameterized GAN problem with a deep generator and a deep random-feature discriminator model. However, characterizing the precise amount of overparameterization required for such a result to hold requires a precise analysis of the minimum singular value of the Jacobian of $f(\theta)$ at initialization, as well as of the other singular-value conditions stated in Theorem A.4. We defer such a precise analysis to future work.

Numerical Validations: Next, we numerically study the convergence of the GAN model considered in Theorems 2.1 and 2.2, where the discriminator is a linear network while the generator is a one-hidden-layer neural net. In our experiments, we generate the $x_i$'s from an $m$-dimensional Gaussian distribution with mean $\mu$ and an identity covariance matrix; the mean vector $\mu$ is randomly generated. We train two variants of GAN models using (1) GDA (as considered in Theorem 2.1) and (2) GD on the generator while solving the discriminator to optimality (as considered in Theorem 2.2).

In Fig. 2, we plot the converged loss values of GAN models trained using both techniques (1) and (2) as the hidden dimension $k$ of the generator is varied. The MSE loss between the true data mean and the mean of the generated samples is used as our evaluation metric. As this MSE loss approaches 0, the model converges to the global saddle point. We observe that overparameterized GAN models show improved convergence behavior compared to the narrower models. Additionally, the MSE loss converges to 0 for larger values of $k$, which shows that with sufficient overparameterization, GDA converges to a global saddle point.
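A minimal sketch of this validation, reusing the functions from the earlier sketches (the widths, step counts, and metric name are our own illustrative choices), could look as follows:

```python
def mse_between_means(W, V):
    """Squared distance between the true data mean and the generated mean."""
    gap = X.mean(axis=0) - generator(W, V, Z).mean(axis=0)
    return float(gap @ gap)

for k in [4, 16, 64, 256, 1024]:          # sweep the generator's hidden width
    V = rng.normal(size=(m, k))           # fixed random hidden-to-output weights
    W0 = rng.normal(size=(k, d))
    W_end, _ = gda(W0, V)                 # technique (1): simultaneous GDA
    print(k, mse_between_means(W_end, V))
```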
" }, { "heading": "3 EXPERIMENTS", "text": "In this section, we demonstrate the benefits of overparameterization in large GAN models. In particular, we train GANs on two benchmark datasets: CIFAR-10 (32 × 32 resolution) and Celeb-A (64 × 64 resolution). We use two commonly used GAN architectures: DCGAN and a ResNet-based GAN. For both of these architectures, we train several models, each with a different number of filters in each layer, denoted by $k$. For simplicity, we refer to $k$ as the hidden dimension. Appendix Fig. 8 illustrates the architectures used in our experiments. Networks with large $k$ are more overparameterized.
We use the same value of $k$ for both the generator and discriminator networks. This is in line with the design choice made in most recent GAN models (Radford et al., 2016; Brock et al., 2019), where the sizes of the generator and discriminator models are kept roughly the same. We train each model until convergence and evaluate the performance of the converged models using FID scores. FID scores measure the Fréchet distance between the feature distributions of the real and generated data (Heusel et al., 2017). A small FID score indicates high-quality synthesized samples. Each experiment is conducted for 5 runs, and the mean and variance of the FID scores are reported.
Overparameterization yields better generative models: In Fig. 4, we show the plot of FID scores as the hidden dimension ($k$) is varied for DCGAN and ResNet GAN models. We observe a clear trend where the FID scores are high (i.e. poor) for small values of $k$, while they improve as the models become more overparameterized. Also, the FID scores saturate beyond $k = 64$ for DCGAN models and $k = 128$ for ResNet GAN models. Interestingly, these are the standard values used in the existing model architectures (Radford et al., 2016; Gulrajani et al., 2017). This trend is also consistent for MLP GANs trained on the MNIST dataset (Fig. 3). We notice, however, that the FID score of MLP GANs increases marginally as $k$ increases from 1024 to 2048; this is potentially due to an increased generalization gap in this regime, where it offsets the potential benefits of overparameterization.
Overparameterization leads to improved convergence of GDA: In Fig. 5, we show the plot of FID scores over training iterations for different values of $k$. We observe that models with larger values of $k$ converge faster and demonstrate more stable behavior. This agrees with our theoretical results that overparameterized models have a fast rate of convergence.
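For reference, the FID score of (Heusel et al., 2017) fits Gaussians $(\mu_r, \Sigma_r)$ and $(\mu_g, \Sigma_g)$ to the Inception feature distributions of the real and generated data and computes the squared Fréchet (2-Wasserstein) distance between them:
\[
\mathrm{FID} = \|\mu_r - \mu_g\|_2^2 + \mathrm{Tr}\big(\Sigma_r + \Sigma_g - 2(\Sigma_r\Sigma_g)^{1/2}\big).
\]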
Under this setup, we prove that with sufficient overparameterization, GDA converges to a global saddle point. Additionally, our result demonstrate that overparameterized models have a fast rate of convergence. We then validate our theoretical findings through extensive experiments on DCGAN and Resnet models trained on CIFAR-10 and Celeb-A datasets. We observe overparameterized models to perform well both in terms of the rate of convergence and the quality of generated samples." }, { "heading": "5 ACKNOWLEDGEMENT", "text": "M. Sajedi would like to thank Sarah Dean for introducing (Rugh, 1996). This project was supported in part by NSF CAREER AWARD 1942230, HR00112090132, HR001119S0026, NIST 60NANB20D134 and Simons Fellowship on “Foundations of Deep Learning.” M. Soltanolkotabi is supported by the Packard Fellowship in Science and Engineering, a Sloan Research Fellowship in Mathematics, an NSF-CAREER under award 1846369, the Air Force Office of Scientific Research Young Investigator Program (AFOSR-YIP) under award FA9550-18-1-0078, DARPA Learning with Less Labels (LwLL) and FastNICS programs, and NSF-CIF awards 1813877 and 2008443." }, { "heading": "Appendix", "text": "" }, { "heading": "A PROOFS", "text": "In this section, we prove Theorems 2.1 and 2.2. First, we provide some notations we use throughout the remainder of the paper in Section A.1. Before proving these specialized results for one hidden layer generators and linear discriminators (Theorems 2.1 and 2.2), we state and prove a more general result (formal version of Theorem 2.3) on the convergence of GDA on a general class of min-max problems in Section A.3. Then we state a few preliminary calculations in Section A.4. Next, we state some key lemmas in Section A.5 and defer their proofs to Appendix B. Finally, we prove Theorems 2.1 and 2.2 in Sections A.6 and A.7, respectively." }, { "heading": "A.1 NOTATION", "text": "We will use C, c, c1, etc. to denote positive absolute constants, whose value may change throughout the paper and from line to line. We use φ (z) = ReLU (z) = max (0, z) and its (generalized) derivative φ′ (z) = 1{z≥0} with 1 being the indicator function. σmin (X) and σmax (X) = ‖X‖ denote the minimum and maximum singular values of matrix X . For two arbitrary matrices A and B,A⊗B denotes their kronecker product. The spectral radius of a matrixA ∈ Cn×n is defined as ρ (A) = max {|λ1|, . . . , |λn|}, where λi’s are the eigenvalues of A. Throughout the proof we shall assume φ := ReLU to avoid unnecessarily long expressions." }, { "heading": "A.2 PROOF SKETCH OF THE MAIN RESULTS", "text": "In this section, we provide a brief overview of our proofs. We focus on the main result in this manuscript, which is about the convergence of GDA (Theorem 2.1). To do this we study the converge of GDA on the more general min-max problem of the form (see Theorem A.4 for a formal statement)\nmin θ∈Rn max d∈Rm\nh(θ,d) := 〈d, f (θ)− y〉 − ‖d‖2`2\n2 . (10)\nIn this case the GDA iterates take the form{ dt+1 = (1− µ)dt + µ (f (θt)− y) θt+1 = θt − ηJ T (θt)dt . 
(11)\nOur proof for global convergence of GDA on this min-max loss consists of the following steps.\nStep 1: Recasting the GDA updates as a linear time-varying system In the first step we carry out a series of algebraic manipulations to recast the GDA updates (equation 11) into the following form [\nrt+1 dt+1\n] = At [ rt dt ] ,\nwhere rt = f (θt)− y denotes the residuum andAt denotes a properly defined transition matrix.\nStep 2: Approximation by a linear time-invariant system Next, to analyze the behavior of the time-varying dynamical system above we approximate it by the following time-invariant linear dynamical system[\nr̃t+1 d̃t+1\n] = [ I −ηJ T (θ0)J (θ0) µI (1− µ) I ] [ r̃t d̃t ] ,\nwhere θ0 denotes the initialization. The validity of this approximation is ensured by our assumptions on the Jacobian of the function f , which, among others, guarantee that it does not change too much in a sufficiently large neighborhood around the initialization and that the smallest singular value of J (θ0) is bounded from below. Step 3: Analysis of time-invariant linear dynamical system To analyze the time-invariant dynamical system above, we utilize and refine intricate arguments\nfrom the control theory literature involving the spectral radius of the fixed transition matrix above to obtain ∥∥∥∥[r̃td̃t ]∥∥∥∥ `2 . ( 1− ηα2 )t ∥∥∥∥[r̃0d̃0 ]∥∥∥∥ `2 .\nStep 4: Completing the proof via a perturbation argument In the last step of our proof we show that the two sequences [ rt dt ] and [ r̃t d̃t ] will remain close to each other. This is based on a novel perturbation argument. The latter combined with Step 3 allows us to conclude ∥∥∥∥[rtdt ]∥∥∥∥ `2 . ( 1− ηα 2 2 )t ∥∥∥∥[r0d0 ]∥∥∥∥ `2 , which finishes the global convergence of GDA on equation 10 and hence the proof of Theorem A.4.\nIn order to deduce Theorem 2.1 from Theorem A.4, we need to check that the Jacobian at the initialization is bounded from below at the origin and that it does not change too quickly in a large enough neighborhood. In order to prove that we will leverage recent ideas from the deep learning theory literature revolving around the neural tangent kernel. This allows us to guarantee that this conditions are indeed met, if the neural network is sufficiently wide and the initialization is chosen large enough.\nThe second main result of this manuscript, Theorem 2.2, can be deduced more directly from recent results on overparameterized learning (see Oymak & Soltanolkotabi (2020)). Hence, we have deferred its proof to Section A.7." }, { "heading": "A.3 ANALYSIS OF GDA: A CONTROL THEORY PERSPECTIVE", "text": "In this section we will focus on solving a general min-max optimization problem of the form\nmin θ∈Rn max d∈Rm\nh(θ,d) := 〈d, f (θ)− y〉 − ‖d‖2`2\n2 , (12)\nwhere f : Rn → Rm is a general nonlinear mapping. In particular, we focus on analyzing the convergence behavior of Gradient Descent/Ascent (GDA) on the above loss, starting from initial estimates θ0 and d0. In this case the GDA updates take the following form{\ndt+1 = (1− µ)dt + µ (f (θt)− y) θt+1 = θt − ηJ T (θt)dt . (13)\nWe note that solving the inner maximization problem in equation 12 would yield\nmin θ∈Rn\n1 2 ‖f (θ)− y‖2`2 . (14)\nIn this section, our goal is to show that when running the GDA updates of equation 13, the norm of the residual vector defined as rt := f (θt) − y goes to zero and hence we reach a global optimum of equation 14 (and in turn equation 12).\nOur proof will build on ideas from control theory and dynamical systems literature. 
For that, we are first going to rewrite the equations 13 in a more convenient way. We define the average Jacobian along the path connecting two points x,y ∈ Rn as\nJ (y,x) = ∫ 1\n0\nJ (x+ α (y − x)) dα,\nwhere J (θ) ∈ Rm×n is the Jacobian associated with the nonlinear mapping f . Next, from the fundamental theorem of calculus it follows that\nrt+1 = f (θt+1)− y = f ( θt − ηJ Tt dt ) − y\n= f (θt)− ηJt+1,tJ Tt dt − y = rt − ηJt+1,tJ Tt dt, (15)\nwhere we used the shorthands Jt := J (θt) and Jt+1,t := J (θt+1,θt) for exposition purposes. Next, we combine the updates rt and dt into a state vector of the form zt := [ rt dt ] ∈ R2m. Using this notation the relationship between the state vectors from one iteration to the next takes the form\nzt+1 = [ I −ηJt+1,tJ Tt µI (1− µ) I ] ︸ ︷︷ ︸\n=:At\nzt, t ≥ 0, (16)\nwhich resembles a time-varying linear dynamical system with transition matrix At. Now note that to show convergence of rt to zero it suffices to show convergence of zt to zero. To do this we utilize the following notion of uniform exponential stability, which will be crucial in analyzing the solutions of equation 16. (See Rugh (1996) for a comprehensive overview on stability notions in discrete state equations.)\nDefinition 1 A linear state equation of the form zt+1 = Atzt is called uniformly exponentially stable if for every t ≥ 0 we have ‖zt‖`2 ≤ γλ\nt ‖z0‖`2 , where γ ≥ 1 is a finite constant and 0 ≤ λ < 1.\nUsing the above definition to show the convergence of the state vector zt to zero at a geometric rate it suffices to show the state equation 16 is exponentially stable.3 For that, we are first going to analyze a state equation which results from linearizing the nonlinear function f (θ) around the initialization θ0. In the next step, we are going to show that the behavior of these two problems are similar, provided we stay close to initialization (which we are also going to prove). Specifically, we consider the linearized problem\nmin θ̃∈Rn max d̃∈Rm hlin\n( θ̃, d̃ ) := 〈 d̃, f (θ0) + J0 ( θ̃ − θ0 ) − y 〉 − ∥∥∥d̃∥∥∥2 `2\n2 . (17)\nWe first analyze GDA on this linearized problem starting from the same initialization as the original problem, i.e. θ̃0 = θ0 and d̃0 = d0. The gradient descent update for θ̃t takes the form\nθ̃t+1 = θ̃t − ηJ T0 d̃t, (18)\nand the gradient ascent update for d̃t takes the form d̃t+1 = d̃t + µ ( f (θ0) + J0 ( θ̃t − θ0 ) − y − d̃t ) = (1− µ) d̃t + µr̃t,\n(19)\nwhere we used the linear residual defined as r̃t = f (θ0)+J0 ( θ̃t − θ0 ) −y. Moreover, the residual\nfrom one iterate to the next can be written as follows r̃t+1 = f (θ0) + J0 ( θ̃t+1 − θ0 ) − y\n= f (θ0) + J0 ( θ̃t − ηJ T0 d̃t − θ0 ) − y\n= r̃t − ηJ0J T0 d̃t.\n(20)\nAgain, we define a new vector z̃t = [ r̃t d̃t ] ∈ R2m and by putting together equations 19 and 20 we arrive at\nz̃t+1 = [ I −ηJ0J T0 µI (1− µ) I ] z̃t = Az̃t, t ≥ 0, (21)\n3We note that technically the dynamical system equation 16 is not linear. However, we still use exponential stability with some abuse of notation to refer to the property that ‖zt‖`2 ≤ γλ\nt ‖z0‖`2 holds. As we will see in the forth-coming paragraphs, our formal analysis is via a novel perturbation analysis of a linear dynamical system and therefore keeping this terminology is justified.\nwhich is of the form of a linear time-invariant state equation. As a first step in our proof, we are going to show that the linearized state equations are uniformly exponentially stable. 
First, recall the following well-known lemma, which characterizes uniform exponential stability in terms of the eigenvalues of $A$.\nLemma A.1 (Rugh, 1996, Theorem 22.11) A linear state equation of the form $\tilde{z}_{t+1} = A \tilde{z}_t$ with $A$ a fixed matrix is uniformly exponentially stable if and only if all eigenvalues of $A$ have magnitudes strictly less than one, i.e. $\rho(A) < 1$. In this case, it holds for all $t \ge 0$ and all $z$ that\n$\|A^t z\| \le \gamma \rho(A)^t \|z\|$,\nwhere $\gamma \ge 1$ is an absolute constant which only depends on $A$.\nIn the next lemma, we prove that under suitable assumptions on $\mathcal{J}_0$ and the step sizes $\mu$ and $\eta$, the state equations 21 indeed fulfill this condition.\nLemma A.2 Assume that $\alpha \le \sigma_{\min}(\mathcal{J}_0) \le \sigma_{\max}(\mathcal{J}_0) \le \beta$ and consider the matrix $A = \begin{bmatrix} I & -\eta \mathcal{J}_0 \mathcal{J}_0^T \\ \mu I & (1-\mu) I \end{bmatrix}$. Suppose that $\frac{\mu}{\eta} \ge 4\beta^2$. Then it holds that $\rho(A) \le 1 - \eta\alpha^2$.\nProof Suppose that $\lambda$ is an eigenvalue of $A$. Hence, there is an eigenvector $[x, y]^T \ne 0$ such that\n$\begin{bmatrix} I & -\eta \mathcal{J}_0 \mathcal{J}_0^T \\ \mu I & (1-\mu) I \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \lambda \begin{bmatrix} x \\ y \end{bmatrix}$\nholds. By a direct calculation we observe that this yields the equation\n$\eta \mathcal{J}_0 \mathcal{J}_0^T x = \left( -\frac{(1-\lambda)^2}{\mu} + (1-\lambda) \right) x$.\nIn particular, $x$ must be an eigenvector of $\mathcal{J}_0 \mathcal{J}_0^T$. Denoting the corresponding eigenvalue by $s$, we obtain the identity\n$\frac{(1-\lambda)^2}{\mu} - (1-\lambda) + \eta s = 0$.\nHence, we must have\n$\lambda \in \left\{ 1 - \frac{\mu}{2} + \sqrt{\frac{\mu^2}{4} - \mu\eta s}; \; 1 - \frac{\mu}{2} - \sqrt{\frac{\mu^2}{4} - \mu\eta s} \right\}$.\nNote that the square root is indeed well-defined, since\n$\frac{\mu^2}{4} - \mu\eta s \ge \mu\eta\beta^2 - \mu\eta s \ge 0$,\nwhere in the first inequality we used the assumption $\frac{\mu}{\eta} \ge 4\beta^2$, and in the second we used that $s \le \beta^2$, which is a consequence of our assumption on the singular values of $\mathcal{J}_0$. Hence, it follows by the reverse triangle inequality that\n$|\lambda| - \left(1 - \frac{\mu}{2}\right) \le \left| \lambda - \left(1 - \frac{\mu}{2}\right) \right| = \sqrt{\left(\frac{\mu}{2}\right)^2 - \mu\eta s} < \frac{\mu}{2} - \eta s \le \frac{\mu}{2} - \eta\alpha^2$,\nwhere the second inequality is valid as $\frac{\mu}{2} - \eta s \ge 0$ is implied by $\frac{\mu}{2} \ge 2\eta\beta^2 > \eta s$. In the last inequality we used the fact that $\alpha^2 \le s$, which is a consequence of our assumption on the singular values of $\mathcal{J}_0$. By rearranging terms, we obtain that $|\lambda| < 1 - \eta\alpha^2$. Since $\lambda$ was an arbitrary eigenvalue of $A$, the result follows.\nSince the last lemma shows that under suitable conditions it holds that $\rho(A) < 1$, Lemma A.1 yields uniform exponential stability of our state equations. However, this will not be sufficient for our purposes. The reason is that Lemma A.1 does not specify the constant $\gamma$, and in order to deal with the time-varying dynamical system we will need a precise estimate. The next lemma shows that for the state equations 21 we have, under suitable assumptions, $\gamma \le 5$.\nLemma A.3 Consider the linear, time-invariant system of equations\n$\tilde{z}_{t+1} = \begin{bmatrix} I & -\eta \mathcal{J}_0 \mathcal{J}_0^T \\ \mu I & (1-\mu) I \end{bmatrix} \tilde{z}_t = A \tilde{z}_t, \quad t \ge 0$.\nFurthermore, assume that $\alpha \le \sigma_{\min}(\mathcal{J}_0) \le \sigma_{\max}(\mathcal{J}_0) \le \beta$ and suppose that the condition $\frac{\mu}{\eta} \ge 8\beta^2$ is satisfied. Then there is a constant $\gamma \le 5$ such that for all $t \ge 0$ it holds that\n$\|\tilde{z}_t\|_{\ell_2} \le \gamma \left(1 - \eta\alpha^2\right)^t \|\tilde{z}_0\|_{\ell_2}$.\nProof Denote the singular value decomposition of $\mathcal{J}_0$ by $W \Sigma V^T$ and note that\n$\begin{bmatrix} I & -\eta \mathcal{J}_0 \mathcal{J}_0^T \\ \mu I & (1-\mu) I \end{bmatrix} = \begin{bmatrix} W & 0 \\ 0 & W \end{bmatrix} \begin{bmatrix} I & -\eta \Sigma \Sigma^T \\ \mu I & (1-\mu) I \end{bmatrix} \begin{bmatrix} W^T & 0 \\ 0 & W^T \end{bmatrix}$.\nThis means we can write\n$\begin{bmatrix} I & -\eta \mathcal{J}_0 \mathcal{J}_0^T \\ \mu I & (1-\mu) I \end{bmatrix} = \begin{bmatrix} W & 0 \\ 0 & W \end{bmatrix} P \, \mathrm{diag}(C_1, \ldots, C_m) \, P^T \begin{bmatrix} W^T & 0 \\ 0 & W^T \end{bmatrix}$,\nwhere $P$ is a permutation matrix and the matrices $C_i$ are of the form $C_i = \begin{bmatrix} 1 & -\eta\sigma_i^2 \\ \mu & (1-\mu) \end{bmatrix}$, for $1 \le i \le m$, where the $\sigma_i$'s denote the singular values of $\mathcal{J}_0$. Using this decomposition we can deduce\n$\|\tilde{z}_t\|_{\ell_2} = \|A^t \tilde{z}_0\|_{\ell_2} \le \|A^t\| \, \|\tilde{z}_0\|_{\ell_2} = \left( \max_{1 \le i \le m} \|C_i^t\| \right) \|\tilde{z}_0\|_{\ell_2}$.\nNow suppose that $V_i D_i V_i^{-1}$ is the eigenvalue decomposition of $C_i$, where the columns of $V_i$ contain the eigenvectors and $D_i$ is a diagonal matrix consisting of the eigenvalues.
(Note that it follows from our assumptions on µ and η that the matrix Ci is diagonalizable.) We have∥∥Cti∥∥ = ∥∥ViDtiV −1i ∥∥ ≤ ‖Vi‖∥∥Dti∥∥∥∥V −1i ∥∥ = κi · ρ (Ci)t , where we defined κi := ‖Vi‖\n∥∥V −1i ∥∥. From Lemma A.2 we know that the assumption µη ≥ 4β2 results in ρ (A) ≤ 1 − ηα2. Therefore, defining γ := max\n1≤i≤m κi and noting ρ (A) = max 1≤i≤m ρ (Ci),\nwe obtain that\n‖z̃t‖`2 ≤ (\nmax 1≤i≤m ∥∥Cti∥∥) ‖z̃0‖`2 ≤ γ (1− ηα2)t ‖z̃0‖`2 . In order to finish the proof we need to show that γ ≤ 5. For that, note that calculating the eigenvectors of Ci directly reveals that we can represent this matrix as\nVi = 1+√1−4 ησ2iµ 2 1− √ 1−4 ησ2 i µ\n2 1 1 . Since ‖Vi‖ = √ λmax ( ViV Ti ) and\n∥∥V −1i ∥∥ = √λmin (ViV Ti ), we calculate ViV Ti , which yields ViV T i = [ 1− 2ησ 2 i µ 1\n1 2\n] .\nThis representation allows us to directly calculate the two eigenvalues of ViV Ti , which shows that\nκi =\n√ λmax ( ViV Ti ) λmin ( ViV Ti )\n= √√√√√√√3− 2 ησ2i µ + √( 1 + 2 ησ2i µ )2 + 4\n3− 2ησ 2 i µ − √( 1 + 2 ησ2i µ )2 + 4\n= 3− 2ησ\n2 i\nµ +\n√( 1 + 2\nησ2i µ\n)2 + 4\n2 √ 1− 4ησ 2 i\nµ\n≤ 6 2 √ 1− 4ησ 2 i\nµ\n< 5,\nwhere the last inequality holds because of ησ 2 i µ ≤ ηβ2\nµ ≤ 1 8 . Since γ = max1≤i≤m κi, this finishes the\nproof.\nNow that we have shown that the linearized iterates converge to the global optima we turn our attention to showing that the nonlinear iterates 16 are close to its linear counterpart 21. For that, we make the following assumptions.\nAssumption 1: The singular values of the Jacobian at initialization are bounded from below\nσmin (J (θ0)) ≥ α\nfor some positive constants α and β.\nAssumption 2: In a neighborhood with radius R around the initialization, the Jacobian mapping associated with f obeys ‖J (θ)‖ ≤ β for all θ ∈ BR (θ0), where BR (θ0) := {θ ∈ Rp : ‖θ − θ0‖`2 ≤ R}.\nAssumption 3: In a neighborhood with radius R around the initialization, the spectral norm of the Jacobian varies no more than in the sense that\n‖J (θ)− J (θ0)‖ ≤\nfor all θ ∈ BR (θ0).\nWith these assumptions in place, we are ready to state the main theorem.\nTheorem A.4 Consider the GDA updates for the min-max optimization problem 12[ dt+1 θt+1 ] = [ dt + µ∇dh(θt,dt) θt − η∇θh(θt,dt) ] (22)\nand consider the GDA updates of the linearized problem 21[ d̃t+1 θ̃t+1 ] = [ d̃t + µ∇dhlin(θ̃t, d̃t) θ̃t − η∇θhlin(θ̃t, d̃t) ] . (23)\nSet zt := [ rt dt ] and z̃t := [ r̃t d̃t ] , where rt := f (θt) − y and r̃t = f (θ0) + J0 ( θ̃t − θ0 ) − y denote the residuals. Assume that the step sizes of the gradient descent ascent updates satisfy µη ≥ 8β 2 as well as 0 <\nµ ≤ 1. Moreover, assume that the assumptions 1-3 hold for the Jacobian J (θ) of f (θ) around the initialization θ0 ∈ Rn with parameters α, β, , and\nR := 2γ β2\nα2 ∥∥∥∥[J †0 00 J †0 ] z0 ∥∥∥∥ `2 + 18 β2γ2 α4 ‖z0‖`2 , (24)\nwhich satisfy 4γβ ≤ α2. Here, 1 ≤ γ ≤ 5 is a constant, which only depends on µ, η, and J0. By J †0 we denote the pseudo-inverse of the Jacobian at initialization J0. Then, assuming the same initialization θ0 = θ̃0, d0 = d̃0 (and, hence, z0 = z̃0), the following holds for all iterations t ≥ 0.\n• ‖zt‖`2 converges to 0 with a geometric rate, i.e. ‖zt‖`2 ≤ γ ( 1− ηα 2\n2\n)t ‖z0‖`2 . 
(25)\n• The trajectories of zt and z̃t stay close to each other and converge to the same limit, i.e.\n‖zt − z̃t‖`2 ≤ 2ηγ 2β · t\n( 1− ηα 2\n2\n)t−1 ‖z0‖`2\n≤ 4γ 2β e ( 15 ln 1615 ) α2 ‖z0‖`2 .\n(26)\n• The parameters of the original and linearized problems stay close to each other, i.e.∥∥∥θ̃t − θt∥∥∥ `2 ≤ 9 β 2γ2 α4 ‖z0‖`2 , (27)\n• The parameters of the original problem stay close to the initialization, i.e.\n‖θt − θ0‖`2 ≤ R\n2 . (28)\nTheorem A.4 will be the main ingredient in the proof of Theorem 2.1. However, as discussed in Section 2.4 we believe that this meta theorem can be used to deal with a much richer class of generators and discriminators." }, { "heading": "A.3.1 PROOF OF THEOREM A.4", "text": "We will prove the statements in the theorem by induction. The base case for τ = 0 is trivial. Now assume that the equations equation 25 to equation 28 hold for τ = 0, . . . , t− 1. Our goal is to show that they hold for iteration t as well.\nPart I: First, we are going to show that θt ∈ BR (θ0). Note that by the triangle inequality and the induction assumption we have that\n‖θt − θ0‖`2 ≤ ‖θt − θt−1‖`2 + ‖θt−1 − θ0‖`2\n≤ ‖θt − θt−1‖`2 + R\n2 .\nHence, in order to prove the claim it remains to show that ‖θt − θt−1‖`2 ≤ R 2 . For that, we compute\n1 η ‖θt − θt−1‖`2 = ∥∥J T (θt−1)dt−1∥∥`2 ≤ ∥∥∥J T (θt−1) d̃t−1∥∥∥ `2 + ∥∥J T (θt−1)∥∥ ∥∥∥dt−1 − d̃t−1∥∥∥ `2\n≤ ∥∥∥J T0 d̃t−1∥∥∥\n`2 + ‖J (θt−1)− J0‖ ∥∥∥d̃t−1∥∥∥ `2 + ∥∥J T (θt−1)∥∥ ∥∥∥dt−1 − d̃t−1∥∥∥ `2\n(i) ≤ γ ∥∥∥∥[J T0 00 J T0 ] z0 ∥∥∥∥ `2 + · γ ‖z0‖`2 + 4β2 γ2 e ( 15 ln 1615 ) α2 ‖z0‖`2\n(ii) ≤ γβ2 ∥∥∥∥[J †0 00 J †0 ] z0 ∥∥∥∥ `2 + 3β2 γ2 α2 ‖z0‖`2 ,\nwhere γ ≤ 5 is a constant. Let us verify the last two inequalities. Inequality (ii) holds because 1 ≤ γ, 1 ≤ β 2\nα2 , and\n∥∥∥∥[J T0 00 J T0 ] z0 ∥∥∥∥ `2 = ∥∥∥∥[V ΣTW T 00 V ΣTW T ] z0 ∥∥∥∥ `2\n= √√√√ n∑ i=1 σ2i ( 〈wi, r0〉2 + 〈wi,d0〉2 )\n≤ β2 √√√√ n∑\ni=1\n1\nσ2i\n( 〈wi, r0〉2 + 〈wi,d0〉2 ) = β2 ∥∥∥∥[J †0 00 J †0 ] z0 ∥∥∥∥ `2 . (29)\nAlso (i) follows from assumptions 1-3, ∥∥∥dt−1 − d̃t−1∥∥∥\n`2 ≤ ‖zt−1 − z̃t−1‖`2 together with induction assumption equation 26, ∥∥∥d̃t−1∥∥∥\n`2 ≤ ‖z̃t−1‖`2 ≤ ‖z0‖`2 , and\n∥∥∥J T0 d̃t−1∥∥∥ `2 ≤ ∥∥∥∥[J T0 r̃t−1J T0 d̃t−1 ]∥∥∥∥ `2\n= ∥∥∥∥[ I −ηJ T0 J0µI (1− µ) I ] [ J T0 r̃t−2 J T0 d̃t−2 ]∥∥∥∥ `2\n= ∥∥∥∥∥ [ I −ηJ T0 J0 µI (1− µ) I ]t−1 [J T0 r̃0 J T0 d̃0 ]∥∥∥∥∥ `2 ≤ γ ( 1− ηα2 )t−1 ∥∥∥∥[J T0 00 J T0 ] z0 ∥∥∥∥ `2 ,\n(30)\nwhere in the last inequality we applied Lemma A.3. Finally, by using η ≤ 18β2 we arrive at\n‖θt − θt−1‖`2 ≤ γηβ 2 ∥∥∥∥[J †0 00 J †0 ] z0 ∥∥∥∥ `2 + 3ηβ2 γ2 α2 ‖z0‖`2\n≤ γ 8 ∥∥∥∥[J †0 00 J †0 ] z0 ∥∥∥∥ `2 + 3 γ2 8α2 ‖z0‖`2\n≤ R 2 ,\nwhere the last line is directly due to inequality (24), γ ≤ 5, and α ≤ β. Hence, we have established θt ∈ BR (θ0).\nPart II: In Lemma A.3 we showed that the time invariant system of state equations z̃t+1 = Az̃t is uniformly exponentially stable, i.e. ‖z̃t‖`2 goes down to zero exponentially fast. Now by using the assumption that the Jacobian remains close to the Jacobian at the initialization J0, we aim to show the exponential stability of the time variant system of the state equations 16. For that, we compute\nzt = At−1zt−1 = [ I −ηJt,t−1J Tt−1 µI (1− µ) I ] zt−1\n= [ I −ηJ0J T0 µI (1− µ) I ] zt−1 + [ η ( J0J T0 − Jt,t−1J Tt−1 ) dt−1 0 ] =: Azt−1 + ∆t−1.\nNow set λ := 1−ηα2. By induction, we obtain the relation zt = Atz0 + ∑t−1 i=0 A t−1−i∆i. 
Hence,\n‖zt‖`2 = ∥∥∥∥∥Atz0 + t−1∑ i=0 At−1−i∆i ∥∥∥∥∥ `2\n≤ ∥∥Atz0∥∥`2 + t−1∑\ni=0 ∥∥At−1−i∆i∥∥`2 ≤ γλt ‖z0‖`2 + t−1∑ i=0 γλt−1−i ∥∥η (J0J T0 − Ji+1,iJ Ti )∥∥ ‖di‖`2\n≤ γλt ‖z0‖`2 + t−1∑ i=0 ηγλt−1−i (2β ) ‖zi‖`2 . (31)\nThe second inequality holds because of Lemma A.3. The last inequality holds because by combining our assumptions 1 to 3 with θt ∈ BR (θ0) and the induction assumption 28 for 0 ≤ i ≤ t − 1, we have that ∥∥J0J T0 − Ji+1,iJ Ti ∥∥ = ∥∥J0J T0 − J0J Ti + J0J Ti − Ji+1,iJ Ti ∥∥\n≤ ‖J0‖ ‖J0 − Ji‖+ ‖J0 − Ji+1,i‖ ‖Ji‖ ≤ β ‖J0 − Ji‖+ β ‖J0 − Ji+1,i‖ ≤ 2β . (32)\nIn order to deal with inequality 31, we will rely on the following lemma.\nLemma A.5 (Rugh, 1996, Lemma 24.5) Consider two real sequences p (t) and φ (t), where p (t) ≥ 0 for all t ≥ 0 and\nφ (t) ≤ ψ, if t = 0 ψ + η\nt−1∑ i=0 p (i)φ (i) , if t ≥ 1\nwhere η and ψ are constants with η ≥ 0. Then for all t ≥ 1 we have\nφ (t) ≤ ψ t−1∏ i=0 (1 + η · p (i)) .\nNow we define φt = ‖zt‖`2 λt and rewrite inequality 31 as\nφt ≤ γφ0 + t−1∑ i=0 2ηγβ λ φi.\nHence, Lemma A.5 yields that\nφt ≤ γφ0 t−1∏ i=0 ( 1 + 2ηγβ λ )\n= γφ0\n( 1 + 2ηγβ\nλ )t (i)\n≤ γφ0 ( 1 + ηα2\n2λ )t (ii) = γφ0 ( 1− ηα 2 2\n1− ηα2\n)t ,\nwhere (i) follows from 4γβ ≤ α2 and (ii) holds by inserting λ = 1− ηα2. Inserting the definition of φ0 and φt we obtain that\n‖zt‖`2 ≤ γ ( 1− ηα 2\n2\n)t ‖z0‖`2 .\nThis completes the proof of Part II.\nPart III: In this part, our aim is to show that the error vector et := zt − z̃t obeys inequality 26. First, note that\net = zt − z̃t (∗) = (Azt−1 + ∆t−1)−Az̃t−1 = Aet−1 + ∆t−1,\nwhere in (∗) we used the same notation as in Part II for ∆t−1. Using a recursive argument as well as e0 = 0 we obtain that\n‖et‖`2 = ∥∥∥∥∥ t−1∑ i=0 At−1−i∆i ∥∥∥∥∥ `2\n≤ t−1∑ i=0 ηγ ( 1− ηα2 )t−1−i ‖∆i‖`2 (i)\n≤ t−1∑ i=0 ηγ ( 1− ηα2 )t−1−i ∥∥J0J T0 − Ji+1,iJ Ti ∥∥ ‖di‖`2 (ii)\n≤ t−1∑ i=0 2ηβ γ ( 1− ηα2 )t−1−i ‖zi‖`2 . The first inequality follows from the triangle inequality and Lemma A.3. Inequality (i) follows from the definition of ∆i. Inequality (ii) follows from inequality 32. Setting c := 2ηβ we continue\n‖et‖`2 ≤ t−1∑ i=0 cγ ( 1− ηα2 )t−i−1 ‖zi‖`2 (iii)\n≤ t−1∑ i=0 cγ2 ( 1− ηα2 )t−i−1( 1− ηα 2 2 )i ‖z0‖`2\n(iv) ≤ t−1∑ i=0 cγ2 ( 1− ηα 2 2 )t−1 ‖z0‖`2\n= 2ηγ2β · t ( 1− ηα 2\n2\n)t−1 ‖z0‖`2 .\nHere (iii) holds because of our induction hypothesis 25 and (iv) follows simply from 1 − ηα2 ≤ 1 − ηα 2\n2 . This shows the first part of equation 26 for iteration t. Finally, to derive the second part of equation 26 we observe that for all t ≥ 0 and 0 < x ≤ 116 we have t (1− x)t−1 ≤ 1\ne(15 ln 1615 )x . Since ηα\n2 2 ≤ µα2 16β2 ≤ 1 16 we can use this estimate, which yields\n‖et‖`2 ≤ 2ηγ 2β · t\n( 1− ηα 2\n2\n)t−1 ‖z0‖`2\n≤ 4γ 2β e ( 15 ln 1615 ) α2 ‖z0‖`2 .\nHence, we have shown equation 26.\nPart IV: In this part, we aim to show that the parameters of the original and linearized problems are close. For that, we compute that\n1\nη ∥∥∥θt − θ̃t∥∥∥ `2 = ∥∥∥∥∥ t−1∑ i=0 ∇θh (θi,di)−∇θhlin (θi,di) ∥∥∥∥∥ `2\n= ∥∥∥∥∥ t−1∑ i=0 J T (θi)di − J T0 d̃i ∥∥∥∥∥ `2\n≤ t−1∑ i=0 ∥∥∥(J T (θi)− J T0 ) d̃i∥∥∥ `2 + t−1∑ i=0 ∥∥∥J T (θi)(di − d̃i)∥∥∥ `2\n(i) ≤ t−1∑ i=0 ‖z̃i‖`2 + β t−1∑ i=0 ‖ei‖`2\n(ii) ≤ γ t−1∑ i=0 ( 1− ηα2 )i ‖z0‖`2 + 2ηγ2β2 t−1∑ i=0 i ( 1− ηα 2 2 )i−1 ‖z0‖`2 .\nHere (i) follows from assumptions 2 and 3, and (ii) holds because of Lemma A.3 and our induction hypothesis 26. 
Hence, using the formula ∑t i=0 ix i = x(1+txt+1−(t+1)xt) (x−1)2 we obtain that\n1\nη ∥∥∥θt − θ̃t∥∥∥ `2 ≤ γ ‖z0‖`2 1− (1− ηα2)t ηα2 + 2ηβ2γ 1− t ( 1− ηα 2 2 )t−1 + (t− 1) ( 1− ηα 2 2 )t ( ηα2\n2\n)2 \n≤ γ ‖z0‖`2 1 ηα2 + 2ηβ2γ 1(\nηα2\n2\n)2 \n(iii)\n≤ γ ‖z0‖`2\n( β2γ\nηα4 +\n8β2γ\nηα4 ) = 9 β2γ2\nηα4 ‖z0‖`2 ,\nwhere (iii) holds due to 1 ≤ γ and 1 ≤ β 2\nα2 . Hence, we have established inequality 27 for iteration t.\nPart V: In this part, we are going to prove equation 28 for iteration t. First, it follows from the triangle inequality that\n‖θt − θ0‖`2 ≤ ∥∥∥θ̃t − θ0∥∥∥ `2 + ∥∥∥θt − θ̃t∥∥∥ `2\n≤ ∥∥∥θ̃t − θ0∥∥∥\n`2 +\n9 β2γ2\nα4 ‖z0‖`2 ,\nwhere in the second inequality we have used Part IV. Now we bound ∥∥∥θ̃t − θ0∥∥∥\n`2 from above as\nfollows ∥∥∥θ̃t − θ0∥∥∥ `2 = η ∥∥∥∥∥ t−1∑ i=0 J T0 d̃i ∥∥∥∥∥ `2\n≤ η t−1∑ i=0 ∥∥∥J T0 d̃i∥∥∥ `2\n(i) ≤ ηγ t−1∑ i=0 ( 1− ηα2 )i ∥∥∥∥[J T0 00 J T0 ] z0 ∥∥∥∥ `2\n= ηγ 1−\n( 1− ηα2 )t ηα2 ∥∥∥∥[J T0 00 J T0 ] z0 ∥∥∥∥ `2\n(ii) ≤ γ β 2\nα2 ∥∥∥∥[J †0 00 J †0 ] z0 ∥∥∥∥ `2 ,\nwhere (i) holds by equation 30 and (ii) holds by equation 29. Hence, it follows from the definition of R (equation 24) that\n‖θt − θ0‖`2 ≤ γ β2\nα2 ∥∥∥∥[J †0 00 J †0 ] z0 ∥∥∥∥ `2 + 9 β2γ2 α4 ‖z0‖`2\n= R\n2 .\nThis completes the proof." }, { "heading": "A.4 PRELIMINARIES FOR PROOFS OF RESULTS WITH ONE-HIDDEN LAYER GENERATOR AND LINEAR DISCRIMINATOR", "text": "In this section, we gather some preliminary results that will be useful in proving the main results i.e. Theorems 2.1 and 2.2. We begin by noting that Theorem 2.1 is an instance of Theorem A.4 with f (W ) = 1n ∑n i=1 V · φ (Wzi). We thus begin this section by noting that f (W ) can be rewritten as follows\nf (W ) = V · 1 n ∑n i=1 φ ( wT1 zi ) . . .\n1 n ∑n i=1 φ ( wTk zi\n) .\nFurthermore, the Jacobian of this mapping f (W ) takes the form\nJ (W ) = 1 n n∑ i=1 (V · diag (φ′ (Wzi)))⊗ zTi . (33)\nTo characterize the spectral properties of this Jacobian it will be convenient to write down the expression for J (W )J (W )T which has a compact form\nJ (W )J (W )T (i)= 1 n2 n∑ i,j=1 ( (V · diag (φ′ (Wzi)))⊗ zTi )( diag (φ′ (Wzj))V T ⊗ zj ) (ii) = 1\nn2 n∑ i,j=1 ( V diag (φ′ (Wzi)) diag (φ′ (Wzj))V T ) ⊗ ( zTi zj )\n= 1\nn2 V diag `=1,...,k ∥∥∥∥∥ n∑ i=1 ziφ ′ (wT` zi) ∥∥∥∥∥ 2\n`2 V T = 1\nn2 V ·D2 · V T ,\nwhereD is a diagonal matrix with entries\nD`` = ∥∥∥∥∥ n∑ i=1 ziφ ′(wT` zi) ∥∥∥∥∥ `2 = ∥∥ZTφ′(Zw`)∥∥`2 , (34)\nand Z ∈ Rn×d contains the zi’s in its rows. Note that we used simple properties of kronecker product in (i) and (ii), namely (A ⊗ B)T = AT ⊗ BT and (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD). The next lemma establishes concentration of the diagonal entries of matrix D2 around their mean, which will be used in the future lemmas regarding the spectrum of the Jacobian mapping. The proof is deferred to Appendix B.1.\nLemma A.6 Supposew ∈ Rd is a fixed vector, z1, z2, · · · , zn ∈ Rd are distributed asN (0, σ2zId) and constitute the rows of Z ∈ Rn×d. Then for any 0 ≤ δ ≤ 32 the random variable D =∥∥ZTφ′ (Zw)∥∥\n`2 satisfies (1− δ)E ( D2 ) ≤ D2 ≤ (1 + δ)E ( D2 )\nwith probability at least 1−2 ( e− nδ2 18 + e− dδ2 54 + e−c1nδ )\nwhere c1 is a positive constant. Moreover we have\nE ( D2 )\n= σ2z\n( nd\n2 + n(n− 1) 2π\n) .\nFurthermore, using the above equation we have\nE [ J (W )J (W )T ] = σ2z ( d+ n−1π ) 2n V V T ." }, { "heading": "A.5 LEMMAS REGARDING THE INITIAL MISFIT AND THE SPECTRUM OF THE JACOBIAN", "text": "In this section, we state some lemmas regarding the spectrum of the Jacobian mapping and the initial misfit, and defer their proofs to Appendix B. 
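As a sanity check (not part of the proofs), the mean formula in Lemma A.6 above is easy to verify numerically. The following short Python script, with all names our own and the activation taken as ReLU as in the model, estimates $\mathbb{E}(D^2)$ by Monte Carlo and compares it to $\sigma_z^2 (nd/2 + n(n-1)/(2\pi))$:

import numpy as np

def mc_mean_D2(n, d, sigma_z, trials=2000, seed=0):
    # Monte Carlo estimate of E ||Z^T phi'(Z w)||^2 for phi = ReLU (Lemma A.6)
    rng = np.random.default_rng(seed)
    w = rng.normal(size=d)                     # any fixed direction works
    vals = []
    for _ in range(trials):
        Z = sigma_z * rng.normal(size=(n, d))  # rows z_i ~ N(0, sigma_z^2 I_d)
        mask = (Z @ w > 0).astype(float)       # phi'(z_i^T w) for ReLU
        vals.append(np.linalg.norm(Z.T @ mask) ** 2)
    return np.mean(vals)

n, d, sigma_z = 50, 30, 1.0
est = mc_mean_D2(n, d, sigma_z)
theory = sigma_z**2 * (n * d / 2 + n * (n - 1) / (2 * np.pi))
print(est, theory)  # the two numbers agree up to Monte Carlo error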
First, we state a result on the minimum singular value of the Jacobian mapping at initialization.\nLemma A.7 (Minimum singular value of the Jacobian at initialization) Consider our GAN model with a linear discriminator and the generator being a one hidden layer neural network of the form z V φ(W z), where we have n independent data points z1, z2, · · · , zn ∈ Rd distributed as N (0, σ2zId) and aggregated as the rows of a matrix Z ∈ Rn×d, and V ∈ Rm×k has i.i.d N (0, σ2v) entries. We also assume that W0 ∈ Rk×d has i.i.d N (0, σ2w) entries and all entries of W0, V , and Z are independent. Then the Jacobian matrix at the initialization point obeys σmin (J (W0)) ≥ (√ (1− δ)2 k − (1 + δ)2 − √ m (1 + η) (1 + δ) ) σvσz √ d+ n−1π\n2n , 0 ≤ δ ≤ 3 2\nwith probability at least 1 − 3e− η2m 8 − 2k · ( e− nδ2 18 + e− dδ2 54 + e−c1nδ )\n, where c1 is a positive constant.\nNext lemma helps us bound the spectral norm of the Jacobian at initialization, which will be used later to derive upper bounds on Jacobian at every point near initialization.\nLemma A.8 (spectral norm of the Jacobian at initialization) Following the setup of previous lemma, the operator norm of the Jacobian matrix at initialization pointW0 ∈ Rk×d satisfies\n‖J (W0)‖ ≤ (1 + δ)σvσz (√ k + 2 √ m )√d+ n−1π\n2n , 0 ≤ δ ≤ 3 2\nwith probability at least 1− e−m2 − k · ( e− nδ2 18 + e− dδ2 54 + e−c1nδ )\n, with c1 a positive constant.\nThe next lemma is adapted from Van Veen et al. (2018) and allows us to bound the variations in the Jacobian matrix around initialization.\nLemma A.9 (single-sample Jacobian perturbation) Let V ∈ Rm×k be a matrix with i.i.d. N ( 0, σ2v ) entries, W ∈ Rk×d, and define the Jacobian mapping J (W ; z) =\n(V diag (φ′ (W z))) ⊗ zT . Then, by taking W0 to be a random matrix with i.i.d. N ( 0, σ2w ) entries, we have\n‖J (W ; z)− J (W0; z)‖ ≤ σv ‖z‖`2 2√m+ √√√√√√6(2kRσw ) 2 3 log k 3 (\n2kR σw\n) 2 3 \nfor allW ∈ Rk×d obeying ‖W −W0‖ ≤ R with probability at least 1− e− m 2 − e−\n( 2kRσw ) 2 3\n6 .\nOur final key lemma bounds the initial misfit f(W0)− y := 1n ∑n i=1 V φ (W0zi)− x̄.\nLemma A.10 (Initial misfit) Consider our GAN model with a linear discriminator and the generator being a one hidden layer neural network of the form z V φ(W z), where we have n independent data points z1, z2, · · · , zn ∈ Rd distributed as N (0, σ2zId) and aggregated as the rows of a matrix Z ∈ Rn×d, and V ∈ Rm×k has i.i.d N (0, σ2v) entries. We also assume that the initialW0 ∈ Rk×d has i.i.d N (0, σ2w) entries. Then the following event∥∥∥∥∥ 1n n∑ i=1 V φ (W0zi)− x̄ ∥∥∥∥∥ `2 ≤ (1 + δ) 1√ 2π σvσwσz √ kdm+ ‖x̄‖`2 , 0 ≤ δ ≤ 3\nholds with probability at least 1 − ( k · e−c2n(δ/27)2 + e− (δ/9)2m 2 + e− (δ/3)2kd 2 ) , with c2 a fixed\nconstant." }, { "heading": "A.6 PROOF OF THEOREM 2.1", "text": "In this section, we prove Theorem 2.1 by using our general meta Theorem A.4. To do this we need to check that Assumptions 1-3 are satisfied with high probability. Specifically, in our case the parameter θ is the matrix W and the non-linear mapping f is given by f (W ) = 1n ∑n i=1 V φ (Wzi). We note that in our result d0 = 0 and thus ‖z0‖`2 = ‖r0‖`2 , which simplifies our analysis. To prove Assumption 1 note that by setting δ = 12 and η = 1 3 in Lemma A.7, we have\nσmin (J (W0)) ≥ σvσz ( 1\n2\n√ k − 9− 2 √ m\n)√ d+ n−1π\n2n\n=: α.\nThis holds with probability at least 1 − 3e−m72 − 4k · e−c·n − 2k · e− d216 , concluding the proof of Assumption 1. 
Next, by setting δ = 12 in Lemma A.8 we have\n‖J (W0)‖ ≤ ζ := 3\n2 σvσz\n(√ k + 2 √ m )√d+ n−1π\n2n\nwith probability at least 1− e−m2 − 2k · e−c·n− k · e− d216 . Now to bound spectral norm of Jacobian at W where ‖W −W0‖ ≤ R (the value of R is defined in the proof of assumption 3 below), we use triangle inequality to get\n‖J (W )‖ ≤ ‖J (W0)‖+ ‖J (W )− J (W0)‖ .\nThis last inequality together with assumption 3, which we will prove below, yields\n‖J (W )‖ ≤ ‖J (W0)‖+ ≤ ‖J (W0)‖+ α2\n4γβ ≤ ‖J (W0)‖+\n‖J (W0)‖2\n4β .\nTherefore by choosing β = 2ζ we arrive at\n‖J (W )‖ ≤ ‖J (W0)‖+ ‖J (W0)‖2\n4β\n= ‖J (W0)‖+ ‖J (W0)‖2\n8ζ\n≤ ‖J (W0)‖+ ‖J (W0)‖2 8 ‖J (W0)‖ ≤ 2 ‖J (W0)‖ ≤ 2ζ = β,\nestablishing that assumption 2 holds with\nβ = 3σvσz\n(√ k + 2 √ m )√d+ n−1π\n2n\nwith probability at least 1− e−m2 − 2k · e−c·n − k · e− d216 . Finally to show that Assumption 3 holds, we use the single-sample Jacobian perturbation result of Lemma A.9 combined with the triangle inequality to conclude that\n‖J (W )− J (W0)‖ = ∥∥∥∥∥ 1n ( n∑ i=1 J (W ; zi)− J (W0; zi) )∥∥∥∥∥ ≤ 1 n n∑ i=1 ‖J (W ; zi)− J (W0; zi)‖\n≤ σv n ( n∑ i=1 ‖zi‖`2 )2√m+ √√√√√√6(2kRσw ) 2 3 log k 3 (\n2kR σw\n) 2 3 \n(i) ≤ σv ‖Z‖F√\nn 2√m+ √√√√√√6(2kRσw ) 2 3 log k 3 (\n2kR σw\n) 2 3 \n(ii)\n≤ 5 4 σvσz\n√ d 2√m+ √√√√√√6(2kRσw ) 2 3 log k 3 (\n2kR σw\n) 2 3 , (35)\nwhere (i) holds by Cauchy–Schwarz inequality, and (ii) holds because for a Gaussian matrix Z ∈ Rn×d with N (0, σ2z) entries the following holds\nP ( ‖Z‖F ≤ 5 4 σz √ nd ) ≥ P ( ‖Z‖2F ≤ 3 2 σ2znd ) ≥ 1− e−nd24 .\nNow we set = α 2\n4γβ and show that Assumption 3 holds with this choice of and with radius R̃, whose value will be defined later in the proof. First, note that\n= α2\n4γβ\n= σ2vσ 2 z\n( 1 2 √ k − 9− 2 √ m )2 (d+n−1π 2n ) 12γσvσz (√ k + 2 √ m )√ d+n−1π 2n\n(i) ≥ σvσz\n( 1 8 √ k )2 · √ 1 4π\n60 ( 3 √ k )\n≥ σvσz √ k\n42000 ,\nwhere (i) holds by assuming k ≥ C ·m with C being a large positive constant. Combining the last inequality with equation 35, we observe that a sufficient condition for assumption 3 to hold is\n5 4 σvσz\n√ d 2√m+ √√√√√√6(2kRσw ) 2 3 log k 3 (\n2kR σw\n) 2 3 ≤ σvσz √ k 42000 ,\nwhich is equivalent to\n105000 √ md+ 52500 · √ d · √√√√√√6(2kRσw ) 2 3 log k 3 (\n2kR σw\n) 2 3 ≤ √k. Now the first term in the L.H.S. is upper bounded by 12 √ k if k ≥ (210000)2md, and for the second term we need\n105000 · √ d · √√√√√√6(2kRσw ) 2 3 log k 3 (\n2kR σw\n) 2 3\n ≤ √k,\nwhich by defining x = 3d (\n2R σw √ k\n) 2 3\nis equivalent to\nx log d x ≤ 1 2 · 1050002 .\nThis last inequality holds for x ≤ clog d with c < 1 a sufficiently small positive constant, which translates into\nR ≤ c σw √ k\n(d log d) 3 2\n. (36)\nSo far we have shown that Assumption 3 holds with = α 2\n4γβ and with radius R̃ defined as R̃ :=\nc σw √ k\n(d log d) 3 2\n, and we conclude that it holds for any radius R less than R̃ as well. Now we work with\nthe definition of R in equation 24 to show that R ≤ R̃: R\n2 = γ\nβ2\nα2 ∥∥∥∥[J †0 00 J †0 ] z0 ∥∥∥∥ `2 + 9 β2γ2 α4 ‖z0‖`2\n(i) ≤ γ β 2\nα3 ‖r0‖`2 +\n9 α 2\n4γββ 2γ2\nα4 ‖r0‖`2\n= γ ‖r0‖`2\n( β2\nα3 +\n9\n4\nβ\nα2 ) (ii)\n≤ 20β 2\nα3 ‖r0‖`2\n= 20\n( 3σvσz (√ k + 2 √ m )√ d+n−1π 2n )2 ( σvσz ( 1 2 √ k − 9− 2 √ m )√d+n−1π 2n\n)3 ‖r0‖`2 (iii)\n≤ C · 1 σvσz √ k · ( 2 3 σvσwσz √ k · d ·m+ ‖x̄‖`2 ) where (i) holds because\n∥∥∥J †0 ∥∥∥ ≤ 1α and 4γβ = α2, (ii) holds as 1 ≤ βα and as we substitute γ = 5 from Lemma A.3, and (iii) follows from k ≥ C · m and using δ = 13 in Lemma A.10. 
Now a sufficient condition for equation 36 to hold is that\n1 σvσz √ k · ( 2 3 σvσwσz √ k · d ·m+ ‖x̄‖`2 ) ≤ c σw √ k (d log d) 3 2 ,\nwhich is equivalent to 2\n3 σvσwσz · (d log d)\n3 2 √ k · d ·m+ (d log d)\n3 2 ‖x̄‖`2 ≤ c · kσvσwσz.\nFinally, this inequality is satisfied by assuming k ≥ C · md4 log (d)3 and setting σvσwσz ≥ ‖x̄‖`2 md 5 2 log d 3 2 . This shows that assumption 3 holds with probability at least 1 − ne−m2 − ne−c·md 3 log(d)2 − k · e−c·n − e− m1500 − e− kd162 , concluding the proof of Theorem 2.1." }, { "heading": "A.7 PROOF OF THEOREM 2.2", "text": "Consider a nonlinear least-squares optimization problem of the form\nmin θ∈Rp L(θ) := 1 2 ‖f (θ)− y‖2`2\nwith f : Rp 7→ Rm and y ∈ Rm. Suppose the Jacobian mapping associated with f satisfies the following three assumptions.\nAssumption 1 We assume σmin (J (θ0)) ≥ 2α for a fixed point θ0 ∈ Rp.\nAssumption 2 Let ‖ · ‖ be a norm dominated by the Frobenius norm i.e. ‖θ‖ ≤ ‖θ‖F holds for all θ ∈ Rp. Fix a point θ0 and a number R > 0. For any θ satisfying ‖θ − θ0‖ ≤ R, we have ‖J (θ)− J (θ0) ‖ ≤ α3 .\nAssumption 3 We assume for all θ ∈ Rp obeying ‖θ − θ0‖ ≤ R, we have ‖J (θ)‖ ≤ β.\nWith these assumptions in place we are now ready to state the following result from Oymak & Soltanolkotabi (2020):\nTheorem A.11 Given θ0 ∈ Rp, suppose assumptions 1, 2, and 3 hold with\nR = 3 ‖f (θ0)− y‖`2\nα . (37)\nThen, using a learning rate η ≤ 13β2 , all gradient descent updates obey ‖f (θτ )− y‖`2 ≤ ( 1− ηα2 )τ ‖f (θ0)− y‖`2 . (38) We are going to apply this theorem in our case where the parameter isW , the nonlinear mapping is f (W ) = 1n ∑n i=1 V φ (Wzi) with φ = ReLU , and the norm ‖ · ‖ set to the operator norm.\nSimilar to previous part, by using Lemma A.7 we conclude that with probability at least 1− 3e−m72 − 4k · e−c·n − 2k · e− d216 , assumption 1 is satisfied with\n2α := σvσz\n( 1\n2\n√ k − 9− 2 √ m\n)√ d+ n−1π\n2n .\nNext we show that assumption 2 is valid for α as defined in the previous line and for radius R̃ defined later. First we note that\nα 3 ≥ c · σvσz ·\n√ k,\nwhere the inequality holds by assumingK ≥ C ·m for a sufficiently large positive constant C. Now by using equation 35 assumption 2 holds if\n5 4 σvσz\n√ d 2√m+ √√√√√√6(2kRσw ) 2 3 log k 3 (\n2kR σw\n) 2 3 ≤ c · σvσz · √k,\nwhich is equivalent to\nC √ md+ C √ d · √√√√√√6(2kRσw ) 2 3 log k 3 (\n2kR σw\n) 2 3 ≤ √k. The first term in the L.H.S. of the inequality above is upper bounded by 12 √ k if k ≥ C ·md. For upper bounding the second term it is sufficient to show that\nC √ d √√√√√√6(2kRσw ) 2 3 log k 3 · (\n2kR σw\n) 2 3 ≤ √k, which by defining x = 3d ( 2R\nσw √ k\n) 2 3 is equivalent to x · log ( d x ) ≤ C. Now this last inequality\nholds if we have x ≤ clog(d) for a sufficiently small constant c, which by rearranging terms amounts to showing that R ≤ c · σw √ k\n(d·log(d)) 3 2\n. Hence up to this point, we have shown that assumption 2\nholds with radius R̃ := c · σw √ k\n(d·log(d)) 3 2\n, and this implies that it holds for all values of R less than R̃.\nTherefore, we work with the definition of R in equation 37 to show that R ≤ R̃ as follows:\nR = 3 ‖f (θ0)− y‖`2\nα (i) ≤ 3 α\n( 2\n3 σvσwσz\n√ k · d ·m+ ‖x̄‖`2\n)\n= 2 ( 2σvσwσz √ k ·m · d+ 3 ‖x̄‖`2 ) σvσz ( 1 2 √ k − 9− 2 √ m )√d+n−1π 2n ,\nwhere in (i) we used Lemma A.10 with δ = 13 . 
Hence for showing R ≤ R̃ it suffices to show that 2 (\n2σvσwσz √ k ·m · d+ 3 ‖x̄‖`2 ) σvσz ( 1 2 √ k − 9− 2 √ m )√d+n−1π 2n ≤ c · σw √ k (d · log (d)) 3 2 ,\nwhich by assuming k ≥ C ·m simplifies to\nσvσwσz (d · log (d)) 3 2 √ k ·m · d+ (d · log (d)) 3 2 ‖x̄‖`2 ≤ C · k · σvσwσz.\nNow this last inequality holds if k ≥ C · md4 log (d)3 and by setting σvσwσz ≥ ‖x̄‖`2\nmd 5 2 log d 3 2\n.\nTherefore Assumption 2 holds for radius R defined in equation 37 with probability at least 1 − ne− m 2 − ne−c·md3 log (d)2 − k · e−c·n − e− m1500 − e− kd162 .\nFinally to show assumption 3 holds, we note that for all W satisfying ‖W −W0‖ ≤ R, where the value of R is defined in equation 37, it holds that\n‖J (W )‖ ≤ ‖J (W0)‖+ ‖J (W )− J (W0)‖\n≤ ‖J (W0)‖+ α\n3\n≤ ‖J (W0)‖+ σmin (J (W0)) 6 ≤ 2 ‖J (W0)‖ ≤ 3σvσz (√ k + 2 √ m )√d+ n−1π\n2n ,\nwhere the last inequality holds by using lemma A.8, hence establishing that assumption 3 holds with\nβ = 3σvσz\n(√ k + 2 √ m )√d+ n−1π\n2n\nwith probability at least 1− e−m2 − 2k · e−c·n − k · e− d216 , finishing the proof of Theorem 2.2." }, { "heading": "B PROOFS OF THE AUXILIARY LEMMAS", "text": "In this section, we first provide a proof of Lemma A.6 and next go over the proofs of the key lemmas stated in Section A.5." }, { "heading": "B.1 PROOF OF LEMMA A.6", "text": "Recall that\nJ (W )J (W )T = 1 n2 n∑ i,j=1 ( V diag(φ′(Wzi))diag(φ′(Wzj)V T ) (zTi zj)\n= 1\nn2 V diag `=1,...,k ∥∥∥∥∥ n∑ i=1 ziφ ′(wT` zi) ∥∥∥∥∥ 2\n`2\nV T = 1 n2 V ·D2 · V T ,\nwhereD is a diagonal matrix with entries\nD`` = ∥∥∥∥∥ n∑ i=1 ziφ ′(wT` zi) ∥∥∥∥∥ `2 = ∥∥ZTφ′(Zw`)∥∥`2 . (39)\nThe matrix Z ∈ Rn×d contains the zi’s in its rows. In order to proceed we are going to analyze the entries of the diagonal matrixD2. We observe that∥∥ZTφ′(Zw)∥∥2\n`2 = ∥∥∥∥(I − wwT||w||2 )ZTφ′(Zw) ∥∥∥∥2 `2︸ ︷︷ ︸\nA\n+ ∥∥∥∥wwT||w||2ZTφ′(Zw) ∥∥∥∥2 `2︸ ︷︷ ︸\nB\n.\nFirst, we compute the expectation of A. We observe that\nA = ∥∥∥∥∥ n∑ i=1 (I − ww T ||w||2 )ziφ ′(wTzi) ∥∥∥∥∥ 2\n`2\n.\nConditioned onw, ( I − ww T\n‖w‖2\n) zi is distributed asN ( 0, σ2z ( I − ww T\n‖w‖2\n)) andwTzi has distribu-\ntion N ( 0, σ2z‖w‖2 ) . Moreover, these two random variables are independent, because w is in the null space of I − ww T\n||w||2 . This observation yields\nE(A) = E ∥∥∥∥∥ n∑ i=1 (I − ww T ||w||2 )ziφ ′(wTzi) ∥∥∥∥∥ 2\n`2\n= n∑ i=1 n∑ j=1 E (〈 (I − ww T ||w||2 )zi, (I − wwT ||w||2 )zj 〉 φ′(wTzi)φ ′(wTzj) )\n= n∑ i=1 E (∥∥∥∥(I − wwT||w||2 )zi ∥∥∥∥2 `2 ) E ([ φ′(wTzi) ]2) =\nn∑ i=1 1 2 (d− 1)σ2z = n 2 (d− 1)σ2z .\nNext we show that A is concentrated around its mean. Because ( I − ww T\n||w||2\n) zi is independent from\nwTzi, we use z′i as an independent copy of zi. Hence we can write\nA = ∥∥∥∥(I − wwT||w||2 )ZTφ′(Zw) ∥∥∥∥2 `2\n= ∥∥∥∥(I − wwT||w||2 )ZTφ′(Z′w) ∥∥∥∥2 `2\n= ∥∥∥∥∥ n∑ i=1 (I − ww T ||w||2 )ziφ ′ (wTz′i) ∥∥∥∥∥ 2\n`2\n= ∥∥∥∥∥ n∑ i=1 giui ∥∥∥∥∥ 2\n`2\n,\nwhere gi = ( I − ww T\n‖w‖2\n) zi ∼ N ( 0, σ2z ( I − ww T\n‖w‖2\n)) and ui = φ′ ( wTz′i ) ∼ bern( 12 ), 4 and\nthese are all independent from each other. Note that ‖ ∑n i=1 giui‖ 2 `2 has the same distribution as ‖g‖2`2 · ‖u‖ 2 `2 , where g ∼ N ( 0, σ2z ( I − ww T ‖w‖2 )) and u is a vector with entries ui. Note that for the norm of u, the event\nn 2 (1− δ) ≤ ‖u‖2`2 ≤ n 2 (1 + δ)\nholds with probability at least 1− 2e−nδ 2\n2 . Recall that for g ∼ N (0, σ2Id) and 0 < δ ≤ 12 we have\nP ( ‖g‖2`2 ≥ (1 + δ)E ( ‖g‖2`2 )) ≤ e− dδ 2 6 ,\nP ( ‖g‖2`2 ≤ (1− δ)E ( ‖g‖2`2 )) ≤ e− dδ 2 4 . 
(40)\nBy applying the union bound and noting that E ( ‖g‖2`2 ) = (d− 1)σ2z , for 0 < δ ≤ 32 , we obtain that the event\n| ‖g‖2`2 ‖u‖ 2 `2 − n 2 (d− 1)σ2z | ≤ δ n 2 (d− 1)σ2z\n4Here, bern( 1 2 ) means that the random variable takes values 0 and 1 each with probability 1/2.\nholds with probability at least 1− 2e−nδ 2 18 − 2e− dδ 2\n54 . In order to analyze B, we first note that\nB = ∥∥∥∥wwT||w||2ZTφ′(Zw) ∥∥∥∥2 `2 = ∣∣∣∣ wT||w||ZTφ′(Zw) ∣∣∣∣2\n= ∣∣∣∣〈Z w||w|| , φ′(Zw) 〉∣∣∣∣2\n= ∣∣∣ 〈g, φ′(||w||g)〉 ∣∣∣2\n= ∣∣∣∣∣ n∑ i=1 gi · 1(gi≥0) ∣∣∣∣∣ 2 = ( n∑ i=1 ReLU(gi) )2 ,\nwhere gi = zTi w ‖w‖ ∼ N ( 0, σ2z ) . It follows that\nE (B) = E ( n∑ i=1 ReLU(gi) )2\n= n∑ i=1 E ( ReLU2(gi) ) + ∑ i 6=j E ( ReLU(gi)ReLU(gj) ) = σ2z ( n\n2 + n(n− 1) 2π\n) ,\nwhich results in E ( D2`` ) = E (A) + E (B) = σ2z\n( nd\n2 + n(n− 1) 2π\n) , 1 ≤ ` ≤ k.\nNext, in order to show that B concentrates around its mean, we note that ReLU (gi) is a subGaussian random variable with ψ2−norm Cσz , where C is a fixed constant. Therefore X =∑n i=1ReLU (gi) is sub-Gaussian with ψ2−norm C √ nσz . By the sub-exponential tail bound for\nX2 − E(X2) we obtain P ( |B − E (B)| ≥ t ) ≤ 2e−c t nσ2z .\nFinally by putting these results together and using union bounds we have P {∣∣D2`` − E (D2``)∣∣ ≥ δE (D2``)} ≤ 2e−nδ218 + 2e− dδ254 + 2e−c1nδ, 0 ≤ δ ≤ 32 ,\nfinishing the proof of Lemma A.6." }, { "heading": "B.2 PROOF OF LEMMA A.7", "text": "Our main tool for bounding the minimum singular value of the Jacobian mapping will be the following lemma from Soltanolkotabi (2019):\nLemma B.1 Let d ∈ Rk be a fixed vector with nonzero entries and D = diag (d) . Also, let A ∈ Rk×m have i.i.d. N (0, 1) entries and T ⊆ Rm. Define\nbk (d) = E [ ‖Dg‖`2 ] ,\nwhere g ∼ N (0, Ik) . Also define σ (T ) := max\nv∈T ‖v‖`2 .\nThen for all u ∈ T we have∣∣‖DAu‖`2 − bk (d) ‖u‖`2∣∣ ≤ ‖d‖`∞ ω (T ) + η with probability at least 1− 6e −η2 8‖d‖2 `∞ σ2(T ) .\nIn order to apply this lemma, we set the elements of d to be D`` as in equation 39 and choose T = Sm−1 andA = V T ∈ Rk×m with N (0, σ2v) entries. It follows that\nbk (d) = E ‖Dg‖`2 = √ E ( ‖Dg‖2`2 ) − Var ( ‖Dg‖`2 ) ,\nwhere\nE ( ‖Dg‖2`2 ) = ‖d‖2`2 = k∑ `=1 D2``.\nWe are going to use the fact that for a B-Lipschitz function φ and normal random variable g ∼ N (0, 1), based on the Poincare inequality (Ledoux, 2001) we have Var(φ (g)) ≤ B2. By noting that for a diagonal matrixD∣∣‖Dx‖`2 − ‖Dy‖`2 ∣∣ ≤ ‖Dx−Dy‖`2 ≤ ‖d‖`∞ ‖x− y‖`2 , we get\nE ‖Dg‖`2 = √ E ( ‖Dg‖2`2 ) − Var(‖Dg‖`2)\n≥ √ ‖d‖2`2 − ‖d‖ 2 `∞ .\nThis combined with ω ( Sm−1 ) ≤ √ m and Lemma B.1 yields that the event\nσmin (V D) ≥ σv (√ ‖d‖2`2 − ‖d‖ 2 `∞ − ‖d‖`∞ √ m− η ) (41)\nholds with probability at least 1− 3e −η2 8‖d‖2 `∞ . Next, using the concentration bound for D2``, which we obtained in Section B.1, we bound ‖d‖ 2 `2 and ‖d‖`∞ , where we have set di = Dii for 1 ≤ i ≤ k. For 0 ≤ δ ≤ 3 2 we compute that\nP (\nmax 1≤i≤k\ndi ≥ (1 + δ) √ E [d2i ] ) = P ( k⋃ i=1 d2i ≥ (1 + δ) 2 E [ d2i ])\n≤ k · P ( d2i ≥ (1 + δ) 2 E [ d2i ])\n≤ k · P ( d2i ≥ (1 + δ)E [ d2i ]) ≤ k · ( e− nδ2 18 + e− dδ2 54 + e−c1nδ ) ,\n(42) as well as\nP ( ‖d‖`2 ≤ (1− δ) √ k √ E [d2i ] ) ≤ P ( k⋃ i=1 d2i ≤ (1− δ) 2 E [ d2i ])\n≤ k · P ( d2i ≤ (1− δ) 2 E [ d2i ])\n≤ k · P ( d2i ≤ (1− δ)E [ d2i ]) ≤ k · ( e− nδ2 18 + e− dδ2 54 + e−c1nδ ) .\n(43)\nFinally by replacing η with η ‖d‖`∞ √ m in equation 41, combined with equation 42 and equation 43, for a randomW0 with i.i.d. 
N ( 0, σ2w ) entries we have:\nσmin (J (W0)) = 1\nn σmin (V D)\n≥ σv n\n(√ (1− δ)2 k − (1 + δ)2 − √ m (1 + η) (1 + δ) )√ E [d2i ]\n= (√ (1− δ)2 k − (1 + δ)2 − √ m (1 + η) (1 + δ) ) σvσz √ d+ n−1π\n2n , 0 ≤ δ ≤ 3 2 ,\nwith probability at least 1− 3e− η2m 8 − 2k · ( e− nδ2 18 + e− dδ2 54 + e−c1nδ )\n. This completes the proof of Lemma A.7." }, { "heading": "B.3 PROOF OF LEMMA A.8", "text": "Recall that\nJ (W )J (W )T = 1 n2 V diag `=1,...,k ∥∥∥∥∥ n∑ i=1 ziφ ′(wT` zi) ∥∥∥∥∥ 2\n`2\nV T = 1 n2 V ·D2 · V T ,\nwhich implies that\n‖J (W0)‖ = 1 n ‖V ·D‖ ≤ 1 n ‖V ‖ ‖D‖ .\nFor matrix V ∈ Rm×k with i.i.d N (0, σ2v) the event ‖V ‖ ≤ σv (√ k + 2 √ m )\nholds with probability at least 1−e−m2 . Regarding matrixD by repeating equation 42 the following event\n‖D‖ = max 1≤i≤k\nDii ≤ (1 + δ) √ E[D2ii] = (1 + δ)σz √ nd 2 + n(n− 1) 2π , 0 ≤ δ ≤ 3 2\nholds with probability at least 1 − k · ( e− nδ2 18 + e− dδ2 54 + e−c1nδ )\n. Putting these together it yields that the event\n‖J (W0)‖ ≤ (1 + δ)σvσz (√ k + 2 √ m )√d+ n−1π\n2n , 0 ≤ δ ≤ 3 2\nholds with probability at least 1 − e−m2 − k · ( e− nδ2 18 + e− dδ2 54 + e−c1nδ )\n, finishing the proof of Lemma A.8." }, { "heading": "B.4 PROOF OF LEMMA A.10", "text": "First, note that if W has i.i.d. N ( 0, σ2w ) entries and V ,W ,Z are all independent, then ‖f (W )‖`2 = 1 n\n∥∥V φ (WZT )1n×1∥∥`2 has the same distribution as ‖v‖`2 ‖a‖`2 , where v ∼ N ( 0, σ2vIm ) and a = 1nφ ( WZT ) 1 has independent sub-Gaussian entries, so its `2-norm is\nconcentrated. Note that conditioned on W , ai = 1n ∑n j=1ReLU ( zTj wi ) is sub-Gaussian with ‖ai‖ψ2 = C ‖wi‖`2σz√ n , and it is concentrated around Eai = 1√2π ‖wi‖`2 σz . This gives\nP {ai ≤ (1 + δ)Eai} ≥ 1− e −c δ\n2(Eai) 2 ‖ai‖2ψ2 = 1− e−cnδ 2 ,\nwhich implies that P { a2i ≤ (1 + 3δ)(Eai)2 } ≥ P { a2i ≤ (1 + δ)2(Eai)2 } ≥ 1− e−cnδ 2 , 0 ≤ δ ≤ 1.\nDue to the union bound we get that\nP { ‖a‖2`2 ≥ (1 + δ) k∑ i=1 (Eai)2 } ≤ P { k⋃ i=1 a2i ≥ (1 + δ) (Eai)2 }\n≤ k∑ i=1 P { a2i ≥ (1 + δ) (Eai)2 } ≤ k · e−cn(δ/3) 2 , 0 ≤ δ ≤ 3.\nBy substituting ∑k i=1(Eai)2 = 1 2πσ 2 z ‖W ‖ 2 F this shows\nP { ‖a‖`2 ≤ (1 + δ)\n1√ 2π σz ‖W ‖F\n} ≥ P { ‖a‖2`2 ≤ (1 + δ) 1 2π σ2z ‖W ‖ 2 F } ≥ 1− k · e−cn(δ/3) 2 , 0 ≤ δ ≤ 3.\nWe also have the following result for v ∼ N (0, σ2vIm) P { ‖v‖`2 ≤ (1 + δ)σv √ m } ≥ 1− e− δ 2m 2 . By combining the above results we obtain\nP { ‖a‖`2 ‖v‖`2 ≤ (1 + δ)\n1√ 2π σvσz √ m ‖W ‖F\n} ≥ P { ‖a‖`2 ‖v‖`2 ≤ (1 + δ/3)\n2 1√ 2π σvσz √ m ‖W ‖F } ≥ 1− k · e−cn(δ/9) 2 − e− (δ/3)2m\n2 , 0 ≤ δ ≤ 3. Furthermore, we can bound ‖W ‖F by the tail inequality\nP { ‖W ‖F ≤ (1 + δ)σw √ kd } ≥ 1− e− δ 2kd 2 .\nHence, by combining the last two results we have that P { ‖a‖`2 ‖v‖`2 ≤ (1 + δ)\n1√ 2π σvσzσw\n√ k · d ·m } ≥ P { ‖a‖`2 ‖v‖`2 ≤ (1 + δ/3)\n2 1√ 2π σvσzσw\n√ k · d ·m } ≥ 1− k · e−cn(δ/27) 2 − e− (δ/9)2m 2 − e− (δ/3)2kd\n2 , 0 ≤ δ ≤ 3. Therefore, due to the triangle inequality the event\n‖f(W0)− x̄‖`2 ≤ (1 + δ) 1√ 2π σvσwσz\n√ k · d ·m+ ‖x̄‖`2 , 0 ≤ δ ≤ 3\nholds with probability at least 1−k ·e−c2n(δ/27)2−e− (δ/9)2m 2 −e− (δ/3)2kd\n2 for some positive constant c2, completing the proof of Lemma A.10." }, { "heading": "C ADDITIONAL EXPERIMENTS", "text": "Effect of single component overparameterization: In Section 3 of the main paper, we performed experiments in the setting where the size of generator and discriminator are held roughly the same (both discriminator and generator uses the same value of k). 
In this part, we analyze single-component overparameterization, where we study the effect of overparameterization when one of the components (generator / discriminator) has varying k, while the other component uses the standard value of k (64 for DCGAN and 128 for Resnet GAN). The FID variation under single-component overparameterization is shown in Fig. 7. We observe similar trends as in the previous case, where increasing overparameterization leads to improved FID scores. Interestingly, increasing the value of k beyond the default value used in the other component leads to a slight drop in performance. Hence, choosing comparable sizes of discriminator and generator models is recommended." }, { "heading": "D EXPERIMENTAL DETAILS", "text": "The model architectures we use in the experiments are shown in Figure 8. In both DCGAN and Resnet-based GANs, the parameter k controls the number of convolutional filters in each layer. The larger the value of k is, the more overparameterized the models are.\nOptimization: Both DCGAN and Resnet-based GAN models are optimized using the commonly used hyper-parameters: Adam with learning rate 0.0001 and betas (0.5, 0.999) for DCGAN, and a gradient penalty of 10 with 5 critic iterations per generator iteration for both DCGAN and Resnet-based GAN models. Models are trained for 300,000 iterations with a batch size of 64." }, { "heading": "E NEAREST NEIGHBOR VISUALIZATION", "text": "In this section, we visualize the nearest neighbors of samples generated using GAN models trained with different levels of overparameterization. More specifically, we trained a DCGAN model with k = 8 and k = 128, synthesized random samples from the trained models, and queried the nearest neighbors in the training set. The plot of the obtained samples is shown in Figure 10. We observe that overparameterized models generate samples with high diversity." } ]
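For reference, the optimization recipe in Appendix D above can be summarized in the following hedged PyTorch-style sketch of a WGAN-GP training step (our reconstruction from the stated hyper-parameters only; the generator/discriminator modules, tensor shapes, and data pipeline are assumptions, and in practice each critic update would draw a fresh real batch):

import torch

# Hyper-parameters as stated above: Adam(1e-4, betas=(0.5, 0.999)),
# gradient penalty 10, 5 critic iterations per generator iteration.
LR, BETAS, GP_WEIGHT, N_CRITIC = 1e-4, (0.5, 0.999), 10.0, 5

def gradient_penalty(D, real, fake):
    # Penalize deviation of the critic's gradient norm from 1 on interpolates.
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x_hat = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    grad = torch.autograd.grad(D(x_hat).sum(), x_hat, create_graph=True)[0]
    return ((grad.flatten(1).norm(dim=1) - 1.0) ** 2).mean()

def train_step(G, D, opt_g, opt_d, real, z_dim):
    for _ in range(N_CRITIC):                     # 5 critic updates
        z = torch.randn(real.size(0), z_dim, device=real.device)
        fake = G(z).detach()
        loss_d = D(fake).mean() - D(real).mean() + GP_WEIGHT * gradient_penalty(D, real, fake)
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    z = torch.randn(real.size(0), z_dim, device=real.device)
    loss_g = -D(G(z)).mean()                      # generator update
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Optimizers would be built as torch.optim.Adam(G.parameters(), lr=LR, betas=BETAS),
# and likewise for D.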
2021
UNDERSTANDING OVERPARAMETERIZATION IN GENERATIVE ADVERSARIAL NETWORKS
SP:65e92cbe15e2f0237433a41149d1d68ded0cc51c
[ "In this paper, the authors provide a new interpretation of existing video compression models. Their perspective is that a video decoder is a stochastic temporal autoregressive model with latent variables. The introduced latent variables could be either used for providing more expressive power for 1) motion estimation&compensation modeling and 2) residual noise modeling, which are two key components of traditional video codecs. The proposed method shows favorable results when the bitrate is higher than 0.12 bits per pixel on the public benchmarks.", "In this paper, the authors focus on the problem of lossy video compression. To this end they propose the application of latent variable sequential generative models, specifically autoregressive flows to compress video streams. They evaluate variations of these models quantitatively including their own proposed version of scale space flow. They also introduce a new dataset named Youtube-NT and show promising quantitative performance." ]
Recent work by Marino et al. (2020) showed improved performance in sequential density estimation by combining masked autoregressive flows with hierarchical latent variable models. We draw a connection between such autoregressive generative models and the task of lossy video compression. Specifically, we view recent neural video compression methods (Lu et al., 2019; Yang et al., 2020b; Agustsson et al., 2020) as instances of a generalized stochastic temporal autoregressive transform, and propose avenues for enhancement based on this insight. Comprehensive evaluations on large-scale video data show improved rate-distortion performance over both state-of-the-art neural and conventional video compression methods.
[ { "affiliations": [], "name": "Ruihan Yang" }, { "affiliations": [], "name": "Yibo Yang" }, { "affiliations": [], "name": "Joseph Marino" }, { "affiliations": [], "name": "Stephan Mandt" } ]
[ { "authors": [ "Eirikur Agustsson", "David Minnen", "Nick Johnston", "Johannes Balle", "Sung Jin Hwang", "George Toderici" ], "title": "Scale-space flow for end-to-end optimized video compression", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Mohammad Babaeizadeh", "Chelsea Finn", "Dumitru Erhan", "Roy H Campbell", "Sergey Levine" ], "title": "Stochastic variational video prediction", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Johannes Ballé", "Valero Laparra", "Eero P Simoncelli" ], "title": "End-to-end optimized image compression", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Johannes Ballé", "David Minnen", "Saurabh Singh", "Sung Jin Hwang", "Nick Johnston" ], "title": "Variational image compression with a scale hyperprior", "venue": "International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Yoshua Bengio", "Nicholas Léonard", "Aaron Courville" ], "title": "Estimating or propagating gradients through stochastic neurons for conditional computation", "venue": "arXiv preprint arXiv:1308.3432,", "year": 2013 }, { "authors": [ "Joao Carreira", "Andrew Zisserman" ], "title": "Quo vadis, action recognition? a new model and the kinetics dataset", "venue": "In proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "T. Chen", "H. Liu", "Q. Shen", "T. Yue", "X. Cao", "Z. Ma" ], "title": "Deepcoder: A deep neural network based video compression", "venue": "IEEE Visual Communications and Image Processing (VCIP),", "year": 2017 }, { "authors": [ "Zhibo Chen", "Tianyu He", "Xin Jin", "Feng Wu" ], "title": "Learning for video compression", "venue": "IEEE Transactions on Circuits and Systems for Video Technology,", "year": 2019 }, { "authors": [ "Junyoung Chung", "Kyle Kastner", "Laurent Dinh", "Kratarth Goel", "Aaron C Courville", "Yoshua Bengio" ], "title": "A recurrent latent variable model for sequential data", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Cassius C Cutler" ], "title": "Differential quantization of communication", "venue": "signals, July", "year": 1952 }, { "authors": [ "Emily Denton", "Rob Fergus" ], "title": "Stochastic video generation with a learned prior", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Laurent Dinh", "David Krueger", "Yoshua Bengio" ], "title": "Nice: Non-linear independent components estimation", "venue": "arXiv preprint arXiv:1410.8516,", "year": 2014 }, { "authors": [ "Laurent Dinh", "Jascha Sohl-Dickstein", "Samy Bengio" ], "title": "Density estimation using real nvp", "venue": "arXiv preprint arXiv:1605.08803,", "year": 2016 }, { "authors": [ "A. Djelouah", "J. Campos", "S. Schaub-Meyer", "C. 
Schroers" ], "title": "Neural inter-frame compression for video coding", "venue": "IEEE/CVF International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Alexey Dosovitskiy", "Philipp Fischer", "Eddy Ilg", "Philip Hausser", "Caner Hazirbas", "Vladimir Golkov", "Patrick Van Der Smagt", "Daniel Cremers", "Thomas Brox" ], "title": "Flownet: Learning optical flow with convolutional networks", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "Gergely Flamich", "Marton Havasi", "José Miguel Hernández-Lobato" ], "title": "Compression without quantization", "venue": "In OpenReview,", "year": 2019 }, { "authors": [ "Adam Golinski", "Reza Pourreza", "Yang Yang", "Guillaume Sautiere", "Taco S Cohen" ], "title": "Feedback recurrent autoencoder for video compression", "venue": "arXiv preprint arXiv:2004.04342,", "year": 2020 }, { "authors": [ "Amirhossein Habibian", "Ties van Rozendaal", "Jakub M Tomczak", "Taco S Cohen" ], "title": "Video compression with rate-distortion autoencoders", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Jun Han", "Salvator Lombardo", "Christopher Schroers", "Stephan Mandt" ], "title": "Deep generative video compression", "venue": "Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Nick Johnston", "Damien Vincent", "David Minnen", "Michele Covell", "Saurabh Singh", "Troy Chinen", "Sung Jin Hwang", "Joel Shor", "George Toderici" ], "title": "Improved lossy image compression with priming and spatially adaptive bit rates for recurrent networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Diederik P. 
Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Durk P Kingma", "Prafulla Dhariwal" ], "title": "Glow: Generative flow with invertible 1x1 convolutions", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Durk P Kingma", "Tim Salimans", "Rafal Jozefowicz", "Xi Chen", "Ilya Sutskever", "Max Welling" ], "title": "Improved variational inference with inverse autoregressive flow", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Alex X Lee", "Richard Zhang", "Frederik Ebert", "Pieter Abbeel", "Chelsea Finn", "Sergey Levine" ], "title": "Stochastic adversarial video prediction", "venue": "arXiv preprint arXiv:1804.01523,", "year": 2018 }, { "authors": [ "Yingzhen Li", "Stephan Mandt" ], "title": "Disentangled sequential autoencoder", "venue": "In Jennifer Dy and Andreas Krause (eds.), Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Haojie Liu", "Han Shen", "Lichao Huang", "Ming Lu", "Tong Chen", "Zhan Ma" ], "title": "Learned video compression via joint spatial-temporal correlation exploration", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Guo Lu", "Wanli Ouyang", "Dong Xu", "Xiaoyun Zhang", "Chunlei Cai", "Zhiyong Gao" ], "title": "Dvc: An endto-end deep video compression framework", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Joseph Marino", "Lei Chen", "Jiawei He", "Stephan Mandt" ], "title": "Improving sequential latent variable models with autoregressive flows", "venue": "In Symposium on Advances in Approximate Bayesian Inference,", "year": 2020 }, { "authors": [ "Alexandre Mercat", "Marko Viitanen", "Jarno Vanne" ], "title": "Uvg dataset: 50/120fps 4k sequences for video codec analysis and development", "venue": "In Proceedings of the 11th ACM Multimedia Systems Conference,", "year": 2020 }, { "authors": [ "David Minnen", "Saurabh Singh" ], "title": "Channel-wise autoregressive entropy models for learned image compression", "venue": "IEEE International Conference on Image Processing (ICIP),", "year": 2020 }, { "authors": [ "David Minnen", "Johannes Ballé", "George D Toderici" ], "title": "Joint autoregressive and hierarchical priors for learned image compression", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "George Papamakarios", "Theo Pavlakou", "Iain Murray" ], "title": "Masked autoregressive flow for density estimation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed" ], "title": "Variational inference with normalizing flows", "venue": "In Proceedings of the 32nd International Conference on International Conference on Machine Learning-Volume", "year": 2015 }, { "authors": [ "Florian Schmidt", "Thomas Hofmann" ], "title": "Deep state space models for unconditional word generation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Florian Schmidt", "Stephan Mandt", "Thomas Hofmann" ], "title": "Autoregressive text 
generation beyond feedback loops", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),", "year": 2019 }, { "authors": [ "Gary J Sullivan", "Jens-Rainer Ohm", "Woo-Jin Han", "Thomas Wiegand" ], "title": "Overview of the high efficiency video coding (hevc) standard", "venue": "IEEE Transactions on circuits and systems for video technology,", "year": 2012 }, { "authors": [ "Lucas Theis", "Wenzhe Shi", "Andrew Cunningham", "Ferenc Huszár" ], "title": "Lossy image compression with compressive autoencoders", "venue": "International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "George Toderici", "Damien Vincent", "Nick Johnston", "Sung Jin Hwang", "David Minnen", "Joel Shor", "Michele Covell" ], "title": "Full resolution image compression with recurrent neural networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Carl Vondrick", "Hamed Pirsiavash", "Antonio Torralba" ], "title": "Generating videos with scene dynamics", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Haiqiang Wang", "Weihao Gan", "Sudeng Hu", "Joe Yuchieh Lin", "Lina Jin", "Longguang Song", "Ping Wang", "Ioannis Katsavounidis", "Anne Aaron", "C-C Jay Kuo" ], "title": "Mcl-jcv: a jnd-based h. 264/avc video quality assessment dataset", "venue": "IEEE International Conference on Image Processing (ICIP),", "year": 2016 }, { "authors": [ "Zhou Wang", "Eero P Simoncelli", "Alan C Bovik" ], "title": "Multiscale structural similarity for image quality assessment", "venue": "In The Thrity-Seventh Asilomar Conference on Signals, Systems & Computers,", "year": 2003 }, { "authors": [ "Thomas Wiegand", "Gary J Sullivan", "Gisle Bjontegaard", "Ajay Luthra" ], "title": "Overview of the h. 264/avc video coding standard", "venue": "IEEE Transactions on circuits and systems for video technology,", "year": 2003 }, { "authors": [ "Chao-Yuan Wu", "Nayan Singhal", "Philipp Krahenbuhl" ], "title": "Video compression through image interpolation", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Tianfan Xue", "Baian Chen", "Jiajun Wu", "Donglai Wei", "William T Freeman" ], "title": "Video enhancement", "venue": null, "year": 2021 }, { "authors": [ "Agustsson" ], "title": "2020) uses a simple implementation of scale-space flow by convolving the previous reconstructed frame x̂t−1 with a sequence of Gaussian kernel σ2", "venue": null, "year": 2020 }, { "authors": [ "Agustsson" ], "title": "Figure 7 shows the encoder-decoder flowchart for wt and vt separately, as well as their corresponding entropy models (priors), in the STAT-SSF-SP model", "venue": null, "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Recent advances in deep generative modeling have seen a surge of applications, including learningbased compression. Generative models have already demonstrated empirical improvements in image compression, outperforming classical codecs (Minnen et al., 2018; Yang et al., 2020d), such as BPG (Bellard, 2014). In contrast, the less developed area of neural video compression remains challenging due to complex temporal dependencies operating at multiple scales. Nevertheless, recent neural video codecs have shown promising performance gains (Agustsson et al., 2020), in some cases on par with current hand-designed, classical codecs, e.g., HEVC. Compared to hand-designed codecs, learnable codecs are not limited to specific data modality, and offer a promising approach for streaming specialized content, such as sports or video chats. Therefore, improving neural video compression is vital for dealing with the ever-growing amount of video content being created.\nSource compression fundamentally involves decorrelation, i.e., transforming input data into white noise distributions that can be easily modeled and entropy-coded. Thus, improving a model’s capability to decorrelate data automatically improves its compression performance. Likewise, we can improve the associated entropy model (i.e., the model’s prior) to capture any remaining dependencies. Just as compression techniques attempt to remove structure, generative models attempt to model structure. One family of models, autoregressive flows, maps between less structured distributions, e.g., uncorrelated noise, and more structured distributions, e.g., images or video (Dinh et al., 2014; 2016). The inverse mapping can remove dependencies in the data, making it more amenable for compression. Thus, a natural question to ask is how autoregressive flows can best be utilized in compression, and if mechanisms in existing compression schemes can be interpreted as flows.\nThis paper draws on recent insights in hierarchical sequential latent variable models with autoregressive flows (Marino et al., 2020). In particular, we identify connections between this family of models and recent neural video codecs based on motion estimation (Lu et al., 2019; Agustsson et al., 2020). By interpreting this technique as an instantiation of a more general autoregressive flow transform, we propose various alternatives and improvements based on insights from generative modeling.\nIn more detail, our main contributions are as follows:\n1. A new framework. We interpret existing video compression methods through the more general framework of generative modeling, variational inference, and autoregressive flows, allowing us to readily investigate extensions and ablations. In particular, we compare fully data-driven approaches with motion-estimation-based neural compression schemes, and\nconsider a more expressive prior model for better entropy coding (described in the second bullet point below). This framework also provides directions for future work.\n2. A new model. Following the predictive coding paradigm of video compression (Wiegand et al., 2003), Scale-Space Flow (SSF)(Agustsson et al., 2020) uses motion estimation to predict the frame being compressed, and further compresses the residual obtained by subtraction. Our proposed model extends the SSF model with a more flexible decoder and prior, and improves over the state of the art in rate-distortion performance. 
Specifically, we\n• Incorporate a learnable scaling transform to allow for more expressive and accurate reconstruction. Augmenting a shift transform by scale-then-shift is inspired by improvements from extending NICE (Dinh et al., 2014) to RealNVP (Dinh et al., 2016).\n• Introduce a structured prior over the two sets of latent variables in the generative model of SSF, corresponding to jointly encoding the motion information and residual information. As the two tend to be spatially correlated, encoding residual information conditioned on motion information results in a more informed prior, and thus better entropy model, for the residual information; this cuts down the bit-rate for the latter that typically dominates the overall bit-rate.\n3. A new dataset. The neural video compression community currently lacks large, highresolution benchmark datasets. While we extensively experimented on the publicly available Vimeo-90k dataset (Xue et al., 2019), we also collected and utilized a larger dataset, YouTube-NT1, available through executable scripts. Since no training data was publicly released for the previous state-of-the-art method (Agustsson et al., 2020), YouTube-NT would be a useful resource for making and comparing further progress in this field." }, { "heading": "2 RELATED WORK", "text": "We divide related work into three categories: neural image compression, neural video compression, and sequential generative models.\nNeural Image Compression. Considerable progress has been made by applying neural networks to image compression. Early works proposed by Toderici et al. (2017) and Johnston et al. (2018) leveraged LSTMs to model spatial correlations of the pixels within an image. Theis et al. (2017) first proposed an autoencoder architecture for image compression and used the straight-through estimator (Bengio et al., 2013) for learning a discrete latent representation. The connection to probabilistic generative models was drawn by Ballé et al. (2017), who firstly applied variational autoencoders (VAEs) (Kingma & Welling, 2013) to image compression. In subsequent work, Ballé et al. (2018) encoded images with a two-level VAE architecture involving a scale hyper-prior, which can be further improved by autoregressive structures (Minnen et al., 2018; Minnen & Singh, 2020) or by optimization at encoding time (Yang et al., 2020d). Yang et al. (2020e) and Flamich et al. (2019) demonstrated competitive image compression performance without a pre-defined quantization grid.\nNeural Video Compression. Compared to image compression, video compression is a significantly more challenging problem, as statistical redundancies exist not only within each video frame (exploited by intra-frame compression) but also along the temporal dimension. Early works by Wu et al. (2018); Djelouah et al. (2019) and Han et al. (2019) performed video compression by predicting future frames using a recurrent neural network, whereas Chen et al. (2019) and Chen et al. (2017) used convolutional architectures within a traditional block-based motion estimation approach. These early approaches did not outperform the traditional H.264 codec and barely surpassed the MPEG-2 codec. Lu et al. (2019) adopted a hybrid architecture that combined a pre-trained Flownet (Dosovitskiy et al., 2015) and residual compression, which leads to an elaborate training scheme. Habibian et al. (2019) and Liu et al. 
(2020) combined 3D convolutions for dimensionality reduction with expressive autoregressive priors for better entropy modeling, at the expense of parallelism and runtime efficiency. Our method extends a low-latency model proposed by Agustsson et al. (2020), which allows for end-to-end training, efficient online encoding and decoding, and parallelism.
1 https://github.com/privateyoung/Youtube-NT
Sequential Deep Generative Models. We drew inspiration from a body of work on sequential generative modeling. Early deep learning architectures for dynamics forecasting involved RNNs (Chung et al., 2015). Denton & Fergus (2018) and Babaeizadeh et al. (2018) used VAE-based stochastic models in conjunction with LSTMs to model dynamics. Li & Mandt (2018) introduced both local and global latent variables for learning disentangled representations in videos. Other video generation models used generative adversarial networks (GANs) (Vondrick et al., 2016; Lee et al., 2018) or autoregressive models and normalizing flows (Rezende & Mohamed, 2015; Dinh et al., 2014; 2016; Kingma & Dhariwal, 2018; Kingma et al., 2016; Papamakarios et al., 2017). Recently, Marino et al. (2020) proposed to combine latent variable models with autoregressive flows for modeling dynamics at different levels of abstraction, which inspired our models and viewpoints." }, { "heading": "3 VIDEO COMPRESSION THROUGH DEEP AUTOREGRESSIVE MODELING", "text": "We identify commonalities between hierarchical autoregressive flow models (Marino et al., 2020) and state-of-the-art neural video compression architectures (Agustsson et al., 2020), and will use this viewpoint to propose improvements on existing models." }, { "heading": "3.1 BACKGROUND", "text": "We first review VAE-based compression schemes (Ballé et al., 2017) and formulate existing low-latency video codecs in this framework; we then review the related autoregressive flow model.
Generative Modeling and Source Compression. Given a sequence of video frames x1:T, lossy compression seeks a compact description of x1:T that simultaneously minimizes the description length R and information loss D. The distortion D measures the reconstruction error caused by encoding x1:T into a lossy representation z̄1:T and subsequently decoding it back to x̂1:T, while R measures the bit rate (file size). In learned compression methods (Ballé et al., 2017; Theis et al., 2017), the above process is parameterized by flexible functions f (“encoder”) and g (“decoder”) that map between the video and its latent representation z̄1:T = f(x1:T). The goal is to minimize a rate-distortion loss, with the tradeoff between the two controlled by a hyperparameter β > 0:
L = D(x1:T, g(⌊z̄1:T⌉)) + βR(⌊z̄1:T⌉).
We adopt the end-to-end compression approach of Ballé et al. (2017), which approximates the rounding operations ⌊·⌉ (required for entropy coding) by uniform noise injection to enable gradient-based optimization during training. With an appropriate choice of probability model p(z1:T), the relaxed version of the above R-D (rate-distortion) objective then corresponds to the VAE objective:
L̃ = E_q(z1:T|x1:T)[− log p(x1:T|z1:T) − log p(z1:T)]. (1)
In this model, the likelihood p(x1:T|z1:T) follows a Gaussian distribution with mean x̂1:T = g(z1:T) and diagonal covariance β/(2 log 2) · I, and the approximate posterior q is chosen to be a unit-width uniform distribution (thus with zero differential entropy) whose mean z̄1:T is predicted by an amortized inference network f. 
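To make the relaxed objective concrete, here is a minimal PyTorch-style sketch of one training step; the encoder, decoder, and prior callables (and the idea that prior returns a per-element density interpolating the discrete entropy model) are illustrative assumptions rather than the paper's exact architecture.
```python
import torch

def relaxed_rd_loss(x, encoder, decoder, prior, beta):
    z_bar = encoder(x)
    # Uniform noise in [-0.5, 0.5] stands in for rounding during training,
    # so gradients can flow through the "quantizer".
    z = z_bar + torch.rand_like(z_bar) - 0.5
    x_hat = decoder(z)
    distortion = torch.mean((x - x_hat) ** 2)    # squared error (Gaussian likelihood)
    bits = -torch.log2(prior(z)).sum()           # code length of the noisy latents
    return distortion + beta * bits / x.numel()  # rate measured per pixel
```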
The prior density p(z1:T) interpolates its discretized version, so that it measures the code length of the discretized z̄1:T after entropy coding.
Low-Latency Sequential Compression. We specialize Eq. 1 to make it suitable for low-latency video compression, widely used in both conventional and recent neural codecs (Rippel et al., 2019; Agustsson et al., 2020). To this end, we encode and decode individual frames xt in sequence. Given the ground truth current frame xt and the previously reconstructed frames x̂<t, the encoder is restricted to be of the form z̄t = f(xt, x̂<t), and similarly the decoder computes the reconstruction sequentially based on previous reconstructions and the current encoding, x̂t = g(x̂<t, ⌊z̄t⌉). Existing codecs usually condition on a single reconstructed frame, substituting x̂<t by x̂t−1 in favor of efficiency. In the language of variational inference, the sequential encoder corresponds to a variational posterior of the form q(zt|xt, z<t), i.e., filtering, and the sequential decoder corresponds to the likelihood p(xt|z≤t) = N(x̂t, β/(2 log 2) · I); in both distributions, the probabilistic conditioning on z<t is based on the observation that x̂t−1 is a deterministic function of z<t, if we identify ⌊z̄t⌉ with the random variable zt and unroll the recurrence x̂t = g(x̂<t, zt). As we show, all sequential compression approaches considered in this work follow this paradigm, and the form of the reconstruction transform x̂ determines the lowest hierarchy of the corresponding generative process of the video x.
Masked Autoregressive Flow (MAF). As a final component in neural sequence modeling, we discuss MAF (Papamakarios et al., 2017), which models the joint distribution of a sequence p(x1:T) in terms of a simpler distribution of its underlying noise variables y1:T through the following autoregressive transform and its inverse:
xt = hµ(x<t) + hσ(x<t) ⊙ yt ⇔ yt = (xt − hµ(x<t)) / hσ(x<t). (2)
The noise variable yt usually comes from a standard normal distribution. While the forward MAF transforms a sequence of standard normal noises into a data sequence, the inverse flow “whitens” the data sequence and removes temporal correlations. Due to its invertible nature, MAF allows for exact likelihood computations, but as we will explain in Section 3.3, we will not exploit this aspect in compression but rather draw on its expressiveness in modeling conditional likelihoods." }, { "heading": "3.2 A GENERAL FRAMEWORK FOR GENERATIVE VIDEO CODING", "text": "We now describe a general framework that captures several existing low-latency neural compression methods as specific instances and gives rise to the exploration of new models. To this end, we combine latent variable models with autoregressive flows into a joint framework. We consider a sequential decoding procedure of the following form:
x̂t = hµ(x̂t−1, wt) + hσ(x̂t−1, wt) ⊙ gv(vt, wt). (3)
Eq. 3 resembles the definition of the MAF in Eq. 2, but augments this transform with two sets of latent variables wt, vt ∼ p(wt, vt). Above, hµ and hσ are functions that transform the previous reconstructed data frame x̂t−1 along with wt into a shift and scale parameter, respectively. The function gv(vt, wt) converts these latent variables into a noise variable that encodes residuals with respect to the mean next-frame prediction hµ(x̂t−1, wt).
This stochastic decoder model has several advantages over existing generative models for compression, such as simpler flows or sequential VAEs; before enumerating them, we sketch the transform below. 
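The module interfaces (hµ, hσ, gv as generic callables) in this PyTorch-style sketch are illustrative assumptions, not the paper's implementation:
```python
import torch.nn as nn

class StochasticARDecoder(nn.Module):
    """One decoding step of Eq. 3: x_hat_t = h_mu + h_sigma (*) g_v."""
    def __init__(self, h_mu, h_sigma, g_v):
        super().__init__()
        self.h_mu, self.h_sigma, self.g_v = h_mu, h_sigma, g_v

    def forward(self, x_prev, w, v):
        mu = self.h_mu(x_prev, w)        # next-frame prediction (e.g., via warping)
        sigma = self.h_sigma(x_prev, w)  # elementwise gating scale
        noise = self.g_v(v, w)           # decoded residual "noise"
        return mu + sigma * noise        # reconstructed frame x_hat_t
```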
First, the stochastic autoregressive transform hµ(x̂t−1, wt) involves a latent variable wt and is therefore more expressive than a deterministic transform (Schmidt & Hofmann, 2018; Schmidt et al., 2019). Second, compared to MAF, the additional nonlinear transform gv(vt, wt) enables more expressive residual noise, reducing the burden on the entropy model. Finally, as visualized in Figure 2, the scale parameter hσ(x̂t−1, wt) effectively acts as a gating mechanism, determining how much variance is explained in terms of the autoregressive transform and the residual noise distribution. This provides an added degree of flexibility, in a similar fashion to how RealNVP improves over NICE (Dinh et al., 2014; 2016).
Our approach is inspired by Marino et al. (2020), who analyzed a restricted version of the model in Eq. 3, aiming to hybridize autoregressive flows and sequential latent variable models for video prediction. In contrast to Eq. 3, their model involved deterministic transforms as well as residual noise that came from a sequential VAE." }, { "heading": "3.3 EXAMPLE MODELS AND EXTENSIONS", "text": "Next, we will show that the general framework expressed by Eq. 3 captures a variety of state-of-the-art neural video compression schemes and gives rise to extensions and new models.
Temporal Autoregressive Transform (TAT). The first special case among the class of models that are captured by Eq. 3 is the autoregressive neural video compression model by Yang et al. (2020b), which we refer to as the temporal autoregressive transform (TAT). Shown in Figure 1(a), the decoder g implements a deterministic scale-shift autoregressive transform of the decoded noise yt,
x̂t = g(zt, x̂t−1) = hµ(x̂t−1) + hσ(x̂t−1) ⊙ yt, yt = gz(zt). (4)
The encoder f inverts the transform to decorrelate the input frame xt into ȳt and encodes the result as z̄t = f(xt, x̂t−1) = fz(ȳt), where ȳt = (xt − hµ(x̂t−1)) / hσ(x̂t−1). The shift hµ and scale hσ transforms are parameterized by neural networks, fz is a convolutional neural network (CNN), and gz is a deconvolutional neural network (DNN) that approximately inverts fz.
The TAT decoder is a simple version of the more general stochastic autoregressive transform in Eq. 3, where hµ and hσ lack latent variables. Indeed, interpreting the probabilistic generative process of x̂, TAT implements the model proposed by Marino et al. (2020), as the transform from y to x̂ is a MAF. However, the generative process corresponding to compression (reviewed in Section 3.1) adds additional white noise to x̂, with x := x̂ + ε, ε ∼ N(0, β/(2 log 2) · I). Thus, the generative process from y to x is no longer an autoregressive flow. Regardless, TAT was shown to better capture the low-level dynamics of video frames than the autoencoder (fz, gz) alone, and the inverse transform decorrelates raw video frames to simplify the input to the encoder fz (Yang et al., 2020b).
DVC (Lu et al., 2019) and Scale-Space Flow (SSF, Agustsson et al. (2020)). The second class of models captured by Eq. 
3 belongs to the conventional video compression framework based on predictive coding (Cutler, 1952; Wiegand et al., 2003; Sullivan et al., 2012); both models make use of two sets of latent variables z1:T = {w1:T, v1:T} to capture different aspects of the information being compressed, where w captures estimated motion information used in the warping prediction, and v helps capture the residual error not predicted by warping.
Like most classical approaches to video compression by predictive coding, the reconstruction transform in the above models has the form of a prediction shifted by residual error (decoded noise), and lacks the scaling factor hσ compared to the autoregressive transform in Eq. 3:
x̂t = hwarp(x̂t−1, gw(wt)) + gv(vt, wt). (5)
Above, gw and gv are DNNs, ot := gw(wt) has the interpretation of an estimated optical flow (motion) field, hwarp is the computer vision technique of warping, and the residual rt := gv(vt, wt) = x̂t − hwarp(x̂t−1, ot) represents the prediction error unaccounted for by warping. Lu et al. (2019) only makes use of vt in the residual decoder gv, and performs simple 2D warping by bi-linear interpolation; SSF (Agustsson et al., 2020) augments the optical flow (motion) field ot with an additional scale field, and applies scale-space warping to progressively blurred versions of x̂t−1 to allow for uncertainty in the warping prediction. The encoding procedure in the above models computes the variational mean parameters as w̄t = fw(xt, x̂t−1), v̄t = fv(xt − hwarp(x̂t−1, gw(wt))), corresponding to a structured posterior q(zt|xt, z<t) = q(wt|xt, z<t) q(vt|wt, xt, z<t). We illustrate the above generative and inference procedures in Figure 1(b).
Proposed: models based on the Stochastic Temporal Autoregressive Transform. Finally, we consider the most general models as described by the stochastic autoregressive transform in Eq. 3, shown in Figure 1(c). We study two main variants, categorized by how they implement hµ and hσ:
STAT uses DNNs for hµ and hσ as in (Yang et al., 2020b), but complements them with the latent variable wt that characterizes the transform. In principle, more flexible transforms should give better compression performance, but we find the following variant more parameter-efficient in practice:
STAT-SSF: a less data-driven variant of the above that still uses scale-space warping (Agustsson et al., 2020) in the shift transform, i.e., hµ(x̂t−1, wt) = hwarp(x̂t−1, gw(wt)). This can also be seen as an extended version of the SSF model, whose shift transform hµ is complemented by a new learned scale transform hσ.
Structured Prior (SP). Besides improving the autoregressive transform (affecting the likelihood model for xt), one variant of our approach also improves the topmost generative hierarchy in the form of a more expressive latent prior p(z1:T), affecting the entropy model for compression. We observe that the motion information encoded in wt can often be informative of the residual error encoded in vt. In other words, large residual errors vt incurred by the mean prediction hµ(x̂t−1, wt) (e.g., the result of warping the previous frame x̂t−1) are often spatially collocated with (unpredictable) motion as encoded by wt. The original SSF model's prior factorizes as p(wt, vt) = p(wt)p(vt) and does not capture such correlation. We therefore propose a structured prior by introducing conditional dependence between wt and vt, so that p(wt, vt) = p(wt)p(vt|wt). 
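A minimal sketch of such a conditionally factorized prior; the conditioning network, layer sizes, and diagonal-Gaussian form below are illustrative assumptions.
```python
import math
import torch
import torch.nn as nn

class StructuredPrior(nn.Module):
    """p(w, v) = p(w) p(v|w): condition the residual prior on motion."""
    def __init__(self, w_channels, v_channels):
        super().__init__()
        # Hypothetical conditioning network mapping w to the mean and
        # log-scale of a diagonal Gaussian over v.
        self.cond = nn.Conv2d(w_channels, 2 * v_channels,
                              kernel_size=3, padding=1)

    def bits_for_v(self, v, w):
        mean, log_scale = self.cond(w).chunk(2, dim=1)
        dist = torch.distributions.Normal(mean, log_scale.exp())
        # Code length of v under the conditional entropy model, converted
        # from nats to bits by dividing by ln 2.
        return -dist.log_prob(v).sum() / math.log(2.0)
```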
At a high level, and as the sketch above illustrates, this can be implemented by introducing a new neural network that maps wt to the parameters of a parametric distribution p(vt|wt) (e.g., the mean and variance of a diagonal Gaussian). This results in variants of the above models, STAT-SP and STAT-SSF-SP, where the structured prior is applied on top of the proposed STAT and STAT-SSF models." }, { "heading": "4 EXPERIMENTS", "text": "In this section, we train our models both on an existing dataset and on our new YouTube-NT dataset. Our model also improves over state-of-the-art neural and classical video compression methods when evaluated on several publicly available benchmark datasets. The lower-level modules and training scheme for our models largely follow Agustsson et al. (2020); we provide detailed model diagrams and a schematic implementation, including the proposed scaling transform and structured prior, in Appendix A.4. We also implement a more computationally efficient version of scale-space warping (Agustsson et al., 2020) based on a Gaussian pyramid and interpolation (instead of the naive Gaussian blurring of Agustsson et al. (2020)); pseudocode is available in Appendix A.3." }, { "heading": "4.1 TRAINING DATASETS", "text": "Vimeo-90k (Xue et al., 2019) consists of 90,000 clips of 7 frames at 448x256 resolution collected from vimeo.com, and has been used in previous works (Lu et al., 2019; Yang et al., 2020a; Liu et al., 2020). While other publicly available video datasets exist, they typically have lower resolution and/or specialized content; e.g., Kinetics (Carreira & Zisserman, 2017) only contains human action videos, and previous methods that trained on Kinetics (Wu et al., 2018; Habibian et al., 2019; Golinski et al., 2020) generally report worse rate-distortion performance on diverse benchmarks (such as UVG, to be discussed below) compared to Agustsson et al. (2020), who trained on a significantly larger and higher-resolution dataset collected from youtube.com.
YouTube-NT. This is our new dataset. We collected 8,000 nature videos and movie/video-game trailers from youtube.com and processed them into 300k high-resolution (720p) clips, which we refer to as YouTube-NT. We release YouTube-NT in the form of customizable scripts to facilitate future compression research. Table 1 compares the current version of YouTube-NT with Vimeo-90k (Xue et al., 2019) and with Google's closed-access training dataset (Agustsson et al., 2020). Figure 5b shows the evaluation performance of the SSF model architecture after training on each dataset." }, { "heading": "4.2 TRAINING AND EVALUATION PROCEDURES", "text": "Training. All models are trained on three consecutive frames per clip (randomly selected and then randomly cropped to 256x256) with a batch size of 8. We train with an MSE loss, following a procedure similar to Agustsson et al. (2020) (see Appendix A.2 for details).
Evaluation. We evaluate compression performance on the widely used UVG (Mercat et al., 2020) and MCL-JCV (Wang et al., 2016) datasets, both consisting of raw videos in YUV420 format. UVG is widely used for testing the HEVC codec and contains seven 1080p videos at 120fps with smooth and mild motions or stable camera movements. MCL-JCV contains thirty 1080p videos at 30fps, which are generally more diverse, with a higher degree of motion and a more unstable camera.
We compute the bit rate (bits-per-pixel, BPP) and the reconstruction quality (measured in PSNR), averaged across all frames. 
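For reference, a small numpy sketch of how these two quantities are computed, assuming 8-bit RGB frames and a total bit count reported by the entropy coder:
```python
import numpy as np

def bits_per_pixel(total_bits, num_frames, height, width):
    # Rate: total encoded bits normalized by the number of pixels.
    return total_bits / (num_frames * height * width)

def psnr(x, x_hat, max_val=255.0):
    # Distortion: peak signal-to-noise ratio for 8-bit frames.
    mse = np.mean((x.astype(np.float64) - x_hat.astype(np.float64)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)
```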
We note that PSNR is a more challenging metric than MS-SSIM (Wang et al., 2003) for learned codecs (Lu et al., 2019; Agustsson et al., 2020; Habibian et al., 2019; Yang et al., 2020a;c). Since existing neural compression methods assume video input in RGB format (24 bits/pixel), we follow this convention in our evaluations for meaningful comparisons. We note that HEVC also has special support for YUV420 (12 bits/pixel), allowing it to exploit this more compact file format and effectively halve the input bitrate on our test videos (which were coded in YUV420 by default), giving it an advantage over all neural methods. Regardless, we report the performance of HEVC in YUV420 mode (in addition to the default RGB mode) for reference." }, { "heading": "4.3 BASELINE ANALYSIS", "text": "We trained our models on Vimeo-90k to compare with the published results of the baseline models listed in Table 2. Figure 3a compares our proposed models (STAT-SSF, STAT-SSF-SP) with the previous state-of-the-art classical codec HEVC and neural codecs on the UVG test dataset. Our STAT-SSF-SP model provides superior performance at bitrates ≥ 0.07 BPP, outperforming conventional HEVC even in its favored YUV420 mode and the state-of-the-art neural method SSF (Agustsson et al., 2020), as well as the established DVC (Lu et al., 2019) model, which leverages a more complicated model and a multi-stage training procedure. We also note that, as expected, our proposed STAT model improves over TAT (Yang et al., 2020b), with the latter lacking stochasticity in the autoregressive transform compared to our proposed STAT and its variants.
Figure 3b shows that the performance ranking on MCL-JCV is similar to that on UVG, despite MCL-JCV having more diverse and challenging (e.g., animated) content (Agustsson et al., 2020). We provide qualitative results in Figures 2 and 4, offering insight into the behavior of the proposed scaling transform and structured prior, as well as the visual quality of the top-performing methods." }, { "heading": "4.4 ABLATION ANALYSIS", "text": "Using the baseline SSF (Agustsson et al., 2020) model, we study the performance contribution of each of our proposed components, the stochastic temporal autoregressive transform (STAT) and the structured prior (SP), in isolation. We trained on YouTube-NT and evaluated on UVG. As shown in Figure 5a, STAT improves performance to a greater degree than SP, while SP alone does not provide a noticeable improvement over vanilla SSF (however, note that when combined with STAT, SP offers a higher improvement over STAT alone, as shown by STAT-SSF-SP vs. STAT-SSF in Figure 3a).
To quantify the effect of training data on performance, we compare the test performance (on UVG) of the SSF model trained on Vimeo-90k (Xue et al., 2019) and on YouTube-NT. We also provide the reported results from Agustsson et al. (2020), which trained on a larger (and unreleased) dataset. As seen from the R-D curves in Figure 5b, training on YouTube-NT improves rate-distortion performance over Vimeo-90k, in many cases bridging the gap with the performance from the larger closed-access training dataset of Agustsson et al. (2020). At higher bitrates, the model trained on Vimeo-90k (Xue et al., 2019) tends to have performance similar to YouTube-NT. This is likely because YouTube-NT currently only covers 8,000 videos, limiting the diversity of the short clips." 
}, { "heading": "5 DISCUSSION", "text": "We provide a unifying perspective on sequential video compression and temporal autoregressive flows (Marino et al., 2020), and elucidate the relationship between the two in terms of their underlying generative hierarchy. From this perspective, we consider several video compression methods, particularly a state-of-the-art method Scale-Space-Flow (Agustsson et al., 2020), as sequential variational autoencoders that implement a more general stochastic temporal autoregressive transform, which allows us to naturally extend the Scale-Space-Flow model and obtain improved ratedistortion performance on standard public benchmark datasets. Further, we provide scripts to generate a new high-resolution video dataset, YouTube-NT, which is substantially larger than current publicly-available datasets. Together, we hope that this new perspective and dataset will drive further progress in the nascent yet highly impactful field of learned video compression." }, { "heading": "6 ACKNOWLEDGEMENTS", "text": "We gratefully acknowledge extensive contributions from Yang Yang (Qualcomm), which were indispensable to this work. This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR001120C0021. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Defense Advanced Research Projects Agency (DARPA). Yibo Yang acknowledges funding from the Hasso Plattner Foundation. Furthermore, this work was supported by the National Science Foundation under Grants 1928718, 2003237 and 2007719, as well as Intel and Qualcomm." }, { "heading": "A APPENDIX", "text": "A.1 COMMAND FOR HEVC CODEC\nTo avoid FFmpeg package taking the advantage of the input file color format (YUV420), we first need to dump the video.yuv file to a sequence of lossless png files:\nffmpeg -i video.yuv -vsync 0 video/%d.png\nThen we use the default low-latency setting in ffmpeg to compress the dumped png sequences:\nffmpeg -y -i video/%d.png -c:v libx265 -preset medium \\ -x265-params bframes=0 -crf {crf} video.mkv\nwhere crf is the parameter for quality control. The compressed video is encoded by HEVC with RGB color space.\nTo get the result of HEVC (YUV420), we directly execute:\nffmpeg -pix_fmt yuv420p -s 1920x1080 -i video.yuv \\ -c:v libx265 -crf {crf} -x265-params bframes=0 video.mkv\nA.2 TRAINING SCHEDULE\nTraining time is about four days on an NVIDIA Titan RTX. Similar to Agustsson et al. (2020), we use the Adam optimizer (Kingma & Ba, 2015), training the models for 1,050,000 steps. The initial learning rate of 1e-4 is decayed to 1e-5 after 900,000 steps, and we increase the crop size to 384x384 for the last 50,000 steps. All models are optimized using MSE loss.\nA.3 EFFICIENT SCALE-SPACE-FLOW IMPLEMENTATION\nAgustsson et al. (2020) uses a simple implementation of scale-space flow by convolving the previous reconstructed frame x̂t−1 with a sequence of Gaussian kernel σ2 = {0, σ20 , (2σ0)2, (4σ0)2, (8σ0)2, (16σ0)2}. However, this leads to a large kernel size when σ is large, which can be computationally expensive. For example, a Gaussian kernel with σ2 = 256 usually requires kernel size 97x97 to avoid artifact (usually kernel size = (6 ∗ σ + 1)2). To alleviate the problem, we leverage an efficient version of Gaussian scale-space by using Gaussian pyramid with upsampling. 
In our implementation, we use σ^2 ∈ {0, σ0^2, σ0^2 + (2σ0)^2, σ0^2 + (2σ0)^2 + (4σ0)^2, σ0^2 + (2σ0)^2 + (4σ0)^2 + (8σ0)^2, σ0^2 + (2σ0)^2 + (4σ0)^2 + (8σ0)^2 + (16σ0)^2}, because with a Gaussian pyramid we can always use a Gaussian kernel with σ = σ0 to consecutively blur and downsample the image to build the pyramid. At the final step, we only need to upsample all the downsampled images to the original size to approximate a scale-space 3D tensor. The detailed algorithm is described in Algorithm 1.
Algorithm 1: An efficient algorithm to build a scale-space 3D tensor
Result: ssv: scale-space 3D tensor
Input: input image; σ0 base scale; M scale depth
ssv = [input]
kernel = Create_Gaussian_Kernel(σ0)
for i = 0 to M−1 do
    input = GaussianBlur(input, kernel)
    if i == 0 then
        ssv.append(input)
    else
        tmp = input
        for j = 0 to i−1 do
            tmp = UpSample2x(tmp)  {step upsampling for smooth interpolation}
        end
        ssv.append(tmp)
    end
    input = DownSample2x(input)
end
return Concat(ssv)
A.4 LOWER-LEVEL ARCHITECTURE DIAGRAMS
Figure 6 illustrates the low-level encoder, decoder and hyper-en/decoder modules used in our proposed STAT-SSF and STAT-SSF-SP models, as well as in the baseline TAT and SSF models, based on Agustsson et al. (2020). Figure 7 shows the encoder-decoder flowchart for wt and vt separately, as well as their corresponding entropy models (priors), in the STAT-SSF-SP model." } ]
2021
null
SP:98a52d7970d0d39f8e14f6b5679f8383a3f0e8b1
[ "The paper proposes a method to calibrate the underlying distribution of a few samples in the few-shot classification scenario. The idea is to estimate a feature distribution of a few samples of a novel class from base class distributions. The authors assume that every dimension in the feature vector follows a Gaussian distribution. Based on the observation that the mean and variance of the distribution with respect to each class are correlated to the semantic similarity of each class, base class distribution can be transferred to the novel class distribution. After distribution calibration, features can be generated from the calibrated distribution and the generated features are used to train classifiers. SVM and logistic regression classifier are used to verify the approach on the mini-imagenet and CUB datasets.", "This paper identifies the problem of biased distributions in few-shot learning and proposes to fix it. In few-shot learning, only a few samples per class are available; this makes estimating the class distribution difficult. The paper proposes a distribution calibration algorithm that makes use of the meta-train class distributions to calibrate the few-shot class distributions. Once calibrated, more samples are drawn from this distribution to learn a classifier that generalizes better. This approach does not require additional learnable parameters and can be (potentially) built on-top of any pre-trained feature extractor. Empirical results show that this approach achieves state-of-the-art results on Mini-ImageNet and CUB." ]
Learning from a limited number of samples is challenging since the learned model can easily become overfitted based on the biased distribution formed by only a few training examples. In this paper, we calibrate the distribution of these few-sample classes by transferring statistics from the classes with sufficient examples. Then an adequate number of examples can be sampled from the calibrated distribution to expand the inputs to the classifier. We assume every dimension in the feature representation follows a Gaussian distribution so that the mean and the variance of the distribution can be borrowed from those of similar classes whose statistics are better estimated with an adequate number of samples. Our method can be built on top of off-the-shelf pretrained feature extractors and classification models without extra parameters. We show that a simple logistic regression classifier trained using the features sampled from our calibrated distribution can outperform the state-of-the-art accuracy on three datasets (5% improvement on miniImageNet compared to the next best). The visualization of these generated features demonstrates that our calibrated distribution is an accurate estimation. The code is available at: https://github.com/ShuoYang-1998/Few_Shot_Distribution_Calibration
[ { "affiliations": [], "name": "Shuo Yang" }, { "affiliations": [], "name": "Lu Liu" }, { "affiliations": [], "name": "Min Xu" } ]
[ { "authors": [ "Antreas Antoniou", "Amos J. Storkey" ], "title": "Assume, augment and learn: Unsupervised few-shot metalearning via random labels and data augmentation", "venue": null, "year": 2019 }, { "authors": [ "Wei-Yu Chen", "Yen-Cheng Liu", "Zsolt Kira", "Yu-Chiang Frank Wang", "Jia-Bin Huang" ], "title": "A closer look at few-shot classification", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Zitian Chen", "Yanwei Fu", "Yinda Zhang", "Yu-Gang Jiang", "Xiangyang Xue", "Leonid Sigal" ], "title": "Multilevel semantic feature augmentation for one-shot learning", "venue": null, "year": 2019 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Hang Gao", "Zheng Shou", "Alireza Zareian", "Hanwang Zhang", "Shih-Fu Chang" ], "title": "Low-shot learning via covariance-preserving adversarial augmentation networks", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "NeurIPS,", "year": 2014 }, { "authors": [ "Bharath Hariharan", "Ross Girshick" ], "title": "Low-shot visual recognition by shrinking and hallucinating features", "venue": "In ICCV,", "year": 2017 }, { "authors": [ "Zhenguo Li", "Fengwei Zhou", "Fei Chen", "Hang Li" ], "title": "Meta-sgd: Learning to learn quickly for few shot learning", "venue": null, "year": 2017 }, { "authors": [ "Bin Liu", "Yue Cao", "Yutong Lin", "Qi Li", "Zheng Zhang", "Mingsheng Long", "Han Hu" ], "title": "Negative margin matters: Understanding margin in few-shot classification", "venue": "In ECCV,", "year": 2020 }, { "authors": [ "Jialun Liu", "Yifan Sun", "Chuchu Han", "Zhaopeng Dou", "Wenhui Li" ], "title": "Deep representation learning on long-tailed data: A learnable embedding augmentation perspective", "venue": null, "year": 2020 }, { "authors": [ "Lu Liu", "Tianyi Zhou", "Guodong Long", "Jing Jiang", "Lina Yao", "Chengqi Zhang" ], "title": "Prototype propagation networks (PPN) for weakly-supervised few-shot learning on category graph", "venue": "In IJCAI,", "year": 2019 }, { "authors": [ "Lu Liu", "Tianyi Zhou", "Guodong Long", "Jing Jiang", "Chengqi Zhang" ], "title": "Learning to propagate for graph meta-learning", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Yaoyao Liu", "Bernt Schiele", "Qianru Sun" ], "title": "An ensemble of epoch-wise empirical bayes for fewshot learning", "venue": "In ECCV,", "year": 2020 }, { "authors": [ "Puneet Mangla", "Nupur Kumari", "Abhishek Sinha", "Mayank Singh", "Balaji Krishnamurthy", "Vineeth N Balasubramanian" ], "title": "Charting the right manifold: Manifold mixup for few-shot learning", "venue": "In WACV,", "year": 2020 }, { "authors": [ "Seong-Jin Park", "Seungju Han", "Ji-won Baek", "Insoo Kim", "Juhwan Song", "Hae Beom Lee", "Jae-Joon Han", "Sung Ju Hwang" ], "title": "Meta variance transfer: Learning to augment from the others", "venue": "In ICML,", "year": 2020 }, { "authors": [ "F. Pedregosa", "G. Varoquaux", "A. Gramfort", "V. Michel", "B. Thirion", "O. Grisel", "M. Blondel", "P. Prettenhofer", "R. Weiss", "V. Dubourg", "J. Vanderplas", "A. Passos", "D. Cournapeau", "M. Brucher", "M. Perrot", "E. 
Duchesnay" ], "title": "Scikit-learn: Machine learning in Python", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Tiexin Qin", "Wenbin Li", "Yinghuan Shi", "Yang Gao" ], "title": "Diversity helps: Unsupervised few-shot learning via distribution shift-based data augmentation, 2020", "venue": null, "year": 2020 }, { "authors": [ "Sachin Ravi", "Hugo Larochelle" ], "title": "Optimization as a model for few-shot learning", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Mengye Ren", "Eleni Triantafillou", "Sachin Ravi", "Jake Snell", "Kevin Swersky", "Joshua B Tenenbaum", "Hugo Larochelle", "Richard S Zemel" ], "title": "Meta-learning for semi-supervised few-shot classification", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "David E. Rumelhart", "Geoffrey E. Hinton", "Ronald J. Williams" ], "title": "Learning Representations by Back-propagating Errors", "venue": "Nature, 323:533–536,", "year": 1986 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael S. Bernstein", "Alexander C. Berg", "Fei-Fei Li" ], "title": "Imagenet large scale visual recognition challenge", "venue": "CoRR, abs/1409.0575,", "year": 2014 }, { "authors": [ "Andrei A. Rusu", "Dushyant Rao", "Jakub Sygnowski", "Oriol Vinyals", "Razvan Pascanu", "Simon Osindero", "Raia Hadsell" ], "title": "Meta-learning with latent embedding optimization", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Ruslan Salakhutdinov", "Joshua Tenenbaum", "Antonio Torralba" ], "title": "One-shot learning with a hierarchical nonparametric bayesian model", "venue": "In ICML workshop,", "year": 2012 }, { "authors": [ "Eli Schwartz", "Leonid Karlinsky", "Joseph Shtok", "Sivan Harary", "Mattias Marder", "Abhishek Kumar", "Rogerio Feris", "Raja Giryes", "Alex Bronstein" ], "title": "Delta-encoder: an effective sample synthesis method for few-shot object recognition", "venue": "NeurIPS,", "year": 2018 }, { "authors": [ "Jake Snell", "Kevin Swersky", "Richard S. Zemel" ], "title": "Prototypical networks for few-shot learning", "venue": "In NeurIPS,", "year": 2017 }, { "authors": [ "John W Tukey" ], "title": "Exploratory data analysis. Addison-Wesley Series in Behavioral Science", "venue": null, "year": 1977 }, { "authors": [ "Laurens van der Maaten", "Geoffrey Hinton" ], "title": "Visualizing data using t-SNE", "venue": "Journal of Machine Learning Research,", "year": 2008 }, { "authors": [ "Oriol Vinyals", "Charles Blundell", "Tim Lillicrap", "Koray Kavukcuoglu", "Daan Wierstra" ], "title": "Matching networks for one shot learning", "venue": "In NeurIPS,", "year": 2016 }, { "authors": [ "Yu-Xiong Wang", "Ross Girshick", "Martial Hebert", "Bharath Hariharan" ], "title": "Low-shot learning from imaginary data", "venue": null, "year": 2018 }, { "authors": [ "Yulin Wang", "Xuran Pan", "Shiji Song", "Hong Zhang", "Gao Huang", "Cheng Wu" ], "title": "Implicit semantic data augmentation for deep networks", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "P. Welinder", "S. Branson", "T. Mita", "C. Wah", "F. Schroff", "S. Belongie", "P. 
Perona" ], "title": "Caltech-UCSD Birds 200", "venue": "Technical Report CNS-TR-2010-001, California Institute of Technology,", "year": 2010 }, { "authors": [ "Yongqin Xian", "Tobias Lorenz", "Bernt Schiele", "Zeynep Akata" ], "title": "Feature generating networks for zero-shot learning", "venue": null, "year": 2018 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "In BMVC,", "year": 2016 }, { "authors": [ "Chi Zhang", "Yujun Cai", "Guosheng Lin", "Chunhua Shen" ], "title": "Deepemd: Few-shot image classification with differentiable earth mover’s distance and structured classifiers", "venue": null, "year": 2020 }, { "authors": [ "Jian Zhang", "Chenglong Zhao", "Bingbing Ni", "Minghao Xu", "Xiaokang Yang" ], "title": "Variational few-shot learning", "venue": "In ICCV,", "year": 2019 }, { "authors": [ "Ruixiang Zhang", "Tong Che", "Zoubin Ghahramani", "Yoshua Bengio", "Yangqiu Song" ], "title": "Metagan: An adversarial approach to few-shot learning", "venue": "In NeurIPS,", "year": 2018 } ]
[ { "heading": null, "text": "1 INTRODUCTION\nTable 1: The class mean similarity (“mean sim”) and class variance similarity (“var sim”) between Arctic fox and different classes.\nArctic fox mean sim var sim\nwhite wolf 97% 97% malamute 85% 78%\nlion 81% 70% meerkat 78% 70% jellyfish 46% 26% orange 40% 19% beer bottle 34% 11%\nLearning from a limited number of training samples has drawn increasing attention due to the high cost of collecting and annotating a large amount of data. Researchers have developed algorithms to improve the performance of models that have been trained with very few data. Finn et al. (2017); Snell et al. (2017) train models in a meta-learning fashion so that the model can adapt quickly on tasks with only a few training samples available. Hariharan & Girshick (2017); Wang et al. (2018) try to synthesize data or features by learning a generative model to alleviate the data insufficiency problem. Ren et al. (2018) propose to leverage unlabeled data and predict pseudo labels to improve the performance of fewshot learning.\nWhile most previous works focus on developing stronger models, scant attention has been paid to the property of the data itself. It is natural that when the number of data grows, the ground truth distribution can be more accurately uncovered. Models trained with a wide coverage of data can generalize well during evaluation. On the other hand, when training a model with only a few training data, the model tends to overfit on these few samples by minimizing the training loss over these samples. These phenomena are illustrated in Figure 1. This biased distribution based on a few examples can damage the generalization ability of the model since it is far from mirroring the ground truth distribution from which test cases are sampled during evaluation.\n∗Corresponding author.\nHere, we consider calibrating this biased distribution into a more accurate approximation of the ground truth distribution. In this way, a model trained with inputs sampled from the calibrated distribution can generalize over a broader range of data from a more accurate distribution rather than only fitting itself to those few samples. Instead of calibrating the distribution of the original data space, we try to calibrate the distribution in the feature space, which has much lower dimensions and is easier to calibrate (Xian et al. (2018)). We assume every dimension in the feature vectors follows a Gaussian distribution and observe that similar classes usually have similar mean and variance of the feature representations, as shown in Table 1. Thus, the mean and variance of the Gaussian distribution can be transferred across similar classes (Salakhutdinov et al. (2012)). Meanwhile, the statistics can be estimated more accurately when there are adequate samples for this class. Based on these observations, we reuse the statistics from many-shot classes and transfer them to better estimate the distribution of the few-shot classes according to their class similarity. More samples can be generated according to the estimated distribution which provides sufficient supervision for training the classification model.\nIn the experiments, we show that a simple logistic regression classifier trained with our strategy can achieve state-of-the-art accuracy on three datasets. Our distribution calibration strategy can be paired with any classifier and feature extractor with no extra learnable parameters. 
Training with samples selected from the calibrated distribution can achieve a 12% accuracy gain compared to the baseline which is only trained with the few samples given in a 5way1shot task. We also visualize the calibrated distribution and show that it is an accurate approximation of the ground truth that can better cover the test cases." }, { "heading": "2 RELATED WORKS", "text": "Few-shot classification is a challenging machine learning problem, and researchers have explored the idea of learning to learn or meta-learning to improve quick adaptation and alleviate the few-shot challenge. One of the most general families of meta-learning methods is the optimization-based algorithms. Finn et al. (2017) and Li et al. (2017) proposed to learn how to optimize the gradient descent procedure so that the learner can have a good initialization, update direction, and learning rate. For the classification problem, researchers proposed simple but effective algorithms based on metric learning. MatchingNet (Vinyals et al., 2016) and ProtoNet (Snell et al., 2017) learned to classify samples by comparing the distance to the representatives of each class. Our distribution calibration and feature sampling procedure does not include any learnable parameters, and the classifier is trained in a traditional supervised learning way.
Another line of algorithms compensates for the insufficient number of available samples by generation. Most methods use the idea of Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) or autoencoders (Rumelhart et al., 1986) to generate samples (Zhang et al. (2018); Chen et al. (2019b); Schwartz et al. (2018); Gao et al. (2018)) or features (Xian et al. (2018); Zhang et al. (2019)) to augment the training set. Specifically, Zhang et al. (2018) and Xian et al. (2018) proposed to synthesize data by introducing an adversarial generator conditioned on tasks. Zhang et al. (2019) tried to learn a variational autoencoder to approximate the distribution and predict labels based on the estimated statistics. The autoencoder can also augment samples by projecting between the visual space and the semantic space (Chen et al., 2019b) or by encoding the intra-class deformations (Schwartz et al., 2018). Liu et al. (2019b) and Liu et al. (2019a) propose to generate features through the class hierarchy. While these methods can generate extra samples or features for training, they require the design of a complex model and loss function to learn how to generate. In contrast, our distribution calibration strategy is simple and does not need extra learnable parameters.
Data augmentation is a traditional and effective way of increasing the number of training samples. Qin et al. (2020) and Antoniou & Storkey (2019) proposed the use of traditional data augmentation techniques to construct pretext tasks for unsupervised few-shot learning. Wang et al. (2018) and Hariharan & Girshick (2017) leveraged the general idea of data augmentation; they designed a hallucination model to generate an augmented version of the image with different choices for the model's input, i.e., an image and noise (Wang et al., 2018) or the concatenation of multiple features (Hariharan & Girshick, 2017). Park et al. (2020); Wang et al. (2019); Liu et al. (2020b) tried to augment feature representations by leveraging intra-class variance. 
These methods learn to augment from the original samples or their feature representations, while we try to estimate the class-level distribution and thus can eliminate the inductive bias from a single sample and provide more diverse generations from the calibrated distribution." }, { "heading": "3 MAIN APPROACH", "text": "In this section, we introduce the few-shot classification problem definition in Section 3.1 and the details of our proposed approach in Section 3.2." }, { "heading": "3.1 PROBLEM DEFINITION", "text": "We follow a typical few-shot classification setting. We are given a dataset with data-label pairs D = {(xi, yi)}, where xi ∈ R^d is the feature vector of a sample and yi ∈ C, where C denotes the set of classes. This set of classes is divided into base classes Cb and novel classes Cn, where Cb ∩ Cn = ∅ and Cb ∪ Cn = C. The goal is to train a model on the data from the base classes so that the model can generalize well on tasks sampled from the novel classes. In order to evaluate the fast adaptation ability or the generalization ability of the model, there are only a few available labeled samples for each task T. The most common way to build a task is called an N-way-K-shot task (Vinyals et al. (2016)), where N classes are sampled from the novel set and only K (e.g., 1 or 5) labeled samples are provided for each class. The few available labeled data are called the support set S = {(xi, yi)}_{i=1}^{N×K}, and the model is evaluated on another query set Q = {(xi, yi)}_{i=N×K+1}^{N×K+N×q}, where every class in the task has q test cases. Thus, the performance of a model is evaluated as the averaged accuracy on (the query set of) multiple tasks sampled from the novel classes." }, { "heading": "3.2 DISTRIBUTION CALIBRATION", "text": "As introduced in Section 3.1, the base classes have a sufficient amount of data, while the evaluation tasks sampled from the novel classes only have a limited number of labeled samples. The statistics of the distribution for a base class can be estimated more accurately, compared to the estimation based on few-shot samples, which is an ill-posed problem. As shown in Table 1, we observe that if we assume the feature distribution is Gaussian, the mean and variance with respect to each class are correlated to the semantic similarity of each class. With this in mind, the statistics can be transferred from the base classes to the novel classes if we learn how similar the two classes are. In the following sections, we discuss how we calibrate the distribution estimation of the classes with only a few samples (Section 3.2.2) with the help of the statistics of the base classes (Section 3.2.1). We will also elaborate on how we leverage the calibrated distribution to improve the performance of few-shot learning (Section 3.2.3).
Note that our distribution calibration strategy operates at the feature level and is agnostic to any feature extractor. Thus, it can be built on top of any pretrained feature extractor without further costly fine-tuning. In our experiments, we use the pretrained WideResNet (Zagoruyko & Komodakis, 2016) following previous work (Mangla et al. (2020)). The WideResNet is trained to classify the base classes, along with a self-supervised pretext task, to learn general-purpose representations suitable for image understanding tasks. 
Please refer to their paper for more details on training the feature extractor.
Algorithm 1: Training procedure for an N-way-K-shot task
Require: Support set features S = {(xi, yi)}_{i=1}^{N×K}
Require: Base classes' statistics {µi}_{i=1}^{|Cb|}, {Σi}_{i=1}^{|Cb|}
1: Transform {xi}_{i=1}^{N×K} with Tukey's Ladder of Powers as in Equation 3
2: for (xi, yi) ∈ S do
3:     Calibrate the mean µ′ and the covariance Σ′ for class yi using xi with Equation 6
4:     Sample features for class yi from the calibrated distribution as in Equation 7
5: end for
6: Train a classifier using both the support set features and all sampled features as in Equation 8" }, { "heading": "3.2.1 STATISTICS OF THE BASE CLASSES", "text": "We assume the feature distribution of the base classes is Gaussian. The mean of the feature vectors from a base class i is calculated as the mean of every single dimension of the vectors:
µi = (Σ_{j=1}^{ni} xj) / ni,   (1)
where xj is the feature vector of the j-th sample from the base class i and ni is the total number of samples in class i. As the feature vector xj is multi-dimensional, we use the covariance for a better representation of the variance between any pair of elements in the feature vector. The covariance matrix Σi for the features from class i is calculated as:
Σi = (1 / (ni − 1)) Σ_{j=1}^{ni} (xj − µi)(xj − µi)^T.   (2)" }, { "heading": "3.2.2 CALIBRATING STATISTICS OF THE NOVEL CLASSES", "text": "Here, we consider an N-way-K-shot task sampled from the novel classes.
Tukey's Ladder of Powers Transformation
To make the feature distribution more Gaussian-like, we first transform the features of the support set and query set in the target task using Tukey's Ladder of Powers transformation (Tukey (1977)). Tukey's Ladder of Powers transformation is a family of power transformations which can reduce the skewness of distributions and make them more Gaussian-like. It is formulated as:
x̃ = x^λ if λ ≠ 0;  x̃ = log(x) if λ = 0,   (3)
where λ is a hyper-parameter that adjusts how the distribution is corrected. The original features can be recovered by setting λ to 1. Decreasing λ makes the distribution less positively skewed, and vice versa.
Calibration through statistics transfer
Using the statistics from the base classes introduced in Section 3.2.1, we transfer the statistics from the base classes, which are estimated more accurately on sufficient data, to the novel classes. The transfer is based on the Euclidean distance between the feature space of the novel classes and the means of the features from the base classes µi as computed in Equation 1. Specifically, we select the top k base classes with the closest distance to a feature x̃ from the support set:
Sd = {−‖µi − x̃‖² | i ∈ Cb},   (4)
SN = {i | −‖µi − x̃‖² ∈ topk(Sd)},   (5)
where topk(·) is an operator that selects the top elements from the input distance set Sd, and SN stores the k nearest base classes with respect to the feature vector x̃. Then, the mean and covariance of the distribution are calibrated using the statistics from the nearest base classes:
µ′ = (Σ_{i∈SN} µi + x̃) / (k + 1),   Σ′ = (Σ_{i∈SN} Σi) / k + α,   (6)
where α is a hyper-parameter that determines the degree of dispersion of features sampled from the calibrated distribution.
For few-shot learning with more than one shot, the aforementioned distribution calibration procedure should be undertaken multiple times, each time using one feature vector from the support set. 
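As a compact illustration of Equations 3–8, here is a numpy/scikit-learn sketch of the full pipeline for one 5way1shot task; scikit-learn is the classifier implementation used in the experiments (Section 4.1.3), while the array names (support_x, support_y, base_means, base_covs) and the pre-extracted features they hold are assumptions of this sketch.
```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def calibrate(x_tilde, base_means, base_covs, k=2, alpha=0.21):
    # Eq. 4-5: retrieve the k base classes closest to the support feature.
    dists = np.linalg.norm(base_means - x_tilde, axis=1)
    nearest = np.argsort(dists)[:k]
    # Eq. 6: transfer the base statistics to the novel class.
    mu = (base_means[nearest].sum(axis=0) + x_tilde) / (k + 1)
    cov = base_covs[nearest].sum(axis=0) / k + alpha  # alpha added elementwise
    return mu, cov

# One 5way1shot task: support_x has shape (5, d), support_y has shape (5,).
support_x = np.power(support_x, 0.5)  # Eq. 3 with lambda = 0.5 (features non-negative)
feats, labels = [support_x], [support_y]
for x, y in zip(support_x, support_y):
    mu, cov = calibrate(x, base_means, base_covs)
    gen = np.random.multivariate_normal(mu, cov, size=750)  # Eq. 7
    feats.append(gen)
    labels.append(np.full(750, y))
# Eq. 8: cross-entropy training of a task-specific classifier.
clf = LogisticRegression(max_iter=1000).fit(np.vstack(feats),
                                            np.concatenate(labels))
```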
Running the calibration once per support feature avoids the bias introduced by any one specific sample and potentially achieves a more diverse and accurate distribution estimation. Thus, for simplicity, we denote the calibrated distribution as a set of statistics. For a class y ∈ Cn, we denote the set of statistics as Sy = {(µ′1, Σ′1), ..., (µ′K, Σ′K)}, where µ′i, Σ′i are the calibrated mean and covariance, respectively, computed based on the i-th feature in the support set of class y. Here, the size of the set is the value of K for an N-way-K-shot task." }, { "heading": "3.2.3 HOW TO LEVERAGE THE CALIBRATED DISTRIBUTION?", "text": "With a set of calibrated statistics Sy for class y in a target task, we generate a set of feature vectors with label y by sampling from the calibrated Gaussian distributions:
Dy = {(x, y) | x ∼ N(µ, Σ), ∀(µ, Σ) ∈ Sy}.   (7)
Here, the total number of generated features per class is set as a hyperparameter, and the generated features are distributed equally across the calibrated distributions in Sy. The generated features, along with the original support set features for a few-shot task, then serve as the training data for a task-specific classifier. We train the classifier for a task by minimizing the cross-entropy loss over both the features of its support set S and the generated features Dy:
ℓ = Σ_{(x,y)∼S̃∪Dy, y∈YT} − log Pr(y|x; θ),   (8)
where YT is the set of classes for the task T, S̃ denotes the support set with features transformed by Tukey's Ladder of Powers transformation, and the classifier model is parameterized by θ." }, { "heading": "4 EXPERIMENTS", "text": "In this section, we answer the following questions:
• How does our distribution calibration strategy perform compared to the state-of-the-art methods?" }, { "heading": "4.1 EXPERIMENTAL SETUP", "text": "" }, { "heading": "4.1.1 DATASETS", "text": "We evaluate our distribution calibration strategy on miniImageNet (Ravi & Larochelle (2017)), tieredImageNet (Ren et al. (2018)) and CUB (Welinder et al. (2010)). miniImageNet and tieredImageNet have a broad range of classes including various animals and objects, while CUB is a more fine-grained dataset that includes various species of birds. Datasets with different levels of granularity may have different feature-space distributions. We want to show the effectiveness and generality of our strategy on all three datasets.
miniImageNet is derived from the ILSVRC-12 dataset (Russakovsky et al., 2014). It contains 100 diverse classes with 600 samples per class. The image size is 84 × 84 × 3. We follow the splits used in previous works (Ravi & Larochelle, 2017), which split the dataset into 64 base classes, 16 validation classes, and 20 novel classes.
tieredImageNet is a larger subset of the ILSVRC-12 dataset (Russakovsky et al., 2014), which contains 608 classes sampled from a hierarchical category structure. Each class belongs to one of 34 higher-level categories sampled from the high-level nodes of ImageNet. The average number of images in each class is 1281. We use 351, 97, and 160 classes for training, validation, and test, respectively.
CUB is a fine-grained few-shot classification benchmark. It contains 200 different classes of birds with a total of 11,788 images of size 84 × 84 × 3. Following previous works (Chen et al., 2019a), we split the dataset into 100 base classes, 50 validation classes, and 50 novel classes." }, { "heading": "4.1.2 EVALUATION METRIC", "text": "We use the top-1 accuracy as the evaluation metric to measure the performance of our method. 
We report the accuracy in the 5way1shot and 5way5shot settings for miniImageNet, tieredImageNet and CUB. The reported results are the averaged classification accuracy over 10,000 tasks." }, { "heading": "4.1.3 IMPLEMENTATION DETAILS", "text": "For the feature extractor, we use the WideResNet (Zagoruyko & Komodakis, 2016) trained following previous work (Mangla et al. (2020)). For each dataset, we train the feature extractor on the base classes and test the performance using the novel classes. Note that the feature representation is extracted from the penultimate layer (with a ReLU activation function) of the feature extractor, thus the values are all non-negative, so that the inputs to Tukey's Ladder of Powers transformation in Equation 3 are valid. At the distribution calibration stage, we compute the base class statistics and transfer them to calibrate the novel class distribution for each dataset. We use the LR and SVM implementations of scikit-learn (Pedregosa et al. (2011)) with the default settings. We use the same hyperparameter values for all datasets except for α. Specifically, the number of generated features is 750; k = 2 and λ = 0.5. α is 0.21, 0.21 and 0.3 for miniImageNet, tieredImageNet and CUB, respectively." }, { "heading": "4.2 COMPARISON TO STATE-OF-THE-ART", "text": "Table 2 and Table 3 present the 5way1shot and 5way5shot classification results of our method on miniImageNet, tieredImageNet and CUB. We compare our method with three groups of few-shot learning methods: optimization-based, metric-based, and generation-based. Our method can be built on top of any classifier, and we use two popular and simple classifiers, namely SVM and LR, to prove the effectiveness of our method. Simple linear classifiers equipped with our method perform better than state-of-the-art few-shot classification methods and achieve the best performance in the 1-shot and 5-shot settings of miniImageNet, tieredImageNet and CUB. The performance of our distribution calibration surpasses the state-of-the-art generation-based method by 10% in the 5way1shot setting, which shows that our method can better handle extremely low-shot classification tasks. Compared to other generation-based methods, which require the design of a generative model with extra training costs on the learnable parameters, a simple machine learning classifier with DC is much simpler, more effective and more flexible, and can be equipped with any feature extractor and classifier model structure. Specifically, we show three variants, i.e., Maximum likelihood with DC, SVM with DC, and Logistic Regression with DC, in Table 2 and Table 3. A simple maximum likelihood classifier based on the calibrated distribution can outperform previous baselines, and training an SVM classifier or Logistic Regression classifier using the samples from the calibrated distribution can further improve the performance." }, { "heading": "4.3 VISUALIZATION OF GENERATED SAMPLES", "text": "We show what the calibrated distribution looks like by visualizing the generated features sampled from the distribution. In Figure 2, we show the t-SNE representation (van der Maaten & Hinton (2008)) of the original support set (a), the generated features (b, c), as well as the query set (d). Based on the calibrated distribution, the sampled features form a Gaussian distribution, and more samples (c) provide a more comprehensive representation of the distribution. 
Due to the limited number of examples in the support set, only one in this case, the samples from the query set usually cover a greater area and are mismatched with the support set. This mismatch can be fixed to some extent by the generated features, i.e., the generated features in (c) can overlap areas of the query set. Thus, training with these generated features can alleviate the mismatch between the distribution estimated only from the few-shot samples and the ground truth distribution." }, { "heading": "4.4 APPLICABILITY OF DISTRIBUTION CALIBRATION", "text": "Applying distribution calibration on different backbones
Our distribution calibration strategy is agnostic to the backbone / feature extractor. Table 5 shows the consistent performance boost when applying distribution calibration to different feature extractors, i.e., four convolutional layers (conv4), six convolutional layers (conv6), resnet18, WRN28 and WRN28 trained with rotation loss. Distribution calibration achieves around 10% accuracy improvement compared to the backbones trained with the different baselines.
Applying distribution calibration on other baselines
A variety of works can benefit from training with the features generated by our distribution calibration strategy. We apply our distribution calibration strategy to two simple few-shot classification algorithms, Baseline (Chen et al., 2019a) and Baseline++ (Chen et al., 2019a). Table 6 shows that our distribution calibration brings over 10% accuracy improvement to both." }, { "heading": "4.5 EFFECTS OF FEATURE TRANSFORMATION AND TRAINING WITH GENERATED FEATURES", "text": "Ablation Study
Table 4 shows the performance when our model is trained without Tukey's Ladder of Powers transformation for the features as in Equation 3, and when it is trained without the generated features as in Equation 7. It is clear that there is a severe decline in performance of over 10% in the 5way1shot setting if neither is used. The ablation of either one results in a performance drop of around 5% in the 5way1shot setting.
Choices of Power for Tukey's Ladder of Powers Transformation
The left side of Figure 3 shows the 5way1shot accuracy when choosing different powers for the Tukey transformation in Equation 3, when training the classifier with the generated features (red) and without (blue). Note that when the power λ equals 1, the transformation keeps the original feature representations. There is a consistent general tendency for training with and without the generated features, and in both cases we found λ = 0.5 to be the optimal choice. With the Tukey transformation, the distribution of query set features in target tasks becomes more aligned with the calibrated Gaussian distribution, which benefits the classifier trained on features sampled from the calibrated distribution.
Number of generated features
The right side of Figure 3 analyzes whether more generated features result in consistent improvement in both cases, namely when the features of the support and query sets are transformed by the Tukey transformation (red) and when they are not (blue). We found that when the number of generated features is below 500, both cases benefit from more generated features. However, when more features are sampled, the performance of the classifier tested on untransformed features begins to decline. By training with the generated samples, the simple logistic regression classifier achieves a 12% relative performance improvement in the 1-shot classification setting." 
}, { "heading": "4.6 OTHER HYPER-PARAMETERS", "text": "We select the hyperparameters based on the performance of the validation set. The k base class statistics to calibrate the novel class distribution in Equation 5 is set to 2. Figure 4 shows the effect of different values of k. The α in Equation 6 is a constant added on each element of the estimated covariance matrix, which can determine the degree of dispersion of features sampled from the calibrated distributions. An appropriate value of α can ensure a good decision boundary for the classifier. Different datasets have different statistics and an appropriate value of α may vary for different datasets. Figure 5 explores the effect of α on all three datasets, i.e. miniImageNet, tieredImageNet and CUB. We observe that in each dataset, the performance of the validation set and the novel (testing) set generally has the same tendency, which indicates that the variance is dataset-dependent and is not overfitting to a specific set.\nAc cu\nra cy\n(5 w\nay -1\nsh ot\n)\nNumber of retrieved base class statistics k\nFigure 4: The effect of different values of k." }, { "heading": "5 CONCLUSION AND FUTURE WORKS", "text": "We propose a simple but effective distribution calibration strategy for few-shot classification. Without complex generative models, training loss and extra parameters to learn, a simple logistic regression trained with features generated by our strategy outperforms the current state-of-the-art methods by ∼ 5% on miniImageNet. The calibrated distribution is visualized and demonstrates an accurate estimation of the feature distribution. Future works will explore the applicability of distribution calibration on more problem settings, such as multi-domain few-shot classification, and more methods, such as metric-based meta-learning algorithms." }, { "heading": "A AUGMENTATION WITH NEAREST CLASS FEATURES", "text": "Instead of sampling from the calibrated distribution, we can simply retrieve examples from the nearest class to augment the support set. Table 7 shows the comparison of training using samples from the calibrated distribution, the different number of retrieved features from the nearest class, and only using the support set. We found the retrieved features can improve the performance compared to only using the support set but can damage the performance when increasing the number of retrieved features, where the retrieved samples probably serve as noisy data for tasks targeting different classes." }, { "heading": "B DISTRIBUTION CALIBRATION WITHOUT NOVEL FEATURE", "text": "We calibrate the novel class mean by averaging the novel class mean and the retrieved base class means in Equation 6. Table 8 shows the distribution calibration without averaging novel feature, in which the calibrated mean is calculated as µ′ = Σi∈SNµik ." }, { "heading": "C THE EFFECTS OF TUKEY’S TRANSFORMATION", "text": "Figure 6 shows the distribution of 5 base classes and 5 novel classes before/after Tukey’s transformation. It is observed that the base class distribution satisfies Gaussian assumption well (left) while the novel class distribution is more skew (middle). The novel class distribution after Tukey’s transformation (right) is more aligned with the Gaussian-like base class distribution." 
}, { "heading": "D THE SIMILARITY LEVEL ANALYSIS", "text": "We found that the higher similarities between the retrieved base class distribution and the novel class ground-truth distribution, the higher the performance improvement our method will bring as shown in Table 9. The results in the table are under 5-way-1-shot setting." } ]
2021
FREE LUNCH FOR FEW-SHOT LEARNING: DISTRIBUTION CALIBRATION
SP:5e3798130b00275f58f296666d614d56147ec57a
[ "This paper offers an interesting viewpoint of adversarial robustness by comparing neural networks with skip connections such as ResNet with their Neural ODE counterparts. The authors analyze the different behaviors of the networks through their Lipschitz constants. They also try to support their claims that Neural ODEs are more robust due to their continuity (small step sizes) through experiments. ", "This paper uses theoretical grounding, starting with Lipshitz continuity-based assumptions on residual connections, to show why such architectures are more susceptible to adversarial inputs. In the process, the authors draw a parallel between these residual connections and neural ODEs, showing how the latter can circumvent the main reason that leads to adversarial susceptibility for the former. Finally, via empirical evaluations, they show how neural ODEs have \"natural\" robustness to adversarial examples: they have a non-trivial performance on adversarial inputs, despite not being explicitly trained for robustness." ]
Recent studies have shown that deep neural networks are vulnerable to adversarial examples, but most of the methods proposed to defend against adversarial examples cannot solve this problem fundamentally. In this paper, we theoretically prove that there is an upper bound for neural networks with identity mappings that constrains the error caused by adversarial noise. However, in actual computations, this kind of neural network no longer holds any upper bound and is therefore susceptible to adversarial examples. Following similar procedures, we explain why adversarial examples can fool other deep neural networks with skip connections. Furthermore, we demonstrate that a new family of deep neural networks called Neural ODEs (Chen et al., 2018) holds a weaker upper bound. This weaker upper bound prevents the amount of change in the result from being too large. Thus, Neural ODEs have natural robustness against adversarial examples. We evaluate the performance of Neural ODEs compared with ResNet under three white-box adversarial attacks (FGSM, PGD, DI-FGSM) and one black-box adversarial attack (Boundary Attack). Finally, we show that the natural robustness of Neural ODEs is even better than the robustness of neural networks that are trained with adversarial training methods, such as TRADES and YOPO.
[]
[ { "authors": [ "Naveed Akhtar", "Jian Liu", "Ajmal Mian" ], "title": "Defense against universal adversarial perturbations", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Wenjun Bai", "Changqin Quan", "Zhiwei Luo" ], "title": "Alleviating adversarial attacks via convolutional autoencoder", "venue": "18th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD),", "year": 2017 }, { "authors": [ "Wieland Brendel", "Jonas Rauber", "Matthias Bethge" ], "title": "Decision-based adversarial attacks: Reliable attacks against black-box machine learning models", "venue": "arXiv preprint arXiv:1712.04248,", "year": 2017 }, { "authors": [ "Ricky TQ Chen", "Yulia Rubanova", "Jesse Bettencourt", "David K Duvenaud" ], "title": "Neural ordinary differential equations", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Moustapha Cisse", "Piotr Bojanowski", "Edouard Grave", "Yann Dauphin", "Nicolas Usunier" ], "title": "Parseval networks: Improving robustness to adversarial examples", "venue": "arXiv preprint arXiv:1704.08847,", "year": 2017 }, { "authors": [ "John R Dormand", "Peter J Prince" ], "title": "A family of embedded runge-kutta formulae", "venue": "Journal of computational and applied mathematics,", "year": 1980 }, { "authors": [ "Emilien Dupont", "Arnaud Doucet", "Yee Whye Teh" ], "title": "Augmented neural odes", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Gintare Karolina Dziugaite", "Zoubin Ghahramani", "Daniel M Roy" ], "title": "A study of the effect of jpg compression on adversarial images", "venue": "arXiv preprint arXiv:1608.00853,", "year": 2016 }, { "authors": [ "Erwin Fehlberg" ], "title": "Low-order classical Runge-Kutta formulas with stepsize control and their application to some heat transfer problems, volume 315", "venue": "National aeronautics and space administration,", "year": 1969 }, { "authors": [ "Chris Finlay", "Adam M Oberman", "Bilal Abbasi" ], "title": "Improved robustness to adversarial examples using lipschitz regularization of the loss", "venue": null, "year": 2018 }, { "authors": [ "Aidan N Gomez", "Mengye Ren", "Raquel Urtasun", "Roger B Grosse" ], "title": "The reversible residual network: Backpropagation without storing activations", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "arXiv preprint arXiv:1412.6572,", "year": 2014 }, { "authors": [ "Eldad Haber", "Lars Ruthotto" ], "title": "Stable architectures for deep neural networks", "venue": "Inverse Problems,", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks. 
In Advances in neural information processing", "venue": null, "year": 2012 }, { "authors": [ "Dmitry Krotov", "John Hopfield" ], "title": "Dense associative memory is robust to adversarial inputs", "venue": "Neural computation,", "year": 2018 }, { "authors": [ "Wilhelm Kutta" ], "title": "Beitrag zur naherungsweisen integration totaler differentialgleichungen", "venue": "Z. Math. Phys.,", "year": 1901 }, { "authors": [ "Gustav Larsson", "Michael Maire", "Gregory Shakhnarovich" ], "title": "Fractalnet: Ultra-deep neural networks without residuals", "venue": "arXiv preprint arXiv:1605.07648,", "year": 2016 }, { "authors": [ "Hyeungill Lee", "Sungyeob Han", "Jungwoo Lee" ], "title": "Generative adversarial trainer: Defense to adversarial perturbations with gan", "venue": "arXiv preprint arXiv:1705.03387,", "year": 2017 }, { "authors": [ "Yiping Lu", "Aoxiao Zhong", "Quanzheng Li", "Bin Dong" ], "title": "Beyond finite layer neural networks: Bridging deep architectures and numerical differential equations", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "arXiv preprint arXiv:1706.06083,", "year": 2017 }, { "authors": [ "Seyed-Mohsen Moosavi-Dezfooli", "Alhussein Fawzi", "Pascal Frossard" ], "title": "Deepfool: a simple and accurate method to fool deep neural networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Aran Nayebi", "Surya Ganguli" ], "title": "Biologically inspired protection of deep networks from adversarial attacks", "venue": "arXiv preprint arXiv:1703.09202,", "year": 2017 }, { "authors": [ "Nicolas Papernot", "Patrick McDaniel", "Xi Wu", "Somesh Jha", "Ananthram Swami" ], "title": "Distillation as a defense to adversarial perturbations against deep neural networks", "venue": "In 2016 IEEE Symposium on Security and Privacy (SP),", "year": 2016 }, { "authors": [ "Andrew Slavin Ross", "Finale Doshi-Velez" ], "title": "Improving the adversarial robustness and interpretability of deep neural networks by regularizing their input gradients", "venue": "arXiv preprint arXiv:1711.09404,", "year": 2017 }, { "authors": [ "Lars Ruthotto", "Eldad Haber" ], "title": "Deep neural networks motivated by partial differential equations", "venue": "Journal of Mathematical Imaging and Vision,", "year": 2019 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "arXiv preprint arXiv:1312.6199,", "year": 2013 }, { "authors": [ "Florian Tramèr", "Alexey Kurakin", "Nicolas Papernot", "Ian Goodfellow", "Dan Boneh", "Patrick McDaniel" ], "title": "Ensemble adversarial training: Attacks and defenses", "venue": "arXiv preprint arXiv:1705.07204,", "year": 2017 }, { "authors": [ "E Weinan" ], "title": "A proposal on machine learning via dynamical systems", "venue": "Communications in Mathematics and Statistics,", "year": 2017 }, { "authors": [ "Cihang Xie", "Zhishuai Zhang", "Yuyin Zhou", "Song Bai", "Jianyu Wang", "Zhou Ren", "Alan L Yuille" ], "title": "Improving transferability of adversarial examples with input diversity", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Weilin 
Xu", "David Evans", "Yanjun Qi" ], "title": "Feature squeezing: Detecting adversarial examples in deep neural networks", "venue": "arXiv preprint arXiv:1704.01155,", "year": 2017 }, { "authors": [ "Valentina Zantedeschi", "Maria-Irina Nicolae", "Ambrish Rawat" ], "title": "Efficient defenses against adversarial attacks", "venue": "In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security,", "year": 2017 }, { "authors": [ "Dinghuai Zhang", "Tianyuan Zhang", "Yiping Lu", "Zhanxing Zhu", "Bin Dong" ], "title": "You only propagate once: Accelerating adversarial training via maximal principle", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Hongyang Zhang", "Yaodong Yu", "Jiantao Jiao", "Eric P Xing", "Laurent El Ghaoui", "Michael I Jordan" ], "title": "Theoretically principled trade-off between robustness and accuracy", "venue": "arXiv preprint arXiv:1901.08573,", "year": 2019 }, { "authors": [ "Xingcheng Zhang", "Zhizhong Li", "Chen Change Loy", "Dahua Lin" ], "title": "Polynet: A pursuit of structural diversity in very deep networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep neural networks have made great progress in numerous domains of machine learning, especially in computer vision. But Szegedy et al. (2013) found that most of the existing state-of-theart neural networks are easily fooled by adversarial examples that generated by putting only very small perturbations to the input images. Since realizing the unstability of deep neural networks, researchers have proposed different kinds of methods to defense adversarial examples, such as adversarial training (Goodfellow et al., 2014), data compression (Dziugaite et al., 2016), and distillation defense (Papernot et al., 2016). But each of these methods is a remedy for the original problem, and none of these methods can solve it fundamentally. For example, Moosavi-Dezfooli et al. (2016) showed that no matter how much adversarial examples are added to training sets, there are new adversarial examples that can successfully attack the adversarial trained deep neural network.\nSo, avoiding adversarial examples technically cannot solve the most essential problem: why such subtle change in adversarial examples can beat deep neural networks? Meanwhile, it leads to a more important question: how to make deep neural networks have natural robustness so that they can get rid of malicious adversarial examples.\nEarly explanations for adversarial examples considered that a smoothness prior is typically valid for kernel methods that imperceptibly tiny perturbations of a given image do not normally change the underlying class, while the smoothness assumption does not hold for deep neural networks due to its high non-linearity (Szegedy et al., 2013). This analysis underlies plain deep neural networks like AlexNet (Krizhevsky et al., 2012). But later than that, Goodfellow et al. (2014) claim adversarial examples are a result of models being too linear rather than too non-linear, they can be explained as a property of high-dimensional dot products. Unfortunately, both of these explanations seem to imply that adversarial examples are inevitable for deep neural networks.\nOn the other hand, we notice that skip connections are widely used in current deep neural networks after the appearance of Highway Network (Srivastava et al., 2015) and ResNet (He et al., 2016). It turns out that the identity mapping in ResNet is formally equivalent to one step of Euler’s method\nwhich has been used to solve ordinary differential equations (Weinan, 2017). More than that, other kinds of skip connections used by different network architectures can be considered as different numerical methods for solving ordinary differential equations. The link between numerical ordinary differential equations with deep neural networks can bring us a whole new perspective to explain adversarial examples through the numerical stability analysis.\nIn this paper, we attempt to utilize the natural property of neural networks to defense adversarial examples. We first analyze how adversarial examples affect the output of neural networks with identity mappings, obtain an upper bound for this kind of neural networks, and find that this upper bound is impractical in actual computations. In the same way, we figure out why adversarial examples can fool commonly used deep neural networks with skip connections. Then, we demonstrate that Neural ODEs hold a weaker upper bound and verify the natural robustness of Neural ODEs under four types of perturbations. 
Finally, we compare Neural ODEs with three types of adversarial training methods to show that the natural robustness of Neural ODEs is better than the robustness of neural networks that are trained with adversarial training. The main contributions of our work are as follows:

• We introduce and formalize the numerical stability analysis for deep neural networks with identity mappings, and prove that there is an upper bound for neural networks with identity mappings that constrains the error caused by adversarial noise.

• We provide a new reason why commonly used deep neural networks with skip connections cannot resist adversarial examples.

• We demonstrate that Neural ODEs hold a weaker upper bound which prevents the amount of change in the result from being too large. Compared with ResNet and three types of adversarial training methods, we show the natural robustness of Neural ODEs." }, { "heading": "2 RELATED WORKS", "text": "" }, { "heading": "2.1 ADVERSARIAL DEFENSE", "text": "Adversarial training typically uses robust optimization to generate adversarial examples for training deep neural networks. Madry et al. (2017) cast the optimization as a saddle point problem. Zhang et al. (2019a) cast adversarial training as a discrete-time differential game. Adversarial training can be seen as a data augmentation method that particularly enhances the robustness to white-box attacks (Tramèr et al., 2017). Zantedeschi et al. (2017) augmented the training sets with examples perturbed using Gaussian noise, which can also enhance the robustness to black-box attacks. Lee et al. (2017) proposed a novel adversarial training method using a generative adversarial network framework. Besides, Finlay et al. (2018) augmented adversarial training with worst-case adversarial training, which improves adversarial robustness in the ℓ2 norm on CIFAR10.

Modifying the neural networks by using autoencoders, input gradient regularization and distillation can result in robustness against adversarial attacks (Bai et al., 2017; Ross & Doshi-Velez, 2017; Papernot et al., 2016). Besides, there are some biologically inspired deep learning models designed to have natural robustness against adversarial examples. Nayebi & Ganguli (2017) developed a scheme similar to nonlinear dendritic computation to train deep neural networks to make them robust to adversarial attacks. Krotov & Hopfield (2018) proposed Dense Associative Memory (DAM) models and suggested that DAMs with higher-order energy functions are closer to human visual perception than deep neural networks with ReLUs.

In addition to augmenting datasets or modifying the original neural networks, there exist adversarial defense methods that rely on external models and on detecting adversarial examples. Akhtar et al. (2018) presented the Perturbation Rectifying Network (PRN) as 'pre-input' layers to a targeted model; if a perturbation is detected, the output of the PRN is used for label prediction instead of the actual image. Xu et al. (2017) proposed a strategy called feature squeezing to reduce the search space available to an adversary by coalescing samples that correspond to many different feature vectors in the original space into a single sample." }, { "heading": "3 NUMERICAL STABILITY ANALYSIS FOR DNNS", "text": "" }, { "heading": "3.1 RESNET AND EULER'S METHOD", "text": "The building block of ResNet is defined as

$y_{n+1} = y_n + f(y_n; \theta_n)$ (1)

where $n \in \{0, \dots, N(h)-1\}$, and $y_n$ and $y_{n+1}$ are the input and output vectors of the layers considered. 
The function $f(y_n; \theta_n)$ represents the residual mapping and $\theta_n$ are the weights to be learned.

In numerical methods for solving ordinary differential equations, Euler's method is defined by taking this to be exact:

$y_{n+1} = y_n + h f(t_n, y_n; \theta_n)$ (2)

It can easily be seen that Eqn. (1) is a special case of Euler's method when the step size h = 1. The iterative updates of ResNet can be seen as an Euler discretization of a continuous transformation (Lu et al., 2018; Haber & Ruthotto, 2017; Ruthotto & Haber, 2019). When the input data is perturbed by adversarial noise $\epsilon$, we denote the adversarial example by

$z_0 = y_0 + \epsilon$ (3)

To perform stability analysis for ResNet with the adversarial example $z_0$ as input, we define a numerical solution $z_n$ by

$z_{n+1} = z_n + h f(t_n, z_n; \theta_n)$ (4)

and provide the following theorem to show that the amount of change between $y_n$ and $z_n$ holds an upper bound under some assumptions. Theorem 3.1. Let $f(t, y; \theta)$ be a continuous function for $t_0 \le t \le b$ and $-\infty < y < \infty$, and further assume that $f(t, y; \theta)$ satisfies the Lipschitz condition. Then, comparing the two numerical solutions $y_n$ and $z_n$ as $h \to 0$, there is a constant $c \ge 0$ such that the amount of change between $y_n$ and $z_n$ satisfies

$\max_{0 \le n \le N(h)} |z_n - y_n| \le c|\epsilon|$ (5)

Proof. Let $e_n = z_n - y_n$, $n \ge 0$. Then $e_0 = \epsilon$, and subtracting Eqn. (2) from Eqn. (4), we obtain

$e_{n+1} = e_n + h[f(t_n, z_n; \theta_n) - f(t_n, y_n; \theta_n)]$ (6)

Assume that the derivative function $f(t, y; \theta)$ satisfies the following Lipschitz condition: there exists $K \ge 0$ such that

$|f(t, y_1; \theta) - f(t, y_2; \theta)| \le K|y_1 - y_2|$ (7)

Taking bounds using Eqn. (7), we obtain

$|e_{n+1}| \le |e_n| + h\hat{K}|z_n - y_n|$ (8)

$|e_{n+1}| \le (1 + h\hat{K})|e_n|$ (9)

where $\hat{K}$ is the largest Lipschitz constant.

Apply this recursively to obtain

$|e_n| \le (1 + h\hat{K})^n |e_0|$ (10)

Using the inequality $(1+t)^m \le e^{mt}$, for any $t \ge -1$ and any $m \ge 0$, we obtain

$(1 + h\hat{K})^n \le e^{nh\hat{K}} = e^{(b - t_0)\hat{K}}$ (11)

and this implies the main result, Eqn. (5).

Roughly speaking, Theorem 3.1 means that a small adversarial perturbation of the initial value of the problem leads to a small change in the solution, provided that the function $f(t, y; \theta)$ is continuous and the step size h is sufficiently small.

Obviously, ResNet does not satisfy the assumption of continuity, and its step size h always equals 1. In particular, the functions $f(t, y; \theta)$ in ResNet are composite functions which contain ReLU activation functions. Although ReLU activation functions $g(x) = \max(0, x)$ break the continuity, due to their contractive property, i.e., satisfying $\|g(x) - g(x + \epsilon)\| \le |\epsilon|$ for all $x, \epsilon$, it follows that

$\|g_n(x; \theta_n) - g_n(x + \epsilon; \theta_n)\| = \|\max(0, \theta_n x + b_n) - \max(0, \theta_n(x + \epsilon) + b_n)\| \le \|\theta_n \epsilon\| \le \|\theta_n\| \|\epsilon\|$ (12)

This provides a weaker upper bound for ReLU that mitigates the loss of continuity.

Even though the step size h in ResNet could be adjusted by multiplying the outputs of the convolution layers by a chosen constant, unfortunately the step size h in ResNet cannot be too small, since a very small step size decreases the efficiency in actual computations. We can experimentally show that when the step size h is small (such as $10^{-1}$, $10^{-2}$ and $10^{-3}$), ResNet has no obvious robustness against adversarial examples, and when the step size is very small (such as $10^{-8}$, $10^{-9}$, $10^{-10}$), ResNet is difficult to train, so not only the classification accuracy on adversarial examples but also the accuracy on clean examples is quite low.

To summarize, the main reason for ResNet's failure on adversarial examples is that the step size h = 1 destroys the upper bound given by Eqn. (5), and we cannot find a proper way to deal with it. Thus, when the input of ResNet is perturbed by adversarial noise, the amount of change in the result can become unpredictably large, so that ResNet can no longer correctly classify the input."
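To make Theorem 3.1 concrete, here is a minimal numerical sketch (ours, not from the paper; the dynamics $f(t,y)=K\sin(y)$ and all constants are illustrative choices). It integrates the same Lipschitz dynamics from a clean and an ε-perturbed initial value with explicit Euler steps and checks that the deviation stays within the bound $e^{(b-t_0)\hat{K}}|\epsilon|$; because this toy f is smooth and globally Lipschitz, the bound is already visible at moderate step sizes, unlike the discontinuous composite maps inside ResNet.

```python
# Numerical illustration of Theorem 3.1 (ours): Euler integration of a
# Lipschitz dynamics from clean vs. perturbed initial values; the deviation
# |z_n - y_n| should stay below e^{(b - t0) * K} * |eps|.
import numpy as np

K = 0.8                          # Lipschitz constant of f in y
f = lambda t, y: K * np.sin(y)   # |f(t, y1) - f(t, y2)| <= K |y1 - y2|

def euler(y0, t0, h, n_steps):
    y = y0
    for i in range(n_steps):
        y = y + h * f(t0 + i * h, y)
    return y

t0, b, eps = 0.0, 2.0, 1e-3
bound = np.exp((b - t0) * K) * abs(eps)
for h in [1.0, 0.1, 0.01, 0.001]:
    n = int(round((b - t0) / h))
    dev = abs(euler(0.5 + eps, t0, h, n) - euler(0.5, t0, h, n))
    print(f"h={h:6.3f}  |z-y|={dev:.3e}  bound={bound:.3e}")
```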
}, { "heading": "3.2 NEURAL ODES", "text": "The residual block can be described as $y_{n+1} = y_n + h f(y_n; \theta_n)$ with step size h = 1. Taking smaller steps and adding more layers, in the limit, Neural ODEs parameterize the continuous dynamics of hidden units using an ordinary differential equation specified by a neural network:

$\lim_{h \to 0} \frac{y_{n+h} - y_n}{h} = \frac{dy}{dh} = f(t, y; \theta)$ (13)

The solution of Eqn. (13) can be computed using modern ODE solvers such as Runge–Kutta (Runge, 1895; Kutta, 1901). Euler's method only uses $f(t_n, y_n; \theta_n)$, which means that from any point on a curve, we can find an approximation of a nearby point on the curve by moving a short distance along a tangent line to the curve. Instead of using a single tangent line, modern numerical ODE methods often use many different tangent lines and take a weighted sum of all of them to generate a super tangent line; we denote the super tangent line by $F(t_n, y_n; \theta_n, f)$.

We can show that modern ODE solvers based on Runge–Kutta are stable under some assumptions; the amount of change between $y_n$ and $z_n$ holds an upper bound similar to that of Theorem 3.1.

Theorem 3.2. Let $f(t, y; \theta)$ be a continuous function for $t_0 \le t \le b$ and $-\infty < y < \infty$, and further assume that $f(t, y; \theta)$ and $F(t, y; \theta, f)$ satisfy the Lipschitz condition. Then, comparing the two numerical solutions $y_n$ and $z_n$ as $h \to 0$, there is a constant $\hat{c} \ge 0$ such that the amount of change between $y_n$ and $z_n$ satisfies

$\max_{0 \le n \le N(h)} |z_n - y_n| \le \hat{c}|\epsilon|$ (14)

Proof. This result can be obtained in analogy with Eqn. (5) in Theorem 3.1.

The neural network in Neural ODEs also uses ReLU activation functions, and we have already shown that Eqn. (12) provides a weaker upper bound for ReLU.

Besides, modern ODE solvers, such as the Fehlberg method (Fehlberg, 1969) and the DOPRI method (Dormand & Prince, 1980), can adaptively adjust the step size until the desired tolerance is reached. At each step, two different approximations of the solution (for example, a fourth-order Runge–Kutta and a fifth-order Runge–Kutta) are made and compared. If the two answers are in close agreement, the approximation is accepted. If the two answers do not agree to a specified accuracy, the step size is reduced. If the answers agree to more significant digits than required, the step size is increased. So modern ODE solvers provide a weaker upper bound on the step size when the condition $h \to 0$ is not exactly met.

To summarize, although Neural ODEs cannot hold the strong upper bound provided by Eqn. (14), they still hold a weaker upper bound to constrain the error caused by adversarial noise. We will experimentally show that Neural ODEs are more robust on adversarial examples compared with ResNet."
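To ground the discussion, here is a minimal sketch (ours) of a Neural ODE forward pass with an adaptive solver; it assumes the torchdiffeq package that accompanies Chen et al. (2018), and the network shape is illustrative. The dopri5 solver compares fourth- and fifth-order estimates and shrinks its internal step size until the tolerance is met, which is the mechanism the weaker upper bound above relies on.

```python
# A minimal sketch (ours) of the Neural ODE forward pass: the adaptive
# dopri5 solver adjusts its internal step size to meet rtol/atol.
import torch
import torch.nn as nn
from torchdiffeq import odeint  # assumed dependency: pip install torchdiffeq

class ODEFunc(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(dim, dim, 3, padding=1),
                                 nn.ReLU(),
                                 nn.Conv2d(dim, dim, 3, padding=1))

    def forward(self, t, y):           # dy/dt = f(t, y; theta)
        return self.net(y)

func = ODEFunc()
y0 = torch.randn(8, 64, 28, 28)        # features entering the ODE block
t = torch.tensor([0.0, 1.0])           # integrate from t0 = 0 to t1 = 1
y1 = odeint(func, y0, t, method='dopri5', rtol=1e-3, atol=1e-3)[-1]
```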
}, { "heading": "3.3 OTHER DEEP NEURAL NETWORKS WITH SKIP CONNECTIONS", "text": "Following similar procedures, we can figure out the reason why other deep neural networks with skip connections cannot resist adversarial examples." }, { "heading": "3.3.1 POLYNET", "text": "PolyNet (Zhang et al., 2017) introduced a family of models called PolyInception. Each PolyInception module is a polynomial combination of Inception units that can be described as

$(I + F + F^2) \cdot y = y + F(y) + F(F(y))$ (15)

where $y$ denotes the input, $I$ the identity operator, and $F$ the nonlinear transform carried out by the residual block, which can also be considered as an operator.

It has been shown that Eqn. (15) can be interpreted as an approximation to one step of the backward (or implicit) Euler method (Lu et al., 2018):

$y_{n+1} = y_n + h f(t_{n+1}, y_{n+1})$ (16)

From Eqn. (16) we get

$y_{n+1} = (I - hf)^{-1} y_n$ (17)

where $(I - hf)^{-1}$ can be formally rewritten as

$I + hf + (hf)^2 + \cdots + (hf)^n + \cdots$ (18)

So, there are two differences between the polynomial combination and the backward Euler method:

• The polynomial combination is only a second-order approximation to the backward Euler method; a truncation error is generated when the high-order terms are ignored.

• PolyNet faces the same problem as ResNet in that the step size h = 1. Thus, PolyNet cannot hold the upper bound given by the backward Euler method.

And the step size h = 1 is the main reason that PolyNet fails to resist adversarial examples." }, { "heading": "3.3.2 FRACTALNET", "text": "FractalNet (Larsson et al., 2016) introduced a design strategy for neural network macro-architecture based on self-similarity. The expansion rule that generates a fractal architecture with C intertwined columns can be described as

$f_{C+1}(y) = \frac{1}{2}(f_C \circ f_C)(y) + \frac{1}{2} f_1(y)$ (19)

where $f_1(y) = \mathrm{conv}(y)$ and $\circ$ denotes composition. This expansion rule resembles the Runge–Kutta method of order 2, also known as Heun's method (Lu et al., 2018):

$y_{n+1} = y_n + \frac{1}{2} h \left( f(t_n, y_n) + f(t_{n+1}, y_n + h f(t_n, y_n)) \right)$ (20)

So, there are two differences between the expansion rule and Heun's method:

• The expansion rule loses two identity mappings, namely $y_n$, compared to Heun's method.

• FractalNet faces the same problem as ResNet in that the step size h = 1. Thus, FractalNet cannot hold the upper bound given by Heun's method.

And the step size h = 1 is the main reason that FractalNet fails to resist adversarial examples." }, { "heading": "3.3.3 REVNET", "text": "RevNet (Gomez et al., 2017) introduced a variant of ResNet where each layer's activations can be reconstructed exactly from the next layer's. Each reversible block takes inputs and produces outputs according to the following additive coupling rules:

$X_{n+1} = X_n + f_1(Y_n), \quad Y_{n+1} = Y_n + f_2(X_{n+1})$ (21)

These additive coupling rules can be interpreted as Euler's method applied to the following system of differential equations (Lu et al., 2018):

$\dot{X} = f_1(Y, t), \quad \dot{Y} = f_2(X, t)$ (22)

RevNet faces the same problem as ResNet in that the step size h = 1. Thus, RevNet cannot hold the upper bound given by Euler's method for this dynamical system." }, { "heading": "4 EXPERIMENTAL RESULTS", "text": "" }, { "heading": "4.1 RESNET WITH STEP SIZE h ≠ 1", "text": "We first evaluate the performance of ResNets with different step sizes on adversarial examples. Adopting the same hyperparameter settings for ResNet56 as presented in He et al. (2016), we train 11 ResNet56s on the CIFAR10 dataset with the step size decreasing from $10^{0}$ to $10^{-10}$.

Through the numerical stability analysis of ResNet, we know that the classification result should be well behaved under small adversarial noise, provided that the step size h is sufficiently small. In actual computations, however, when we decrease the step size from $10^{-1}$ to $10^{-10}$, ResNet shows no obvious robustness against adversarial examples. And when the step size is very small, ResNet is difficult to train. Therefore, the accuracy of ResNet56 is considerably low on both clean and adversarial inputs. We conclude that it is impossible for ResNet to have natural robustness against adversarial examples."
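A minimal sketch (ours) of the modification evaluated in Section 4.1: the residual branch is scaled by a step size h, i.e. y ← y + h·f(y), mirroring one explicit Euler step of size h instead of the usual h = 1; the layer shapes are illustrative.

```python
# A residual block whose skip update is scaled by a step size h (ours).
import torch
import torch.nn as nn

class ScaledResidualBlock(nn.Module):
    def __init__(self, channels, h=0.1):
        super().__init__()
        self.h = h  # step size; h = 1 recovers the standard ResNet block
        self.f = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels))

    def forward(self, y):
        return torch.relu(y + self.h * self.f(y))

block = ScaledResidualBlock(16, h=1e-3)
out = block(torch.randn(4, 16, 32, 32))
```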
}, { "heading": "4.2 THE NATURAL ROBUSTNESS OF NEURAL ODES", "text": "As we mentioned in Section 3.2, Neural ODEs define a continuous dynamical system, which is equivalent to ResNet with step size $h \to 0$, and Neural ODEs hold a weaker upper bound to constrain the error caused by adversarial noise. In this section, we evaluate the performance of Neural ODEs under three white-box attacks, FGSM (Goodfellow et al., 2014), PGD (Madry et al., 2017) and DI2-FGSM (Xie et al., 2019), and one black-box adversarial attack, Boundary Attack (Brendel et al., 2017).

Without data augmentation, the accuracy of ResNet on the clean MNIST and CIFAR10 datasets is 99.33% and 89.60%, and the accuracy of Neural ODEs on the clean MNIST and CIFAR10 datasets is 99.05% and 69.94%. Experiment details can be found in Appendix A." }, { "heading": "4.2.1 THE EVALUATION ON WHITE-BOX ADVERSARIAL ATTACKS", "text": "Table 1 shows that Neural ODEs have natural robustness under FGSM and PGD attacks. The classification accuracy on adversarial MNIST images generated by FGSM drops by less than 2% for all $\epsilon$. Even the strongest first-order attack method, PGD, can hardly beat Neural ODEs on MNIST. Although the performance of Neural ODEs on clean CIFAR10 images is worse than state-of-the-art networks like ResNet, they still work well on adversarial CIFAR10 images generated by PGD.

As shown in Figure 2, when facing adversarial noise generated by DI2-FGSM, on MNIST, Neural ODEs remain strongly resistant to perturbations while the accuracy of ResNet drops sharply with larger $\epsilon$. On CIFAR10, the reduction in accuracy of Neural ODEs is significantly smaller than that of ResNet50 as $\epsilon$ increases. Overall, Neural ODEs are more stable than ResNet when facing white-box adversarial noise." }, { "heading": "4.2.2 THE EVALUATION ON BLACK-BOX ADVERSARIAL ATTACKS", "text": "Boundary Attack is a decision-based attack that starts from a large adversarial perturbation and then seeks to reduce the perturbation while staying adversarial (Brendel et al., 2017). For MNIST, Boundary Attack has almost no effect on Neural ODEs. For CIFAR10, however, it seems that Neural ODEs also fail under this gradient-free attack, but the performance of Neural ODEs is still better than ResNet50 when $\epsilon$ is larger than 33."
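For reference, the FGSM attack used in the white-box evaluation takes a single signed-gradient step of size ε (Goodfellow et al., 2014); the sketch below is ours and assumes inputs normalized to [0, 1].

```python
# A minimal FGSM sketch (ours): perturb the input by eps in the direction
# of the sign of the loss gradient, then evaluate on the perturbed images.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()        # one signed-gradient step
    return x_adv.clamp(0.0, 1.0).detach()  # assumes pixels in [0, 1]
```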
}, { "heading": "4.2.3 COMPARING WITH ADVERSARIAL TRAINING METHODS", "text": "As one can see, Neural ODEs retain their robustness and outperform PGD, TRADES, and YOPO without using any adversarial training methods." }, { "heading": "5 CONCLUSION AND DISCUSSION", "text": "In this paper, we highlighted and analyzed the natural robustness of Neural ODEs. We proved that there are two similar upper bounds for ResNet and Neural ODEs under the assumptions of continuity and step size $h \to 0$. We showed that it is the step size h = 1 that causes ResNet and the other three deep neural networks with skip connections to fail on adversarial examples. Neural ODEs define a continuous dynamical system with step size $h \to 0$. In actual computations, modern ODE solvers adaptively adjust the step size to solve the continuous dynamical system, and this brings us a weaker upper bound for Neural ODEs. We experimentally showed that Neural ODEs are more robust on adversarial examples compared with ResNet.

According to Theorems 3.1 and 3.2, the upper bounds $c|\epsilon|$ and $\hat{c}|\epsilon|$ are related to the Lipschitz constant and independent of the step size h. Cisse et al. (2017) showed that the robustness of DNNs can be improved by constraining the Lipschitz constant. In this paper, however, we are more concerned with the step size h, because the upper bound cannot be guaranteed if $h \to 0$ is not satisfied. We experimentally demonstrated that the difference between ResNet and Neural ODEs in the step size h is sufficient to make a significant difference in robustness, even without constraining the Lipschitz constant. Nevertheless, constraining the Lipschitz constant is expected to make neural networks more robust, which we would like to consider as future work.

Due to the similarity between DNNs and numerical methods for solving ODEs, we believe that we can learn from numerical stability analysis to design more robust deep neural networks." }, { "heading": "A TRAINING CONFIGURATION OF NEURAL ODES", "text": "Table A1 summarizes our experiment settings for training Neural ODEs. We only change some hyperparameters of Neural ODEs, without the help of any adversarial defense methods.

Training Configurations | MNIST | CIFAR10
training method | Adam | Adam
ODE solver | dopri5 | dopri5
convolution layers | 4 | 5
convolution channels | 128 | 256
start time $t_0$ | 0 | 0
stop time $t_1$ | 100 | 500
training epochs | 100 | 200
initial learning rate* | $10^{-3}$ | $10^{-4}$
mini-batch size | 32 | 32

Table A1: The training configurations of Neural ODEs on MNIST and CIFAR10. *Every 50 epochs, the learning rate is halved.

It is worth mentioning that there are no common training configurations for Neural ODEs on CIFAR10. Dupont et al. (2019) reported that the accuracy of Neural ODEs on the CIFAR10 test set is 53.7% ± 0.2%; our experimental result is better than that." } ]
2020
null
SP:75b5d32a5a6bc3373309ee3e9ad7507d23221f19
[ "To approach \"reasoning after memorization\", the paper presents a Continual Memory (CM) framework using a memory-augmented neural network (MANN) and self-supervised training. In particular, the CM compresses the input sequence into a matrix memory using self-attention mechanisms and gated recurrent update (GRU). Then together with a Transformer decoder, the memory is used for downstream reasoning tasks without the need of referring to the original input sequence. Moreover, the framework is simultaneously trained with auxiliary losses to enforce memorization capability. A variety of experiments demonstrate promising results, in which the CM outperforms two MANN baselines and shows competitive performance against state-of-the-art methods. ", "In this paper, the authors propose the Continual Memory (CM) targeted towards a reasoning scenario called “reasoning after memorization”. The main goal of CM is to enable long-term memorization as opposed to memory networks that suffer from gradual forgetting. They evaluate their model both on synthetic data as well as a few downstream benchmarks. " ]
Existing reasoning tasks often follow the setting of "end-to-end reasoning", which has an important assumption that the input contents can always be accessed while reasoning. However, human beings frequently adopt another reasoning setting in daily life, referred to as "reasoning after memorizing". Concretely, human beings have the ability to unconsciously memorize their experiences within limited memory capacity, from which they can recall and respond to subsequent tasks. In this setting, the input contents are no longer available during reasoning, thus we need to compress and memorize the input stream in one pass, trying to answer general queries that have not been seen before. Memory augmented neural networks introduce a write-read memory to perform such human-like memorization and reasoning, but they continually update the memory with current information and inevitably forget early contents, failing to answer queries relevant to early information. In this paper, we propose the Continual Memory (CM) to explore this ability of reasoning after long-term memorization. To alleviate the gradual forgetting of early information, we develop self-supervised memorization training with item-level and sequence-level objectives. We demonstrate several interesting characteristics of our continual memory via synthetic data, and evaluate its performance on several downstream tasks, including long-term text QA, long-term video QA and recommendation with long sequences.
[]
[ { "authors": [ "Fabian Caba Heilbron", "Victor Escorcia", "Bernard Ghanem", "Juan Carlos Niebles" ], "title": "Activitynet: A large-scale video benchmark for human activity understanding", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2015 }, { "authors": [ "Arslan Chaudhry", "Marc’Aurelio Ranzato", "Marcus Rohrbach", "Mohamed Elhoseiny" ], "title": "Efficient lifelong learning with a-gem", "venue": "arXiv preprint arXiv:1812.00420,", "year": 2018 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "arXiv preprint arXiv:2002.05709,", "year": 2020 }, { "authors": [ "Xu Chen", "Hongteng Xu", "Yongfeng Zhang", "Jiaxi Tang", "Yixin Cao", "Zheng Qin", "Hongyuan Zha" ], "title": "Sequential recommendation with user memory networks", "venue": "In Proceedings of the eleventh ACM international conference on web search and data mining,", "year": 2018 }, { "authors": [ "Róbert Csordás", "Juergen Schmidhuber" ], "title": "Improving differentiable neural computers through memory masking, de-allocation, and link distribution sharpness control", "venue": null, "year": 1904 }, { "authors": [ "Cyprien de Masson d’Autume", "Sebastian Ruder", "Lingpeng Kong", "Dani Yogatama" ], "title": "Episodic memory in lifelong language learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "venue": "In Proceedings of the Conference on The North American Chapter of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Ming Ding", "Chang Zhou", "Hongxia Yang", "Jie Tang" ], "title": "Cogltx: Applying bert to long texts", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "John Duchi", "Elad Hazan", "Yoram Singer" ], "title": "Adaptive subgradient methods for online learning and stochastic optimization", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Alex Graves", "Greg Wayne", "Ivo Danihelka" ], "title": "Neural turing machines", "venue": "arXiv preprint arXiv:1410.5401,", "year": 2014 }, { "authors": [ "Alex Graves", "Greg Wayne", "Malcolm Reynolds", "Tim Harley", "Ivo Danihelka", "Agnieszka GrabskaBarwińska", "Sergio Gómez Colmenarejo", "Edward Grefenstette", "Tiago Ramalho", "John Agapiou" ], "title": "Hybrid computing using a neural network with dynamic external", "venue": "memory. 
Nature,", "year": 2016 }, { "authors": [ "Kaiming He", "Haoqi Fan", "Yuxin Wu", "Saining Xie", "Ross Girshick" ], "title": "Momentum contrast for unsupervised visual representation learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Balázs Hidasi", "Alexandros Karatzoglou", "Linas Baltrunas", "Domonkos Tikk" ], "title": "Session-based recommendations with recurrent neural networks", "venue": "arXiv preprint arXiv:1511.06939,", "year": 2015 }, { "authors": [ "Rudolf Kadlec", "Martin Schmid", "Ondrej Bajgar", "Jan Kleindienst" ], "title": "Text understanding with the attention sum reader network", "venue": "arXiv preprint arXiv:1603.01547,", "year": 2016 }, { "authors": [ "Junyeong Kim", "Minuk Ma", "Trung Pham", "Kyungsu Kim", "Chang D Yoo" ], "title": "Modality shifting attention network for multi-modal video question answering", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "James Kirkpatrick", "Razvan Pascanu", "Neil Rabinowitz", "Joel Veness", "Guillaume Desjardins", "Andrei A Rusu", "Kieran Milan", "John Quan", "Tiago Ramalho", "Agnieszka Grabska-Barwinska" ], "title": "Overcoming catastrophic forgetting in neural networks", "venue": "Proceedings of the national academy of sciences,", "year": 2017 }, { "authors": [ "Tomáš Kočiskỳ", "Jonathan Schwarz", "Phil Blunsom", "Chris Dyer", "Karl Moritz Hermann", "Gábor Melis", "Edward Grefenstette" ], "title": "The narrativeqa reading comprehension challenge", "venue": "Transactions of the Association for Computational Linguistics,", "year": 2018 }, { "authors": [ "Hung Le", "Truyen Tran", "Svetha Venkatesh" ], "title": "Learning to remember more with less memorization", "venue": "arXiv preprint arXiv:1901.01347,", "year": 2019 }, { "authors": [ "Hung Le", "Truyen Tran", "Svetha Venkatesh" ], "title": "Neural stored-program memory", "venue": "arXiv preprint arXiv:1906.08862,", "year": 2019 }, { "authors": [ "Hung Le", "Truyen Tran", "Svetha Venkatesh" ], "title": "Self-attentive associative memory", "venue": "arXiv preprint arXiv:2002.03519,", "year": 2020 }, { "authors": [ "Thao Minh Le", "Vuong Le", "Svetha Venkatesh", "Truyen Tran" ], "title": "Hierarchical conditional relation networks for video question answering", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Jie Lei", "Licheng Yu", "Mohit Bansal", "Tamara L Berg" ], "title": "Tvqa: Localized, compositional video question answering", "venue": "arXiv preprint arXiv:1809.01696,", "year": 2018 }, { "authors": [ "David Lopez-Paz", "Marc’Aurelio Ranzato" ], "title": "Gradient episodic memory for continual learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Morris Moscovitch", "Roberto Cabeza", "Gordon Winocur", "Lynn Nadel" ], "title": "Episodic memory and beyond: the hippocampus and neocortex in transformation", "venue": "Annual review of psychology,", "year": 2016 }, { "authors": [ "Tsendsuren Munkhdalai", "Alessandro Sordoni", "Tong Wang", "Adam Trischler" ], "title": "Metalearned neural memory", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Taewon Park", "Inchul Choi", "Minho Lee" ], "title": "Distributed memory based self-supervised differentiable neural computer", "venue": "arXiv preprint arXiv:2007.10637,", "year": 2020 }, { "authors": [ 
"Qi Pi", "Weijie Bian", "Guorui Zhou", "Xiaoqiang Zhu", "Kun Gai" ], "title": "Practice on long sequential user behavior modeling for click-through rate prediction", "venue": "In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2019 }, { "authors": [ "Jack Rae", "Jonathan J Hunt", "Ivo Danihelka", "Timothy Harley", "Andrew W Senior", "Gregory Wayne", "Alex Graves", "Timothy Lillicrap" ], "title": "Scaling memory-augmented neural networks with sparse reads and writes", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Jack W Rae", "Anna Potapenko", "Siddhant M Jayakumar", "Timothy P Lillicrap" ], "title": "Compressive transformers for long-range sequence modelling", "venue": null, "year": 1911 }, { "authors": [ "Kan Ren", "Jiarui Qin", "Yuchen Fang", "Weinan Zhang", "Lei Zheng", "Weijie Bian", "Guorui Zhou", "Jian Xu", "Yong Yu", "Xiaoqiang Zhu" ], "title": "Lifelong sequential modeling with personalized memorization for user response prediction", "venue": "In Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval,", "year": 2019 }, { "authors": [ "Adam Santoro", "Ryan Faulkner", "David Raposo", "Jack Rae", "Mike Chrzanowski", "Theophane Weber", "Daan Wierstra", "Oriol Vinyals", "Razvan Pascanu", "Timothy Lillicrap" ], "title": "Relational recurrent neural networks", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Minjoon Seo", "Aniruddha Kembhavi", "Ali Farhadi", "Hannaneh Hajishirzi" ], "title": "Bidirectional attention flow for machine comprehension", "venue": "arXiv preprint arXiv:1611.01603,", "year": 2016 }, { "authors": [ "Sainbayar Sukhbaatar", "Jason Weston", "Rob Fergus" ], "title": "End-to-end memory networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Jiaxi Tang", "Ke Wang" ], "title": "Personalized top-n sequential recommendation via convolutional sequence embedding", "venue": "In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining,", "year": 2018 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Caiming Xiong", "Victor Zhong", "Richard Socher" ], "title": "Dynamic coattention networks for question answering", "venue": "arXiv preprint arXiv:1611.01604,", "year": 2016 }, { "authors": [ "Andrew P Yonelinas" ], "title": "The nature of recollection and familiarity: A review of 30 years of research", "venue": "Journal of memory and language,", "year": 2002 }, { "authors": [ "Zhou Yu", "Dejing Xu", "Jun Yu", "Ting Yu", "Zhou Zhao", "Yueting Zhuang", "Dacheng Tao" ], "title": "Activitynetqa: A dataset for understanding complex web videos via question answering", "venue": "In Proceedings of the American Association for Artificial Intelligence,", "year": 2019 }, { "authors": [ "Guorui Zhou", "Na Mou", "Ying Fan", "Qi Pi", "Weijie Bian", "Chang Zhou", "Xiaoqiang Zhu", "Kun Gai" ], "title": "Deep interest evolution network for click-through rate prediction", "venue": "In Proceedings of the American Association for Artificial Intelligence,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "In recent years, the tremendous progress of neural networks has enabled machines to perform reasoning given a query Q and the input contents X , e.g., infer the answer of given questions from the text/video stream in text/video question answering (Seo et al., 2016; Le et al., 2020b), or predict whether a user will click the given item based on the user behavior sequence in recommender systems (Ren et al., 2019; Pi et al., 2019). Studies that achieve top performances at such reasoning tasks usually follow the setting of “end-to-end reasoning”, where the raw input contents X is available at the time of answering Q. In this setting, complex interaction between X and Q can be designed to extract query-relevant information from X with little loss, such as co-attention interaction (Xiong et al., 2016). Though these methods (Seo et al., 2016; Le et al., 2020b) can effectively handle these reasoning tasks, they require unlimited storage resources to hold the original input X . Further, they have to encode the whole input and develop the elaborate interaction from scratch, which are time consuming. This is not acceptable for online services that require instant response such as recommender systems, as the input sequence becomes extremely long (Ren et al., 2019).\nAnother setting of “reasoning after memorization”, which has the restrictions that the raw input X is not available at the time of answering Q, requires the model to first digest X in a streaming manner, i.e., incrementally compress the current subsequence of X into a memory M with very limited capacity (size much smaller than |X|). Under such constraints, in the inference phase, we can only capture query-relevant clues from the limited states M (rather than X) to infer the answer to Q, where the information compression procedure in M is totally not aware of Q, posing great challenges of what to remember in M . This setting is very similar to the daily situation of our human beings, i.e., we may not even know the tasksQ that we will answer in the future when we are experiencing current events, and we also cannot go back to replay when we are solving problems at hand. However, it’s our instincts, which continually process information during our entire life with\nlimited and compressed memory storages, that allow us to recall and draw upon past events to frame our behaviors given the present situations (Moscovitch et al., 2016; Baddeley, 1992).\nCompared to “end-to-end reasoning”, “reasoning after memorization” though may not achieve better precisions at regular tasks with short sequences according to literatures (Park et al., 2020), is naturally a better choice for applications like long-sequence recommendation (Ren et al., 2019) and long-text understanding (Ding et al., 2020). Maintaining M can be incremental with only a small part of inputs at each timestep while inference over M and Q is also tractable for online service. Memory augmented neural networks (MANNs) (Graves et al., 2014; 2016) introduce a write-read memory that already follows the setting of “reasoning after memorization”, which compress the input contents into a fixed-size memory and only read relevant information from the memory during reasoning. However, existing works do not emphasize on using MANNs to perform long-term memory-based reasoning. 
They learn how to maintain the memory only via losses back-propagated from the final answer and do not design specific training targets for long-term memorization, which inevitably leads to the gradual forgetting of early contents (Le et al., 2019a). That is, when dealing with a long-term input stream, the memory may only focus on current contents and naturally neglect long-term clues. Thus, existing MANNs fail to answer queries relevant to early information due to the lack of long-term memorization training.

In this paper, we propose the Continual Memory (CM) to further explore this ability of reasoning after long-term memorization. Specifically, we compress the long-term input stream into the continual memory with a fixed-size capacity and answer subsequent queries based on the memory. To overcome the gradual forgetting of early information and increase the generalization ability of the memorization technique, we develop an extra self-supervised task to recall the recorded history contents from the memory. This is inspired by the fact that human beings can recall details around specific events and distinguish whether a series of events happened in the past, which respectively correspond to two different memory processes revealed in cognitive, neuropsychological, and neuroimaging studies, i.e., recollection and familiarity (Yonelinas, 2002; Moscovitch et al., 2016). Concretely, we design the self-supervised memorization training with item-level and sequence-level objectives. The item-level objective aims to predict the masked items in history fragments, which are sampled from the original input stream with parts of their items masked as the prediction target. This task tries to endow the recollection ability that enables one to relive past episodes. The sequence-level objective tries to distinguish whether a historical fragment ever appeared in the input stream, where we directly sample positive fragments from the early input stream and replace parts of the items in positive ones to form negative fragments. This task enables the familiarity process that recognizes experienced events or stimuli as familiar. We also implement segment-level maintenance of the memory to better capture context clues and improve modeling efficiency. We illustrate the long-term memorization ability of our continual memory via a synthetic task, and evaluate its performance at solving real-world downstream tasks, including long-term text QA, long-term video QA and recommendation with long sequences, showing that it achieves significant advantages over existing MANNs in the "reasoning after memorizing" setting." }, { "heading": "2 RELATED WORKS", "text": "Memory Augmented Neural Networks (MANNs) introduce an external memory to store and access past information via differentiable write-read operators. The Neural Turing Machine (NTM) (Graves et al., 2014) and the Differentiable Neural Computer (DNC) (Graves et al., 2016) are typical MANNs for human-like reasoning under the setting of "reasoning after memorizing", whose inference relies only on a memory with limited capacity rather than starting from the original input data. In this line of research, Rae et al. (2016) adopt sparse memory access to reduce the computational cost. Csordás & Schmidhuber (2019) introduce the key/value separation problem of content-based addressing and adopt a mask for memory operations as a solution. Le et al. (2019b) manipulate both data and programs stored in memory to perform universal computations. And Santoro et al. 
(2018); Le et al. (2020a) consider complex relational reasoning with the information they remember.

However, these works exploit MANNs mainly to help capture long-range dependencies when dealing with input sequences, but do not pay attention to the gradual forgetting issue in MANNs (Le et al., 2019a). They share the same training objective as those methods developed for the setting of "end-to-end reasoning", inevitably incurring gradual forgetting of early contents (Le
By self-supervised memorization training, we try to overcome the gradual forgetting of early information and make it possible to capture clues from any point in the stream. Concretely, based on the continual memory, we develop the history recall model $H_\xi(\cdot)$ to reconstruct masked history fragments and to distinguish positive history fragments from negative ones. Simultaneously, we train the task-specific reason model $R_\Omega(\cdot)$ based on the continual memory. Under the setting of “reasoning after memorizing”, we can build the continual memory $M = G_\Theta(X)$ and then infer the answer for any relevant query $Q$ by $A = R_\Omega(M, Q)$.\nWe process the input stream $X$ at the segment level rather than the item level, i.e., we cut the input sequence into fixed-length segments and memorize them into the memory slots segment by segment. Compared to existing MANNs (Graves et al., 2014; 2016), which store the input stream item by item in order with an RNN-based controller, our segment-level memorization can further capture the bidirectional context of each item and improve modeling efficiency. We denote the $t$-th segment as $X^t = \{x^t_n\}_{n=1}^{N}$ and the current memory as $M^t = \{m^t_k\}_{k=1}^{K}$, where $t-1$ segments have already been recorded in $M^t$. The $x^t_n$ and $m^t_k$ have dimensions $d_x$ and $d_m$, respectively.\nWe first model the $t$-th segment by a Transformer encoder (Vaswani et al., 2017) and obtain the sequence features $F^t = \{f^t_n\}_{n=1}^{N}$ with dimension $d_x$. We then apply a memory update module to write $F^t$ into $M^t$. As shown in Figure 2, we apply a slot-to-item attention to align the sequence features to the slot features in the current memory $M^t$, and then develop a multi-head gate-based update. Concretely, we first calculate the slot-to-item attention matrix, where each element represents the relevance of a slot-item pair, and then learn aligned features $L^t = \{l^t_k\}_{k=1}^{K}$ for each slot, given by\n$\alpha^t_{kn} = w_a^\top \tanh(W^a_1 m^t_k + W^a_2 f^t_n + b^a), \quad \hat{\alpha}^t_{kn} = \frac{\exp(\alpha^t_{kn})}{\sum_{j=1}^{K} \exp(\alpha^t_{jn})}, \quad l^t_k = \sum_{n=1}^{N} \hat{\alpha}^t_{kn} f^t_n, \qquad (1)$\nwhere $W^a_1 \in \mathbb{R}^{d_{model} \times d_m}$, $W^a_2 \in \mathbb{R}^{d_{model} \times d_x}$ and $b^a \in \mathbb{R}^{d_{model}}$ are projection matrices and a bias, and $w_a^\top$ is a row vector. Next, we project the slot features and aligned features into $Z$ subspaces, similar to the multi-head setting in the Transformer (Vaswani et al., 2017), given by\n$m^t_{kz} = W^M_z m^t_k, \quad l^t_{kz} = W^L_z l^t_k, \qquad (2)$\nwhere $W^M_z \in \mathbb{R}^{d_{model}/Z \times d_m}$ and $W^L_z \in \mathbb{R}^{d_{model}/Z \times d_x}$ are projection matrices, and $m^t_{kz}$ and $l^t_{kz}$ are the slot and aligned sub-features in the $z$-th subspace. The $k$-th slot sub-feature $m^t_{kz}$ is then updated with the corresponding sub-feature $l^t_{kz}$ by the $z$-th GRU unit with a $d_{model}/Z$-dimensional hidden state, given by\n$m^{t+1}_{kz} = \mathrm{GRU}_z(m^t_{kz}, l^t_{kz}), \qquad (3)$\nwhere $l^t_{kz}$ is the current input of the $z$-th GRU unit. The new slot feature $m^{t+1}_k$ is aggregated from the $Z$ subspaces by $m^{t+1}_k = W^o \mathrm{Concat}(m^{t+1}_{k1}, \cdots, m^{t+1}_{kZ})$, where $W^o \in \mathbb{R}^{d_m \times d_{model}}$ is the aggregation matrix. After memorizing $T$ segments, we obtain the continual memory $M^T$, which we denote by $M$ for convenience. Note that, at the inference stage, we can build and update the continual memory in real time by $G_\Theta(\cdot)$; thus we do not need the input contents during subsequent reasoning and gain the ability to reason after long-term memorization."
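For concreteness, below is a minimal PyTorch sketch of the memory update in Eqs. (1)-(3). The class layout, tensor shapes and names are our own assumptions rather than a released implementation; for instance, the bias $b^a$ is folded into a linear layer, and batching is omitted.

```python
# A sketch of the segment-level memory update (Eqs. (1)-(3)), assuming
# single-example (unbatched) tensors; names like MemoryUpdate are ours.
import torch
import torch.nn as nn

class MemoryUpdate(nn.Module):
    def __init__(self, d_x, d_m, d_model, num_heads):
        super().__init__()
        assert d_model % num_heads == 0
        self.Z, self.d_sub = num_heads, d_model // num_heads
        # attention parameters of Eq. (1); b^a is folded into W2's bias
        self.W1 = nn.Linear(d_m, d_model, bias=False)
        self.W2 = nn.Linear(d_x, d_model, bias=True)
        self.w_a = nn.Linear(d_model, 1, bias=False)
        # subspace projections of Eq. (2) and per-subspace GRU cells of Eq. (3)
        self.WM = nn.Linear(d_m, d_model, bias=False)
        self.WL = nn.Linear(d_x, d_model, bias=False)
        self.grus = nn.ModuleList(
            nn.GRUCell(self.d_sub, self.d_sub) for _ in range(num_heads))
        self.Wo = nn.Linear(d_model, d_m, bias=False)  # aggregation matrix W^o

    def forward(self, memory, segment):
        # memory: (K, d_m) slot features m^t_k; segment: (N, d_x) encoded f^t_n
        K = memory.size(0)
        # Eq. (1): slot-to-item attention, normalized over the K slots per item
        scores = self.w_a(torch.tanh(
            self.W1(memory).unsqueeze(1) + self.W2(segment).unsqueeze(0)
        )).squeeze(-1)                           # (K, N)
        attn = torch.softmax(scores, dim=0)      # \hat{alpha}^t_{kn}
        aligned = attn @ segment                 # (K, d_x) aligned features l^t_k
        # Eq. (2): project slots and aligned features into Z subspaces
        m_sub = self.WM(memory).view(K, self.Z, self.d_sub)
        l_sub = self.WL(aligned).view(K, self.Z, self.d_sub)
        # Eq. (3): per-subspace GRU update (aligned feature is the GRU input)
        new_sub = [self.grus[z](l_sub[:, z], m_sub[:, z]) for z in range(self.Z)]
        return self.Wo(torch.cat(new_sub, dim=-1))  # (K, d_m) new slots m^{t+1}_k
```

At inference, one would loop this update over incoming segments to maintain $M$ in real time.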
}, { "heading": "3.3 SELF-SUPERVISED MEMORIZATION TRAINING", "text": "After memorizing the input stream, we conduct self-supervised memorization training to alleviate the gradual forgetting of early information by the history recall model Hξ(·) with item-level and sequence-level objectives, where the item-level objective aims to reconstruct the masked positive\nhistory fragments and the sequence-level objective tries to distinguish positive history fragments from negative ones.\nConcretely, we sample the preceding segment from the input stream as the positive history fragment {h+1 , h + 2 , · · · , h + N} with N items, where each item h+∗ corresponds to a feature x∗ in the stream. We then mask 50% of items in the fragment and add an especial item [cls] at the beginning to obtain the masked positive history fragment H+ = {h+[cls], h + 1 , h + [mask1]\n, · · · , h+N}. In order to guarantee that the model Hξ(·) reconstructs the masked fragment by utilizing continual memory rather than only relying on fragment context, we set the mask ratio to 50% instead of 15% in BERT (Devlin et al., 2019). Moreover, we construct the masked negative history fragment H− = {h−[cls], h − 1 , h − [mask1]\n, · · · , h−N} by replacing 50% of unmasked items in the positive fragments, where the replacement items are sampled from other input stream to make the negative fragments distinguishable. Here we construct B masked positive fragments with corresponding B negative ones. Next, we adopt a bidirectional Transformer decoder (Vaswani et al., 2017) without the future masking to model each history fragmentH+/H−. In the decoder, each history item can interact with all other items. The continual memory M is input to the “encoder-decoder multi-head attention sub-layer” in each decoder layer, where the queries come from the previous decoder layer and the memory M are regarded as the keys and values. This allows each item in the decoder to attend over all slot features in the memory. Finally, we obtain the features {r+/−[cls] , r +/− 1 , r +/− [mask1]\n, · · · , r+/−N } where each r+/−∗ has the dimension dx.\nItem-Level Reconstruction. We first predict the masked items of positive history fragments to build the item-level loss. Considering there are too many item types, we apply the contrastive training He et al. (2020); Chen et al. (2020) based on the ground truth and other sampled items. For the N/2 masked items, we compute the item-level loss by\nLitem = − 2\nN N/2∑ i=1 log exp(φ(r+[maski],yi)) exp(φ(r+[maski],yi)) + ∑J j=1 exp(φ(r + [maski] ,yj)) , (4)\nwhere yi ∈ Rdx is the feature of ground truth of the i-th masked item, yj ∈ Rdx is the feature of sampled items and φ(·) is the inner product. Sequence-Level Prediction. Next, we predict whether the masked history fragment ever appears in the current input stream, i.e. distinguish positive history segment from negative ones. Concretely, we project the feature r+/−[cls] into a confident score s +/− and develop the sequence-level loss by\nLseq = − B∑ i=1 log(s+i ) + B∑ j=1 log(1− s−j ), (5)\nwhere B is the number of positive and negative history fragments." }, { "heading": "3.4 TASK-SPECIFIC REASONING TRAINING", "text": "Besides self-supervised memorization training, we simultaneously develop task-specific reasoning training. For several downstream tasks, we propose different task-specific reason modelRΩ(M,Q) for any query Q based on continual memory M. Here we adopt the simple and mature components in the reason model for fair comparison. 
, { "heading": "3.4 TASK-SPECIFIC REASONING TRAINING", "text": "Besides the self-supervised memorization training, we simultaneously develop task-specific reasoning training. For each downstream task, we design a task-specific reason model $R_\Omega(M, Q)$ for any query $Q$ based on the continual memory $M$. For fair comparison, we adopt simple and mature components in the reason model. The details are introduced in Appendix A.1. Briefly, we first learn the query representation $q$ by a task-specific encoder and then perform multi-hop attention-based reasoning. Finally, we obtain the reason loss $\mathcal{L}_r$ from $R_\Omega(M, Q)$. Eventually, we combine the memorization and reason losses to train our model, given by\n$\mathcal{L}_{cm} = \lambda_1 \mathcal{L}_{item} + \lambda_2 \mathcal{L}_{seq} + \lambda_3 \mathcal{L}_r, \qquad (6)$\nwhere $\lambda_1$, $\lambda_2$ and $\lambda_3$ adjust the balance of the three losses." }, { "heading": "4 EXPERIMENTS", "text": "We validate our continual memory on synthetic data and several downstream tasks, including long-term text question answering, long-term video question answering and recommendation with long sequences." }, { "heading": "4.1 EXPERIMENT SETTING", "text": "Model Setting. We first introduce the common model settings for all downstream tasks. We set the number of layers of the Transformer encoder and the bidirectional Transformer decoder to 3. The number of heads in multi-head attention is set to 4, and the subspace number $Z$ in the memory update module is also set to 4. We set $\lambda_1$, $\lambda_2$ and $\lambda_3$ to 1.0, 0.5 and 1.0, respectively. The number $B$ of history fragments is set to 5. During training, we apply an Adam optimizer (Duchi et al., 2011) to minimize the multi-task loss $\mathcal{L}_{cm}$, where the initial learning rate is set to 0.001.\nBaseline. We compare our continual memory with end-to-end reasoning methods and memory-based reasoning approaches under the “reasoning after memorizing” setting. The end-to-end baselines differ across downstream tasks, and the memory-based baselines are mainly DNC (Graves et al., 2016), NUTM (Le et al., 2019b), STM (Le et al., 2020a) and DMSDNC (Park et al., 2020). For fair comparison, we modify the reason module of the memory-based baselines to be consistent with our continual memory, i.e., we conduct multi-hop attention-based reasoning on the built memory. The number of memory slots in these baselines is also set to $K$. Besides, we set the core number of NUTM to 4, the query number of STM to 8 and the memory block number of DMSDNC to 2." }, { "heading": "4.2 SYNTHETIC EXPERIMENT", "text": "Synthetic Dataset. We first introduce the setting of the synthetic task. Here we abstract the general concepts of reasoning tasks (QA/VQA/recommendation) to construct the synthetic task. We define the input sequence as a Stream and each item in the sequence as a Fact, where the stream and facts correspond, for example, to the text sequence and word tokens in text QA. We set the number of fact types to $R_f$; that is, each fact can be denoted by an $R_f$-dimensional one-hot vector, and the fact feature is obtained by a trainable embedding layer. Different facts can correspond to different words in text QA. Considering that reasoning tasks often need to retrieve vital clues related to the query from the given input and then infer the answer, we define the query-relevant facts in the stream as the Evidence and regard the Evidence-Query-Answer triple as the Logic Chain. As shown in Figure 3, given a stream and a query, we need to infer the answer if the stream contains the evidence. Specifically, we set the number of query types to $R_q$, and each query can be denoted by an $R_q$-dimensional one-hot vector. For each query, we set the number of answer types to $R_a$. That is, there are $R_q \cdot R_a$ query-answer pairs in total, and we need to synthesize $R_q \cdot R_a$ corresponding evidences, one per pair. Concretely, each evidence is denoted by a sequence of facts $\{fact_1, \cdots, fact_{R_c}\}$, which appear in order in the input stream, where $R_c$ is the length of the evidence.
During the evidence synthesis, we first define 20 different groups and uniformly split the facts and queries into the 20 groups. Next, if a query belongs to group $k$, we randomly sample $R_c$ facts from that group as the evidence, and then assign the evidence to a query-answer pair to generate a fixed logic chain.\nEventually, we synthesize 400 data samples for each logic chain to train the models. Each sample contains an input stream with $R_l$ items, a query and an answer. Concretely, we first sample $R_l$ facts as a sequence and then place the evidence in the sequence, where we guarantee that each stream-query pair corresponds to a unique answer. Moreover, we synthesize two datasets with consecutive and discrete evidence, respectively. The facts of consecutive evidence appear contiguously in the stream, while the facts of discrete evidence are distributed over different parts of the stream; we still keep these facts within an interval that does not exceed 20% of the input stream length.\nBaselines and Model Details. The Directly Reason method first models the input stream by an RNN to obtain the stream feature, then concatenates the stream feature with the query feature and predicts the answer by a linear layer. The Multi-Hop Reason method further applies multiple attention layers after the RNN-based stream modeling to capture query-relevant clues. In the main experiment, we set the dataset hyper-parameters $R_f$, $R_l$, $R_q$, $R_a$ and $R_c$ to 400, 200, 40, 30, and 5, respectively.\nTable 1: Performance Comparisons on Synthetic Data. $R_f$=400, $R_l$=200, $R_q$=40, $R_a$=30, $R_c$=5.\nMethod | Setting | Discrete Evidence (Early / Later / All) | Consecutive Evidence (Early / Later / All)\nDirectly Reason | End-to-End | 9.45 / 9.32 / 9.39 | 13.57 / 13.41 / 13.49\nMulti-Hop Reason | End-to-End | 32.16 / 33.42 / 32.29 | 34.38 / 34.50 / 34.44\nDNC | Memory-Based | 13.37 / 22.54 / 17.95 | 20.56 / 26.59 / 23.58\nNUTM | Memory-Based | 17.84 / 23.59 / 20.72 | 24.31 / 29.71 / 27.01\nSTM | Memory-Based | 17.90 / 23.47 / 20.68 | 23.55 / 29.64 / 26.59\nDMSDNC | Memory-Based | 18.17 / 24.21 / 21.19 | 24.92 / 30.74 / 27.83\nCM (Only Reason) | Memory-Based | 18.71 / 25.13 / 21.92 | 25.79 / 31.38 / 28.63\nCM (Full) | Self-Sup. Memory | 22.14 / 24.98 / 23.56 | 28.42 / 31.71 / 30.07\nCM (Only Litem) | Self-Sup. Memory | 21.79 / 24.46 / 23.13 | 27.88 / 31.12 / 29.50\nCM (Only Lseq) | Self-Sup. Memory | 19.75 / 23.81 / 21.78 | 26.63 / 30.25 / 28.44\nCM (Single Head) | Self-Sup. Memory | 21.24 / 24.12 / 22.68 | 27.79 / 31.14 / 29.47\n[Figure 4: Effect of the Memory Slot Number; x-axis: slot number $K$ (5 to 25), y-axis: Acc (%), curves: Early and Later.]\n[Figure 5: Performance on Synthetic Data with Longer Stream and Evidence; x-axis: evidence location in the stream (0~25% to 75~100%), y-axis: Acc (%); methods: Multi-Hop Reason, DNC, NUTM, CM (Only Reason), CM (Full).]\nThe facts of the evidence may appear at different stages of the input stream: Early means the facts appear in the preceding 50% of the stream, and Later means they appear in the subsequent 50%. For our continual memory, we set $d_x$, $d_m$ and $d_{model}$ to 128. The number $K$ of memory slots and the length $N$ of segments are set to 20 and 10, respectively, and we sample all other facts as negative items in $\mathcal{L}_{item}$.\nEvaluation Results. Table 1 reports the performance comparison between our method and the baselines, where CM (Full) is the full model with both memorization and reasoning training and CM (Only Reason) only employs the task-specific reasoning training. Overall, the end-to-end methods have close early and later performance, but the memory-based approaches DNC, NUTM, STM, DMSDNC and CM (Only Reason) suffer poor early performance due to gradual forgetting.
With the self-supervised memorization training, our CM (Full) significantly improves the early accuracy and achieves the best memory-based reasoning performance. This suggests that our proposed memorization training can alleviate the gradual forgetting of early information. Besides, CM (Only Reason) outperforms the other memory-based methods, which indicates that our continual memory machine can better memorize long-term information even without the memorization training. Moreover, the Directly Reason approach achieves the worst performance while the Multi-Hop Reason method has high accuracy, which demonstrates that the performance of end-to-end methods mainly depends on the complicated interaction between the input contents and the queries.\nAblation Study. We next perform ablation studies on the memorization losses and the multi-head update strategy. Concretely, we first remove the sequence-level or item-level loss to produce two ablation models, CM (Only Litem) and CM (Only Lseq). As shown in Table 1, CM (Full) outperforms both ablation models on all metrics, which illustrates that the two self-supervised losses are both helpful for long-term memorization. Moreover, CM (Only Litem) achieves better accuracy than CM (Only Lseq), demonstrating the importance of the item-level loss. Next, we discard the multi-head setting from the memory update module to generate the ablation model CM (Single Head). From the results, we find that CM (Single Head) suffers a performance degradation, indicating that the multi-head update improves the memorization ability of the continual memory.\nHyper-Parameter Analysis. We then explore the effect of the slot number $K$. We set $K$ to [5, 10, 15, 20, 25] and display the results in Figure 4. The model performance gradually improves as the slot number increases and reaches a plateau when the slot number is set to 25. Besides, the early performance improves more significantly than the later performance.\nLonger Stream and Evidence. To further validate the characteristics of our continual memory, we synthesize a more complicated dataset where $R_f$, $R_l$, $R_q$, $R_a$ and $R_c$ are set to 2000, 1000, 40, 30, and 10, respectively; that is, the dataset contains longer input streams and more complex evidence. We set the number $K$ of memory slots and the length $N$ of segments to 20. As shown in Figure 5, we display the reasoning performance of each model when the facts in the evidence appear at different locations of the input stream; for example, 25~50% means the facts appear between 25% and 50% of the stream. We observe that our CM (Full) obtains an obvious performance improvement over the memory-based methods in the 0~25% and 25~50% stages, but only a slight improvement in the 75~100% stage. This verifies that our self-supervised training is beneficial for long-term memorization." }, { "heading": "4.3 LONG-TERM TEXT QUESTION ANSWERING", "text": "Dataset, Baseline and Model Details. We apply two multi-choice datasets, NarrativeQA (Kočiskỳ et al., 2018) and TVQA (Lei et al., 2018), for long-term text question answering. Both datasets provide answer candidates and are thus suitable for the “reasoning after memorizing” setting. Note that the TVQA dataset also provides video contents as input, but we only use the subtitles of the videos. For NarrativeQA, the AS Reader (Kadlec et al., 2016) applies a pointer network to generate the answer and E2E-MN (Sukhbaatar et al., 2015) employs the end-to-end memory network to conduct multi-hop reasoning.
For TVQA, the Multi-Stream (Lei et al., 2018) method develops query-subtitle interactions for reasoning, and MSAN (Kim et al., 2020) first localizes the clues relevant to the question and then predicts the answer. For our continual memory, we set $d_x$, $d_m$ and $d_{model}$ to 256. The number $K$ of memory slots and the length $N$ of segments are both set to 20, and we sample all other words as negative items in $\mathcal{L}_{item}$.\nEvaluation Results. As shown in Table 2 and Table 3, the results are similar to the synthetic experiments, i.e., our continual memory obtains the best performance among the memory-based approaches. CM (Full) achieves a further improvement over CM (Only Reason), demonstrating that the self-supervised memorization training can boost the reasoning ability of the continual memory. Moreover, our proposed method achieves accuracy close to the strong end-to-end method E2E-MN on NarrativeQA, but still shows a large performance gap to MSAN on TVQA. This is mainly because the MSAN method applies two-stage reasoning with elaborate interactions between texts and queries." }, { "heading": "4.4 LONG-TERM VIDEO QUESTION ANSWERING", "text": "Dataset, Baseline and Model Details. The ActivityNet-QA dataset (Yu et al., 2019) contains 5,800 videos from ActivityNet (Caba Heilbron et al., 2015). The average video duration of this dataset is about 180s, the longest among VQA datasets. We compare our method with three basic end-to-end baselines, E-VQA, E-MN and E-SA from (Yu et al., 2019), and the state-of-the-art end-to-end model HCRN (Le et al., 2020b). For our continual memory, we set $d_x$, $d_m$ and $d_{model}$ to 256. The number $K$ of memory slots and the length $N$ of segments are both set to 20, and in $\mathcal{L}_{item}$ we select 30 other frame features from the video as the sampled items.\nEvaluation Results. As shown in Table 4, CM (Only Reason) obtains better performance than the other memory-based models DNC and NUTM, and CM (Full) achieves a further 1.1% absolute improvement, showing the effectiveness of our model design and self-supervised memorization training. Moreover, our method outperforms the basic end-to-end baselines and is only slightly worse than the state-of-the-art method HCRN. This suggests that our continual memory can reduce the gap between memory-based and end-to-end reasoning paradigms." }, { "heading": "4.5 LIFELONG SEQUENCE RECOMMENDATION", "text": "Dataset, Baseline and Model Details. Lifelong sequence recommendation aims to predict whether the user will click a given item based on long behavior sequences, so it can be regarded as a long-term reasoning task. The XLong dataset (Ren et al., 2019) is sampled from the click logs on Alibaba; the length of the historical behavior sequences in this dataset is 1000. We compare our method with four end-to-end methods, GRU4REC (Hidasi et al., 2015), Caser (Tang & Wang, 2018), DIEN (Zhou et al., 2019) and RUM (Chen et al., 2018), and two memory-based methods, HPMN (Ren et al., 2019) and MIMN (Pi et al., 2019), where the HPMN method builds the memory by hierarchical RNNs and the MIMN method introduces a write-read memory as in (Graves et al., 2014). For our continual memory, we set $d_x$, $d_m$ and $d_{model}$ to 64. The number $K$ of memory slots and the length $N$ of segments are both set to 20, and in $\mathcal{L}_{item}$ we select 200 items from the large item set as the sampled items.\nEvaluation Results. As shown in Table 5, our CM (Full) method not only outperforms the other memory-based approaches, but also achieves better performance than the state-of-the-art end-to-end baselines.
This is because our continual memory can aggregate and organize long-term interests from user behavior sequences, and these interests can be activated during next-item prediction, whereas the end-to-end approaches may fail to learn such informative interest representations." }, { "heading": "5 CONCLUSIONS", "text": "In this paper, we propose the continual memory to explore the ability of reasoning after long-term memorization. We compress the input stream into the continual memory with self-supervised memorization training and task-specific reasoning training. Based on the continual memory, we can capture the clues for subsequent queries and give correct responses. Extensive experiments on a series of downstream tasks verify the effectiveness of the continual memory. For future work, we will further explore the properties of the continual memory." }, { "heading": "A APPENDIX", "text": "A.1 TASK-SPECIFIC REASON MODELS\nIn this section, we introduce the task-specific reason model $R_\Omega(M, Q)$, where $M$ is the built memory and $Q$ is the given query. Specifically, we first model the query feature $q \in \mathbb{R}^{d_{model}}$ by a task-specific encoder. For the synthetic experiments, the given query $Q$ is a one-hot vector and we directly obtain $q$ by an embedding layer. For the long-term text and video QA tasks, the query $Q$ is a sentence and we apply a bidirectional GRU to learn the sentence feature $q$. As for the recommendation task with long sequences, the given query is a target item with a unique id, and we likewise learn an embedding layer to obtain the feature $q$.\nNext, we develop multi-hop attention-based reasoning on the continual memory $M$. Concretely, at each step $c$, we capture the important memory feature $e^c \in \mathbb{R}^{d_m}$ from $M$ based on the current query $q^{c-1}$ using an attention method, given by\n$\gamma^c_k = w_c^\top \tanh(W^c_1 q^{c-1} + W^c_2 m_k + b^c), \quad \hat{\gamma}^c_k = \frac{\exp(\gamma^c_k)}{\sum_{j=1}^{K} \exp(\gamma^c_j)}, \quad e^c = \sum_{k=1}^{K} \hat{\gamma}^c_k m_k,$\nwhere $W^c_1 \in \mathbb{R}^{d_{model} \times d_{model}}$, $W^c_2 \in \mathbb{R}^{d_{model} \times d_m}$ and $b^c \in \mathbb{R}^{d_{model}}$ are projection matrices and a bias, and $w_c^\top$ is a row vector. We then produce the next query $q^c = W^q[e^c; q^{c-1}] \in \mathbb{R}^{d_{model}}$, where $W^q \in \mathbb{R}^{d_{model} \times (d_m + d_{model})}$ is a projection matrix and $q^0$ is the original $q$. After $C$ steps, we obtain the reason feature $q^C$. The hyper-parameter $C$ is set to 2, 2, 2 and 1 for the synthetic experiments, text QA, video QA and sequence recommendation, respectively.\nWe then design the final reasoning layer for the different tasks. For the synthetic experiments and long-term video QA with fixed answer sets, we directly apply a classification layer to select the answer and develop the cross-entropy loss $\mathcal{L}_r$. Since the text QA datasets NarrativeQA and TVQA provide different candidate answers for each query, we first model each candidate feature $a_i$ by another bidirectional GRU and then concatenate $a_i$ with $q^C$ to predict the confidence score for each candidate; we then likewise learn the cross-entropy loss $\mathcal{L}_r$ based on the answer probabilities. As for the sequence recommendation task, we directly compute a confidence score from $q^C$ by a linear layer and build the binary loss function $\mathcal{L}_r$.
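A minimal sketch of this multi-hop reasoning loop is given below; we share the attention parameters across hops for brevity (the equations index them by the hop $c$), and all names are our own assumptions.

```python
# A sketch of the multi-hop attention-based reasoning of Appendix A.1.
import torch
import torch.nn as nn

class MultiHopReasoner(nn.Module):
    def __init__(self, d_m, d_model, hops):
        super().__init__()
        self.hops = hops
        self.W1 = nn.Linear(d_model, d_model, bias=False)
        self.W2 = nn.Linear(d_m, d_model, bias=True)   # b^c folded into the bias
        self.w_c = nn.Linear(d_model, 1, bias=False)
        self.Wq = nn.Linear(d_m + d_model, d_model, bias=False)

    def forward(self, memory, q):
        # memory: (K, d_m) slot features; q: (d_model,) initial query feature q^0
        for _ in range(self.hops):
            scores = self.w_c(torch.tanh(
                self.W1(q).unsqueeze(0) + self.W2(memory))).squeeze(-1)  # (K,)
            attn = torch.softmax(scores, dim=0)        # \hat{gamma}^c_k
            e = attn @ memory                          # memory readout e^c, (d_m,)
            q = self.Wq(torch.cat([e, q], dim=-1))     # next query W^q [e^c; q^{c-1}]
        return q  # reason feature q^C, fed to the task-specific output layer
```

A task-specific head (classification layer, candidate scorer, or click predictor) then consumes the returned $q^C$.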
A.2 HYPER-PARAMETER ANALYSIS OF SEGMENT LENGTH\nWe explore the effect of the segment length $N$ in the main experiment of the synthetic task. We set $N$ to [5, 10, 15, 20] and display the results in Figure 6. When the segment length is set to 5, the model performs poorly, while the performance is relatively stable as the length varies between 10 and 20. This is because when the segment is too short, important evidence may be scattered across different segments, and the model cannot effectively capture the evidence and infer the answer." } ]
2020
CONTINUAL MEMORY: CAN WE REASON AFTER LONG-TERM MEMORIZATION?
SP:722584f20a74efbfb6e50fb795aa33a39d73f13b
[ "This paper deals with network quantization. It proposes Semi-Relaxed Quantization (SRQ) that uses a multi-class straight-through estimator to effectively reduce the bias and variance, along with a new regularization technique, DropBits that replaces dropout regularization to randomly drop the bits. Extensive experiments are conducted to validate our method on various benchmark datasets and network architectures.", "This work presents 1) Semi-Relaxed Quantization (SRQ), a method that targets learning low-bit neural networks, 2) DropBits, a method that performs dropout-like regularization on the bit width of the quantizers with an option to also automatically optimise the bit-width per layer according to the data, and 3) quantised lottery ticket hypothesis. SRQ is an extension of Relaxed Quantization (RQ), which is prior work, in two ways; firstly the authors replace the sampling from the concrete relaxation during training to deterministically selecting the mode (which is non-differentiable) and, secondly, they propose a specific straight-through gradient estimator (STE) than only propagates the gradient backwards for the elements that were selected in the forward pass. DropBits is motivated from the perspective of reducing the bias of the STE gradient estimator by randomly dropping grid points associated with a specific bit-width and then renormalising the SRQ distribution over the grid. This essentially induces stochasticity in the sampling distribution for the quantised value (which was removed before by selecting the mode in SRQ). The authors further extend DropBits in a way that allows for learning the drop probabilities for each bit-width, thus allowing for learning mixed-precision networks. Finally the authors postulate the quantised lottery ticket hypothesis, which refers to that “one can find the learned bit-width network which can perform better than the network with the same but fixed bit-widths from scratch”." ]
Network quantization, which aims to reduce the bit-lengths of the network weights and activations, has emerged as one of the key ingredients for reducing the size of neural networks for deployment to resource-limited devices. In order to overcome the difficulty of transforming continuous activations and weights to discrete ones, a recent study called Relaxed Quantization (RQ) [Louizos et al. 2019] successfully employs the popular Gumbel-Softmax, which allows this transformation with efficient gradient-based optimization. However, RQ with this Gumbel-Softmax relaxation still suffers from large quantization error due to the high temperature required for low gradient variance, hence showing suboptimal performance. To resolve the issue, we propose a novel method, Semi-Relaxed Quantization (SRQ), that uses a multi-class straight-through estimator to effectively reduce the quantization error, along with a new regularization technique, DropBits, that adapts dropout regularization to randomly drop bits instead of neurons so as to reduce the distribution bias of the multi-class straight-through estimator in SRQ. As a natural extension of DropBits, we further introduce a way of learning heterogeneous quantization levels to find the proper bit-length for each layer using DropBits. We experimentally validate our method on various benchmark datasets and network architectures, and also support a new hypothesis for quantization: learning heterogeneous quantization levels outperforms using the same but fixed quantization levels from scratch.
[]
[ { "authors": [ "Martı́n Abadi", "Paul Barham", "Jianmin Chen", "Zhifeng Chen", "Andy Davis", "Jeffrey Dean", "Matthieu Devin", "Sanjay Ghemawat", "Geoffrey Irving", "Michael Isard" ], "title": "Tensorflow: A system for large-scale machine learning", "venue": "In USENIX Symposium on Operating Systems Design and Implementation,", "year": 2016 }, { "authors": [ "Jan Achterhold", "Jan Mathias Koehler", "Anke Schmeink", "Tim Genewein" ], "title": "Variational network quantization", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Yoshua Bengio", "Nicholas Leonard", "Aaron Courville" ], "title": "Estimating or propagating gradients through stochastic neurons for conditional computation", "venue": "arXiv preprint arXiv:1308.3432,", "year": 2013 }, { "authors": [ "Jungwook Choi", "Zhuo Wang", "Swagath Venkataramani", "Pierce I-Jen Chuang", "Vijayalakshmi Srinivasan", "Kailash Gopalakrishnan" ], "title": "PACT: parameterized clipping activation for quantized neural networks", "venue": "CoRR, abs/1805.06085,", "year": 2018 }, { "authors": [ "Junyoung Chung", "Sungjin Ahn", "Yoshua Bengio" ], "title": "Hierarchical multiscale recurrent neural networks", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Matthieu Courbariaux", "Yoshua Bengio", "Jean-Pierre David" ], "title": "Binaryconnect: Training deep neural networks with binary weights during propagations", "venue": "Advances in Neural Information Processing Systems", "year": 2015 }, { "authors": [ "Zhen Dong", "Zhewei Yao", "Amir Gholami", "Michael W. Mahoney", "Kurt Keutzer" ], "title": "Hawq: Hessian aware quantization of neural networks with mixed-precision", "venue": "In The IEEE International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Steven K. Esser", "Jeffrey L. McKinstry", "Deepika Bablani", "Rathinakumar Appuswamy", "Dharmendra S. Modha" ], "title": "Learned step size quantization", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Jonathan Frankle", "Michael Carbin" ], "title": "The lottery ticket hypothesis: Finding sparse, trainable neural networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Joshua Fromm", "Shwetak Patel", "Matthai Philipose" ], "title": "Heterogeneous bitwidth binarization in convolutional neural networks", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Xue Geng", "Jie Lin", "Bin Zhao", "Anmin Kong", "Mohamed M. Sabry Aly", "Vijay Chandrasekhar" ], "title": "Hardware-aware softmax approximation for deep neural networks", "venue": "Computer Vision – ACCV", "year": 2018 }, { "authors": [ "Ruihao Gong", "Xianglong Liu", "Shenghu Jiang", "Tianxiang Li", "Peng Hu", "Jiazhen Lin", "Fengwei Yu", "Junjie Yan" ], "title": "Differentiable soft quantization: Bridging full-precision and low-bit neural networks", "venue": "In The IEEE International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "K. He", "X. Zhang", "S. Ren", "J. 
Sun" ], "title": "Deep residual learning for image recognition", "venue": "In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Sambhav R Jain", "Albert Gural", "Michael Wu", "Chris H Dick" ], "title": "Trained quantization thresholds for accurate and efficient fixed-point inference of deep neural networks", "venue": null, "year": 1903 }, { "authors": [ "Eric Jang", "Shixiang Gu", "Ben Poole" ], "title": "Categorical reparameterization with gumbel-softmax", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Sangil Jung", "Changyong Son", "Seohyung Lee", "Jinwoo Son", "Jae-Joon Han", "Youngjun Kwak", "Sung Ju Hwang", "Changkyu Choi" ], "title": "Learning to quantize deep networks by optimizing quantization intervals with task loss", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Fengfu Li", "Bo Zhang", "Bin Liu" ], "title": "Ternary weight networks", "venue": "In NIPS Workshop on EMDNN,", "year": 2016 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "Decoupled weight decay regularization", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Qian Lou", "Feng Guo", "Minje Kim", "Lantao Liu", "Lei Jiang" ], "title": "Autoq: Automated kernel-wise neural network quantization", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Christos Louizos", "Max Welling", "Diederik P. Kingma" ], "title": "Learning sparse neural networks through l0 regularization", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Christos Louizos", "Matthias Reisser", "Tijmen Blankevoort", "Efstratios Gavves", "Max Welling" ], "title": "Relaxed quantization for discretized neural networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Chris J. 
Maddison", "Andriy Mnih", "Yee Whye Teh" ], "title": "The concrete distribution: A continuous relaxation of discrete random variables", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Adam Paszke", "Sam Gross", "Francisco Massa", "Adam Lerer", "James Bradbury", "Gregory Chanan", "Trevor Killeen", "Zeming Lin", "Natalia Gimelshein", "Luca Antiga" ], "title": "Pytorch: An imperative style, high-performance deep learning library", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Mohammad Rastegari", "Vicente Ordonez", "Joseph Redmon", "Ali Farhadi" ], "title": "Xnor-net: Imagenet classification using binary convolutional neural networks", "venue": "Computer Vision – ECCV", "year": 2016 }, { "authors": [ "Mark Sandler", "Andrew Howard", "Menglong Zhu", "Andrey Zhmoginov", "Liang-Chieh Chen" ], "title": "Mobilenetv2: Inverted residuals and linear bottlenecks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Oran Shayer", "Dan Levi", "Ethan Fetaya" ], "title": "Learning discrete weights using the local reparameterization trick", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "arXiv preprint arXiv:1409.1556,", "year": 2014 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: a simple way to prevent neural networks from overfitting", "venue": "Journal of Machine Learning Research,", "year": 2014 }, { "authors": [ "Stefan Uhlich", "Lukas Mauch", "Fabien Cardinaux", "Kazuki Yoshiyama", "Javier Alonso Garcia", "Stephen Tiedemann", "Thomas Kemp", "Akira Nakamura" ], "title": "Mixed precision dnns: All you need is a good parametrization", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Kuan Wang", "Zhijian Liu", "Yujun Lin", "Ji Lin", "Song Han" ], "title": "Haq: Hardware-aware automated quantization with mixed precision", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Penghang Yin", "Shuai Zhang", "Jiancheng Lyu", "Stanley Osher", "Yingyong Qi", "Jack Xin" ], "title": "Blended coarse gradient descent for full quantization of deep neural networks", "venue": "arXiv preprint arXiv:1808.05240,", "year": 2018 }, { "authors": [ "Dongqing Zhang", "Jiaolong Yang", "Dongqiangzi Ye", "Gang Hua" ], "title": "Lq-nets: Learned quantization for highly accurate and compact deep neural networks", "venue": "In European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Xiandong Zhao", "Ying Wang", "Xuyi Cai", "Cheng Liu", "Lei Zhang" ], "title": "Linear symmetric quantization of neural networks for low-precision integer hardware", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Shuchang Zhou", "Yuxin Wu", "Zekun Ni", "Xinyu Zhou", "He Wen", "Yuheng Zou" ], "title": "Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients", "venue": "CoRR, abs/1606.06160,", "year": 2016 }, { "authors": [ "Esser" ], "title": "2020) both of which utilize the full-precision first and last layer as well as employ their own ResNet-18 pretrained model performing much 
higher than one available from the official PyTorch repository. Table 4: Top-1/Top-5 error (%) with ResNet-18 and MobileNetV2 on ImageNet using 4-bit. † denotes the use of the full-precision first or last layer, and ", "venue": "QIL (Jung et al.,", "year": 2019 }, { "authors": [ "flipped horizontally" ], "title": "The test set is evaluated without any padding or cropping. Note that a batch normalization layer is put after every convolutional layer in VGG-7, but not in LeNet-5", "venue": null, "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep neural networks have achieved great success in various computer vision applications such as image classification, object detection/segmentation, pose estimation, action recognition, and so on. However, state-of-the-art neural network architectures require too much computation and memory to be deployed to resource-limited devices. Therefore, researchers have been exploring various approaches to compress deep neural networks to reduce their memory usage and computation cost.\nIn this paper, we focus on neural network quantization, which aims to reduce the bit-width of a neural network while maintaining competitive performance with a full-precision network. It is typically divided into two groups, uniform and heterogeneous quantization. In uniform quantization, one of the simplest methods is to round the full-precision weights and activations to the nearest grid points: x̂ = αb xα + 1 2c where α controls the grid interval size. However, this naı̈ve approach incurs severe performance degradation on large datasets. Recent network quantization methods tackle this problem from different perspectives. In particular, Relaxed Quantization (RQ) (Louizos et al., 2019) employs Gumbel-Softmax (Jang et al., 2017; Maddison et al., 2017) to force weights and activations to be located near quantization grids with high density. Louizos et al. (2019) notice the importance of keeping the gradient variance small, which leads them to use high Gumbel-Softmax temperatures in RQ. However, such high temperatures may cause a large quantization error, thus preventing quantized networks from achieving comparable performance to full-precision networks.\nTo resolve this issue, we first propose Semi-Relaxed Quantization (SRQ) that uses the mode of the original categorical distribution in the forward pass, which induces small quantization error. It is clearly distinguished from Gumbel-Softmax choosing argmax among the samples from the concrete distribution. To cluster weights cohesively around quantization grid points, we devise a multi-class straight-through estimator (STE) that allows for efficient gradient-based optimization as well. As this STE is biased like a traditional one (Bengio et al., 2013) for the binary case, we present a novel technique, DropBits to reduce the distribution bias of the multi-calss STE in SRQ. Motivated from Dropout (Srivastava et al., 2014), DropBits drops bits rather than neurons/filters to train low-bit neural networks under SRQ framework.\nIn addition to uniform quantization, DropBits allows for heterogeneous quantization, which learns different bit-width per parameter/channel/layer by dropping redundant bits. DropBits with learnable bit-drop rates adaptively finds out the optimal bit-width for each group of parameters, possibly further reducing the overall bits. 
In contrast to recent studies (Wang et al., 2019; Uhlich et al., 2020) on heterogeneous quantization, in which almost all layers possess at least 4 bits (up to 10 bits), our method yields much more resource-efficient low-bit neural networks with at most 4 bits for all layers.\nWith trainable bit-widths, we also articulate a new hypothesis for quantization: one can find a learned bit-width network (termed a ‘quantized sub-network’) which performs better than a network trained from scratch with the same but fixed bit-widths.\nOur contribution is threefold:\n• We propose a new quantization method, Semi-Relaxed Quantization (SRQ), which introduces the multi-class straight-through estimator to reduce the quantization error of Relaxed Quantization when transforming continuous activations and weights to discrete ones. We further present a novel technique, DropBits, to reduce the distribution bias of the multi-class straight-through estimator in SRQ.\n• Extending the DropBits technique, we propose a more resource-efficient heterogeneous quantization algorithm to curtail redundant bit-widths across groups of weights and/or activations (e.g. across layers), and verify that our method is able to find ‘quantized sub-networks’.\n• We conduct extensive experiments on several benchmark datasets to demonstrate the effectiveness of our method. We achieve new state-of-the-art results for ResNet-18 and MobileNetV2 on the ImageNet dataset when all layers are uniformly quantized." }, { "heading": "2 RELATED WORK", "text": "While our goal in this work is to obtain an extremely low-bit neural network for both weights and activations, here we broadly discuss existing quantization techniques with various goals and settings. BinaryConnect (Courbariaux et al., 2015) first attempted to binarize weights to ±1 by employing deterministic or stochastic operations. To obtain better performance, various studies (Rastegari et al., 2016; Li et al., 2016; Achterhold et al., 2018; Shayer et al., 2018) have been conducted on binarization and ternarization. To reduce the hardware cost of inference, Geng et al. (2019) proposed a softmax approximation via a look-up table. Although these works effectively decrease the model size and raise the accuracy, they are limited to quantizing weights, with activations remaining in full precision. To take full advantage of quantization at run time, it is necessary to quantize activations as well.\nResearchers have recently focused more on simultaneously quantizing both weights and activations (Zhou et al., 2016; Yin et al., 2018; Choi et al., 2018; Zhang et al., 2018; Gong et al., 2019; Jung et al., 2019; Esser et al., 2020). XNOR-Net (Rastegari et al., 2016), the beginning of this line of work, exploits the efficiency of XNOR and bit-counting operations. QIL (Jung et al., 2019) also quantizes weights and activations by introducing parametrized learnable quantizers that can be trained jointly with the weight parameters. Esser et al. (2020) recently presented a simple technique to approximate the gradients with respect to the grid interval size to improve QIL. Nevertheless, these methods do not quantize the first or last layer, which leaves room to improve power efficiency.\nFor ease of deployment in practice, it is inevitable to quantize the weights and activations of all layers, which is the most challenging setting. Louizos et al. (2019) proposed to cluster weights by using Gumbel-Softmax, but this has drawbacks, as we will discuss in Section 3.2. Jain et al.
(2019) presented efficient fixed-point implementations by constraining the grid interval size to a power of two, but they quantized the first and last layer to at least 8-bit. Zhao et al. (2020) proposed to quantize the grid interval size and the network parameters in batch normalization for the deployment of quantized models on low-bit integer hardware, but this requires an accelerator dedicated to this approach.\nAs another line of work, Fromm et al. (2018) proposed heterogeneous binarization given a pre-defined bit distribution. HAWQ (Dong et al., 2019) determines the bit-width for each block heuristically based on the top eigenvalue of the Hessian. Unfortunately, neither of them learns optimal bit-widths for heterogeneity. Toward this, Wang et al. (2019) and Uhlich et al. (2020) proposed layer-wise heterogeneous quantization by exploiting reinforcement learning and by learning the dynamic range of the quantizers, respectively. However, their results show that almost all layers use up to 10 bits (at least 4 bits), which would be suboptimal. Lou et al. (2020) presented channel-wise heterogeneous quantization by exploiting hierarchical reinforcement learning, but channel-wise precision limits the structure of accelerators, thereby restricting the applicability of the model." }, { "heading": "3 METHOD", "text": "In this section, we briefly review Relaxed Quantization (RQ) (Louizos et al., 2019) and propose Semi-Relaxed Quantization, which selects the nearest grid point in the forward pass to decrease the quantization error. To make it learnable and to cluster compressed parameters cohesively, SRQ expresses the nearest-grid selection of the forward pass in an equivalent form, the combination of a logistic distribution and an argmax, and performs the backward pass on it. Then, we present the DropBits technique to reduce the distribution bias of SRQ and its extension to heterogeneous quantization." }, { "heading": "3.1 PRELIMINARY: RELAXED QUANTIZATION", "text": "Relaxed Quantization (RQ) considers the following quantization grids for weights: $\hat{G} = \alpha[-2^{b-1}, \ldots, 0, \ldots, 2^{b-1}-1] =: [g_0, \ldots, g_{2^b-1}]$, where $b$ is the bit-width and a learnable parameter $\alpha > 0$ for each layer controls the grid interval. When quantizing activations, the grid points in $\hat{G}$ start from zero since the output of ReLU is always non-negative. Then, $x$ (a weight or an activation) is perturbed by noise as $\tilde{x} = x + \epsilon$, which enables gradient-based optimization for the non-differentiable rounding. The noise follows the distribution $p(\epsilon) = \mathrm{Logistic}(0, \sigma)$, so that $p(\tilde{x})$ is governed by $\mathrm{Logistic}(x, \sigma)$, where $\sigma$ represents the standard deviation. Under $p(\tilde{x})$, we can easily compute the unnormalized probability of $\tilde{x}$ being quantized to each grid point $g_i$ in closed form:\n$\pi_i = p(\hat{x} = g_i \mid x, \alpha, \sigma) = \mathrm{Sigmoid}\big((g_i + \alpha/2 - x)/\sigma\big) - \mathrm{Sigmoid}\big((g_i - \alpha/2 - x)/\sigma\big), \qquad (1)$\nwhere $\hat{x}$ denotes a quantized realization of $\tilde{x}$. Note that the cumulative distribution function of the logistic distribution is simply a sigmoid function. Finally, given the unnormalized categorical probabilities $\pi = \{\pi_i\}_{i=0}^{2^b-1}$ for grid points $\hat{G} = \{g_i\}_{i=0}^{2^b-1}$, RQ discretizes $x$ to $\hat{x} = r \cdot \hat{G}$ by sampling $r = \{r_i\}_{i=0}^{2^b-1}$ from the concrete distribution (Jang et al., 2017; Maddison et al., 2017) with a temperature $\tau$:\n$u_i \sim \mathrm{Gumbel}(0, 1), \quad r_i = \frac{\exp\big((\log \pi_i + u_i)/\tau\big)}{\sum_{j=0}^{2^b-1} \exp\big((\log \pi_j + u_j)/\tau\big)}, \quad \hat{x} = \sum_{i=0}^{2^b-1} r_i g_i. \qquad (2)$\nThe algorithm of RQ is described in detail in the Appendix."
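To make the preliminary concrete, the following is a minimal PyTorch sketch of Eqs. (1)-(2); it is our own reading of RQ rather than the authors' code, and small epsilons are added for numerical stability.

```python
# A sketch of RQ: logistic-CDF grid probabilities (Eq. (1)) followed by a
# Gumbel-Softmax (concrete) sample (Eq. (2)).
import torch

def rq_quantize(x, alpha, sigma, bits, tau):
    # grid = alpha * [-2^{b-1}, ..., 0, ..., 2^{b-1} - 1]
    grid = alpha * torch.arange(-2 ** (bits - 1), 2 ** (bits - 1), dtype=x.dtype)
    d = grid - x.unsqueeze(-1)                     # (..., 2^b), entries g_i - x
    # Eq. (1): unnormalized probability of x being quantized to each g_i
    pi = torch.sigmoid((d + alpha / 2) / sigma) - torch.sigmoid((d - alpha / 2) / sigma)
    # Eq. (2): relaxed one-hot r from the concrete distribution at temperature tau
    u = torch.rand_like(pi).clamp_(1e-9, 1 - 1e-9)
    gumbel = -torch.log(-torch.log(u))             # Gumbel(0, 1) noise
    r = torch.softmax((torch.log(pi + 1e-9) + gumbel) / tau, dim=-1)
    return (r * grid).sum(-1)                      # x_hat = sum_i r_i g_i

# e.g. the paper's 2-bit example with x = alpha/2 and sigma = alpha/3:
# rq_quantize(torch.full((5,), 0.25), alpha=0.5, sigma=0.5 / 3, bits=2, tau=1.0)
```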
}, { "heading": "3.2 SEMI-RELAXED QUANTIZATION - FIXING PITFALLS OF RQ", "text": "Although RQ achieves competitive performance with both weights and activations of neural networks quantized, the quantization probability modeling of RQ may still incur large quantization error, thereby yielding suboptimal performance. To be specific, Louizos et al. (2019) recommend high temperatures for the concrete distribution (e.g. τ = 1.0 or 2.0) in (2) since exploiting low temperatures hinders networks from converging due to high variance of gradients. However, it turns out that\nthe concrete distribution with such a high temperature is almost similar to the uniform distribution. As a concrete example, we consider 2-bit quantization with Ĝ = α[−2,−1, 0, 1] for a fixed scale parameter α > 0, σ = α/3, and we set τ to 1.0 as in Louizos et al. (2019). Now, suppose that the original weight value is α/2. As in Figure 1-(b,d,e), ? can be sporadically quantized to below zero by RQ as the original categorical distribution has support for −α and −2α. It may be okay on average, but RQ computes only one sample in each forward pass due to computational burden, which can accidentally lead to very large quantization error for these particular sample.\nTo avoid the counterintuitive-sample with large quantization error as seen in Figure 1-(b,d,e), we propose ‘Semi-Relaxed Quantization’ (SRQ) which rather directly considers the original categorical distribution in Figure 1-(g). To be concrete, for a weight or an activation x, the probability of x being quantized to each grid is ri = πi/ ∑2b−1 j=0 πj for i ∈ {0, · · · , 2b − 1} with b-bit precision, where πi is computed as (1). In such a manner, selecting a grid point for x can be thought of as sampling from the categorical distribution with categories Ĝ = {gi}2 b−1 i=0 and the corresponding probabilities r = {ri}2 b−1 i=0 as illustrated in Figure 1-(g). Then, the grid point gimax with imax = argmaxi ri would be the most reasonable speculation due to the highest probability. SRQ therefore chooses the mode of the original categorical distribution, gimax and assign it to x̂, entirely discriminated from Gumbel-Softmax which selects the argmax among samples from the concrete distribution. As a result, SRQ does not suffer from counterintutive-sample problem that RQ encounters at all.\nThe last essential part for SRQ is to handle the non-differentiable argmax operator in computing imax. Toward this, we propose a multi-class straight-through estimator (STE) that allows for backpropagating through a non-differentiable categorical sample by approximating ∂L/∂rimax to ∂L/∂yimax and ∂L/∂ri to zero for i 6= imax, where L is the cross entropy between the true label and the prediction made by a quantized neural network as delineated in the previous paragraph and yimax is the imax-th entry of the one-hot vector y. The forward and backward passes of SRQ are summarized as follows.\nForward: y = one hot[argmax i ri], Backward: ∂L ∂rimax = ∂L ∂yimax and ∂L ∂ri = 0 for i 6= imax (3) Such a formulation brings two important advantages in network quantization. First of all, (3) makes the variance of gradient estimator become zero. 
Since SRQ always chooses the mode of the original categorical distribution (i.e., there is no randomness in the forward pass of SRQ), and the gradient of the loss function $L$ with respect to the individual categorical probabilities is defined to be zero everywhere except for the coordinate corresponding to the mode, the variance of the gradients in SRQ is indeed zero.\nThe other advantage is that the backward pass (3) clusters the network weight parameters cohesively. Under the assumption that $r_i = \pi_i$ (since $i_{\max}$ does not change, this assumption is not unreasonable), $\frac{\partial L}{\partial x}$ is proportional to\n$\frac{\partial \pi_{i_{\max}}}{\partial x} = \frac{1}{\sigma}\Big(\mathrm{Sigmoid}\big(\tfrac{g_{i_{\max}} + \alpha/2 - x}{\sigma}\big)\,\mathrm{Sigmoid}\big(-\tfrac{g_{i_{\max}} + \alpha/2 - x}{\sigma}\big) - \mathrm{Sigmoid}\big(\tfrac{g_{i_{\max}} - \alpha/2 - x}{\sigma}\big)\,\mathrm{Sigmoid}\big(-\tfrac{g_{i_{\max}} - \alpha/2 - x}{\sigma}\big)\Big).$\nWhen $x$ is close to $g_{i_{\max}}$, $\partial \pi_{i_{\max}} / \partial x$ is nearly zero, and so is $\partial L / \partial x$. With an appropriate learning rate, $x$ converges to $g_{i_{\max}}$, which leads SRQ to cluster weights better than RQ, as shown in Figure 2. Although $\frac{\partial L}{\partial x}$ is almost zero, $\alpha$ is still trained. After $\alpha$ is updated, there is a gap between $x$ and the grid defined by $\alpha$, so $x$ can be trained further. Hence, the network continues to train until it finds the optimal $\alpha$." }, { "heading": "3.3 DROPBITS", "text": "Although our multi-class STE enjoys low variance of gradients, it is biased toward the mode, like the binary STE in Bengio et al. (2013). To reduce the bias of an STE, Chung et al. (2016) propose the slope annealing trick, but this strategy is only applicable to the binary case. To address this limitation, we propose a novel method, DropBits, to decrease the distribution bias of a multi-class STE. Inspired by dropping neurons in Dropout (Srivastava et al., 2014), we drop an arbitrary number of grid points at random at every iteration, so that in effect the probability of being quantized to the dropped grid points becomes zero.\nHowever, a design policy in which each grid point has its own binary mask would make the number of masks increase exponentially with the bit-width. Taking into account appropriate noise levels with a less aggressive design, the following two options are available: (a) the endpoints of the grid share the same binary mask, and (b) the grid points at the same bit-level share the same binary mask (see Figure 3). Hereafter, we consider the (b) bitwise-sharing masks for groups of grid points, unless otherwise specified.\nNow, we introduce how to formulate the binary masks. Unlike the practical Dropout implementation, which divides activations by $1 - p$ (where $p$ is the dropout probability), we employ an explicit binary mask $Z$ whose probability $\Pi$ can be optimized jointly with the model parameters. The Bernoulli random variable being non-differentiable, we relax the binary mask via the hard concrete distribution (Louizos et al., 2018). While the binary concrete distribution (Maddison et al., 2017) has support $(0, 1)$, the hard concrete distribution stretches it slightly at both ends, thus concentrating more mass on exact 0 and 1. Assuming disjoint masks, we construct the binary mask $Z_k$ for the $k$-th bit-level using the hard concrete distribution as\n$U_k \sim \mathrm{Uniform}(0, 1), \quad S_k = \mathrm{Sigmoid}\big(\big(\log U_k - \log(1 - U_k) + \log \Pi_k - \log(1 - \Pi_k)\big)/\tau'\big), \qquad (4)$\n$\bar{S}_k = S_k(\zeta - \gamma) + \gamma \quad \text{and} \quad Z_k = \min(\max(\bar{S}_k, 0), 1),$\nwhere $\tau'$ is a temperature for the hard concrete distributions, with $\gamma < 0$ and $\zeta > 0$ reflecting the stretching level.
For $i = 2^{b-1}-1$, $2^{b-1}$ and $2^{b-1}+1$, we do not sample from the above procedure but fix $Z = 1$, so as to prohibit all the binary masks from becoming zero simultaneously (see ‘No Mask’ in Figure 3).\nWith the value of each mask generated from the above procedure, the probability of being quantized to each grid point is re-calculated by multiplying the $\pi_i$'s by their corresponding binary masks $Z_k$ (e.g. $\tilde{\pi}_0 = Z_2 \cdot \pi_0$) and then normalizing them to sum to 1. As seen in Figure 4, the sampling distribution of SRQ is biased toward the mode, $-3\alpha$. With DropBits adjusting the $\pi_i$'s to $\tilde{\pi}_i$'s based on the $Z_k$'s, the sampling distribution of SRQ + DropBits resembles the original categorical distribution more closely than that of SRQ, which means that DropBits can effectively reduce the distribution bias of the multi-class straight-through estimator in SRQ. Moreover, DropBits does not require any hand-crafted scheduling at all, thanks to the learnable $\Pi_k$, whereas such scheduling is vital for Gumbel-Softmax (Jang et al., 2017; Maddison et al., 2017) and the slope annealing trick (Chung et al., 2016).\nAlthough the quantization grids for weights are symmetric with respect to zero, those for activations start from zero, which makes it difficult to exploit the symmetrically-designed DropBits for activations. Therefore, DropBits is applied only to weights in our experiments. Assuming that the $Z_k$'s are shared across all weights of each layer, the overall procedure is described in Figure 5. The overall algorithm, including the test phase, is deferred to the Appendix due to space limits." }
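A sketch of the DropBits machinery is given below: sampling the hard concrete bit-level masks of (4) and renormalizing the grid probabilities. The defaults $\gamma = -0.1$ and $\zeta = 1.1$ follow Louizos et al. (2018); the exact grid-to-bit-level grouping of Figure 3-(b) is supplied here as a user-provided index vector, since we can only approximate its layout.

```python
# A sketch of DropBits: hard concrete bit-level masks (Eq. (4)) plus the
# mask-and-renormalize step on the grid probabilities pi from Eq. (1).
import torch

def sample_bit_masks(log_alpha, tau, gamma=-0.1, zeta=1.1):
    # log_alpha: (b-1,) learnable logits log(Pi_k) - log(1 - Pi_k)
    u = torch.rand_like(log_alpha).clamp_(1e-6, 1 - 1e-6)
    s = torch.sigmoid((torch.log(u) - torch.log(1 - u) + log_alpha) / tau)
    s_bar = s * (zeta - gamma) + gamma        # stretch the support to (gamma, zeta)
    return s_bar.clamp(0.0, 1.0)              # hard concrete masks Z_k

def dropbits_renormalize(pi, z, level):
    # pi:    (..., 2^b) unnormalized grid probabilities
    # z:     (b-1,) bit-level masks Z_k from sample_bit_masks
    # level: (2^b,) long tensor giving each grid point's mask index, with -1
    #        for the three central points that are never masked (Z = 1)
    m = torch.ones_like(pi)
    keep = level >= 0
    m[..., keep] = z[level[keep]]
    pi_tilde = pi * m                          # e.g. pi~_0 = Z_2 * pi_0
    return pi_tilde / pi_tilde.sum(-1, keepdim=True)
```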
However, it can be used as an indirect indicator of how biased the distribution used in computing the gradient estimator is.\nFor the variance, we use the closed form of the gradient estimator ∂r/∂π of each algorithm: for i ≠ j, (a) RQ: ∂r_i/∂π_j = −r_i r_j / (τ π_j), with randomness over u_i and u_j, and (b) SRQ + DropBits: ∂r_i/∂π_j = −r_i r_j / π_j, with randomness over the binary masks Z_i and Z_j. The case i = j can be derived similarly.\nArmed with these quantities, we conduct a case study on 3-bit quantization with grid points Ĝ = α[−4, · · · , 3] via Monte Carlo simulation, where α is simply set to 1.0. Here, let x be a midpoint of consecutive grid points, i.e., x = −3.5α, · · · , 2.5α. For the gradient variance, since there are many pairs ∂r_i/∂π_j for different i and j, we choose i as the index of the grid point closest to x, and j as the indices of the two neighboring grid points of x (e.g., if x ∈ [g_{i−1}, g_i], then j = i − 1 and i).\nIn Figure 6, SRQ shows smaller quantization error than RQ for all x = −3.5α, · · · , 2.5α. This is because SRQ deliberately performs biased estimation of the underlying categorical distribution to prevent large quantization errors from occurring in the forward pass, while sharing this underlying distribution with RQ in the backward pass. In turn, we devised DropBits to reduce the incurred distribution bias of SRQ, which is indeed the case, as can be seen in Figure 6. Interestingly, SRQ + DropBits also achieves smaller distribution bias than RQ for all x = −3.5α, · · · , 2.5α." }, { "heading": "3.5 LEARNING BIT-WIDTH TOWARDS RESOURCE-EFFICIENCY", "text": "As noted in Sections 1 and 2, recent studies on heterogeneous quantization use at least 4-bit in almost all layers, up to 10-bit, which leaves much room for saving energy and memory. Towards a more resource-efficient scheme, we introduce an additional regularization on DropBits to drop redundant bits.\nSince the mask design in Figure 3-(b) reflects the actual bit-level and the probability of each binary mask in DropBits is learnable, we can penalize the use of higher bit-levels via a sparsity-encouraging regularizer such as ℓ1. Since Louizos et al. (2018) proposed a relaxed ℓ0 regularization using the hard concrete binary mask, we adopt this continuous version of ℓ0 as our sparsity-inducing regularizer. Following (4), we define the smoothed ℓ0-norm as R(Z; Π) = Sigmoid(log(Π/(1 − Π)) − τ′ log(−γ/ζ)). One caveat is that we should not regularize masks for lower bit-levels while a higher bit-level is still alive (in that case those bits are still necessary for quantization). We thus design the regularization so as only to permit the probability of the binary mask at the current highest bit-level to approach zero. More concretely, for bit-level binary masks {Z_k}_{k=1}^{b−1} as in Figure 3-(b) and the corresponding probabilities {Π_k}_{k=1}^{b−1}, our regularization term for learning the bit-width is\nR({Z_k}_{k=1}^{b−1}, {Π_k}_{k=1}^{b−1}) = Σ_{k=1}^{b−1} I(Z_k > 0) ( ∏_{j=k+1}^{b−1} I(Z_j = 0) ) R(Z_k; Π_k).\nNote that {Z_k}_{k=1}^{b−1} is assigned to each group (e.g., all weights or activations in a layer or channel). Hence, every weight in a group shares the same sparsity pattern (and, as a result, the same bit-width), while the learned bit-widths across groups are allowed to be heterogeneous; a sketch of this regularizer is given below. 
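A minimal PyTorch sketch of this regularizer (illustrative, not the authors' code): z holds the sampled bit-level masks of one group, log_alpha their logits log(Π/(1 − Π)), and the indicator products are evaluated on the sampled masks; the hard concrete constants follow Appendix F.

```python
import torch

def smoothed_l0(log_alpha, tau=0.2, gamma=-0.1, zeta=1.1):
    # R(Z; Pi) = Sigmoid(log(Pi / (1 - Pi)) - tau' * log(-gamma / zeta))
    return torch.sigmoid(log_alpha - tau * torch.log(torch.tensor(-gamma / zeta)))

def bitwidth_penalty(z, log_alpha):
    """Penalize only the highest still-alive bit-level mask (Section 3.5)."""
    penalty = torch.zeros(())
    for k in range(len(z)):
        alive = (z[k] > 0).float()                           # I(Z_k > 0)
        higher_dead = torch.prod((z[k + 1:] == 0).float())   # prod_j I(Z_j = 0); empty product = 1
        penalty = penalty + alive * higher_dead * smoothed_l0(log_alpha[k])
    return penalty
```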
Assuming the l-th layer shares binary masks Z^l := {Z_k^l}_{k=1}^{b−1} with associated probabilities Π^l := {Π_k^l}_{k=1}^{b−1}, our final objective function for an L-layer neural network becomes L(θ, α, σ, Z, Π) + λ Σ_{l=1}^{L} R(Z^l, Π^l), where α = {α_l}_{l=1}^{L} and σ = {σ_l}_{l=1}^{L} denote the layer-wise grid interval parameters and standard deviations of the logistic distributions, Z = {Z^l}_{l=1}^{L}, Π = {Π^l}_{l=1}^{L}, and λ is a regularization parameter. In the inference phase, we simply drop unnecessary bits based on the values of Π." }, { "heading": "3.6 NEW HYPOTHESIS FOR QUANTIZATION", "text": "Frankle & Carbin (2019) articulated the 'lottery ticket hypothesis', stating that one can find sparse sub-networks, 'winning tickets', in randomly-initialized, dense neural networks that are easier to train than sparse networks resulting from pruning. In this section, we define a new hypothesis for quantization from a slightly different (in some sense opposite) perspective than the original one. Notation. a >bit b and a =bit b denote that a has strictly higher bit-width than b for at least one of all groups (e.g., channels or layers), and that a has the same bit-precision as b across all groups, respectively. Definition. For a network f(x; θ) with randomly-initialized parameters θ, we consider a quantized network f(x; θ′) from f(x; θ) such that θ >bit θ′. If the accuracy of f(x; θ′) is higher than that of f(x; θ′′), where f(x; θ′′) is trained from scratch with fixed bit-widths such that θ′ =bit θ′′, then f(x; θ′) is referred to as a quantized sub-network of f(x; θ).\nThis hypothesis implies that learning bit-widths should be superior to pre-defining them. To the best of our knowledge, our study is the first attempt to delve into this hypothesis." }, { "heading": "4 EXPERIMENTS", "text": "Since popular deep learning libraries such as TensorFlow (Abadi et al., 2016) and PyTorch from v1.3 (Paszke et al., 2019) already provide their own 8-bit quantization functionalities, we focus on lower bit-width regimes (i.e., 2- to 4-bit). In contrast to some other quantization papers, our method uniformly quantizes the weights and activations of all layers, including both the first and last layers. We first show that SRQ and DropBits each make their own non-negligible contribution. Then, we evaluate our method, SRQ + DropBits, on a large-scale dataset with deep networks. Finally, we demonstrate that our heterogeneous quantization method yields promising results even when all layers have at most 4-bit, and we validate the new hypothesis for quantization from Section 3.6." }, { "heading": "4.1 ABLATION STUDIES", "text": "To validate the efficacy of SRQ and DropBits, we successively apply each piece of our method to RQ for LeNet-5 (LeCun et al., 1998) on MNIST and VGG-7 (Simonyan & Zisserman, 2014) on CIFAR-10. Table 1 shows that SRQ outperforms RQ in most cases. One might wonder whether the issue of RQ introduced in Section 3.2 can be addressed by an annealing schedule for the temperature τ in RQ. It could be, but RQ with an annealing schedule suffers from high variance of gradients due to low temperatures at the end of training, as shown in Figure 7. As a result, annealing τ gives rise to worse performance than RQ, as shown in Table 1. SRQ, however, suffers from neither problem, thus displaying the best learning curve in Figure 7. 
Finally, it is clearly identified that DropBits consistently improves SRQ by decreasing the distribution bias of our multi-class STE.\n²We could not reproduce the results of RQ at 2-bit, so we experiment only with 3-bit and 4-bit RQ.\n³Our own implementation with all layers quantized, using pretrained models available from PyTorch." }, { "heading": "4.2 RESNET-18 AND MOBILENETV2 ON IMAGENET", "text": "To verify the effectiveness of our algorithm on the ImageNet dataset, we quantize the ResNet-18 (He et al., 2016) and MobileNetV2 (Sandler et al., 2018) architectures, initialized with the corresponding pre-trained full-precision networks available from the official PyTorch repository. In Table 2, our method is compared only to state-of-the-art algorithms that quantize both the weights and activations of all layers, for a fair comparison. An extensive comparison against recent works that keep the first or last layer in full precision is given in the Appendix.\nTable 2 illustrates how much better our model performs than the latest quantization methods as well as our baseline, RQ. On ResNet-18, SRQ + DropBits outdoes RQ, QIL, LLSQF, and TQT, with 4-bit top-1 and top-5 errors nearly matching those of the full-precision network. On MobileNetV2, SRQ + DropBits with 4-bit surpasses all existing studies by more than one percentage point. Moreover, we quantize MobileNetV2 to 3-bit and obtain competitive performance, which is remarkable given that none of the previous works successfully quantizes MobileNetV2 to 3-bit." }, { "heading": "4.3 FINDING QUANTIZED SUB-NETWORKS", "text": "In this experiment, we validate the new hypothesis for quantization by training the probabilities of the binary masks with the regularizer in Section 3.5 to learn the bit-width of each layer. For brevity, only weights are heterogeneously quantized, and the bit-width for activations is fixed to the initial one.\nIn Table 3, the fourth column reports the per-layer bit-widths learned by our regularizer, and the fifth and last columns report the test error when training from scratch with the bit-width of each layer fixed to the learned bit-widths (fourth column) and when using our regularization approach, respectively. Table 3 shows that the structure learned by our heterogeneous quantization method (last column) is superior to the fixed structure with learned bit-widths trained from scratch (fifth column) in all cases. One might doubt whether our regularizer is able to recognize which layers are really redundant. This is indirectly substantiated by the observation that the fixed structure with trained bit-widths from scratch (fifth column) outperforms uniform quantization (third column) on CIFAR-10. More experiments with different values of the regularization parameter λ are deferred to the Appendix." }, { "heading": "5 CONCLUSION", "text": "We proposed Semi-Relaxed Quantization (SRQ), which effectively clusters the weights in low bit-width regimes, along with DropBits, which reduces the distribution bias of SRQ. We empirically showed that both SRQ and DropBits possess their own value, thereby leading SRQ + DropBits to achieve state-of-the-art performance for ResNet-18 and MobileNetV2 on ImageNet. Furthermore, we took one step forward to heterogeneous quantization by simply penalizing the binary masks in DropBits, which enables us to find quantized sub-networks. As future work, we plan to extend our heterogeneous quantization method to activations and to apply it to other quantizers." 
}, { "heading": "A ALGORITHM OF SEMI-RELAXED QUANTIZATION WITH DROPBITS", "text": "Algorithm 1 Semi-Relaxed Quantization (SRQ) + DropBits\n1: Input: Training data\n2: Initialize: bit-width b, network parameters {W_l, b_l}_{l=1}^{L}, layer-wise grid interval parameters and standard deviations of the logistic distribution in the l-th layer {α_l, σ_l}_{l=1}^{L}. Initialize the layer-wise grid Ĝ_l = α_l[−2^{b−1}, · · · , 2^{b−1} − 1] =: [g_{l,0}, g_{l,1}, · · · , g_{l,2^b−1}] for l ∈ {1, · · · , L}.\n3: procedure TRAINING\n4: for l = 1, · · · , L do\n5: x ← each entry of W_l or b_l\n6: I_l = Ĝ_l − α/2 ▷ Shift the grid by −α/2\n7: F = Sigmoid((I_l − x)/σ_l) ▷ Compute CDFs\n8: π_i = F[i + 1] − F[i] for i = 0, · · · , 2^b − 1\n9: if DropBits then\n10: Sample a mask Z_k for k = 0, · · · , b − 1 from (4)\n11: π̃ = π ⊙ Z\n12: r_i = π̃_i / Σ_{j=0}^{2^b−1} π̃_j ▷ Figure 3\n13: else\n14: r_i = π_i / Σ_{j=0}^{2^b−1} π_j\n15: end if\n16: y = one_hot[argmax_i r_i] ▷ Multi-class straight-through estimator\n17: x̂ = y · Ĝ_l ▷ Quantization\n18: Ŵ_l ← each entry of W_l quantized to x̂\n19: b̂_l ← each entry of b_l quantized to x̂\n20: Forward pass with quantized Ŵ_l and b̂_l\n21: Activations can be quantized in the same way, with DropBits set to False\n22: end for\n23: end procedure\n24:\n25: procedure DEPLOYMENT\n26: for l = 1, · · · , L do\n27: Ŵ_l = min(max(α_l · Round(W_l/α_l), g_{l,0}), g_{l,2^b−1})\n28: b̂_l = min(max(α_l · Round(b_l/α_l), g_{l,0}), g_{l,2^b−1})\n29: end for\n30: end procedure" }, { "heading": "B COMPARISON OF BIAS BETWEEN RQ, SRQ, AND SRQ + DROPBITS", "text": "In general, when the loss involves discrete random variables, the true gradient, i.e., the gradient of the expectation of the loss with respect to the parameters of the discrete random variables, can be obtained with existing stochastic gradient estimation techniques such as the score function estimator (see Equations (4) and (5) in Maddison et al. (2017)). However, the reason why we did not explicitly compare the bias of the gradient is that the distribution parameters (i.e., π = {π_i}_{i=0}^{2^b−1}) in our setting are not independent of the network parameter x. In fact, π is a function of x in (1). Given that the function π is not invertible (π_i is not one-to-one with respect to x for each i), it is not possible to directly apply techniques such as the score function estimator to compute unbiased estimators. Instead, we compute the bias of the gradient with respect to x as a proxy for that with respect to π.\nAlthough the bias of SRQ + DropBits is smaller than that of SRQ, as displayed in Figure 8-(a), SRQ + DropBits and RQ exhibit comparable levels of bias with respect to x, as shown in Figure 8-(b)." }, { "heading": "C EXTENSIVE COMPARISON FOR RESNET-18 AND MOBILENETV2 ON IMAGENET", "text": "Our method, SRQ + DropBits, surpasses quantization methods that keep the first or last layer in full precision as well as the latest algorithms that quantize both the weights and activations of all layers, including the first and last, except QIL (Jung et al., 2019) and LSQ (Esser et al., 2020), both of which use a full-precision first and last layer and employ their own ResNet-18 pretrained model, which performs considerably better than the one available from the official PyTorch repository." }, { "heading": "D MORE EXPERIMENTS ON HETEROGENEOUS QUANTIZATION", "text": "As we can see in Table 5, our heterogeneous quantization method is capable of finding quantized sub-networks over a broad range of the regularization parameter λ." 
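As a complement to Algorithm 1 in Appendix A, here is a minimal PyTorch sketch of the per-tensor quantization step (an illustration under assumptions, not the authors' code): the signature and defaults are ours, z is a per-grid-point mask already expanded from the bit-level masks of Figure 3, and the hard-minus-soft trick at the end is one standard way to realize the multi-class straight-through estimator.

```python
import torch
import torch.nn.functional as F

def srq_quantize(x, alpha=1.0, sigma=0.05, b=3, z=None):
    """SRQ (+ optional DropBits) quantization of a weight tensor x (sketch)."""
    grid = alpha * torch.arange(-2 ** (b - 1), 2 ** (b - 1), dtype=x.dtype)
    bounds = torch.cat([grid - alpha / 2, grid[-1:] + alpha / 2])  # cell edges as in Algorithm 2
    cdf = torch.sigmoid((bounds - x.unsqueeze(-1)) / sigma)        # logistic CDF at each edge
    pi = cdf[..., 1:] - cdf[..., :-1]                              # probability of each grid point
    if z is not None:                                              # DropBits: pi_tilde = pi * Z
        pi = pi * z
    r = pi / pi.sum(dim=-1, keepdim=True)
    y_hard = F.one_hot(r.argmax(dim=-1), num_classes=grid.numel()).to(x.dtype)
    y = (y_hard - r).detach() + r      # forward: hard one-hot; backward: gradients flow via r
    return (y * grid).sum(dim=-1)      # x_hat = y . G_hat
```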
}, { "heading": "E COMPARISON OF SRQ + DROPBITS WITH GUMBEL-SOFTMAX + MULTI-CLASS STE", "text": "As described in Section 3.4, our SRQ + DropBits shows smaller quantization error, variance of gradients, and distribution bias than RQ, while maintaining stochasticity. For this reason, we employ the deterministic scheme in the first place and then encourage stochasticity via DropBits.\nTo further show the effectiveness of SRQ + DropBits, we empirically compare it with an algorithm using the Gumbel-Softmax STE in the forward pass instead of DropBits and our multi-class STE in the backward pass. We call this algorithm "Gumbel-Softmax + multi-class STE".\nAlthough it employs our multi-class STE in the backward pass, Gumbel-Softmax + multi-class STE performs worse than SRQ + DropBits. This is primarily because the Gumbel-Softmax STE still incurs a large quantization error, like RQ.\nF IMPLEMENTATION DETAILS\nThe weights and activations of all layers, including the first and last (denoted by W and A), are assumed to be perturbed as W̃ = W + ε and Ã = A + ε, respectively, with ε ∼ L(0, σ), as described in Section 2.\nConcerning the DropBits regularization in Section 3.3, we initialize the probability of each binary mask with Π ∼ N(0.9, 0.01²) (i.e., corresponding to a low dropout probability). The concrete distribution of a binary mask is stretched to ζ = 1.1 and γ = −0.1, as recommended in Louizos et al. (2018), and τ′ is initialized to 0.2 to make the binary masks more discrete.\nFor the MNIST experiments, we train LeNet-5 with the 32C5 - MP2 - 64C5 - MP2 - 512FC - Softmax architecture for 100 epochs irrespective of the bit-width. The learning rate is set to 5e-4 regardless of the bit-width and exponentially decayed with decay factor 0.8 over the last 50 epochs. The input is normalized to the [−1, 1] range without any data augmentation.\nFor the CIFAR-10 experiments, following the convention of changing the location of the max-pooling layer, which originates from Rastegari et al. (2016), each max-pooling layer is located after a convolutional layer but before batch normalization and the activation function. We train VGG-7 with the 2x(128C3) - MP2 - 2x(256C3) - MP2 - 2x(512C3) - MP2 - 1024FC - Softmax architecture for 300 epochs, with the learning rate initially set to 1e-4 regardless of the bit-width. The learning rate is multiplied by 0.1 at 50% of the total epochs and decays exponentially with decay factor 0.9 during the last 50 epochs. The input images are preprocessed by subtracting their mean and dividing by their standard deviation. The training set is augmented as follows: (i) a random 32 × 32 crop is sampled from an image padded with 4 pixels on each side, and (ii) images are randomly flipped horizontally. The test set is evaluated without any padding or cropping. Note that a batch normalization layer is placed after every convolutional layer in VGG-7, but not in LeNet-5.\nIn Section 4.1, RQ with an annealing schedule for the temperature τ is implemented following Jang et al. (2017): τ is annealed every 1000 iterations by the schedule τ = max(0.5, exp(−t/100000)) in 3-bit and τ = max(0.5, 2 exp(−t/100000)) in 4-bit, in order to make the decreasing rate of τ as small as possible. Here, t is the global training iteration.\nFor the ImageNet experiments in Section 4.2, the weight parameters of both ResNet-18 and MobileNetV2 are initialized with the pre-trained full-precision models available from the official PyTorch repository. 
When quantizing ResNet-18 to 3-bit, fine-tuning is performed for 80 epochs with a batch size of 256: the learning rate is initialized to 2e-5 and divided by two at 50, 60, and 68 epochs. When quantizing ResNet-18 to 4-bit, fine-tuning is carried out for 150 epochs with a batch size of 128: for the first 125 epochs, the learning rate is set to 5e-6, and to 1e-6 for the last 25 epochs. When quantizing MobileNetV2 to 3-bit and 4-bit, fine-tuning is performed for 25 epochs with a batch size of 48 and an initial learning rate of 2e-5: the learning rate is divided by two at 15 and 20 epochs for 3-bit, and at 10, 12, 18, and 20 epochs for 4-bit. We employ AdamW (Loshchilov & Hutter, 2019) with a weight decay factor of 0.01.\nIn Sections 4.3 and D, if the probability of a binary mask is less than 0.5, we drop the corresponding bits. For LeNet-5 on MNIST and VGG-7 on CIFAR-10, our regularization term from Section 3.5 is active only for the first 50% of the total epochs. With the remaining bit-width for each layer, fine-tuning is conducted for the last 50% of the total epochs. For ResNet-18 on ImageNet, we initialize the weights of ResNet-18 with the pre-trained full-precision model and train it for ten epochs for simplicity. During training, our regularization term from Section 3.5 is active only for the first 9 epochs, and fine-tuning is done for the last epoch with the remaining bit-width of each layer fixed. All experiments in Tables 3 and 5 were conducted with AdamW: the weight decay value is set to 0.01 for LeNet-5, 0.02 for VGG-7, and 0.01 for ResNet-18. We consider regularization parameters λ ∈ [5 × 10−5, 10−2] to encourage layer-wise heterogeneity." }, { "heading": "G ALGORITHM OF RQ", "text": "We provide the algorithmic details of RQ as follows. For quantizing weights, Ĝ = [g_i]_{i=0}^{2^b−1} = α[−2^{b−1}, . . . , 0, . . . , 2^{b−1} − 1]; however, when quantizing activations, the grid points start from zero, since the outputs of ReLU activations are always non-negative, that is, Ĝ = [g_i]_{i=0}^{2^b−1} = α[0, . . . , 2^b − 1]. The objective function of RQ is the cross-entropy loss between the class labels and the class probabilities predicted with quantized weights, biases, and activations, which is the same as the loss function L in our method.\nAlgorithm 2 Relaxed Quantization (RQ) (Louizos et al., 2019) for training\n1: Input: x (a weight or an activation)\n2: Initialize: scale α, standard deviation σ, grid Ĝ = [g_i]_{i=0}^{2^b−1} = [g_0, · · · , g_{2^b−1}]\n3: Require: bit-width b, temperature τ\n4: I = [g_0 − α/2, · · · , g_{2^b−1} − α/2, g_{2^b−1} + α/2]\n5: F = Sigmoid((I − x)/σ) ▷ Compute CDF\n6: π_i = F[i + 1] − F[i] for i = 0, · · · , 2^b − 1 ▷ Unnormalized prob. for each grid point\n7: # Sampling from the concrete distribution\n8: u_i ∼ Gumbel(0, 1) for i = 0, · · · , 2^b − 1\n9: r_i = exp((log π_i + u_i)/τ) / Σ_j exp((log π_j + u_j)/τ)\n10: Output: x̂ = Σ_{i=0}^{2^b−1} r_i g_i\nAlgorithm 3 Relaxed Quantization (RQ) (Louizos et al., 2019) for inference\n1: Input: x (a weight or an activation)\n2: Require: scale α, grid Ĝ = [g_0, · · · , g_{2^b−1}]\n3: x̂ = α · round(x/α)\n4: Output: min(max(x̂, g_0), g_{2^b−1})" } ]
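For reference, the concrete-relaxation sampling at the heart of Algorithm 2, and the rounding of Algorithm 3, can be sketched in a few lines of PyTorch (illustrative; function and argument names are assumptions):

```python
import torch

def rq_train_sample(pi, grid, tau=1.0):
    """RQ training-time quantization (Algorithm 2, steps 7-10): Gumbel noise
    plus softmax over the unnormalized grid probabilities pi, then a soft
    grid average. Sketch only."""
    u = torch.rand_like(pi).clamp_(1e-9, 1 - 1e-9)
    g = -torch.log(-torch.log(u))                          # u_i ~ Gumbel(0, 1)
    r = torch.softmax((torch.log(pi) + g) / tau, dim=-1)   # concrete relaxation
    return (r * grid).sum(dim=-1)                          # x_hat = sum_i r_i g_i

def rq_inference(x, alpha, g_min, g_max):
    """RQ at inference (Algorithm 3): round to the grid, then clamp."""
    return torch.clamp(alpha * torch.round(x / alpha), g_min, g_max)
```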
2020
null
SP:45cbc9c97027fe59ce2ce7f8a02d9257d3460a4c
[ "The paper claims that a (computationally intractable) randomized smoothing of any classifier can be distilled into the (deterministic) classifier itself by fine-tuning it with a gradient penalty. This is motivated by a theoretical result that Gaussian smoothing of a classifier is equivalent to solving a certain heat equation, which can be approximated by training with a regularized loss. The experiments use the resulting deterministic classifier to compute certified radii, compared against (stochastic) smoothed classifiers, arguing for the efficiency and higher certified radius of the proposed method. ", "Randomized smoothing is the major way to certify the robustness of large-scale networks; however, it requires sampling from a Gaussian distribution many times, which is not fast enough for real-time inference. This paper uses a regularized loss to obtain deterministic Gaussian-averaged results. The paper points out an interesting direction for certifying robustness; the method is simple and effective." ]
Machine learning models are vulnerable to adversarial attacks. One approach to addressing this vulnerability is certification, which focuses on models that are guaranteed to be robust for a given perturbation size. A drawback of recent certified models is that they are stochastic: they require multiple computationally expensive model evaluations with random noise added to a given image. In our work, we present a deterministic certification approach which results in a certifiably robust model. This approach is based on an equivalence between training with a particular regularized loss, and the expected values of Gaussian averages. We achieve certified models on ImageNet-1k by retraining a model with this loss for one epoch without the use of label information.
[]
[ { "authors": [ "C.M. Bishop" ], "title": "Training with noise is equivalent to Tikhonov regularization", "venue": "Neural Computation,", "year": 1995 }, { "authors": [ "C. Cheng", "G. Nührenberg", "H. Ruess" ], "title": "Maximum resilience of artificial neural networks", "venue": "Lecture Notes in Computer Science,", "year": 2017 }, { "authors": [ "J.M. Cohen", "E. Rosenfeld", "J.Z. Kolter" ], "title": "Certified adversarial robustness via randomized smoothing", "venue": "Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "J. Deng", "W. Dong", "R. Socher", "L.-J. Li", "K. Li", "L. Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "IEEE conference on computer vision and pattern recognition,", "year": 2009 }, { "authors": [ "A. Einstein" ], "title": "On the theory of the Brownian movement", "venue": "Ann. Phys,", "year": 1906 }, { "authors": [ "C. Finlay", "A.M. Oberman" ], "title": "Scaleable input gradient regularization for adversarial robustness", "venue": "CoRR, abs/1905.11468,", "year": 2019 }, { "authors": [ "I.J. Goodfellow", "J. Shlens", "C. Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "W.B. Johnson", "J. Lindenstrauss" ], "title": "Extensions of Lipschitz mappings into a Hilbert space", "venue": "Contemporary mathematics,", "year": 1984 }, { "authors": [ "I. Karatzas", "S.E. Shreve" ], "title": "Brownian motion. In Brownian Motion and Stochastic Calculus, pages 47–127", "venue": null, "year": 1998 }, { "authors": [ "A. Krizhevsky", "G. Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "A. Kurakin", "I.J. Goodfellow", "S. Bengio" ], "title": "Adversarial machine learning at scale", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Y. LeCun", "L. Bottou", "Y. Bengio", "P. Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "M. Lécuyer", "V. Atlidakis", "R. Geambasu", "D. Hsu", "S. Jana" ], "title": "Certified robustness to adversarial examples with differential privacy", "venue": "IEEE Symposium on Security and Privacy, SP 2019,", "year": 2019 }, { "authors": [ "B. Li", "C. Chen", "W. Wang", "L. Carin" ], "title": "Second-order adversarial attack and certifiable", "venue": "robustness. CoRR,", "year": 2018 }, { "authors": [ "A. Madry", "A. Makelov", "L. Schmidt", "D. Tsipras", "A. Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "A. Raghunathan", "J. Steinhardt", "P. Liang" ], "title": "Certified defenses against adversarial examples", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "J. Rony", "L.G. Hafemann", "L.S. Oliveira", "I.B. Ayed", "R. Sabourin", "E. Granger" ], "title": "Decoupling direction and norm for efficient gradient-based L2 adversarial attacks and defenses", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "H. Salman", "J. Li", "I.P. Razenshteyn", "P. Zhang", "H. Zhang", "S. Bubeck", "G. 
Yang" ], "title": "Provably robust deep learning via adversarially trained smoothed classifiers", "venue": "Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems", "year": 2019 }, { "authors": [ "W.A. Strauss" ], "title": "Partial differential equations: An introduction", "venue": null, "year": 2007 }, { "authors": [ "C. Szegedy", "W. Zaremba", "I. Sutskever", "J. Bruna", "D. Erhan", "I.J. Goodfellow", "R. Fergus" ], "title": "Intriguing properties of neural networks", "venue": "2nd International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "S.S. Vempala" ], "title": "The random projection method, volume 65", "venue": "American Mathematical Soc.,", "year": 2005 }, { "authors": [ "T. Weng", "H. Zhang", "H. Chen", "Z. Song", "C. Hsieh", "L. Daniel", "D.S. Boning", "I.S. Dhillon" ], "title": "Towards fast computation of certified robustness for relu networks", "venue": "Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "T. Weng", "H. Zhang", "P. Chen", "J. Yi", "D. Su", "Y. Gao", "C. Hsieh", "L. Daniel" ], "title": "Evaluating the robustness of neural networks: An extreme value theory approach", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "G. Yang", "T. Duan", "J.E. Hu", "H. Salman", "I.P. Razenshteyn", "J. Li" ], "title": "Randomized smoothing of all shapes and sizes", "venue": "CoRR, abs/2002.08118,", "year": 2020 }, { "authors": [ "H. Zhang", "T. Weng", "P. Chen", "C. Hsieh", "L. Daniel" ], "title": "Efficient neural network robustness certification with general activation functions", "venue": "Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems", "year": 2018 }, { "authors": [ "H. Zhang", "Y. Yu", "J. Jiao", "E.P. Xing", "L.E. Ghaoui", "M.I. Jordan" ], "title": "Theoretically principled trade-off between robustness and accuracy", "venue": null, "year": 1901 } ]
[ { "heading": "1 Introduction", "text": "Neural networks are very accurate on image classification tasks, but they are vulnerable to adversarial perturbations, i.e., small changes to the model input leading to misclassification (Szegedy et al., 2014). Adversarial training (Madry et al., 2018) improves robustness, at the expense of a loss of accuracy on unperturbed images (Zhang et al., 2019). Model certification (Lécuyer et al., 2019; Raghunathan et al., 2018; Cohen et al., 2019) is a complementary approach to adversarial training, which provides a guarantee that a model prediction is invariant to perturbations up to a given norm. Given an input x, the model f is certified to ℓ2 norm r at x if it gives the same classification f(x + η) for all perturbations η with norm up to r,\narg max f(x + η) = arg max f(x), for all ‖η‖₂ ≤ r (1)\nCohen et al. (2019) and Salman et al. (2019) certify models by defining a "smoothed" model, fsmooth, which is the expected Gaussian average of our initial model f at a given input example x,\nfsmooth(x) ≈ E_η[f(x + η)] (2)\nwhere the perturbation is sampled from a Gaussian, η ∼ N(0, σ²I). Cohen et al. (2019) used a probabilistic argument to show that models defined by (2) can be certified to a given radius by making a large number of stochastic model evaluations. Certified models can classify by first averaging the model (Salman et al., 2019), or by taking the mode, the most popular classification given by the ensemble (Cohen et al., 2019). Cohen et al. and Salman et al. approximate the model fsmooth stochastically, using a Gaussian ensemble, which consists of evaluating the base model f multiple times on the image perturbed by noise. Like all ensemble models, these stochastic models require multiple inferences, which is more costly than performing inference a single time. In addition, these stochastic models require training the base model f from scratch, by exposing it to Gaussian noise, in order to improve the accuracy of fsmooth. Salman et al. (2019) additionally expose the model to adversarial attacks during training. In the case of certified models, there is a trade-off between certification and accuracy: the certified models lose accuracy on unperturbed images.\nModel | CIFAR-10 (CPU) | CIFAR-10 (GPU) | ImageNet-1k (CPU) | ImageNet-1k (GPU)\nDeterministic (ours) | 0.0049 | 0.0080 | 0.0615 | 0.0113\nStochastic (Cohen et al., 2019) | 0.0480 | 0.0399 | 0.1631 | 0.0932" }, { "heading": "2 Related work", "text": "The issue of adversarial vulnerability arose in the works of Szegedy et al. (2014) and Goodfellow et al. (2015), and has spawned a vast body of research. The idea of training models to be robust to adversarial attacks was widely popularized in Madry et al. (2018). This method, known as adversarial training, trains a model on images corrupted by gradient-based adversarial attacks, resulting in robust models. In terms of certification, early work by Cheng et al. (2017) provided a method of computing maximum perturbation bounds for neural networks, which reduces to solving a mixed integer optimization problem. Weng et al. (2018a) introduced non-trivial robustness bounds for fully connected networks, and provided tight robustness bounds at low computational cost. Weng et al. (2018b) proposed a metric that has theoretical grounding based on Lipschitz continuity of the classifier model and is scalable to state-of-the-art ImageNet neural network classifiers. Zhang et al. 
(2018) proposed a general framework to certify neural networks based on linear and quadratic bounding techniques on the activation functions, which is more flexible than its predecessors. Training a neural network with Gaussian noise has been shown to be equivalent to gradient regularization (Bishop, 1995). This helps improve the robustness of models; however, recent work has used additive noise during training and evaluation for certification purposes. Lécuyer et al. (2019) first considered adding random Gaussian noise as a certifiable defense in a method called PixelDP. In their method, they take a known neural network architecture and add a layer of random noise to make the model's output random. The expected classification is in turn more robust to adversarial perturbations. Furthermore, their defense is a certified defense, meaning they provide a lower bound on the magnitude of adversarial perturbations for which their defence will always work. In a follow-up work, Li et al. (2018) provided a defence with improved certified robustness. The certification guarantees given in these two papers are loose, meaning the defended model will always be more robust than the certification bound indicates. In contrast, Cohen et al. (2019) provided a defence utilizing randomized Gaussian smoothing that leads to tight robustness guarantees under the ℓ2 norm. Moreover, Cohen et al. used Monte Carlo sampling to compute the radius within which a model's prediction is unchanged; we refer to this method as RandomizedSmoothing. In work building on Cohen et al., Salman et al. (2019) developed an adversarial training framework called SmoothAdv and defined a Lipschitz constant of averaged models. Yang et al. (2020) generalize previous randomized smoothing methods by providing robustness guarantees in the ℓ1, ℓ2, and ℓ∞ norms for smoothing with several non-Gaussian distributions." }, { "heading": "3 Deterministic Smoothing", "text": "Suppose we are given a dataset consisting of paired samples (x, y) ∈ X × Y, where x is an example with corresponding true classification y. The supervised learning approach trains a model f : X → R^{Nc} which maps images to a vector whose length equals the number of classes. Suppose f is the initial model, and let fsmooth be the averaged model given by equation (2). Cohen et al. (2019) find a Gaussian smoothed classification model fsmooth by sampling η ∼ N(0, σ²I) independently n times, performing n classifications, and then computing the most popular classification. In the randomized smoothing method, the initial model f is trained on data augmented with Gaussian noise to improve accuracy on noisy images. We take a different approach to Gaussian smoothing. Starting from an accurate pretrained model f, we discard the training labels and iteratively retrain a new model, fsmooth, using a quadratic loss between the predictions of f and the new model, with an additional gradient regularization term. We have found that discarding the original one-hot labels and instead using model predictions helps make the model smoother. To be precise, our new model is the result of minimizing the following loss, which we call HeatSmoothing:\nE_x [ ½ ‖softmax(fsmooth(x)) − softmax(f(x))‖₂² + (σ²/2) ‖∇x fsmooth(x)‖₂² ] (3)\nThe smoothing achieved by the new models is illustrated schematically in Figure 5." }, { "heading": "3.1 Related regularized losses", "text": "Gradient regularization is known to be equivalent to Gaussian smoothing (Bishop, 1995; LeCun et al., 1998). 
Our deterministic smoothed model arises from training with the HeatSmoothing loss (3), which is designed so as to ensure that (2) holds for our model. Our result is related to early results on regularized networks (Bishop, 1995; LeCun et al., 1998): full gradient regularization is equivalent to Gaussian smoothing. Formally, this is stated as follows. Theorem 1. (Bishop, 1995) Training a feed-forward neural-network model using the quadratic (or mean-squared error) loss, with Gaussian noise of mean 0 and variance σ² added to the inputs, is equivalent to training with\nE_x [ ‖f(x) − y‖² + σ² ‖∇f(x)‖² ] (4)\nup to higher order terms.\nThe equivalence is normally used to go from models augmented with Gaussian noise to regularized models. In our case, we use the result in the other direction: we train a regularized model in order to produce a model which is equivalent to evaluating with noise. In practice, this means that rather than adding noise to regularize models for certifiable robustness, we explicitly perform a type of gradient regularization, in order to produce a model which performs as if Gaussian noise was added. See Figure 4 in Appendix D for an illustration of the effect of this gradient regularization. The gradient regularization term in the HeatSmoothing loss (3) is also related to adversarial training. Tikhonov regularization is used to produce adversarially robust models (Finlay and Oberman, 2019). However, in adversarial training, the gradient of the loss is used, rather than the gradient of the full model. Also, our loss does not use information from the true labels. The reason for these differences is that we wish to have a model that approximates the Gaussian average of our initial model f (see Appendix A). Furthermore, minimizing the gradient-norm of the model output gives us a model that is smooth in all directions, rather than one that is only robust to adversarial perturbations." }, { "heading": "3.2 Algorithmic Details", "text": "We have found that, early in training, the value ½ ‖fsmooth(x) − f^k(x)‖₂² may be far greater than the (σ²/2) ‖∇x fsmooth(x)‖₂² term, so we introduced a softmax of the vectors in the distance-squared term to reduce its overall magnitude. We perform the training minimization of (3) for one epoch. The pseudo-code for our neural network weight update is given in Algorithm 1.¹\nNote that the ‖∇x fsmooth(x)‖₂² term in (3) requires the computation of a Jacobian matrix norm, which is computationally expensive in high dimensions. To approximate this term, we make use of the Johnson-Lindenstrauss lemma (Johnson and Lindenstrauss, 1984; Vempala, 2005) followed by the finite-difference approximation from Finlay and Oberman (2019). We approximate ‖∇x fsmooth(x)‖₂² by averaging products of the Jacobian matrix with Gaussian noise vectors. Jacobian-vector products can be easily computed via reverse-mode automatic differentiation, by moving the noise vector w inside:\nw · (∇x v(x)) = ∇x (w · v(x)) (5)\nFurther computational expense is reduced by using finite differences to approximate the norm of the gradient. Once the finite difference is computed, we detach this term from the automatic differentiation computation graph, further speeding up training. 
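Concretely, equation (5) combined with the finite-difference step can be written in a few lines of PyTorch. The sketch below is illustrative (names and defaults are ours, not the paper's released code); it follows the inner loop of Algorithm 1 and equations (16)-(18) of Appendix B, with κ random projections, step size δ, and the σ²/(2δ²) scaling.

```python
import torch

def grad_norm_penalty(v, x, sigma=0.25, kappa=10, delta=0.1):
    """Approximate (sigma^2/2) * ||grad_x v(x)||_2^2 via random projections
    and finite differences, as in Algorithm 1's inner loop (sketch)."""
    x = x.detach().requires_grad_(True)
    out = v(x)                                      # model output, shape (Nc,)
    nc = out.shape[-1]
    penalty = x.new_zeros(())
    for _ in range(kappa):
        w = torch.randn(nc) / nc ** 0.5             # JL direction: entries N(0,1)/sqrt(Nc)
        wv = (w * out).sum()                        # w . v(x)
        (g,) = torch.autograd.grad(wv, x, retain_graph=True)
        g_hat = (g / (g.norm() + 1e-12)).detach()   # normalized gradient, detached, as in (18)
        diff = (w * v(x + delta * g_hat)).sum() - wv
        penalty = penalty + (sigma ** 2 / (2 * delta ** 2)) * diff ** 2
    return penalty
```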
More details of our implementation of these approximation techniques, and the definition of the regularized gradient term ĝ, are presented in Appendix B.\nAlgorithm 1: HeatSmoothing Neural Network Weight Update\nInput: minibatch of input examples x(mb) = (x(1), . . . , x(Nb)); a model v set to "train" mode; the current model f set to "eval" mode; σ, the standard deviation of the Gaussian smoothing; κ, the number of Gaussian noise replications (default = 10); δ, the finite-difference step-size (default = 0.1).\nUpdate the learning rate according to a pre-defined scheduler.\nfor i ∈ {1, . . . , Nb} do\n  Compute fsmooth(x(i)), f(x(i)) ∈ R^{Nc}\n  J_i = ½ ‖fsmooth(x(i)) − f(x(i))‖₂² ∈ R\n  for j ∈ {1, . . . , κ} do\n    Generate w = (1/√Nc)(w_1, . . . , w_{Nc}), with w_1, . . . , w_{Nc} ∼ N(0, 1)\n    Compute the normalized gradient ĝ via (18)\n    Detach x(i) from the computation graph\n    J_i ← J_i + (σ²/(2δ²)) (w · fsmooth(x(i) + δĝ) − w · fsmooth(x(i)))²\n  end\n  J ← (1/Nb) Σ_{i=1}^{Nb} J_i\nend\nUpdate the weights of v by running backpropagation on J at the current learning rate." }, { "heading": "3.3 Theoretical details", "text": "We appeal to partial differential equations (PDE) theory to explain the equivalence between gradient regularization and Gaussian convolution (averaging) of the model.²\n¹Code and links to trained models are posted in the Supplemental Materials.\n²We sometimes interchange the terms Gaussian averaging and Gaussian convolution; they are equivalent, as shown in Theorem 2.\nThe idea is that the gradient term which appears in the loss leads to a smoothing of the new function (model). The fact that the exact form of the smoothing corresponds to Gaussian convolution is a mathematical result which can be interpreted probabilistically or using techniques from analysis. Briefly, we detail the link as follows. Einstein (1906) showed that the function value of a model averaged under Brownian motion is related to the heat equation (a PDE); the theory of stochastic differential equations makes this rigorous (Karatzas and Shreve, 1998). Moreover, solutions of the heat equation are given by Gaussian convolution with the original model. Crucially, solutions of the heat equation can in addition be interpreted as iterations of a regularized loss problem (called a variational energy) like that of equation (3). The minimizer of this variational energy (3) satisfies an equation which is formally equivalent to the heat equation (Gelfand et al., 2000). Thus, taking these facts together, we see that a few steps of the minimization of the loss in (3) yield a model which approximately satisfies the heat equation, and corresponds to a model smoothed by Gaussian convolution. See Figure 4 for an illustration of a few steps of the training procedure. This result is summarized in the following theorem.\nTheorem 2. (Strauss, 2007) Let f be a bounded function, x ∈ R^d, and η ∼ N(0, σ²I). Then the following are equivalent:\n1. E_η[f(x + η)], the expected value of Gaussian averages of f at x.\n2. (f ∗ N(0, σ²I))(x), the convolution of f with the density of the N(0, σ²I) distribution, evaluated at x.\n3. The solutions of the heat equation,\n∂f(x, t)/∂t = (σ²/2) Δ_x f(x, t) (6)\nat time t = 1, with initial condition f(x, 0) = f(x).\nIn Appendix A, we use Theorem 2 to show the equivalence of training with noise and iteratively training (3). To assess how well our model approximates the Gaussian average of the initial model, we compute the certified ℓ2 radius for averaged models introduced in Cohen et al. (2019). 
A larger radius implies a better approximation of the Gaussian average of the initial model. We compare our models with stochastically averaged models via certified accuracy. This is the fraction of the test set which a model correctly classifies at a given radius, ignoring abstained classifications. Throughout, we always use the same σ value for certification as for training. In conjunction with the certification technique of Cohen et al., we also provide the following theorem, which gives a bound based on the Lipschitz constant of a Gaussian averaged model. We refer to this bound as the L-bound; it demonstrates the link between Gaussian averaging and adversarial robustness. Theorem 3 (L-bound). Suppose fsmooth is the convolution (average) of f : R^d → [0, 1]^{Nc} with a Gaussian kernel of variance σ²I,\nfsmooth(x) = (f ∗ N(0, σ²I))(x)\nThen any perturbation δ which results in a change of rank of the k-th component of fsmooth(x) must have norm bounded as follows:\n‖δ‖₂ ≥ σ √(π/2) (fsmooth(x)_(k) − fsmooth(x)_(k+1)) (7)\nwhere fsmooth(x)_(i) is the i-th largest value in the vector fsmooth(x) ∈ [0, 1]^{Nc}.\nSee Appendix C for the proof. This bound is equally applicable to deterministic or stochastically averaged models. For stochastically averaged models, fsmooth(x) is replaced by the stochastic approximation of E_{η∼N(0,σ²I)}[f(x + η)]." }, { "heading": "3.4 Adversarial Attacks", "text": "To test how robust our model is to adversarial examples, we calculate the minimum ℓ2 adversarial perturbation via our L-bound, and we attack our models using the projected gradient descent (PGD) (Kurakin et al., 2017; Madry et al., 2018) and decoupled direction and norm (DDN) (Rony et al., 2019) methods. These attacks are chosen because there is a specific way they can be applied to stochastically averaged models (Salman et al., 2019). In the ℓ2 setting of both attacks, it is standard to take the step\ng = α ∇_{δt} L(f(x + δt), y) / ‖∇_{δt} L(f(x + δt), y)‖₂ (8)\nin the iterative algorithm. Here, x is an input example with corresponding true class y; δt denotes the adversarial perturbation at the current iteration; L denotes the cross-entropy loss function (or KL divergence); ε is the maximum perturbation allowed; and α is the step-size. In the stochastically averaged model setting, the step is given by\ng_n = α ( Σ_{i=1}^{n} ∇_{δt} L(f(x + δt + η_i), y) ) / ‖ Σ_{i=1}^{n} ∇_{δt} L(f(x + δt + η_i), y) ‖₂ (9)\nwhere η_1, . . . , η_n are i.i.d. N(0, σ²I). For our deterministically averaged models, we implement the update (8), because our models are deterministic, so there is no need to sample noise at evaluation time. For the stochastically averaged models (Cohen et al., 2019; Salman et al., 2019), we implement the update (9)." }, { "heading": "4 Experiments & Results", "text": "We now execute our method on the ImageNet-1k dataset (Deng et al., 2009) with the ResNet-50 model architecture. The initial model f was trained on clean images for 29 epochs with the cross-entropy loss function. Due to a lack of computing resources, we had to modify the training procedure (3) and Algorithm 1 to obtain our smoothed model fsmooth. This new training procedure amounts to minimizing the loss function\n½ ‖softmax(fsmooth(x + η)) − softmax(f(x))‖₂² + (σ²/2) ‖∇x fsmooth(x + η)‖₂² (10)\nfor only one epoch of the training set, using stochastic gradient descent at a fixed learning rate of 0.01 and with σ = 0.25. This is needed because the output vectors in the ImageNet setting are of length 1,000. 
Using a softmax in the calculation of the ℓ2 distance term prevents it from dominating the gradient-penalty term and prevents the loss from blowing up. Furthermore, we add noise η ∼ N(0, σ²I) to half of the training images." }, { "heading": "4.1 Comparison to Stochastic Methods via Certified Radii", "text": "We compare our results to a pretrained RandomizedSmoothing ResNet-50 model with σ = 0.25 provided by Cohen et al. (2019). We also compare to a pretrained SmoothAdv ResNet-50 model trained with 1 step of PGD and a maximum perturbation of ε = 0.5, with σ = 0.25, provided by Salman et al. (2019). To assess certified accuracy, we run the Certify algorithm from Cohen et al. (2019) with n0 = 25, n = 1,000, and σ = 0.25 for the stochastically trained models. We realize that this may not be an optimal number of noise samples, but it was the most our computational resources could handle. For the HeatSmoothing model, we run the same certification algorithm but without running SamplingUnderNoise to compute ĉA. For completeness, we also certify the baseline model f0. Certification results on 5,000 ImageNet test images are presented in Figure 1b. We see that our model is indeed comparable to the stochastic methods presented in earlier papers, despite the fact that we only needed one training epoch. Note that CIFAR-10 results are presented in Appendix E." }, { "heading": "4.2 Comparison to Stochastic Methods via Adversarial Attacks", "text": "Next, we attack our four models using PGD and DDN. For the stochastic models, we take 25 noise samples to compute the loss. We run both attacks with a maximum ℓ2 perturbation of ε = 4.0 until top-5 misclassification is achieved or until 20 steps are reached. Results on 1,000 ImageNet test images are presented in Table 3 and Figures 2a and 2b. We see that our model is comparable to the stochastic models, but does not outperform them. In Figure 2a, it is clear that the model presented in Salman et al. (2019) performs best, since this model was trained on PGD-corrupted images. Note that CIFAR-10 results are presented in Appendix E." }, { "heading": "4.3 Comparing Models Using Their Classifications", "text": "Recall that the goal of this work is to use deterministic methods to obtain an averaged model equivalent to that of Cohen et al. (2019). One way of measuring the similarity is to compare the predictions of both models. To do this, we consider a pretrained stochastically averaged model with σ = 0.25 and our deterministic model trained using (10). We randomly select 1,000 ImageNet test images and assess the difference between evaluating our model in a single forward pass vs. the stochastic method presented in Cohen et al. In Figure 3a, we see how often a model's single-forward-pass top-1 prediction matches the predictions of the same model under Cohen et al.'s Predict algorithm, conditioned on the single-forward-pass prediction being correct. In Figure 3b, we then see how often our deterministic model agrees with the prediction of Cohen et al.'s stochastic model. In these plots, for stochastic prediction, we fix the number of Gaussian replications to n = 100. We see that, with high probability, our model's predictions are the same whether we do a single forward pass or a stochastic prediction, and that our model's deterministic predictions match those of a high-performing stochastically averaged model." 
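As an aside, the L-bound of Theorem 3 reduces to a two-line deterministic computation. A minimal sketch for the top-1 case (illustrative names; probs is the output of fsmooth or, for the stochastic baselines, the Monte Carlo estimate of E[f(x + η)]):

```python
import math
import torch

def l_bound_radius(probs, sigma):
    """Theorem 3 lower bound on the l2 perturbation needed to change the
    top-1 prediction: sigma * sqrt(pi/2) * (p_(1) - p_(2)). Sketch only."""
    top2 = torch.topk(probs, k=2).values
    return sigma * math.sqrt(math.pi / 2) * (top2[0] - top2[1]).item()
```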
}, { "heading": "4.4 Certifying Robust Models", "text": "So far, we have shown that we can take a non-robust baseline model and make it certifiably robust by retraining for one epoch with the regularized loss (10). A natural question arises: can we use this method to make robust models certifiably robust? To test this, we begin with an adversarially trained model (Madry et al., 2018). This pretrained model was downloaded from Madry's "Robustness" GitHub repository and was trained on images corrupted by the ℓ2 PGD attack with a maximum perturbation size of ε = 3.0. We certify this model by retraining it with (10) for one epoch using stochastic gradient descent with a fixed learning rate of 0.01. In Table 4, we compute the ℓ2 certified radius from Cohen et al. (2019) for these models using 1,000 ImageNet-1k test images with σ = 0.25. The certified radii of the model trained with the loss function (10) are significantly higher than those of the adversarially trained model from Madry et al. (2018)." }, { "heading": "5 Conclusion", "text": "Randomized smoothing is a well-known method for achieving a Gaussian average of some initial neural network. This is desirable in order to guarantee that a model's predictions are unchanged given perturbed input data. In this work, we used a regularized loss to obtain deterministic Gaussian averaged models. By computing ℓ2 certified radii, we showed that our method is comparable to previously-known stochastic methods. This is confirmed by attacking our models, which results in adversarial distances similar to those seen with stochastically smoothed models. We also developed a new lower bound on the perturbations necessary to throw off averaged models, and used it as a measure of model robustness. Lastly, our method is less computationally expensive in terms of inference time (see Table 2)." }, { "heading": "A Solving the heat equation by training with a regularized loss function", "text": "Theorem 2 tells us that training a model with added Gaussian noise is equivalent to training a model to solve the heat equation. We can discretize the heat equation (6) to obtain\n(f^{k+1} − f^k)/h = (σ²/2) Δ f^{k+1} (11)\nfor k = 0, . . . , nT − 1, where nT is the fixed number of timesteps, h = 1/nT, and f⁰ = f, our initial model. Notice how, using the Euler-Lagrange equation, we can express f^{k+1} in (11) as the variational problem\nf^{k+1} = argmin_v ½ ∫_{R^d} ( |v(x) − f^k(x)|² + (hσ²/2) ‖∇x v(x)‖₂² ) ρ(x) dx (12)\nwhere ρ is the density from which our clean data comes. Therefore, this is equivalent to solving\nf^{k+1} = argmin_v E_x [ |v(x) − f^k(x)|² + (hσ²/2) ‖∇x v(x)‖₂² ] (13)\nNote that the minimizer of the objective of (12) satisfies\nv − f^k = (hσ²/2) Δv (14)\nwhich matches (11) if we set f^{k+1} = v. In the derivation of (14), we take for granted that, empirically, ρ is approximately uniform and is therefore constant. In the end, we iteratively compute (13) and obtain models f¹, . . . , f^{nT}, setting v = f^{nT}, our smoothed model. Note that our model outputs are vectors whose length corresponds to the total number of classes; therefore, the objective function in (13) is not suitable for the vector-valued outputs f^k(x) and v(x). 
We instead use the following update\nf^{k+1} = argmin_v E_x [ ½ ‖v(x) − f^k(x)‖₂² + (hσ²/2) ‖∇x v(x)‖₂² ] (15)" }, { "heading": "B Approximating the gradient-norm regularization term", "text": "By the Johnson-Lindenstrauss lemma (Johnson and Lindenstrauss, 1984; Vempala, 2005), ‖∇x v(x)‖₂² has the following approximation:\n‖∇x v(x)‖₂² ≈ Σ_{i=1}^{κ} ‖∇x (w_i · v(x))‖₂² ≈ Σ_{i=1}^{κ} ( (w_i · v(x + δĝ_i) − w_i · v(x)) / δ )² (16)\nwhere\nw_i = (1/√K)(w_{i1}, . . . , w_{iK})ᵀ ∈ R^K, with w_{ij} i.i.d. N(0, 1) (17)\nand ĝ_i is given by\nĝ_i = ∇x(w_i · v(x)) / ‖∇x(w_i · v(x))‖₂ if ∇x(w_i · v(x)) ≠ 0, and ĝ_i = 0 otherwise (18)\nIn practice, we set δ = 0.1, κ = 10, and K = Nc, the total number of classes." }, { "heading": "C Proof of Theorem 3", "text": "Proof. Suppose the loss function ℓ is Lipschitz continuous with respect to the model input x, with Lipschitz constant L. Let ℓ₀ be such that if ℓ(x) < ℓ₀, the model is always correct. Then by Proposition 2.2 in Finlay and Oberman (2019), a lower bound on the minimum magnitude of a perturbation δ necessary to adversarially perturb an image x is given by\n‖δ‖₂ ≥ max{ℓ₀ − ℓ(x), 0} / L (19)\nBy Lemma 1 of Appendix A in Salman et al. (2019), our averaged model\nfsmooth(x) = (f ∗ N(0, σ²I))(x)\nhas Lipschitz constant L = (1/σ)√(2/π). Replacing L in (19) with (1/σ)√(2/π) and setting ℓ₀ = fsmooth(x)_(k), ℓ(x) = fsmooth(x)_(k+1) gives us the proof.\nD Illustration of regularized training" }, { "heading": "E Results on the CIFAR-10 dataset", "text": "Model | L-bound median | L-bound mean | PGD median | PGD mean | DDN median | DDN mean\nHeatSmoothing | 0.094 | 0.085 | 0.7736 | 0.9023 | 0.5358 | 0.6361\nSmoothAdv | 0.090 | 0.078 | 0.7697 | 1.3241 | 0.4812 | 0.6208\nRandomizedSmoothing | 0.087 | 0.081 | 0.7425 | 1.2677 | 0.4546 | 0.5558\nUndefended baseline | - | - | 0.7088 | 0.8390 | 0.4911 | 0.5713" } ]
2020
null
SP:a685e4a6a1f6f3d69a9f0968145b6afd805dc5ab
[ "This work explores combining an RNN and a neural density estimator for forecasting in multivariate time series. An RNN is stacked with a density estimator (MAF gives the best results) to forecast the density of a multivariate time series at future time steps. In addition, variations of the architecture with attention and other density estimators are examined. The architecture, RNN+MAF and variants, is evaluated by CRPS score on several datasets.", "The paper proposes a method to provide probabilistic forecasts of multivariate time series that takes dependencies between series into account, even in large dimensions. The approach consists in using a normalizing flow to model the distribution of observations at a time step, conditioned on a state that can be obtained either with an RNN or a transformer. The motivation for using the normalizing flow is to be able to model various types of distributions without having to make specific hypotheses about the distribution, which could hinder accuracy. Experiments are performed both on a synthetic task and on real-world datasets, where the accuracy is shown to outperform previous methods." ]
Time series forecasting is often fundamental to scientific and engineering problems and enables decision making. With ever increasing data set sizes, a trivial solution to scale up predictions is to assume independence between interacting time series. However, modeling statistical dependencies can improve accuracy and enable analysis of interaction effects. Deep learning methods are well suited for this problem, but multivariate models often assume a simple parametric distribution and do not scale to high dimensions. In this work we model the multivariate temporal dynamics of time series via an autoregressive deep learning model, where the data distribution is represented by a conditioned normalizing flow. This combination retains the power of autoregressive models, such as good performance in extrapolation into the future, with the flexibility of flows as a general purpose high-dimensional distribution model, while remaining computationally tractable. We show that it improves over the state-of-the-art for standard metrics on many real-world data sets with several thousand interacting time-series.
[ { "affiliations": [], "name": "Kashif Rasul" }, { "affiliations": [], "name": "Abdul-Saboor Sheikh" }, { "affiliations": [], "name": "Ingmar Schuster" }, { "affiliations": [], "name": "Urs Bergmann" }, { "affiliations": [], "name": "Roland Vollgraf" } ]
[ { "authors": [ "Alexander Alexandrov", "Konstantinos Benidis", "Michael Bohlke-Schneider", "Valentin Flunkert", "Jan Gasthaus", "Tim Januschowski", "Danielle C. Maddix", "Syama Rangapuram", "David Salinas", "Jasper Schulz", "Lorenzo Stella", "Ali Caner Türkmen", "Yuyang Wang" ], "title": "GluonTS: Probabilistic and Neural Time Series Modeling in Python", "venue": "Journal of Machine Learning Research,", "year": 2020 }, { "authors": [ "Sam Charrington" ], "title": "TWiML & AI Podcast: Systems and software for machine learning at scale with Jeff Dean, 2018", "venue": "URL https://bit.ly/2G0LmGg", "year": 2018 }, { "authors": [ "XI Chen", "Nikhil Mishra", "Mostafa Rohaninejad", "Pieter Abbeel" ], "title": "PixelSNAIL: An improved autoregressive generative model", "venue": "Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Junyoung Chung", "Caglar Gulcehre", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Empirical evaluation of gated recurrent neural networks on sequence modeling", "venue": "In NIPS 2014 Workshop on Deep Learning,", "year": 2014 }, { "authors": [ "Djork-Arné Clevert", "Thomas Unterthiner", "Sepp Hochreiter" ], "title": "Fast and accurate deep network learning by exponential linear units (elus)", "venue": "In Yoshua Bengio and Yann LeCun (eds.), 4th International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Emmanuel de Bézenac", "Syama Sundar Rangapuram", "Konstantinos Benidis", "Michael BohlkeSchneider", "Richard Kurle", "Lorenzo Stella", "Hilaf Hasson", "Patrick Gallinari", "Tim Januschowski" ], "title": "Normalizing Kalman Filters for Multivariate Time series Analysis", "venue": "In Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Alexandre De Brébisson", "Étienne Simon", "Alex Auvolat", "Pascal Vincent", "Yoshua Bengio" ], "title": "Artificial neural networks applied to taxi destination prediction", "venue": "In Proceedings of the 2015th International Conference on ECML PKDD Discovery Challenge - Volume 1526,", "year": 2015 }, { "authors": [ "Laurent Dinh", "Jascha Sohl-Dickstein", "Samy Bengio" ], "title": "Density estimation using Real NVP", "venue": "In International Conference on Learning Representations 2017 (Conference Track),", "year": 2017 }, { "authors": [ "Jonathan Ho", "Xi Chen", "Aravind Srinivas", "Yan Duan", "Pieter Abbeel" ], "title": "Flow++: Improving flowbased generative models with variational dequantization and architecture design", "venue": "Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "S. Hochreiter", "J. Schmidhuber" ], "title": "Long Short-Term Memory", "venue": "Neural Computation,", "year": 1997 }, { "authors": [ "J.D. Hunter" ], "title": "Matplotlib: A 2d graphics environment", "venue": "Computing in Science & Engineering,", "year": 2007 }, { "authors": [ "Seong Jae Hwang", "Zirui Tao", "Won Hwa Kim", "Vikas Singh" ], "title": "Conditional recurrent flow: Conditional generation of longitudinal samples with applications to neuroimaging", "venue": "In The IEEE International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "R.J. Hyndman", "G. 
Athanasopoulos" ], "title": "Forecasting: Principles and practice", "venue": "OTexts,", "year": 2018 }, { "authors": [ "Rob Hyndman", "Anne Koehler", "Keith Ord", "Ralph Snyder" ], "title": "Forecasting with exponential smoothing", "venue": "The state space approach,", "year": 2008 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "In Proceedings of the 32Nd International Conference on International Conference on Machine Learning - Volume 37,", "year": 2015 }, { "authors": [ "Alexander Jordan", "Fabian Krüger", "Sebastian Lerch" ], "title": "Evaluating probabilistic forecasts with scoringRules", "venue": "Journal of Statistical Software, Articles,", "year": 2019 }, { "authors": [ "Diederick P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Iryna Korshunova", "Yarin Gal", "Arthur Gretton", "Joni Dambre" ], "title": "Conditional BRUNO: A Deep Recurrent Process for Exchangeable Labelled Data", "venue": "In Bayesian Deep Learning workshop,", "year": 2018 }, { "authors": [ "Rahul G Krishnan", "Uri Shalit", "David Sontag" ], "title": "Structured inference networks for nonlinear state space models", "venue": "In AAAI,", "year": 2017 }, { "authors": [ "Manoj Kumar", "Mohammad Babaeizadeh", "Dumitru Erhan", "Chelsea Finn", "Sergey Levine", "Laurent Dinh", "Durk Kingma" ], "title": "VideoFlow: A Flow-Based Generative Model for Video", "venue": "In Workshop on Invertible Neural Nets and Normalizing Flows ,", "year": 2019 }, { "authors": [ "Guokun Lai", "Wei-Cheng Chang", "Yiming Yang", "Hanxiao Liu" ], "title": "Modeling long- and short-term temporal patterns with deep neural networks", "venue": "In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval,", "year": 2018 }, { "authors": [ "Shiyang Li", "Xiaoyong Jin", "Yao Xuan", "Xiyou Zhou", "Wenhu Chen", "Yu-Xiang Wang", "Xifeng Yan" ], "title": "Enhancing the locality and breaking the memory bottleneck of transformer on time series forecasting", "venue": "Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "H. Lütkepohl" ], "title": "New Introduction to Multiple Time Series Analysis", "venue": "URL https://books.google.de/books?id=muorJ6FHIiEC", "year": 2007 }, { "authors": [ "James E. Matheson", "Robert L. Winkler" ], "title": "Scoring rules for continuous probability distributions", "venue": "Management Science,", "year": 1976 }, { "authors": [ "Junier Oliva", "Avinava Dubey", "Manzil Zaheer", "Barnabas Poczos", "Ruslan Salakhutdinov", "Eric Xing", "Jeff Schneider" ], "title": "Transformation autoregressive networks", "venue": "Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Boris N. 
Oreshkin", "Dmitri Carpov", "Nicolas Chapados", "Yoshua Bengio" ], "title": "N-BEATS: Neural basis expansion analysis for interpretable time series forecasting", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "George Papamakarios", "Theo Pavlakou", "Iain Murray" ], "title": "Masked autoregressive flow for density estimation", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "George Papamakarios", "Eric Nalisnick", "Danilo Jimenez Rezende", "Shakir Mohamed", "Balaji Lakshminarayanan" ], "title": "Normalizing flows for probabilistic modeling and inference, 2019", "venue": null, "year": 2019 }, { "authors": [ "Niki Parmar", "Ashish Vaswani", "Jakob Uszkoreit", "Lukasz Kaiser", "Noam Shazeer", "Alexander Ku", "Dustin Tran" ], "title": "Image transformer", "venue": "Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "David Salinas", "Michael Bohlke-Schneider", "Laurent Callot", "Roberto Medico", "Jan Gasthaus" ], "title": "Highdimensional multivariate forecasting with low-rank Gaussian copula processes", "venue": "Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "David Salinas", "Valentin Flunkert", "Jan Gasthaus", "Tim Januschowski" ], "title": "DeepAR: Probabilistic forecasting with autoregressive recurrent networks", "venue": "International Journal of Forecasting,", "year": 2019 }, { "authors": [ "E.G. Tabak", "C.V. Turner" ], "title": "A family of nonparametric density estimation algorithms", "venue": "Communications on Pure and Applied Mathematics,", "year": 2013 }, { "authors": [ "L. Theis", "A. van den Oord", "M. Bethge" ], "title": "A note on the evaluation of generative models", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Dustin Tran", "Keyon Vafa", "Kumar Agrawal", "Laurent Dinh", "Ben Poole" ], "title": "Discrete flows: Invertible generative models of discrete data", "venue": "Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Ruey S. Tsay" ], "title": "Multivariate Time Series Analysis: With R and Financial Applications", "venue": "Wiley Series in Probability and Statistics. Wiley,", "year": 2014 }, { "authors": [ "Roy van der Weide" ], "title": "GO-GARCH: a multivariate generalized orthogonal GARCH model", "venue": "Journal of Applied Econometrics,", "year": 2002 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Ł ukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "Advances in Neural Information Processing Systems", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Classical time series forecasting methods such as those in Hyndman & Athanasopoulos (2018) typically provide univariate forecasts and require hand-tuned features to model seasonality and other parameters. Time series models based on recurrent neural networks (RNN), like LSTM (Hochreiter & Schmidhuber, 1997), have become popular methods due to their end-to-end training, the ease of incorporating exogenous covariates, and their automatic feature extraction abilities, which are the hallmarks of deep learning. Forecasting outputs can either be points or probability distributions, in which case the forecasts typically come with uncertainty bounds.\nThe problem of modeling uncertainties in time series forecasting is of vital importance for assessing how much to trust the predictions for downstream tasks, such as anomaly detection or (business) decision making. Without probabilistic modeling, the importance of the forecast in regions of low noise (small variance around a mean value) versus a scenario with high noise cannot be distinguished. Hence, point estimation models ignore risk stemming from this noise, which would be of particular importance in some contexts such as making (business) decisions.\nFinally, individual time series, in many cases, are statistically dependent on each other, and models need the capacity to adapt to this in order to improve forecast accuracy (Tsay, 2014). For example, to model the demand for a retail article, it is important to not only model its sales dependent on its own past sales, but also to take into account the effect of interacting articles, which can lead to cannibalization effects in the case of article competition. As another example, consider traffic flow in a network of streets as measured by occupancy sensors. A disruption on one particular street will also ripple to occupancy sensors of nearby streets—a univariate model would arguably not be able to account for these effects.\nIn this work, we propose end-to-end trainable autoregressive deep learning architectures for probabilistic forecasting that explicitly models multivariate time series and their temporal dynamics by employing a normalizing flow, like the Masked Autoregressive Flow (Papamakarios et al., 2017) or\nReal NVP (Dinh et al., 2017). These models are able to scale to thousands of interacting time series, we show that they are able to learn ground-truth dependency structure on toy data and we establish new state-of-the-art results on diverse real world data sets by comparing to competitive baselines. Additionally, these methods adapt to a broad class of underlying data distribution on account of using a normalizing flow and our Transformer based model is highly efficient due to the parallel nature of attention layers while training.\nThe paper first provides some background context in Section 2. We cover related work in Section 3. Section 4 introduces our model and the experiments are detailed in Section 5. We conclude with some discussion in Section 6. The Appendix contains details of the datasets, additional metrics and exploratory plots of forecast intervals as well as details of our model." }, { "heading": "2 BACKGROUND", "text": "" }, { "heading": "2.1 DENSITY ESTIMATION VIA NORMALIZING FLOWS", "text": "Normalizing flows (Tabak & Turner, 2013; Papamakarios et al., 2019) are mappings from RD to RD such that densities pX on the input space X = RD are transformed into some simple distribution pZ (e.g. an isotropic Gaussian) on the space Z = RD. 
These mappings, f : X → Z, are composed of a sequence of bijections or invertible functions. Due to the change of variables formula we can express p_X(x) by

p_X(x) = p_Z(z) |det(∂f(x)/∂x)|,

where ∂f(x)/∂x is the Jacobian of f at x. Normalizing flows have the property that the inverse x = f^{-1}(z) is easy to evaluate and computing the Jacobian determinant takes O(D) time.

The bijection introduced by Real NVP (Dinh et al., 2017), called the coupling layer, satisfies the above two properties. It leaves part of its inputs unchanged and transforms the other part via functions of the un-transformed variables (with superscripts denoting the coordinate indices):

y^{1:d} = x^{1:d},
y^{d+1:D} = x^{d+1:D} ⊙ exp(s(x^{1:d})) + t(x^{1:d}),

where ⊙ is an element-wise product, s() is a scaling and t() a translation function from R^d → R^{D−d}, given by neural networks. To model a nonlinear density map f(x), a number of coupling layers which map X → Y_1 → · · · → Y_{K−1} → Z are composed together, all the while alternating the dimensions which are unchanged and transformed. Via the change of variables formula, the probability density function (PDF) of the flow given a data point can be written as

log p_X(x) = log p_Z(z) + log |det(∂z/∂x)| = log p_Z(z) + Σ_{i=1}^{K} log |det(∂y_i/∂y_{i−1})|.   (1)

Note that the Jacobian for Real NVP is a block-triangular matrix, and thus the log-determinant of each map simply becomes

log |det(∂y_i/∂y_{i−1})| = log |exp(sum(s_i(y_{i−1}^{1:d})))|,   (2)

where sum() is the sum over all the vector elements. This model, parameterized by the weights θ of the scaling and translation neural networks, is then trained via stochastic gradient descent (SGD) on training data points, where for each batch D we maximize the average log-likelihood (1), given by

L = (1/|D|) Σ_{x ∈ D} log p_X(x; θ).

In practice, Batch Normalization (Ioffe & Szegedy, 2015) is applied as a bijection to the outputs of successive coupling layers to stabilize the training of normalizing flows. This bijection implements the normalization procedure using a weighted moving average of the layer's mean and standard deviation values, which has to be adapted to either the training or the inference regime.

The Real NVP approach can be generalized, resulting in Masked Autoregressive Flows (MAF) (Papamakarios et al., 2017), where the transformation layer is built as an autoregressive neural network, in the sense that it takes in some input x ∈ R^D and outputs y = (y_1, . . . , y_D) with the requirement that this transformation is invertible and that any output y_i cannot depend on inputs with dimension indices ≥ i, i.e. x_{≥i}. The Jacobian of this transformation is triangular, and thus the Jacobian determinant is tractable. Instead of using an RNN to share parameters across the D dimensions of x, one avoids this sequential computation by using masking, giving the method its name. The inverse, however, which is needed for generating samples, is sequential.

By realizing that the scaling and translation function approximators do not need to be invertible, it is straightforward to implement conditioning of the PDF p_X(x|h) on some additional information h ∈ R^H: we concatenate h to the inputs of the scaling and translation function approximators of the coupling layers, i.e. s(concat(x^{1:d}, h)) and t(concat(x^{1:d}, h)), which are modified to map R^{d+H} → R^{D−d}. Another approach is to add a bias computed from h to every layer inside the s and t networks, as proposed by Korshunova et al. (2018). This does not change the log-determinant of the coupling layers given by (2). 
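To make the conditional coupling layer concrete, the following is a minimal PyTorch sketch of one conditioned Real NVP block. This is our illustrative code, not the authors' implementation; the hidden width of 100 and the ELU activations follow Appendix D.2, while the tanh on the scale output is an extra assumption for numerical stability.

import torch
import torch.nn as nn

class ConditionalAffineCoupling(nn.Module):
    # One conditioned Real NVP coupling layer: y_{1:d} = x_{1:d},
    # y_{d+1:D} = x_{d+1:D} * exp(s(x_{1:d}, h)) + t(x_{1:d}, h).
    def __init__(self, dim, cond_dim, hidden=100):
        super().__init__()
        self.d = dim // 2
        self.s = nn.Sequential(nn.Linear(self.d + cond_dim, hidden), nn.ELU(),
                               nn.Linear(hidden, dim - self.d), nn.Tanh())
        self.t = nn.Sequential(nn.Linear(self.d + cond_dim, hidden), nn.ELU(),
                               nn.Linear(hidden, dim - self.d))

    def forward(self, x, h):
        x1, x2 = x[:, :self.d], x[:, self.d:]
        s = self.s(torch.cat([x1, h], dim=-1))
        t = self.t(torch.cat([x1, h], dim=-1))
        y = torch.cat([x1, x2 * torch.exp(s) + t], dim=-1)
        log_det = s.sum(dim=-1)          # log|det| of the block-triangular Jacobian, cf. (2)
        return y, log_det

    def inverse(self, y, h):             # closed-form inverse, used for sampling
        y1, y2 = y[:, :self.d], y[:, self.d:]
        s = self.s(torch.cat([y1, h], dim=-1))
        t = self.t(torch.cat([y1, h], dim=-1))
        return torch.cat([y1, (y2 - t) * torch.exp(-s)], dim=-1)

Stacking K such layers, while permuting which coordinates pass through unchanged, yields the full flow; the per-layer log-determinants simply add up, as in (1).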
More importantly for us, for sequential data indexed by t, we can share parameters across the different conditioners h_t by using RNNs or attention in an autoregressive fashion.

For discrete data the distribution has differential entropy of negative infinity, which leads to arbitrarily high likelihood when training normalizing flow models, even on test data. To avoid this, one can dequantize the data, often by adding Uniform[0, 1) noise to integer-valued data. The log-likelihood of the resulting continuous model is then lower-bounded by the log-likelihood of the discrete one, as shown in Theis et al. (2016)." }, { "heading": "2.2 SELF-ATTENTION", "text": "The self-attention based Transformer (Vaswani et al., 2017) model has been used for sequence modeling with great success. The multi-head self-attention mechanism enables it to capture both long- and short-term dependencies in time series data. Essentially, the Transformer takes in a sequence X = [x_1, . . . , x_T]^T ∈ R^{T×D}, and the multi-head self-attention transforms this into H distinct query Q_h = X W_h^Q, key K_h = X W_h^K, and value V_h = X W_h^V matrices, where W_h^Q, W_h^K, and W_h^V are learnable parameters. After these linear projections, the scaled dot-product attention computes a sequence of vector outputs via

O_h = Attention(Q_h, K_h, V_h) = softmax( (Q_h K_h^T / sqrt(d_K)) · M ) V_h,

where a mask M can be applied to filter out right-ward attention (i.e. future information leakage) by setting its upper-triangular elements to −∞, and we normalize by d_K, the dimension of the W_h^K matrices. Afterwards, all H outputs O_h are concatenated and linearly projected again.

One typically uses the Transformer in an encoder-decoder setup, where some warm-up time series is passed through the encoder and the decoder can be used to learn and autoregressively generate outputs.
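For reference, here is a minimal sketch of the causally masked scaled dot-product attention described above (our illustrative code: a single head, without the linear projections):

import math
import torch

def causal_attention(Q, K, V):
    # Scaled dot-product attention with an upper-triangular -inf mask so that
    # the output at position t depends only on inputs at positions <= t.
    d_k = K.shape[-1]
    scores = Q @ K.transpose(-2, -1) / math.sqrt(d_k)
    T = scores.shape[-1]
    mask = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
    scores = scores.masked_fill(mask, float("-inf"))   # block future information leakage
    return torch.softmax(scores, dim=-1) @ V

Q = K = V = torch.randn(10, 16)       # T = 10 steps, d_k = 16
out = causal_attention(Q, K, V)       # out[t] is a convex combination of V[:t+1]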
" }, { "heading": "3 RELATED WORK", "text": "Related to this work are models that combine normalizing flows with sequential modeling in some way. Transformation Autoregressive Networks (Oliva et al., 2018) model the density of a multivariate variable x ∈ R^D as D conditional distributions Π_{i=1}^D p_X(x_i | x_{i−1}, . . . , x_1), where the conditioning is given by a mixture model coming from the state of an RNN and is then transformed via a bijection. The PixelSNAIL (Chen et al., 2018) method also models the joint as a product of conditional distributions, optionally with some global conditioning, via causal convolutions and self-attention (Vaswani et al., 2017) to capture long-term temporal dependencies. These methods are well suited to modeling high-dimensional data like images; however, their use in modeling the temporal development of data has only recently been explored, for example in VideoFlow (Kumar et al., 2019), which models the distribution of the next video frame via a flow whose base-distribution parameters are output by a ConvNet. Our approach, by contrast, is based on conditioning of the PDF as described above.

Using RNNs for modeling either multivariate or temporal dynamics introduces sequential computational dependencies that are not amenable to parallelization. Despite this, RNNs have been shown to be very effective in modeling sequential dynamics. A recent work in this direction (Hwang et al., 2019) employs bipartite flows with RNNs for temporal conditioning to develop a conditional generative model of multivariate sequential data. The authors use a bidirectional training procedure to learn a generative model of observations that, together with the temporal conditioning through an RNN, can also be conditioned on (observed) covariates, which are modeled as additional conditioning variables in the latent space and add extra padding dimensions to the normalizing flow.

The other aspect of related work concerns multivariate probabilistic time series methods that are able to model high-dimensional data. The Gaussian Copula Process method (Salinas et al., 2019a) is an RNN-based time series method with a Gaussian copula process output, modeled using a low-rank covariance structure to reduce computational complexity and handle non-Gaussian marginal distributions. By using a low-rank approximation of the covariance matrix, it obtains a computationally tractable method and is able to scale to multivariate dimensions in the thousands with state-of-the-art results. We will compare our model to this method in what follows." }, { "heading": "4 TEMPORAL CONDITIONED NORMALIZING FLOWS", "text": "We denote the entities of a multivariate time series by x_t^i ∈ R for i ∈ {1, . . . , D}, where t is the time index. Thus the multivariate vector at time t is given by x_t ∈ R^D. In what follows we consider time series with t ∈ [1, T], sampled from the complete time series history of our data, where for training we split this time series into some context window [1, t_0) and prediction window [t_0, T].

In the DeepAR model (Salinas et al., 2019b), the log-likelihood of each entity x_t^i at a time step t ∈ [t_0, T] is maximized given an individual time series' prediction window. This is done with respect to the parameters of the chosen distributional model (e.g. negative binomial for count data) via the state of an RNN derived from the previous time step x_{t−1}^i and the current covariates c_t^i. The emission distribution model, which is typically Gaussian for real-valued data or negative binomial for count data, is selected to best match the statistics of the time series, and the network incorporates activation functions that satisfy the constraints of these distribution parameters, e.g. a softplus() for the scale parameter of the Gaussian.

A simple model for multivariate real-valued data could use a factorizing distribution in the emissions. Shared parameters can then learn patterns across the individual time series through the temporal component, but the model falls short of capturing dependencies in the emissions. For this, a full joint distribution at each time step must be modeled, for example by using a multivariate Gaussian. However, modeling the full covariance matrix not only increases the number of parameters of the neural network by O(D^2), making learning difficult, but computing the loss also becomes expensive when D is large. Furthermore, statistical dependencies in the emissions would be limited to second-order effects. These models are referred to as Vec-LSTM in Salinas et al. (2019a).

We wish to have a scalable model of D interacting time series x_t, and further to use a flexible distribution model for the emissions that allows capturing and representing higher-order moments. To this end, we model the conditional joint distribution at time t of all time series, p_X(x_t | h_t; θ), with a flow, e.g. a Real NVP, conditioned on either the hidden state of an RNN at time t or an embedding of the time series up to t−1 from an attention module. 
In the case of an autoregressive RNN (either an LSTM or a GRU (Chung et al., 2014)), its hidden state h_t is updated given the previous time step's observation x_{t−1} and the current time step's covariates c_t (as in Figure 1):

h_t = RNN(concat(x_{t−1}, c_t), h_{t−1}).   (3)

This model is autoregressive since it consumes the observation of the last time step, x_{t−1}, as well as the recurrent state, h_{t−1}, to produce the state h_t on which we condition the current observation.

To get a powerful and general emission distribution model, we stack K layers of a conditional flow module (Real NVP or MAF); together with the RNN, we arrive at our model of the conditional distribution of the future of all time series, given their past t ∈ [1, t_0) and all the covariates in t ∈ [1, T]. As the model is autoregressive, it can be written as a product of factors

p_X(x_{t_0:T} | x_{1:t_0−1}, c_{1:T}; θ) = Π_{t=t_0}^T p_X(x_t | h_t; θ),   (4)

where θ denotes the set of all parameters of both the flow and the RNN.

For modeling the time evolution, we also investigate an encoder-decoder Transformer (Vaswani et al., 2017) architecture, where the encoder embeds x_{1:t_0−1} and the decoder outputs the conditioning for the flow over x_{t_0:T} via a masked attention module. See Figure 2 for a schematic of the overall model in this case. While training, care has to be taken to prevent the use of information from future time points and to preserve the autoregressive property, by utilizing a mask that reflects the causal direction of progressing time, i.e. a mask over future time points. The Transformer allows the model to access any part of the historic time series regardless of temporal distance (Li et al., 2019) and is thus potentially able to generate better conditioning for the normalizing flow head.

In real-world data the magnitudes of different time series can vary drastically. To normalize scales, we divide each individual time series by its training window mean before feeding it into the model. At inference the distributions are then correspondingly transformed with the same mean values to match the original scale. This rescaling technique simplifies the problem for the model, which is reflected in significantly improved empirical performance, as noted in Salinas et al. (2019b)." }, { "heading": "4.1 TRAINING", "text": "Given D, a batch of time series, where for each time series and each time step we have x_t ∈ R^D and the associated covariates c_t, we maximize the log-likelihood given by (1) and (3), i.e.

L = (1 / (|D| T)) Σ_{x_{1:T} ∈ D} Σ_{t=1}^T log p_X(x_t | h_t; θ),

via SGD using Adam (Kingma & Ba, 2015) with respect to the parameters θ of the conditional flow and the RNN or Transformer. In practice, the time series x_{1:T} in a batch D are selected from a random time window of size T within our training data, and the relative time steps are kept constant. This allows the model to learn to cold-start given only the covariates. It also increases the size of our training data when the available time history is small, and allows us to trade off computation time against memory consumption, especially when D or T are large.

Note that information about absolute time is only available to the RNN or Transformer via the covariates, and not via the relative position of x_t in the training data.
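A minimal sketch of this training loop is given below. It is illustrative only, and assumes an rnn module as in (3) (e.g. a batch-first GRU) and a conditional flow exposing a log_prob(x, h) method that returns log p_X(x_t | h_t) for every step; both interfaces are our assumptions.

import torch

def train_epoch(rnn, flow, loader, optimizer):
    # loader yields windows x of shape (B, T, D) and covariates c of shape (B, T, C)
    for x, c in loader:
        # teacher forcing: condition step t on the observed x_{t-1} and covariates c_t, cf. (3)
        x_prev = torch.cat([torch.zeros_like(x[:, :1]), x[:, :-1]], dim=1)
        h, _ = rnn(torch.cat([x_prev, c], dim=-1))   # hidden states h_t for all t in parallel
        loss = -flow.log_prob(x, h).mean()           # average negative log-likelihood
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()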
The Transformer has computational complexity O(T^2 D), compared to O(T D^2) for an RNN, where T is the time series length and where we assume that the dimension of the hidden state is proportional to the number D of simultaneously modeled time series. This means that for large multivariate time series, i.e. D > T, the Transformer flow model has smaller computational complexity and, unlike the RNN, all computation over the time dimension happens in parallel during training." }, { "heading": "4.2 COVARIATES", "text": "We employ embeddings for categorical features (Charrington, 2018), which allows relationships within a category, or its context, to be captured while training models. Combining such embeddings as features for time series forecasting yields powerful models, like the first-place winner of the Kaggle Taxi Trajectory Prediction challenge^1 (De Brébisson et al., 2015). The covariates c_t we use are composed of time-dependent (e.g. day of week, hour of day) and time-independent embeddings, if applicable, as well as lag features depending on the time frequency of the data set we are training on. All covariates are thus known for the time periods we wish to forecast." }, { "heading": "4.3 INFERENCE", "text": "For inference we either obtain the hidden state ĥ_{t_1} by passing a "warm-up" time series x_{1:t_1−1} through the RNN, or use the cold-start hidden state, i.e. we set ĥ_{t_1} = h_1 = 0. Then, by sampling a noise vector z_{t_1} ∈ R^D from an isotropic Gaussian, we go backward through the flow to obtain a sample of our time series for the next time step, x̂_{t_1} = f^{−1}(z_{t_1} | ĥ_{t_1}), conditioned on this starting state. We then use this sample and its covariates to obtain the next conditioning state ĥ_{t_1+1} via the RNN, and repeat until our inference horizon. This process of sampling trajectories from some initial state can be repeated many times to obtain empirical quantiles of the uncertainty of our prediction for arbitrarily long forecast horizons.

The attention model similarly uses a warm-up time series x_{1:t_1−1} and covariates and passes them through the encoder, and then uses the decoder to output the conditioning for sampling from the flow. This sample is then used again in the decoder to iteratively sample the next conditioning state, similar to the inference procedure in seq-to-seq models.

Note that we do not sample from a reduced-temperature model, e.g. by scaling the variance of the isotropic Gaussian, unlike what is done in likelihood-based generative models (Parmar et al., 2018) to obtain higher-quality samples." }, { "heading": "5 EXPERIMENTS", "text": "Here we discuss a toy experiment for sanity-checking our model and evaluate probabilistic forecasting results on six real-world data sets against competitive baselines. The source code of the model, as well as of other time series models, is available at https://github.com/zalandoresearch/pytorch-ts." }, { "heading": "5.1 SIMULATED FLOW IN A SYSTEM OF PIPES", "text": "In this toy experiment, we check whether the inductive bias of incorporating relations between time series is learnt by our model, by simulating the flow of a liquid in a system of pipes with valves. See Figure 3 for a depiction of the system.

Liquid flows from left to right, where the pressure at the first sensor in the system is given by S_0 = X + 3, with X ∼ Gamma(1, 0.2) in the shape/scale parameterization of the Gamma distribution. The valves are given by V_1, V_2 ∼ iid Beta(0.5, 0.5), and we have

S_i = (V_i / (V_1 + V_2)) S_0 + ε_i

for i ∈ {1, 2}, and finally S_3 = S_1 + S_2 + ε_3, with ε_* ∼ N(0, 0.1).
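The generative process above is easy to reproduce; here is a minimal NumPy sketch of the simulator (our own illustrative code: we read N(0, 0.1) as a standard deviation of 0.1 and resample the valve openings at every step, both of which are assumptions):

import numpy as np

def simulate_pipes(T, seed=0):
    # Simulate T steps of the four pressure sensors S0..S3 of Figure 3.
    rng = np.random.default_rng(seed)
    S0 = rng.gamma(shape=1.0, scale=0.2, size=T) + 3.0   # source pressure
    V1 = rng.beta(0.5, 0.5, size=T)                      # valve openings
    V2 = rng.beta(0.5, 0.5, size=T)
    eps = rng.normal(0.0, 0.1, size=(3, T))
    S1 = V1 / (V1 + V2) * S0 + eps[0]                    # S1 and S2 share the source S0
    S2 = V2 / (V1 + V2) * S0 + eps[1]
    S3 = S1 + S2 + eps[2]                                # flows merge again
    return np.stack([S0, S1, S2, S3])                    # shape (4, T)

Computing the sample cross-covariance of stacked consecutive observations from such rollouts reproduces the structure discussed around Figure 4.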
With this simulation we check whether our model captures correlations in space and time. The correlation between S_1 and S_2 results from both having the same source, as measured by S_0. This is reflected by Cov(S_1, S_2) > 0, which is captured by our model, as shown in Figure 4 (left).

The cross-covariance structure between consecutive time points in the ground truth, and as captured by our trained model, is depicted in Figure 4 (right). It reflects the true flow of liquid in the system from S_0 at time t to S_1 and S_2 at time t+1, and on to S_3 at time t+2." }, { "heading": "5.2 REAL WORLD DATA SETS", "text": "For evaluation we compute the Continuous Ranked Probability Score (CRPS) (Matheson & Winkler, 1976) on each individual time series, as well as on the sum of all time series (the latter denoted by CRPS_sum). CRPS measures the compatibility of a cumulative distribution function F with an observation x as

CRPS(F, x) = ∫_R (F(z) − I{x ≤ z})^2 dz,   (5)

where I{x ≤ z} is the indicator function, which is one if x ≤ z and zero otherwise. CRPS is a proper scoring function; hence it attains its minimum when the predictive distribution F and the data distribution are equal. Employing the empirical CDF of F, i.e. F̂(z) = (1/n) Σ_{i=1}^n I{X_i ≤ z} with n samples X_i ∼ F, as a natural approximation of the predictive CDF, CRPS can be computed directly from simulated samples of the conditional distribution (4) at each time point (Jordan et al., 2019). We take 100 samples to estimate the empirical CDF in practice. Finally, CRPS_sum is obtained by first summing across the D time series, both for the ground-truth data and for the sampled data (yielding F̂_sum(t) for each time point). The results are then averaged over the prediction horizon, i.e. formally

CRPS_sum = E_t [ CRPS( F̂_sum(t), Σ_i x_t^i ) ].
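Given forecast samples, the empirical CRPS can be computed directly. The sketch below uses the standard sample-based identity CRPS(F, x) = E|X − x| − ½ E|X − X′|, with X, X′ ∼ F, rather than the integral in (5); this is our illustrative code, not the scoringRules implementation of Jordan et al. (2019).

import numpy as np

def crps(samples, x):
    # Empirical CRPS of forecast samples against a scalar observation x.
    samples = np.asarray(samples, dtype=float)
    return (np.abs(samples - x).mean()
            - 0.5 * np.abs(samples[:, None] - samples[None, :]).mean())

def crps_sum(sample_paths, x_true):
    # sample_paths: (n_samples, T, D) forecasts; x_true: (T, D) observations.
    # Score the sum across the D series at each time point, then average over t.
    return np.mean([crps(sample_paths[:, t].sum(axis=-1), x_true[t].sum())
                    for t in range(x_true.shape[0])])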
Our model is trained on the training split of each data set, and for testing we use rolling-window predictions starting from the last point seen in the training data set and compare them to the test set. We train on the Exchange (Lai et al., 2018), Solar (Lai et al., 2018), Electricity^2, Traffic^3, Taxi^4 and Wikipedia^5 open data sets, preprocessed exactly as in Salinas et al. (2019a), with their properties listed in Table 2 of the appendix. Both Taxi and Wikipedia consist of count data and are thus dequantized before being fed to the flow (and mean-scaled).

^1 https://www.kaggle.com/c/pkdd-15-predict-taxi-service-trajectory-i
^2 https://archive.ics.uci.edu/ml/datasets/ElectricityLoadDiagrams20112014
^3 https://archive.ics.uci.edu/ml/datasets/PEMS-SF
^4 https://www1.nyc.gov/site/tlc/about/tlc-trip-record-data.page
^5 https://github.com/mbohlkeschneider/gluon-ts/tree/mv_release/datasets

We compare our method, using an LSTM with two different normalizing flows (LSTM-Real-NVP and LSTM-MAF, based on Real NVP and MAF respectively) as well as a Transformer model with MAF (Transformer-MAF), against the most competitive baseline probabilistic models from Salinas et al. (2019a) on the six data sets, and report the results in Table 1. Vec-LSTM-ind-scaling outputs the parameters of an independent Gaussian distribution with mean-scaling; Vec-LSTM-lowrank-Copula parametrizes a low-rank plus diagonal covariance via a Copula process; GP-scaling unrolls an LSTM with scaling on each individual time series before reconstructing the joint distribution via a low-rank Gaussian; similarly, GP-Copula unrolls an LSTM on each individual time series, and the joint emission distribution is then given by a low-rank plus diagonal covariance Gaussian copula.

In Table 1 we observe that MAF, with either an RNN or a self-attention mechanism for temporal conditioning, achieves state-of-the-art (to the best of our knowledge) CRPS_sum on all benchmarks. Moreover, bipartite flows with an RNN either outperform or are competitive with the previous state-of-the-art results, listed in the first four columns of Table 1. Further analyses with other metrics (e.g. CRPS and MSE) are reported in Section B of the appendix.

To showcase how well our model captures dependencies when extrapolating the time series into the future, we plot in Figure 5 the cross-covariance matrix of the test-split observations of the Traffic data set (left panel, "Data"), that of the mean of 100 sample trajectories drawn from the Transformer-MAF model (middle panel, "Samples"), and their absolute difference (right panel). As can be seen, most of the covariance structure, especially in the top-left region of highly correlated sensors, is very well reflected in the samples drawn from the model." }, { "heading": "6 CONCLUSION", "text": "We have presented a general method to model high-dimensional probabilistic multivariate time series by combining conditional normalizing flows with an autoregressive model, such as a recurrent neural network or an attention module. Autoregressive models have a long-standing reputation for working very well for time series forecasting, as they show good performance in extrapolation into the future. The flow model, on the other hand, does not assume any simple fixed distribution class, but instead can adapt to a broad range of high-dimensional data distributions. The combination hence unites the extrapolation power of the autoregressive model class with the density estimation flexibility of flows. Furthermore, it is computationally efficient, without the need to resort to approximations (e.g. low-rank approximations of a covariance structure as in Gaussian copula methods), and is robust compared to deep kernel learning methods, especially for large D. Analysis on six commonly used time series benchmarks establishes new state-of-the-art performance against competitive methods.

A natural way to improve our method is to incorporate a better underlying flow model. For example, Table 1 shows that swapping the Real NVP flow for a MAF improved performance, which is a consequence of Real NVP lacking in density modeling performance compared to MAF. Likewise, we would expect other design choices of the flow model to improve performance, e.g. changes to the dequantization method, to the specific affine coupling layer, or more expressive conditioning, say via another Transformer. Recent improvements to flows, e.g. those proposed in Flow++ (Ho et al., 2019) to obtain expressive bipartite flow models, or models that handle discrete categorical data (Tran et al., 2019), are left as future work to assess their usefulness. To our knowledge, it is however still an open problem how to model discrete ordinal data via flows, which would best capture the nature of some data sets (e.g. sales data)." }, { "heading": "ACKNOWLEDGMENTS", "text": "K.R.: I would like to thank Rob Hyndman and Zaeem Burq for the helpful discussions and suggestions. 
I would like to acknowledge the traditional owners of the land on which I have lived and worked, the Wurundjeri people of the Kulin nation, who have been custodians of their land for thousands of years. I pay my respects to their elders, past and present, as well as to past and present Aboriginal elders of other communities.

We wish to acknowledge and thank the authors and contributors of the following open source libraries that were used in this work: GluonTS (Alexandrov et al., 2020), NumPy (Harris et al., 2020), Pandas (Pandas development team, 2020), Matplotlib (Hunter, 2007) and PyTorch (Paszke et al., 2019). We would also like to thank and acknowledge the hard work of the reviewers, whose comments and suggestions have without a doubt helped improve this paper." }, { "heading": "A DATA SET DETAILS", "text": "" }, { "heading": "B ADDITIONAL METRICS", "text": "We used exactly the same open source code to evaluate our metrics as provided by the authors of Salinas et al. (2019a).

B.1 COMPARISON AGAINST CLASSICAL BASELINES

We report test set CRPS_sum results in Table 3 for: VAR (Lütkepohl, 2007), a multivariate linear vector autoregressive model with lags corresponding to the periodicity of the data; VAR-Lasso, a Lasso-regularized VAR; GARCH (van der Weide, 2002), a multivariate conditional heteroskedastic model; GP, a Gaussian process model; KVAE (Krishnan et al., 2017), a variational autoencoder on top of a linear state space model; and VES, an innovation state space model (Hyndman et al., 2008). Note that the VAR-Lasso, KVAE and VES metrics are from de Bézenac et al. (2020).

B.2 CONTINUOUS RANKED PROBABILITY SCORE (CRPS)

The average marginal CRPS, over the D dimensions and over the predicted time steps of the test interval, is given in Table 4.

The MSE is defined as the mean squared error over all D time series dimensions and over the whole prediction range with respect to the test data. Table 5 shows the results for the marginal MSE." }, { "heading": "C UNIVARIATE AND POINT FORECASTS", "text": "Univariate methods typically give better forecasts than multivariate ones, which is counter-intuitive; the reason is the difficulty of estimating cross-series correlations. The additional variance that multivariate methods add often ends up harming the forecast, even when one knows that the individual time series are related. Thus, as an additional sanity check that our method improves the forecast rather than harming it, we report metrics with respect to a modern univariate point forecasting method as well as a multivariate point forecasting method on the Traffic data set.

Figure 6 reports the metrics for LSTNet (Lai et al., 2018), a multivariate point forecasting method, and Figure 7 reports the metrics for N-BEATS (Oreshkin et al., 2020), a univariate model. As can be seen, our methods improve on these metrics for the Traffic data set, and this pattern holds for the other data sets in our experiments. As a visual comparison, we have also plotted the prediction intervals of our models in Figures 8, 9, 10 and 11." }, { "heading": "D EXPERIMENT DETAILS", "text": "D.1 FEATURES

For hourly data sets we use hour-of-day, day-of-week, and day-of-month features, which are normalized. For daily data sets we use the day-of-week feature. For data sets with minute granularity we use minute-of-hour, hour-of-day, and day-of-week features. The normalized features are concatenated to the RNN or Transformer input at each time step.
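As an illustration, normalized time features of this kind can be constructed as follows. This is a sketch with assumed conventions; the exact normalization is not specified here, so we scale each feature to [−0.5, 0.5] as is common in GluonTS-style pipelines.

import numpy as np
import pandas as pd

def time_features(index: pd.DatetimeIndex) -> np.ndarray:
    # Calendar features for an hourly index, each scaled to [-0.5, 0.5].
    return np.stack([
        index.hour / 23.0 - 0.5,        # hour of day
        index.dayofweek / 6.0 - 0.5,    # day of week
        (index.day - 1) / 30.0 - 0.5,   # day of month
    ])

idx = pd.date_range("2020-01-01", periods=48, freq="H")
c = time_features(idx)                  # shape (3, 48), one column per time step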
We also concatenate lag values as inputs, according to each data set's time frequency: [1, 24, 168] for hourly data, [1, 7, 14] for daily data, and [1, 2, 4, 12, 24, 48] for half-hourly data.

D.2 HYPERPARAMETERS

We use batch sizes of 64, with 100 batches per epoch, and train for a maximum of 40 epochs with a learning rate of 1e−3. The LSTM hyperparameters were the ones from Salinas et al. (2019a), and we used K = 5 stacks of normalizing-flow bijection layers. The components of the normalizing flows (f and g) are linear feed-forward layers (with fixed input and final output sizes, because we model bijections) with hidden dimensions of 100 and ELU (Clevert et al., 2016) activation functions. We sample 100 times to report the metrics on the test set. The Transformer uses H = 8 heads, n = 3 encoding and m = 3 decoding layers, and a dropout rate of 0.1. All experiments run on a single Nvidia V-100 GPU, and the code to reproduce the results will be made available after the review process." } ]
2021
null
SP:1e88ff3d6daf6ca26d2f9d504c5ff282af5d3659
[ "This paper addresses the question of how to stabilize a system in a vicinity of an equilibrium. While the majority of reinforcement learning algorithms rely on trial and error, which may damage the system, the authors introduce an algorithm for safe exploration and control. A traditional approach in model-based RL is to use MPC with a surrogate forward model to minimize a planning objective comprising a sum of stage costs along with a terminal cost, often chosen as an approximated value function -i.e. the optimal expected cost-to-go- which can be learned by a Bellman equation. Instead, this work is placed in the framework of Robust MPC, where this value function is replaced by a Luyapunov function $V$, which is related to the notion of stability and is only constrained to decrease along trajectories. Such a Luyapunov function, when available, provides both a safe region, defined as a level-set of V, and a MPC policy for which stability analyses have been developed: the authors extend a result from Limon et al. (2003; 2009) to show that this MPC policy enjoys asymptotic stability in general, and input-to-state stability in the presence of small enough model errors. Accordingly, the authors propose a scheme allowing to learn a Lyapunov function $V$ from demonstration data only, through a loss function that penalizes increments of $V$ along one-step transitions. A regularization parameter $\\alpha$ of this MPC, which balances stability with constraints satisfaction and stage costs, is also learned jointly by an alternative training procedure. This approach is evaluated empirically on two standard constrained non-linear continuous control tasks.", "In this paper the author proposed an MPC algorithm in which both the dynamics function and the Lyapunov function are parameterized with neural networks.. Specifically leveraging the results of Lyapunov networks (2018 CORL paper: https://arxiv.org/abs/1808.00924) for learning Lyapunov functions, the authors derived an MPC algorithm for quadratic cost/reward problems and also proved the stability, robustness, and sub-optimality performance. To demonstrate the effectiveness of the algorithms, the authors also evaluated this approach on the simple inverted pendulum and car kinematics tasks. " ]
With a growing interest in data-driven control techniques, Model Predictive Control (MPC) provides a significant opportunity to exploit the surplus of data reliably, particularly while taking safety and stability into account. In this paper, we aim to infer the terminal cost of an MPC controller from transitions generated by an initial unknown demonstrator. We propose an algorithm to alternately learn the terminal cost and update the MPC parameters according to a stability metric. We design the terminal cost as a Lyapunov function neural network and theoretically show that, under limited approximation error, our proposed approach guarantees that the size of the stability region (region of attraction) is greater than or equal to the one from the initial demonstrator. We also present theorems that characterize the stability and performance of the learned MPC in the presence of model uncertainties and sub-optimality due to function approximation. Empirically, we demonstrate the efficacy of the proposed algorithm on non-linear continuous control tasks with soft constraints. Our results show that the proposed approach can improve upon the initial demonstrator also in practice and achieve better task performance than other learning-based baselines.
[]
[ { "authors": [ "Joshua Achiam", "David Held", "Aviv Tamar", "Pieter Abbeel" ], "title": "Constrained policy optimization", "venue": "arXiv preprint arXiv:1705.10528,", "year": 2017 }, { "authors": [ "Akshay Agrawal", "Brandon Amos", "Shane Barratt", "Stephen Boyd", "Steven Diamond", "Zico Kolter" ], "title": "Differentiable Convex Optimization Layers", "venue": "URL http://arxiv.org/abs/1910.12430", "year": 2019 }, { "authors": [ "Dario Amodei", "Chris Olah", "Jacob Steinhardt", "Paul Christiano", "John Schulman", "Dan Mané" ], "title": "Concrete Problems in AI Safety, 2016", "venue": null, "year": 2016 }, { "authors": [ "Brandon Amos", "Ivan Jimenez", "Jacob Sacks", "Byron Boots", "J. Zico Kolter" ], "title": "Differentiable MPC for End-to-end Planning and Control", "venue": "In Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Luca Bugliari Armenio", "Enrico Terzi", "Marcello Farina", "Riccardo Scattolini" ], "title": "Echo state networks: analysis, training and predictive control", "venue": "In 2019 18th European Control Conference (ECC). IEEE,", "year": 2019 }, { "authors": [ "Kavosh Asadi", "Dipendra Misra", "Michael L. Littman" ], "title": "Lipschitz Continuity in Model-based Reinforcement Learning. arXiv:1804.07193 [cs, stat], July 2018", "venue": "URL http://arxiv.org/ abs/1804.07193", "year": 2018 }, { "authors": [ "A. Bemporad", "M. Morari", "V. Dua", "E.N. Pistikopoulos" ], "title": "The explicit solution of model predictive control via multiparametric quadratic programming", "venue": "In Proceedings of the American Control Conference", "year": 2000 }, { "authors": [ "Felix Berkenkamp", "Matteo Turchetta", "Angela P. Schoellig", "Andreas Krause" ], "title": "Safe Model-based Reinforcement Learning with Stability Guarantees", "venue": "URL http://arxiv.org/abs/1705.08551", "year": 2017 }, { "authors": [ "Franco Blanchini", "Stefano Miani" ], "title": "Set-Theoretic Methods in Control (Systems & Control", "venue": "Foundations & Applications). Birkhäuser,", "year": 2007 }, { "authors": [ "R.V. Bobiti" ], "title": "Sampling driven stability domains computation and predictive control of constrained nonlinear systems", "venue": "PhD thesis,", "year": 2017 }, { "authors": [ "Ruxandra Bobiti", "Mircea Lazar" ], "title": "Sampling-based verification of Lyapunov’s inequality for piecewise continuous nonlinear systems", "venue": "[cs],", "year": 2016 }, { "authors": [ "Jacob Buckman", "Danijar Hafner", "George Tucker", "Eugene Brevdo", "Honglak Lee" ], "title": "Sampleefficient reinforcement learning with stochastic ensemble value expansion", "venue": "Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Ya-Chien Chang", "Nima Roohi", "Sicun Gao" ], "title": "Neural Lyapunov Control", "venue": "In Advances in Neural Information Processing Systems, pp", "year": 2019 }, { "authors": [ "Marco Ciccone", "Marco Gallieri", "Jonathan Masci", "Christian Osendorfer", "Faustino Gomez" ], "title": "NAISNet: Stable Deep Networks from Non-Autonomous Differential Equations. arXiv:1804.07209 [cs, stat], April 2018", "venue": "URL http://arxiv.org/abs/1804.07209", "year": 2018 }, { "authors": [ "Robin Deits", "Twan Koolen", "Russ Tedrake" ], "title": "Lvis: Learning from value function intervals for contact-aware robot", "venue": "controllers. pp. 
7762–7768,", "year": 2019 }, { "authors": [ "Nguyen Anh Khoa Doan", "Wolfgang Polifke", "Luca Magri" ], "title": "Physics-informed echo state networks for chaotic systems forecasting", "venue": "In Lecture Notes in Computer Science,", "year": 2019 }, { "authors": [ "Thor I Fossen" ], "title": "Handbook of marine craft hydrodynamics and motion control", "venue": null, "year": 2011 }, { "authors": [ "Vladimir Gaitsgory", "Lars Gruene", "Neil Thatcher" ], "title": "Stabilization with discounted optimal control", "venue": "pp. 91–98,", "year": 2015 }, { "authors": [ "Marco Gallieri", "Seyed Sina Mirrazavi Salehian", "Nihat Engin Toklu", "Alessio Quaglino", "Jonathan Masci", "Jan Koutnı́k", "Faustino Gomez" ], "title": "Safe Interactive Model-Based Learning. arXiv:1911.06556 [cs, eess], November 2019", "venue": "URL http://arxiv.org/abs/1911.06556", "year": 1911 }, { "authors": [ "Xavier Glorot", "Yoshua Bengio" ], "title": "Understanding the difficulty of training deep feedforward neural networks", "venue": "Proceedings of the International Conference on Artificial Intelligence and Statistics,", "year": 2010 }, { "authors": [ "L. Grune", "A. Rantzer" ], "title": "On the infinite horizon performance of receding horizon controllers", "venue": "IEEE Transactions on Automatic Control,", "year": 2008 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Pieter Abbeel", "Sergey Levine" ], "title": "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "arXiv preprint arXiv:1801.01290,", "year": 2018 }, { "authors": [ "Dylan Hadfield-Menell", "Christopher Lin", "Rohan Chitnis", "Stuart Russell", "Pieter Abbeel" ], "title": "Sequential quadratic programming for task plan optimization", "venue": "In Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems,", "year": 2016 }, { "authors": [ "Lukas Hewing", "Kim P. Wabersich", "Marcel Menner", "Melanie N. Zeilinger" ], "title": "Learning-based model predictive control: Toward safe learning in control", "venue": "Annual Review of Control, Robotics, and Autonomous Systems,", "year": 2020 }, { "authors": [ "Michael Janner", "Justin Fu", "Marvin Zhang", "Sergey Levine" ], "title": "When to Trust Your Model: ModelBased Policy Optimization. arXiv:1906.08253 [cs, stat], November 2019", "venue": "URL http://arxiv", "year": 1906 }, { "authors": [ "EC Kerrigan" ], "title": "Robust constraint satisfaction: Invariant sets and predictive control", "venue": "Technical report,", "year": 2000 }, { "authors": [ "Eric C. Kerrigan", "Jan M. Maciejowski" ], "title": "Soft constraints and exact penalty functions in model predictive control", "venue": "In Proceedings of UKACC International Conference,", "year": 2000 }, { "authors": [ "T. Koller", "F. Berkenkamp", "M. Turchetta", "A. Krause" ], "title": "Learning-based model predictive control for safe exploration", "venue": "IEEE Conference on Decision and Control (CDC),", "year": 2018 }, { "authors": [ "D. Limon", "T. Alamo", "E.F. Camacho" ], "title": "Stable constrained MPC without terminal constraint", "venue": "Proceedings of the American Control Conference,", "year": 2003 }, { "authors": [ "D. Limon", "T. Alamo", "D.M. Raimondo", "D. Muñoz de la Peña", "J.M. Bravo", "A. Ferramosca", "E.F. 
Camacho" ], "title": "Input-to-State Stability: A Unifying Framework for Robust Model Predictive Control", "venue": "In Nonlinear Model Predictive Control,", "year": 2009 }, { "authors": [ "Kendall Lowrey", "Aravind Rajeswaran", "Sham Kakade", "Emanuel Todorov", "Igor Mordatch" ], "title": "Plan Online, Learn Offline: Efficient Learning and Exploration via Model-Based Control. arXiv:1811.01848 [cs, stat], November 2018", "venue": "URL http://arxiv.org/abs/1811", "year": 2018 }, { "authors": [ "D.Q. Mayne", "J.B. Rawlings", "C.V. Rao", "P.O.M" ], "title": "Scokaert. Constrained model predictive control: Stability and optimality", "venue": null, "year": 2000 }, { "authors": [ "Thomas M. Moerland", "Joost Broekens", "Catholijn M. Jonker" ], "title": "Model-based reinforcement learning: A survey, 2020", "venue": null, "year": 2020 }, { "authors": [ "Adam Paszke", "Sam Gross", "Francisco Massa", "Adam Lerer", "James Bradbury", "Gregory Chanan", "Trevor Killeen", "Zeming Lin", "Natalia Gimelshein", "Luca Antiga", "Alban Desmaison", "Andreas Köpf", "Edward Yang", "Zach DeVito", "Martin Raison", "Alykhan Tejani", "Sasank Chilamkurthy", "Benoit Steiner", "Lu Fang", "Junjie Bai", "Soumith Chintala" ], "title": "Pytorch: An imperative style, high-performance deep learning library, 2019", "venue": "URL http://arxiv.org/abs/1912.01703", "year": 1912 }, { "authors": [ "Jaideep Pathak", "Zhixin Lu", "Brian R. Hunt", "Michelle Girvan", "Edward Ott" ], "title": "Using machine learning to replicate chaotic attractors and calculate lyapunov exponents from data", "venue": "Chaos: An Interdisciplinary Journal of Nonlinear Science,", "year": 2017 }, { "authors": [ "Simone Pozzoli", "Marco Gallieri", "Riccardo Scattolini" ], "title": "Tustin neural networks: a class of recurrent nets for adaptive MPC of mechanical systems. arXiv:1911.01310 [cs, eess], November 2019", "venue": "URL http://arxiv.org/abs/1911.01310", "year": 1911 }, { "authors": [ "Alessio Quaglino", "Marco Gallieri", "Jonathan Masci", "Jan Koutnı́k" ], "title": "SNODE: Spectral Discretization of Neural ODEs for System Identification", "venue": "[cs],", "year": 2020 }, { "authors": [ "S.V. Raković", "B. Kouvaritakis", "R. Findeisen", "M. Cannon" ], "title": "Homothetic tube model predictive control", "venue": "Automatica, 48:1631–1638,", "year": 2012 }, { "authors": [ "J.B. Rawlings", "D.Q. Mayne" ], "title": "Model Predictive Control Theory and Design", "venue": "Nob Hill Pub, Llc,", "year": 2009 }, { "authors": [ "Alex Ray", "Joshua Achiam", "Dario Amodei" ], "title": "Benchmarking Safe Exploration in Deep Reinforcement Learning", "venue": null, "year": 2019 }, { "authors": [ "Spencer M. Richards", "Felix Berkenkamp", "Andreas Krause" ], "title": "The Lyapunov Neural Network: Adaptive Stability Certification for Safe Learning of Dynamical Systems", "venue": "[cs],", "year": 2018 }, { "authors": [ "John Salvatier", "Thomas V. Wiecki", "Christopher Fonnesbeck" ], "title": "Probabilistic programming in python using PyMC3. PeerJ Computer Science, 2:e55, apr 2016", "venue": "doi: 10.7717/peerj-cs.55. 
URL https://doi.org/10.7717/peerj-cs.55", "year": 2016 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "Yuval Tassa", "Tom Erez", "Emanuel Todorov" ], "title": "Synthesis and stabilization of complex behaviors through online trajectory optimization", "venue": "In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems., pp. 4906–4913,", "year": 2012 }, { "authors": [ "Markus Heinonen", "Harri Lähdesmäki" ], "title": "ODE2VAE: Deep generative second order ODEs with Bayesian neural networks. arXiv:1905.10994 [cs, stat], October 2019", "venue": "URL http://arxiv.org/abs/1905.10994", "year": 1905 } ]
[ { "heading": "1 INTRODUCTION", "text": "Control systems comprise of safety requirements that need to be considered during the controller design process. In most applications, these are in the form of state/input constraints and convergence to an equilibrium point, a specific set or a trajectory. Typically, a control strategy that violates these specifications can lead to unsafe behavior. While learning-based methods are promising for solving challenging non-linear control problems, the lack of interpretability and provable safety guarantees impede their use in practical control settings (Amodei et al., 2016). Model-based reinforcement learning (RL) with planning uses a surrogate model to minimize the sum of future costs plus a learned value function terminal cost (Moerland et al., 2020; Lowrey et al., 2018). Approximated value functions, however, do not offer safety guarantees. By contrast, control theory focuses on these guarantees but it is limited by its assumptions. Thus, there is a gap between theory and practice.\nA feedback controller stabilizes a system if a local Control Lyapunov Function (CLF) function exists for the pair. This requires that the closed-loop response from any initial state results in a smaller value of the CLF at the next state. The existence of such a function is a necessary and sufficient condition for showing stability and convergence (Khalil, 2014). However, finding an appropriate Lyapunov function is often cumbersome and can be conservative. By exploiting the expressiveness of neural networks (NNs), Lyapunov NNs have been demonstrated as a general tool to produce stability (safety) certificates (Bobiti, 2017; Bobiti & Lazar, 2016) and also improve an existing controller (Berkenkamp et al., 2017; Gallieri et al., 2019; Chang et al., 2019). In most of these settings, the controller is parameterized through a NN as well. The flexibility provided by this choice comes at the cost of increased sample complexity, which is often expensive in real-world safety-critical systems. In this work, we aim to overcome this limitation by leveraging an initial set of one-step transitions from an unknown expert demonstrator (which may be sub-optimal) and by using a learned Lyapunov function and surrogate model within an Model Predictive Control (MPC) formulation.\nOur key contribution is an algorithmic framework, Neural Lyapunov MPC (NLMPC), that obtains a single-step horizon MPC for Lyapunov-based control of non-linear deterministic systems with constraints. By treating the learned Lyapunov NN as an estimate of the value function, we provide theoretical results for the performance of the MPC with an imperfect forward model. These results complement the ones by Lowrey et al. (2018), which only considers the case of a perfect dynamics\nmodel. In our proposed framework, alternate learning is used to train the Lyapunov NN in a supervised manner and to tune the parameters of the MPC. The learned Lyapunov NN is used as the MPC’s terminal cost for obtaining closed-loop stability and robustness margins to model errors. For the resulting controller, we show that the size of the stable region can be larger than that from an MPC demonstrator with a longer prediction horizon. To empirically illustrate the efficacy of our approach, we consider constrained non-linear continuous control tasks: torque-limited inverted pendulum and non-holonomic vehicle kinematics. 
We show that NLMPC can transfer between using an inaccurate surrogate and a nominal forward model, and outperform several baselines in terms of stability." }, { "heading": "2 PRELIMINARIES AND ASSUMPTIONS", "text": "Controlled Dynamical System. Consider a discrete-time, time-invariant, deterministic system:

x(t+1) = f(x(t), u(t)),  y(t) = x(t),  f(0, 0) = 0,   (1)

where t ∈ N is the timestep index, and x(t) ∈ R^{n_x}, u(t) ∈ R^{n_u} and y(t) ∈ R^{n_y} are, respectively, the state, control input, and measurement at timestep t. We assume that the states and measurements are equivalent and that the origin is the equilibrium point. Further, the system (1) is subjected to closed and bounded, convex constraints over the state and input spaces:

x(t) ∈ X ⊆ R^{n_x},  u(t) ∈ U ⊂ R^{n_u},  ∀t > 0.   (2)

The system is to be controlled by a feedback policy, K : R^{n_x} → R^{n_u}. The policy K is considered safe if there exists an invariant set, X_s ⊆ X, for the closed-loop dynamics, inside the constraints. The set X_s is also referred to as the safe set under K. Namely, every trajectory of the closed-loop system that starts at some x ∈ X_s remains inside this set. If x asymptotically reaches the target, x̄_T ∈ X_s, then X_s is a Region of Attraction (ROA). In practice, convergence often occurs to a small set, X_T.

Lyapunov Conditions and Safety. We formally assess the safety of the closed-loop system in terms of the existence of a positively invariant set, X_s, inside the state constraints. This is done by means of a learned CLF, V(x), given data generated under an (initially unknown) policy, K(x).

The candidate CLF needs to satisfy certain properties. First, it needs to be upper and lower bounded by strictly increasing, unbounded, positive (K_∞) functions (Khalil, 2014). We focus on optimal control with a quadratic stage cost and assume the origin as the target state:

ℓ(x, u) = x^T Q x + u^T R u,  Q ≻ 0,  R ≻ 0.   (3)

For the above, a possible choice of K_∞ function is the scaled sum of squares of the states:

l_ℓ ||x||_2^2 ≤ V(x) ≤ L_V ||x||_2^2,   (4)

where l_ℓ and L_V are the minimum eigenvalue of Q and a Lipschitz constant for V, respectively.

Further, for safety, convergence to a set, X_T ⊂ X_s, can be verified by means of the condition:

∀x ∈ X_s \ X_T,  u = K(x)  ⇒  V(f(x, u)) − λ V(x) ≤ 0,  with λ ∈ [0, 1).   (5)

This means that, to have stability, V(x) must decrease along the closed-loop trajectory in the annulus.

The sets X_s and X_T satisfying (5) are (positively) invariant. If they are inside the constraints, i.e. X_s ⊆ X, then they are safe. For a valid Lyapunov function V, the outer safe set can be defined as a level set:

X_s = {x ∈ X : V(x) ≤ l_s}.   (6)

For further definitions, we refer the reader to Blanchini & Miani (2007); Kerrigan (2000). If condition (5) holds everywhere in X_s, then the origin is a stable equilibrium (X_T = {0}). If (most likely) it holds only outside a non-empty inner set, X_T = {x ∈ X : V(x) ≤ l_T} ⊂ X_s, with X_T ⊃ {0}, then the system converges to a neighborhood of the origin and remains there in the future.

Approach Rationale. We aim to match or enlarge the safe region of an unknown controller, K_i(x). For a perfect model, f, and a safe set X_s^(i), there exists an α ≥ 1 such that the one-step MPC:

K(x) = argmin_{u ∈ U, f(x,u) ∈ X_s^(i)}  α V(f(x, u)) + ℓ(x, u),   (7)

results in a new safe set, X_s^(i+1) = C(X_s^(i)), the one-step controllable set of X_s^(i) and the feasible region of (7), with X_s^(i+1) ⊇ X_s^(i). We soften the state constraints in (7) and use it recursively to estimate X_s^(j), j > i. 
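As an illustration of (7), the sketch below approximates the one-step MPC by enumerating a finite grid of candidate inputs and softening the safe-set constraint with a penalty. This is our own illustrative code, not the paper's solver: f_hat, V, u_grid, and the penalty weight are assumed ingredients.

import numpy as np

def one_step_mpc(x, f_hat, V, Q, R, u_grid, alpha, l_s, penalty=1e3):
    # Approximate K(x) from (7): minimize alpha*V(f(x,u)) + l(x,u) over a grid
    # of candidate inputs, softening f(x,u) in X_s via a penalty on V(x+) > l_s.
    best_u, best_cost = None, np.inf
    for u in u_grid:
        x_next = f_hat(x, u)
        cost = alpha * V(x_next) + x @ Q @ x + u @ R @ u
        cost += penalty * max(V(x_next) - l_s, 0.0)
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u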
We formulate an algorithm that learns the parameter α as well as the safe set. We train a neural network via SGD to approximate V; hence, the ROA estimate will not always increase through iterations. To aim for a maximal ROA with a minimal MPC horizon, we use cross-validation and verification. We motivate our work by extending theoretical results on MPC stability and a sub-optimality bound to approximate f and V. Finally, we provide an error bound on the learned f for having stability.

Learning and Safety Verification We wish to learn V(x) and X_s from one-step on-policy rollouts, as well as a forward model f̂(x, u). After the learning, the level l_s defining the set X_s will be refined and its safety formally verified a posteriori. This is done by evaluating (5) using the model, starting from a large set of states sampled uniformly within X_s\X_T. We progressively decrease the level l_s, starting from the learned one, and also increase the inner level l_T, starting from zero, such that condition (5) holds for n ≥ n_s samples in X_s\X_T. The number of verifying samples, n_s, provides a probability lower bound on safety, namely P_safe(X_s\X_T) ≥ p(n_s), as detailed in (Bobiti & Lazar, 2016). The algorithm is detailed in the appendix and is based on (Bobiti & Lazar, 2016). For our theoretical results, we assume the search terminates with P_safe(X_s\X_T) ≈ 1 and consider the condition deterministic.

NN-based dynamics model In some MPC applications, it might not be possible to gather sufficient data from demonstrations in order to be able to learn a model that predicts over long sequences. One-step or few-step dynamics learning based on NNs can suffer when the model is unrolled over longer horizons. For instance, errors can accumulate through the horizon due to small instabilities, either from the physical system or as artifacts from short-sequence learning. Although some mitigations are possible for specific architectures or longer sequences (Armenio et al., 2019; Doan et al., 2019; Pathak et al., 2017; Ciccone et al., 2018), we formulate our MPC to allow for a very short horizon and unstable dynamics. Since we learn a surrogate NN forward model, f̂(x, u), from one-step trajectories, we will assume this to have a locally bounded one-step-ahead prediction error, w(t), where:

w = f(x, u) − f̂(x, u), ‖w‖₂ ≤ µ, ∀(x, u) ∈ X̃ × U, (8)

for some compact set of states, X̃ ⊇ X. We also assume that both f and f̂ are locally Lipschitz in this set, with constants L_fx, L_fu and L_f̂x, L_f̂u, respectively. A conservative value of µ can be inferred from these constants as the input and state sets are bounded. It can also be estimated from a test set." }, { "heading": "3 NEURAL LYAPUNOV MPC", "text": "In the context of MPC, a function V, which satisfies the Lyapunov property (5) for some local controller K0, is instrumental to formally guarantee stability (Mayne et al., 2000; Limon et al., 2003). We use this insight and build a general Lyapunov-function terminal cost for our MPC, based on neural networks. We discuss the formulation of the Lyapunov network and the MPC in Section 3.1 and Section 3.2, respectively. In order to extend the controller's ROA while maintaining a short prediction horizon, an alternate optimization scheme is proposed to tune the MPC and re-train the Lyapunov NN. We describe this procedure in Section 3.3 and provide a pseudocode in Algorithm 1." }, { "heading": "3.1 LYAPUNOV NETWORK LEARNING", "text": "We use the Lyapunov function network introduced by Gallieri et al.
(2019):

V(x) = x^T ( l_ℓ I + V_net(x)^T V_net(x) ) x, (9)

where V_net(x) is a (Lipschitz) feedforward network that produces an n_V × n_x matrix. The scalars n_V and l_ℓ > 0 are hyper-parameters. It is easy to verify that (9) satisfies the condition mentioned in (4). In our algorithm, we learn the parameters of the network, V_net(x), and a safe level, l_s. Note that equation (5) allows learning V from demonstrations without explicitly knowing the current policy.

Loss function Suppose D_K denotes a set of state-action-transition tuples of the form (x, u, x⁺), where x⁺ is the next state obtained from applying the policy u = K(x). The Lyapunov network is trained using the following loss:

min_{V_net, l_s} E_{(x, u, x⁺) ∈ D_K} [ I_Xs(x) ρ J_s(x, u, x⁺) + J_vol(x, u, x⁺) ], (10)

where I_Xs(x) = 0.5 (sign[l_s − V(x)] + 1), J_s(x, u, x⁺) = max[ΔV(x), 0] / (V(x) + ε_V), J_vol(x, u, x⁺) = sign[−ΔV(x)] [l_s − V(x)], and ΔV(x) = V(x⁺) − λV(x).

In (10), I_Xs is the indicator function for the safe set X_s, which is multiplied by J_s, a function that penalises instability. The term J_vol is a classification loss that tries to compute the correct boundary between the stable and unstable points. It is also instrumental in increasing the safe-set volume. The scalars ε_V > 0, λ ∈ [0, 1), and 0 < ρ ≪ 1 are hyper-parameters, where the latter trades off volume for stability (we take ρ = 10⁻³ as in Richards et al. (2018); Gallieri et al. (2019)). To make sure that X_s ⊆ X, we scale down the learned l_s a posteriori. The loss (10) extends the one proposed by Richards et al. (2018) in the sense that we only use one-step transitions, and safe trajectories are not explicitly labeled before training. This loss is then also used to tune the MPC cost scaling factor for V, namely, α ≥ 1, to guarantee stability. This is discussed next." }, { "heading": "3.2 NEURAL LYAPUNOV MPC", "text": "We improve the stability of the initial controller, used to collect data, by replacing it with an MPC solving the following input-limited, soft-constrained, discounted optimal control problem:

J*_MPC(x(t)) = min_u γ^N αV(x̂(N)) + Σ_{i=0}^{N−1} γ^i ℓ(x̂(i), û(i)) + ℓ_X(s(i))
s.t. x̂(i+1) = f̂(x̂(i), û(i)), x̂(i) + s(i) ∈ X, ∀i ∈ [0, N], (11)
ℓ_X(s) = η₁ s^T s + η₂ ‖s‖₁, η₁ > 0, η₂ ≫ 0, û(i) ∈ U, ∀i ∈ [0, N−1], x̂(0) = x(t),

where x̂(i) and û(i) are the predicted state and input at i steps in the future, s(i) are slack variables, u = {û(i)}_{i=0}^{N−1}, the stage cost ℓ is given by (3), γ ∈ (0, 1] is a discount factor, the function f̂ is a forward model, the function V is the terminal cost, in our case a Lyapunov NN from (9), scaled by a factor α ≥ 1 to provide stability, and x(t) is the measured system state at the current time. The penalty ℓ_X is used for state-constraint violation; see Kerrigan & Maciejowski (2000).

Problem (11) is solved online given the current state x(t); then, the first element of the optimal control sequence, u*(0), provides the action for the physical system. Then, a new state is measured, and (11) is again solved, in a receding horizon. The implementation details are given in the Appendix.

Stability and safety We extend results from Limon et al. (2003; 2009) to the discounted case and to the λ-contractive V from (5). In order to prove them, we make use of the uniform continuity of the model, the SQP solution and the terminal cost, V, as done by Limon et al. (2009). Consider the set:

Υ_{N,γ,α} = { x ∈ R^{n_x} : J*_MPC(x) ≤ ((1 − γ^N)/(1 − γ)) d + γ^N α l_s }, where d = inf_{x ∉ X_s} ℓ(x, 0).
(12)

The following are obtained for system (1) in closed loop with the MPC defined by problem (11). Results are stated for X_T = {0}. For X_T ≠ {0}, convergence would occur to a set instead of 0. Theorem 1. Stability and robustness Assume that V(x) satisfies (5), with λ ∈ [0, 1), X_T = {0}. Then, given N ≥ 1, for the MPC (11) there exist a constant ᾱ ≥ 0, a discount factor γ̄ ∈ (0, 1], and a model error bound µ̄ such that, if α ≥ ᾱ, µ ≤ µ̄ and γ ≥ γ̄, then, ∀x(0) ∈ C(X_s):

1. If N = 1, µ = 0, then the system is asymptotically stable for any γ > 0, ∀x(0) ∈ Υ_{N,γ,α}.

2. If N > 1, µ = 0, then the system reaches a set B_γ that is included in X_s. This set increases with decreasing discount factors, γ, ∀x(0) ∈ Υ_{N,γ,α}. γ = 1 ⇒ B_γ = {0}.

3. If αV(x) is the optimal value function in X_s for the problem, µ = 0, and if C(X_s) ≠ X_s, then the system is asymptotically stable, ∀x(0) ∈ Υ_{N,γ,α}.

4. If µ = 0, then α ≥ ᾱ implies that αV(x) ≥ V*(x), ∀x ∈ X_s, where V* is the optimal value function for the infinite-horizon problem with cost (3) and subject to (2).

5. The MPC has a stability margin. If the MPC uses a surrogate model satisfying (8), with one-step error bound ‖w‖₂² < µ̄² = ((1 − λ)/(L_V L_f̂x^{2N})) l_s, then the system is Input-to-State (practically) Stable (ISpS) and there exists a set B_{N,γ,µ} : x(t) → B_{N,γ,µ}, ∀x(0) ∈ βΥ_{N,γ,α}, β ≤ 1.

Theorem 1 states that for a given horizon length N and contraction factor λ, one can find a minimum scaling of the Lyapunov function V and a lower bound on the discount factor such that the system under the MPC is stable. Hence, if the model is perfect, then the state will converge to the origin as time progresses. If the model is not perfect, then the safety of the system depends on the size of the model error. If this error is less than the maximum tolerable error, µ ≤ µ̄, then the system is safe: the state converges to a bound, the size of which increases with the size of the model error and the prediction horizon N, and is inversely proportional to α. In other words, the longer the predictions with an incorrect model, the worse the outcome. Note that the ROA also improves with larger α and γ. The proof of the theorem is provided in the Appendix. Results hold with the verified probability, P_safe(X_s). Performance with surrogate models In order to further motivate the search for a V giving the largest X_s, notice that a larger X_s can allow for shortening the MPC horizon while yielding the same ROA. Contrary to Lowrey et al. (2018), we demonstrate how model mismatch and longer horizons can decrease performance with respect to an infinite-horizon oracle with the same cost and a perfect model.

Let E_D[J_{V*}(K*)] denote the expected infinite-horizon performance of the optimal policy K*, evaluated by using the expected infinite-horizon performance (value function), V*, for the stage cost (3) and subject to (2). Similarly, let E_{x∈D}[J*_MPC(x)] denote the MPC's expected performance with the learned V when a surrogate model is used, and E_{x∈D}[J*_MPC(x; f)] when f is known. Theorem 2. Performance Assume that the value function error is bounded for all x, namely, ‖V*(x) − αV(x)‖₂² ≤ ε, and that the model error satisfies (8), for some µ > 0. Then, for any δ > 0:

E_{x∈D}[J*_MPC(x)] − E_{x∈D}[J*_{V*}(x)] ≤ 2γ^N ε/(1 − γ^N) + (1 + 1/δ) ‖Q‖₂ Σ_{i=0}^{N−1} γ^i (Σ_{j=0}^{i−1} L̄_f^j)² µ² + (1 + 1/δ) γ^N α L_V (Σ_{i=0}^{N−1} L̄_f^i)² µ² + ψ̄(µ) + δ E_{x∈D}[J*_MPC(x; f)],

where L̄_f = min(L_f̂x, L_fx) and ψ̄ is a K∞-function representing the constraint penalty terms.

Theorem 2 is related to Asadi et al. (2018) for value-based RL.
However, here we do not constrain the system and model to be stable, nor assume the MPC optimal cost to be Lipschitz. Theorem 2 shows that a discount γ or a shorter horizon N can mitigate model errors. Since γ ≪ 1 can limit stability (Theorem 1), we opt for the shortest horizon, hence N = 1, γ = 1. The proof of Theorem 2 is in the Appendix.

MPC auto-tuning The stability bounds discussed in Theorem 1 can be conservative, and their computation is non-trivial. Theoretically, the bigger the α, the larger the ROA (the safe region) for the MPC, up to its maximum extent. Practically, for a very high α, the MPC solver may not converge due to ill-conditioning. Initially, by using the tool from Agrawal et al. (2019) within an SQP scheme, we tried to tune the parameters through gradient-based optimization of the loss (10). These attempts were not successful, as expected from the considerations in Amos et al. (2018). Therefore, for practical reasons, in this work we perform a grid search over the MPC parameter α. Note that the discount factor γ is mainly introduced for Theorem 2 and analysed in Theorem 1 to allow for a future combination of stable MPC with value iteration.

3.3 LEARNING ALGORITHM

Algorithm 1 Neural Lyapunov MPC learning
In: D_demo, f̂, λ ∈ [0, 1), {l_ℓ, ε_ext} > 0, γ ∈ (0, 1], N ≥ 1, α_list, N_ext, N_V, ε_V, V_init, ℓ(x, u)
Out: V_net, l_s, α*
D ← D_demo; V_net ← V_init
for j = 0...N_V do
    (V_net, l_s, X_s) ← Adam step on (10)
end
for i = 0...N_ext do
    D ← D_demo ∩ (1 + ε_ext)X_s
    for α ∈ α_list do
        U*₁ ← MPC(V_net, f̂, D; α), from (11)
        D_MPC(α) ← one_step_sim(f̂, D, U*₁)
        L(α) ← Evaluate (10) on D_MPC(α)
    end
    α* ← argmin(L(α)); D ← D_MPC(α*); V_net ← V_init
    for j = 0...N_V do
        (V_net, l_s, X_s) ← Adam step on (10)
    end
end

Our alternate optimization of the Lyapunov NN, V(x), and the controller is similar to Gallieri et al. (2019). However, instead of training a NN policy, we tune the scaling α and learn the V(x) used by the MPC (11). Further, we extend their approach by using a dataset of demonstrations, D_demo, instead of an explicitly defined initial policy. These are one-step transition tuples, (x(0), u(0), x(1))_m, m = 1, ..., M, generated by a (possibly sub-optimal) stabilizing policy, K0. Unlike in the approach by Richards et al. (2018), our V is piece-wise quadratic, and it is learned without labels. We in fact produce our own pseudo-labels using the sign of ΔV(x) in (10) in order to estimate l_s. The latter means that we do not require episode-terminating (long) rollouts, which are not always available from data nor accurate when using a surrogate. This also removes any ambiguity in how to label rollouts.

Once the initial V, X_s are learned from the demonstrations, we use V and a learned model, f̂, within the MPC. We tune the MPC parameter α to minimize the loss defined in (10), using (1 + ε_ext)X_s as a new, enlarged target safe set instead of X_s. This is done to push the safe set to expand. We propose Algorithm 1, which runs multiple iterations, where after each of them the tuned MPC serves as a demonstrator for training the next V and X_s, used to verify the MPC in closed loop with the model; a minimal Python sketch of this alternating loop follows.
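As a companion to Algorithm 1, the following is a minimal Python sketch of the alternating loop; the helpers train_lyapunov, mpc_action, one_step_sim, and lyapunov_loss are hypothetical stand-ins for the corresponding steps (Adam training of (10), solving (11), one-step simulation, and evaluating (10)), not the authors' code.

def neural_lyapunov_mpc_learning(D_demo, f_hat, alpha_list, N_ext,
                                 train_lyapunov, mpc_action, one_step_sim,
                                 lyapunov_loss, eps_ext=0.1):
    # Fit V and the safe level l_s on the demonstrations via the loss in Eq. (10).
    V_net, l_s = train_lyapunov(D_demo)
    alpha_star = alpha_list[0]
    for _ in range(N_ext):
        # Keep transitions whose state lies in the enlarged safe set (1 + eps_ext) X_s.
        D = [(x, u, x_next) for (x, u, x_next) in D_demo
             if V_net(x) <= (1.0 + eps_ext) * l_s]
        # Grid search over the terminal-cost scaling alpha (MPC auto-tuning).
        losses = {}
        for alpha in alpha_list:
            U = [mpc_action(x, V_net, f_hat, alpha) for (x, _, _) in D]
            D_mpc = one_step_sim(f_hat, D, U)   # one-step closed-loop data
            losses[alpha] = lyapunov_loss(V_net, l_s, D_mpc)
        alpha_star = min(losses, key=losses.get)
        # The tuned MPC becomes the demonstrator for re-training V and l_s.
        U = [mpc_action(x, V_net, f_hat, alpha_star) for (x, _, _) in D]
        V_net, l_s = train_lyapunov(one_step_sim(f_hat, D, U))
    return V_net, l_s, alpha_star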
Since it is not guaranteed that the ROA will increase during learning, we select the Lyapunov function and the MPC using the criterion that the percentage of stable points (ΔV < 0) increases and that of unstable points decreases, while iterating over j and i, when evaluated on a validation set.

In Algorithm 1, MPC denotes the proposed Neural Lyapunov MPC, while one_step_sim denotes a one-step propagation of the MPC action through the system surrogate model. To train the parameters of V and the level set l_s, the Adam optimizer is used (Kingma & Ba, 2014). A grid search over the MPC parameter α is performed. A thorough tuning of all MPC parameters is also possible, for instance, by using black-box optimisation methods. This is left for future work." }, { "heading": "4 NUMERICAL EXPERIMENTS", "text": "Through our experiments, we show the following: 1) the increase in the safe set for the learned controller by using our proposed alternate learning algorithm, 2) the robustness of the one-step NLMPC compared to a longer-horizon MPC (used as demonstrator) when a surrogate model is used for predictions, and 3) the effectiveness of our proposed NLMPC against the demonstrator and various RL baselines.

Constrained inverted pendulum In this task, the pendulum starts near the unstable equilibrium (θ = 0°). The goal is to stay upright. We bound the input so that the system cannot be stabilized if |θ| > 60°. We use an MPC with horizon 4 as a demonstrator, with terminal cost 500 x^T P_LQR x, where P_LQR is the LQR optimal cost matrix.

Table 2: Car: Learning on nominal model.
ITER. | LOSS | VERIF. (%) | NOT VERIF. (%)
1 | 1.55 | 92.20 | 4.42
2 | 0.87 | 93.17 | 4.89
3 | 0.48 | 94.87 | 3.89

Table 3: Car: Learning on surrogate model.
ITER. | LOSS | VERIF. (%) | NOT VERIF. (%)
1 | 1.84 | 91.74 | 8.26
2 | 1.43 | 92.26 | 7.74
3 | 1.65 | 91.61 | 8.39

This is evaluated on 10K equally spaced initial states to generate the dataset D_demo. We train a grey-box NN model, f̂, using 10K random transition tuples. More details are in the Appendix. The learned V and α, obtained from Algorithm 1, produce a one-step MPC that stabilizes both the surrogate and the actual system. Table 1 shows that the loss and the percentage of verified points improve across iterations. The final ROA estimate is nearly maximal and is depicted, along with the safe trajectories produced by the MPC while using predictions from the nominal and surrogate models, in Figure 1. The performance matches that of the baseline, and the transfer is successful due to the accuracy of the learned model. A full ablation study is in the Appendix.

Constrained car kinematics The goal is to steer the car to the (0, 0) position with zero orientation. This is only possible through non-linear control. The vehicle cannot move sideways; hence, policies such as LQR are not usable to generate demonstrations. Thus, to create D_demo, an MPC with horizon 5 is evaluated over 10K random initial states. The surrogate f̂ is a grey-box NN trained using 10K random transition tuples. More details are in the Appendix. Figure 3 shows the learning curves, the training of the Lyapunov function over iterations, and the line search for the MPC auto-tuning. Table 2 summarises the metrics improvement across the iterations, indicating an increase in the ROA when a perfect model is used. With an imperfect model, the second iteration gives the best results, as shown in Table 3.

We test the transfer capability of the approach in two ways. First, we learn using the nominal model and test using the surrogate model for the MPC predictions. This is reported in the Appendix for the sake of space.
Second, the learning is performed using the surrogate model as in Algorithm 1, and the MPC is then tested on the nominal model while still using the surrogate for prediction. This is depicted in Figure 2. Our MPC works better than the demonstrator when using the incorrect model. The learned MPC transfers successfully and completes the task safely.

Comparison to baselines Prior works such as constrained policy optimization (CPO) (Achiam et al., 2017) provide safety guarantees in terms of constraint satisfaction that hold in expectation. However, due to the unavailability of a working implementation, we are unable to compare our approach against it. Instead, to enforce safety constraints during the training of the RL algorithms, we use two different strategies: v1) early episode termination; v2) reward shaping with a constraint penalty. The v2 formulation is similar to the one used in Ray et al. (2019), which demonstrated its practical equivalence to CPO when tuned. We compare our approach against model-free and model-based baseline algorithms. For the model-free baselines, we consider the on-policy algorithm proximal policy optimization (PPO) (Schulman et al., 2017) and the off-policy algorithm soft actor-critic (SAC) (Haarnoja et al., 2018). For the model-based baselines, we consider model-based policy optimization (MBPO) (Janner et al., 2019) and the demonstrator MPC. Further details about the reward shaping and the learning curves are in the Appendix.

We consider the performance of the learned controllers in terms of stability and safety. Stability performance is analogous to the success rate in performing the set-point tracking task. We consider a task completed when ‖x(T)‖₂ < 0.2, where T is the final time of the trajectory. For the car, we exclude the orientation from this index. The safety performance combines the former with state-constraint satisfaction over the entire trajectory. As shown in Table 4, for the inverted pendulum, all the policies lead to some safe trajectories. Note that the demonstrator (which has an LQR terminal cost) is an optimal controller and gives the maximum performance that can be achieved. In terms of stability performance, our approach performs as well as the demonstrator MPC. The RL-trained policies give sub-optimal behaviors, i.e., sometimes the system goes to the other equilibria. For the car, the demonstrator MPC is a sub-optimal policy. NLMPC improves upon it in performance, and it is on par with it in terms of safety. NLMPC also significantly outperforms all of the considered RL baselines while using a smaller number of samples for learning.¹

¹For all our experiments, training datapoints: PPO: 4×10⁶, SAC: 4×10⁶, MBPO: 2.4×10⁵, NLMPC: 10⁴ (random) + 10⁴ (demonstrations)." }, { "heading": "5 RELATED WORK", "text": "Stability and robustness of MPC and of discounted optimal control have been studied in several prior works, e.g., Mayne et al. (2000); Rawlings & Mayne (2009); Limon et al. (2009; 2003); Raković et al. (2012); Gaitsgory et al. (2015). Numerical stability verification was studied in Bobiti (2017); Bobiti & Lazar (2016) and, using neural network Lyapunov functions, in Berkenkamp et al. (2017); Gallieri et al. (2019). Neural Lyapunov controllers were also trained in Chang et al. (2019). MPC solvers based on iterative LQR (iLQR) were introduced in Tassa et al. (2012). Sequential Quadratic Programming (SQP) was studied in Nocedal & Wright (2006). NNs with structural priors have been studied in Quaglino et al. (2020); Yıldız et al. (2019); Pozzoli et al. (2019).
Value functions for planning were learned in Lowrey et al. (2018); Deits et al. (2019); Buckman et al. (2018). Gallieri et al. (2019) learned a NN Lyapunov function and an NN policy with an alternating descent method, initialized using a known stabilizing policy. We remove this assumption and use MPC. Suboptimality was analysed in Grune & Rantzer (2008) for MPC and in Janner et al. (2019) for policies. Differently from NNs, non-parametric models have been largely studied for control; see, for instance, Koller et al. (2018); Hewing et al. (2020) and references therein for closed-form results using Gaussian processes." }, { "heading": "6 CONCLUSIONS", "text": "We presented Neural Lyapunov MPC, a framework to train a stabilizing non-linear MPC based on a learned neural network terminal cost and a surrogate model. After extending existing theoretical results for MPC and value-based reinforcement learning, we demonstrated that the proposed framework can incrementally increase the stability region of the MPC through offline RL and then transfer safely to simulated constrained non-linear control scenarios. Through the comparison of our approach with existing RL baselines, we showed how NNs can be leveraged to achieve policies that perform on par with these methods while also having provable safety guarantees.

Future work could address the reduction of the proposed sub-optimality bound, for instance through the integration of value learning with Lyapunov function learning, as well as the optimal selection of the MPC prediction horizon. A broader class of stage costs and rewards could also be investigated." } ]
2020
null
SP:e27fedc58e99952aaa61b87bb613b7e2c3e23126
[ "This paper deals with a fair regression problem in which the accuracy disparity is employed as a fairness measure. The authors derived the upper and lower bounds on the difference of accuracy between groups to demonstrate that imbalance in the groups' sizes leads to accuracy disparity. Furthermore, they propose learning algorithms enabling us to mitigate the accuracy disparity, which is accomplished by minimizing the upper bound they derived. The empirical evaluations show that the present methods achieve a better trade-off between accuracy and fairness than some existing fair regression methods.", "This paper theoretically and empirically studies accuracy disparity in regression problems. It proves an information-theoretic lower bound on the joint error and a complementary upper bound on the error gap across groups to depict the feasible region of group-wise errors. It further proposes to achieve accuracy parity theoretically and empirically by learning conditional group-invariant representations using statistical distances. " ]
With the widespread deployment of large-scale prediction systems in high-stakes domains, e.g., face recognition, criminal justice, etc., disparity in prediction accuracy between different demographic subgroups has called for a fundamental understanding of the source of such disparity and algorithmic intervention to mitigate it. In this paper, we study the accuracy disparity problem in regression. To begin with, we first propose an error decomposition theorem, which decomposes the accuracy disparity into the distance between label populations and the distance between conditional representations, to help explain why such accuracy disparity appears in practice. Motivated by this error decomposition and the general idea of distribution alignment with statistical distances, we then propose an algorithm to reduce this disparity and analyze the game-theoretic optima of the proposed objective function. We conduct experiments on four real-world datasets. The experimental results suggest that our proposed algorithms can effectively mitigate accuracy disparity while maintaining the predictive power of the regression models.
[]
[ { "authors": [ "Tameem Adel", "Isabel Valera", "Zoubin Ghahramani", "Adrian Weller" ], "title": "One-network adversarial fairness", "venue": "In Thirty-Third AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Alekh Agarwal", "Miroslav Dudik", "Zhiwei Steven Wu" ], "title": "Fair regression: Quantitative definitions and reduction-based algorithms", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Martin Arjovsky", "Léon Bottou" ], "title": "Towards principled methods for training generative adversarial networks. arxiv e-prints, art", "venue": "arXiv preprint arXiv:1701.04862,", "year": 2017 }, { "authors": [ "Martin Arjovsky", "Soumith Chintala", "Léon Bottou" ], "title": "Wasserstein generative adversarial networks", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Eugene Bagdasaryan", "Omid Poursaeed", "Vitaly Shmatikov" ], "title": "Differential privacy has disparate impact on model accuracy", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Solon Barocas", "Andrew D Selbst" ], "title": "Big data’s disparate impact", "venue": "Calif. L. Rev.,", "year": 2016 }, { "authors": [ "Richard Berk", "Hoda Heidari", "Shahin Jabbari", "Michael Kearns", "Aaron Roth" ], "title": "Fairness in criminal justice risk assessments: The state of the art", "venue": "Sociological Methods & Research,", "year": 2018 }, { "authors": [ "Alex Beutel", "Jilin Chen", "Zhe Zhao", "Ed H Chi" ], "title": "Data decisions and theoretical implications when adversarially learning fair representations", "venue": "arXiv preprint arXiv:1707.00075,", "year": 2017 }, { "authors": [ "Jérémie Bigot" ], "title": "Statistical data analysis in the wasserstein space", "venue": "ESAIM: Proceedings and Surveys,", "year": 2020 }, { "authors": [ "Sarah Bird", "Miro Dudík", "Richard Edgar", "Brandon Horn", "Roman Lutz", "Vanessa Milan", "Mehrnoosh Sameki", "Hanna Wallach", "Kathleen Walker" ], "title": "Fairlearn: A toolkit for assessing and improving fairness in AI", "venue": "Technical Report MSR-TR-2020-32,", "year": 2020 }, { "authors": [ "Joy Buolamwini", "Timnit Gebru" ], "title": "Gender shades: Intersectional accuracy disparities in commercial gender classification", "venue": "In Conference on fairness, accountability and transparency,", "year": 2018 }, { "authors": [ "Toon Calders", "Asim Karim", "Faisal Kamiran", "Wasif Ali", "Xiangliang Zhang" ], "title": "Controlling attribute effect in linear regression", "venue": "IEEE 13th international conference on data mining,", "year": 2013 }, { "authors": [ "Irene Chen", "Fredrik D Johansson", "David Sontag" ], "title": "Why is my classifier discriminatory", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Evgenii Chzhen", "Christophe Denis", "Mohamed Hebiri", "Luca Oneto", "Massimiliano Pontil" ], "title": "Fair regression with wasserstein barycenters", "venue": "arXiv preprint arXiv:2006.07286,", "year": 2020 }, { "authors": [ "Constantinos Daskalakis", "Ioannis Panageas" ], "title": "The limit points of (optimistic) gradient descent in min-max optimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "William Dieterich", "Christina Mendoza", "Tim Brennan" ], "title": "Compas risk scales: Demonstrating accuracy equity and predictive parity", "venue": "Northpointe Inc,", "year": 2016 }, { "authors": [ "Dheeru Dua", 
"Casey Graff" ], "title": "UCI machine learning repository, 2017", "venue": "URL http://archive. ics.uci.edu/ml", "year": 2017 }, { "authors": [ "Cynthia Dwork", "Moritz Hardt", "Toniann Pitassi", "Omer Reingold", "Richard Zemel" ], "title": "Fairness through awareness", "venue": "In Proceedings of the 3rd innovations in theoretical computer science conference,", "year": 2012 }, { "authors": [ "Harrison Edwards", "Amos Storkey" ], "title": "Censoring representations with an adversary", "venue": "arXiv preprint arXiv:1511.05897,", "year": 2015 }, { "authors": [ "Michael Feldman", "Sorelle A Friedler", "John Moeller", "Carlos Scheidegger", "Suresh Venkatasubramanian" ], "title": "Certifying and removing disparate impact", "venue": "In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2015 }, { "authors": [ "Yaroslav Ganin", "Evgeniya Ustinova", "Hana Ajakan", "Pascal Germain", "Hugo Larochelle", "François Laviolette", "Mario Marchand", "Victor Lempitsky" ], "title": "Domain-adversarial training of neural networks", "venue": "The Journal of Machine Learning Research,", "year": 2016 }, { "authors": [ "Alison L Gibbs", "Francis Edward Su" ], "title": "On choosing and bounding probability metrics", "venue": "International statistical review,", "year": 2002 }, { "authors": [ "Moritz Hardt", "Eric Price", "Nati Srebro" ], "title": "Equality of opportunity in supervised learning", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Ray Jiang", "Aldo Pacchiano", "Tom Stepleton", "Heinrich Jiang", "Silvia Chiappa" ], "title": "Wasserstein fair classification", "venue": "arXiv preprint arXiv:1907.12059,", "year": 2019 }, { "authors": [ "Kory D Johnson", "Dean P Foster", "Robert A Stine" ], "title": "Impartial predictive modeling: Ensuring fairness in arbitrary models", "venue": "arXiv preprint arXiv:1608.00528,", "year": 2016 }, { "authors": [ "Fereshte Khani", "Percy Liang" ], "title": "Noise induces loss discrepancy across groups for linear regression", "venue": "arXiv preprint arXiv:1911.09876,", "year": 2019 }, { "authors": [ "Pauline T Kim" ], "title": "Data-driven discrimination at work", "venue": "Wm. & Mary L. 
Rev.,", "year": 2016 }, { "authors": [ "Junpei Komiyama", "Akiko Takeda", "Junya Honda", "Hajime Shimao" ], "title": "Nonconvex optimization for regression with fairness constraints", "venue": "In International conference on machine learning,", "year": 2018 }, { "authors": [ "Christos Louizos", "Kevin Swersky", "Yujia Li", "Max Welling", "Richard Zemel" ], "title": "The variational fair autoencoder", "venue": "arXiv preprint arXiv:1511.00830,", "year": 2015 }, { "authors": [ "David Madras", "Elliot Creager", "Toniann Pitassi", "Richard Zemel" ], "title": "Learning adversarially fair and transferable representations", "venue": "arXiv preprint arXiv:1802.06309,", "year": 2018 }, { "authors": [ "David Madras", "Elliot Creager", "Toniann Pitassi", "Richard Zemel" ], "title": "Fairness through causal awareness: Learning causal latent-variable models for biased data", "venue": "In Proceedings of the Conference on Fairness, Accountability, and Transparency,", "year": 2019 }, { "authors": [ "Jérémie Mary", "Clément Calauzènes", "Noureddine El Karoui" ], "title": "Fairness-aware learning for continuous attributes and treatments", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Harikrishna Narasimhan", "Andrew Cotter", "Maya R Gupta", "Serena Wang" ], "title": "Pairwise fairness for ranking and regression", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "Arvind Narayanan" ], "title": "Translation tutorial: 21 fairness definitions and their politics", "venue": "In Proc. Conf. Fairness Accountability Transp.,", "year": 2018 }, { "authors": [ "Cédric Villani" ], "title": "Optimal transport: old and new, volume 338", "venue": "Springer Science & Business Media,", "year": 2008 }, { "authors": [ "Linda F Wightman", "Henry Ramsey" ], "title": "LSAC national longitudinal bar passage study", "venue": "Law School Admission Council,", "year": 1998 }, { "authors": [ "Mohammad Yaghini", "Bogdan Kulynych", "Carmela Troncoso" ], "title": "Disparate vulnerability: On the unfairness of privacy attacks against machine learning", "venue": null, "year": 1906 }, { "authors": [ "Muhammad Bilal Zafar", "Isabel Valera", "Manuel Gomez Rodriguez", "Krishna P Gummadi" ], "title": "Fairness beyond disparate treatment & disparate impact: Learning classification without disparate mistreatment", "venue": "In Proceedings of the 26th International Conference on World Wide Web,", "year": 2017 }, { "authors": [ "Muhammad Bilal Zafar", "Isabel Valera", "Manuel Gomez Rogriguez", "Krishna P Gummadi" ], "title": "Fairness constraints: Mechanisms for fair classification", "venue": "In Artificial Intelligence and Statistics,", "year": 2017 }, { "authors": [ "Rich Zemel", "Yu Wu", "Kevin Swersky", "Toni Pitassi", "Cynthia Dwork" ], "title": "Learning fair representations", "venue": "In International Conference on Machine Learning,", "year": 2013 }, { "authors": [ "Han Zhao", "Amanda Coston", "Tameem Adel", "Geoffrey J Gordon" ], "title": "Conditional learning of fair representations", "venue": "arXiv preprint arXiv:1910.07162,", "year": 2019 }, { "authors": [ "Anna Zink", "Sherri Rose" ], "title": "Fair regression for health care", "venue": "spending. Biometrics,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Recent progress in machine learning has led to its widespread use in many high-stakes domains, such as criminal justice, healthcare, student loan approval, and hiring. Meanwhile, it has also been widely observed that accuracy disparity could occur inadvertently under various scenarios in practice (Barocas & Selbst, 2016). For example, errors are inclined to occur for individuals of certain underrepresented demographic groups (Kim, 2016). In other cases, Buolamwini & Gebru (2018) showed that notable accuracy disparity gaps exist across different racial and gender demographic subgroups on several real-world image classification systems. Moreover, Bagdasaryan et al. (2019) found out that a differentially private model even enlarges such accuracy disparity gaps. Such accuracy disparity gaps across demographic subgroups not only raise concerns in high-stake applications but also can be utilized by malicious parties causing information leakage (Yaghini et al., 2019). Despite the ample needs of accuracy parity, most prior work limits its scope to studying the problem in binary classification settings (Hardt et al., 2016; Zafar et al., 2017b; Zhao et al., 2019; Jiang et al., 2019). In a seminal work, Chen et al. (2018) analyzed the impact of data collection on accuracy disparity in general learning models. They provided a descriptive analysis of such parity gaps and advocated for collecting more training examples and introducing more predictive variables. While such a suggestion is feasible in applications where data collection and labeling is cheap, it is not applicable in domains where it is time-consuming, expensive, or even infeasible to collect more data, e.g., in autonomous driving, education, etc. Our Contributions In this paper, we provide a prescriptive analysis of accuracy disparity and aim at providing algorithmic interventions to reduce the disparity gap between different demographic subgroups in the regression setting. To start with, we first formally characterize why accuracy disparity appears in regression problems by depicting the feasible region of the underlying group-wise errors. We also provide a lower bound on the joint error and a complementary upper bound on the error gap across groups. Based on these results, we illustrate why regression models aiming to minimize the global loss will inevitably lead to accuracy disparity if the input distributions or decision functions differ across groups (see Figure 1a). We further propose an error decomposition theorem that decomposes the accuracy disparity into the distance between the label populations and the distance between conditional representations. To mitigate such disparities, we propose two algorithms to reduce accuracy disparity via joint distribution alignment with total variation distance and Wasserstein distance, respectively. Furthermore, we\nanalyze the game-theoretic optima of the objective function and illustrate the principle of our algorithms from a game-theoretic perspective (see Figure 1b). To corroborate the effectiveness of our proposed algorithms in reducing accuracy disparity, we conduct experiments on four real-world datasets. Experimental results suggest that our proposed algorithms help to mitigate accuracy disparity while maintaining the predictive power of the regression models. 
We believe our theoretical results contribute to the understanding of why accuracy disparity occurs in machine learning models, and the proposed algorithms provide an alternative for intervention in real-world scenarios where accuracy parity is desired but collecting more data/features is time-consuming or infeasible." }, { "heading": "2 PRELIMINARIES", "text": "Notation We use X ⊆ R^d and Y ⊆ R to denote the input and output space. We use X and Y to denote random variables which take values in X and Y, respectively. Lower-case letters x and y denote the instantiations of X and Y. We use H(X) to denote the Shannon entropy of random variable X, H(X | Y) to denote the conditional entropy of X given Y, and I(X; Y) to denote the mutual information between X and Y. To simplify the presentation, we use A ∈ {0, 1} as the sensitive attribute, e.g., gender, race, etc. Let H be the hypothesis class of regression models. In other words, for h ∈ H, h : X → Y is a predictor. Note that even if the predictor does not explicitly take the sensitive attribute A as an input variable, the prediction can still be biased due to the correlations with other input variables. In this work we study the stochastic setting where there is a joint distribution D over X, Y and A from which the data are sampled. For a ∈ {0, 1} and y ∈ R, we use D_a to denote the conditional distribution of D given A = a and D^y to denote the conditional distribution of D given Y = y. For an event E, D(E) denotes the probability of E under D. Given a feature transformation function g : X → Z that maps instances from the input space X to the feature space Z, we define g♯D := D ◦ g⁻¹ to be the induced (pushforward) distribution of D under g, i.e., for any event E′ ⊆ Z, g♯D(E′) := D({x ∈ X | g(x) ∈ E′}). We use (·)₊ to indicate that the value of a variable remains unchanged if it is positive and is otherwise 0, i.e., (Y)₊ equals Y if the value of Y is positive and 0 otherwise. Given a joint distribution D, the error of a predictor h under D is defined as Err_D(h) := E_D[(Y − h(X))²]. To make the notation more compact, we may drop the subscript D when it is clear from the context. Furthermore, we also use MSE_D(Ŷ, Y) to denote the mean squared loss between the predicted variable Ŷ = h(X) and the true label Y over the joint distribution D. Similarly, we also use CE_D(A ‖ Â) to denote the cross-entropy loss between the predicted variable Â and the true label A over the joint distribution D. Throughout the paper, we make the following standard assumption in regression problems: Assumption 2.1. There exists M > 0 such that for any hypothesis H ∋ h : X → Y, ‖h‖∞ ≤ M and |Y| ≤ M. Problem Setup We study the fair regression problem: the goal is to learn a regressor that is fair in the sense that the errors of the regressor are approximately equal across the groups given by the sensitive attribute A. We assume that the sensitive attribute A is only available to the learner during the training phase and is not visible during the inference phase. We would like to point out that there are many other different and important definitions of fairness (Narayanan, 2018), even in the sub-category of group fairness, and our discussion is by no means comprehensive. For example, two frequently used definitions of fairness in the literature are the so-called statistical parity (Dwork et al., 2012) and equalized odds (Hardt et al., 2016).
Nevertheless, throughout this paper we mainly focus on accuracy parity as our fairness notion, due to the fact that machine learning systems have been shown to exhibit substantial accuracy disparities between different demographic subgroups (Barocas & Selbst, 2016; Kim, 2016; Buolamwini & Gebru, 2018). This observation has already attracted broad public attention (e.g., see New York Times, The Verge, and Insurance Journal) and calls for machine learning systems that (at least approximately) satisfy accuracy parity. Formally, accuracy parity is defined as follows: Definition 2.1 (Accuracy Parity). Given a joint distribution D, a predictor h satisfies accuracy parity if Err_D0(h) = Err_D1(h).

The violation of accuracy parity is also known as disparate mistreatment (Zafar et al., 2017a). In practice, the exact equality of accuracy between two groups is often hard to ensure, so we define the error gap to measure how well the model satisfies accuracy parity: Definition 2.2 (Error Gap). Given a joint distribution D, the error gap of a hypothesis h is ΔErr(h) := |Err_D0(h) − Err_D1(h)|. By definition, if a model satisfies accuracy parity, ΔErr(h) will be zero. Next we introduce two distance metrics that will be used in our theoretical analysis and algorithm design: • Total variation distance: it measures the largest possible difference between the probabilities that the two probability distributions can assign to the same event E. We use d_TV(P, Q) to denote the total variation:

d_TV(P, Q) := sup_E |P(E) − Q(E)|.

• Wasserstein distance: the Wasserstein distance between two probability distributions is

W₁(P, Q) = sup_{f : ‖f‖_L ≤ 1} | ∫_Ω f dP − ∫_Ω f dQ |,

where ‖f‖_L is the Lipschitz semi-norm of a real-valued function f and Ω is the sample space over which the two probability distributions P and Q are defined. By the Kantorovich-Rubinstein duality theorem (Villani, 2008), we recover the primal form of the Wasserstein distance, defined as

W₁(P, Q) := inf_{γ ∈ Γ(P,Q)} ∫ d(X, Y) dγ(X, Y),

where Γ(P, Q) denotes the collection of all couplings of P and Q, and X and Y denote the random variables with laws P and Q, respectively. Note that we use the L1 distance for d(·, ·) throughout the paper, but the extension to other distances, e.g., the L2 distance, is straightforward." }, { "heading": "3 MAIN RESULTS", "text": "In this section, we first characterize why accuracy disparity arises in regression models. More specifically, given a hypothesis h ∈ H, we first describe the feasible region of Err_D0 and Err_D1 by proving a lower bound on the joint errors and an upper bound on the error gap. Then, we give a geometric interpretation to visualize the feasible region of Err_D0 and Err_D1 and illustrate how the error gap arises when learning a hypothesis h that minimizes the global squared error. We further analyze the accuracy disparity by decomposing it into the distance between label populations and the distance between conditional representations. Motivated by the decomposition, we propose two algorithms to reduce accuracy disparity, connect the game-theoretic optima of the objective functions in our algorithms with our theorems, and describe the practical implementations of the algorithms. Due to the space limit, we defer all the detailed proofs to the appendix." }, { "heading": "3.1 BOUNDS ON CONDITIONAL ERRORS AND ACCURACY DISPARITY GAP", "text": "When we learn a predictor, the prediction function induces a map X → Ŷ via h, where Ŷ is the predicted target variable given by the hypothesis h.
Hence, for any distribution D0 (D1) of X, the predictor also induces a distribution h♯D0 (h♯D1) of Ŷ. Recall that the Wasserstein distance is a metric; hence, the following chain of triangle inequalities holds:

W₁(D0(Y), D1(Y)) ≤ W₁(D0(Y), h♯D0) + W₁(h♯D0, h♯D1) + W₁(h♯D1, D1(Y)).

Intuitively, W₁(D0(Y), h♯D0) and W₁(h♯D1, D1(Y)) measure the distance between the true label distribution and the predicted one in the A = 0/1 cases, respectively. This distance is related to the prediction error of the function h conditioned on A = a:

Lemma 3.1. Let Ŷ = h(X) ∈ R, then for a ∈ {0, 1}, W₁(D_a(Y), h♯D_a) ≤ √(Err_Da(h)).

With the above results, we can get the following theorem that characterizes the lower bound of the joint error on different groups:

Theorem 3.1. Let Ŷ = h(X) ∈ R, we have Err_D0(h) + Err_D1(h) ≥ (1/2) [ ( W₁(D0(Y), D1(Y)) − W₁(h♯D0, h♯D1) )₊ ]².

In Theorem 3.1, we see that if the distance between the label distributions across groups is large, then statistical disparity could potentially lead to a large joint error. Moreover, Theorem 3.1 can be extended to give a lower bound on the joint error incurred by h as well:

Corollary 3.1. Let Ŷ = h(X) ∈ R and α = D(A = 0) ∈ [0, 1], we have Err_D(h) ≥ (1/2) min{α, 1 − α} · [ ( W₁(D0(Y), D1(Y)) − W₁(h♯D0, h♯D1) )₊ ]².

Next, we upper bound the error gap to gain more insights into accuracy disparity. For a ∈ {0, 1}, define the conditional variance Var_Da[Y | X] = E_Da[(Y − E_Da[Y | X])² | X]; it shows up as the irreducible error of predicting Y when we only use the knowledge of X. We also know that the optimal decision function conditioned on A = a under the mean squared error is E_Da[Y | X]. The following theorem characterizes the upper bound of the error gap between two groups: Theorem 3.2. For any hypothesis H ∋ h : X → Y, if Assumption 2.1 holds, then:

ΔErr(h) ≤ 8M² d_TV(D0(X), D1(X)) + |E_D0[Var_D0[Y | X]] − E_D1[Var_D1[Y | X]]| + 4M min{E_D0[|E_D0(Y | X) − E_D1(Y | X)|], E_D1[|E_D0(Y | X) − E_D1(Y | X)|]}.

Remark Theorem 3.2 upper bounds the error gap across groups by three terms: the first term corresponds to the distance between the input distributions across groups, the second term is the noise (variance) difference, and the third term is the discrepancy of the optimal decision functions across different groups. In an ideal and fair setting, where both distributions are noiseless and the optimal decision functions are insensitive to the group membership, Theorem 3.2 implies that a sufficient condition to guarantee accuracy parity is to find a group-invariant representation that minimizes d_TV(D0(X), D1(X)). Geometric Interpretation By Theorem 3.1 and Theorem 3.2, in Figure 1a we visually illustrate how accuracy disparity arises given the data distribution and a learned hypothesis that aims to minimize the global squared error. In Figure 1a, given the hypothesis class H, we use the line Err_D0 + Err_D1 = B to denote the lower bound in Theorem 3.1 and the two lines |Err_D0 − Err_D1| = A to denote the upper bound in Theorem 3.2. These three lines form a feasible region (the green area) of Err_D0 and Err_D1 under the hypothesis class H. For any optimal hypothesis h which is solely designed to minimize the overall error, the best the hypothesis h can do is to intersect with one of the two bottom vertices. For example, the hypotheses (the red dotted line and the blue dotted line) trying to minimize the overall error intersect with the two vertices of the region to achieve the smallest Err_D0-intercept (Err_D1-intercept), due to the imbalance between these two groups.
However, since these two vertices are not on the diagonal of the feasible region, there is no guarantee that the hypothesis can satisfy accuracy parity (Err_D0 = Err_D1), unless we can shrink the width of the green area to zero. Conditional Distribution Alignment Reduces Accuracy Disparity In Theorem 3.2, we illustrate how accuracy disparity arises in regression models due to noise, the distance between representations, and the distance between decision functions. However, it is nearly impossible to collect noiseless data with a group-invariant input distribution. Moreover, there is no guarantee that the upper bound will be lower if we learn a group-invariant representation that minimizes d_TV(D0(X), D1(X)) alone, since the learned representation could potentially increase the variance. In this regard, we prove a novel upper bound which is free from the above noise term, to motivate aligning conditional distributions to mitigate the error disparity across groups. To do so, we relate the error gap to the label distribution and the predicted distribution conditioned on Y = y:

Theorem 3.3. If Assumption 2.1 holds, then for ∀h ∈ H, let Ŷ = h(X), the following inequality holds:

ΔErr(h) ≤ 8M² d_TV(D0(Y), D1(Y)) + 3M min{E_D0[|E_{D_0^y}[Ŷ] − E_{D_1^y}[Ŷ]|], E_D1[|E_{D_0^y}[Ŷ] − E_{D_1^y}[Ŷ]|]}.

Remark We see that the error gap is upper bounded by two terms: the distance between the label distributions and the discrepancy between the conditional predicted distributions across groups. Note that this is different from the decomposition we have in Theorem 3.2, where the marginal distribution is over X instead of Y. Given a dataset, the distance between the label distributions is a constant since the label distribution is fixed. For the second term, if we can minimize the discrepancy of the conditional predicted distribution across groups, we then have a model that is free of accuracy disparity when the label distributions are well aligned." }, { "heading": "3.2 ALGORITHM DESIGN", "text": "Inspired by Theorem 3.3, we can mitigate the error gap if we align the group distributions via minimizing the distance between the conditional distributions across groups. However, it is intractable to do so explicitly in regression problems since Y can take infinitely many values in R. Next we will present two algorithms to approximately solve the problem through adversarial representation learning.

Given the Markov chain X → Z → Ŷ, with Z = g(X) and Ŷ = h(Z), we are interested in learning group-invariant conditional representations so that the discrepancy between the induced conditional distributions D_0^Y(Z = g(X)) and D_1^Y(Z = g(X)) is minimized. In this case, the second term of the upper bound in Theorem 3.3 is minimized. However, this is in general not feasible since Y is a continuous random variable. Instead, we propose to learn representations Z that minimize the discrepancy between the joint distributions D0(Z = g(X), Y) and D1(Z = g(X), Y). In Theorem 3.4 and Theorem 3.5, we will show that the distances between the conditional predicted distributions D_0^Y(Z = g(X)) and D_1^Y(Z = g(X)) are minimized when we minimize the distance between the joint distributions D0(Z = g(X), Y) and D1(Z = g(X), Y). To proceed, we first consider using the total variation distance to measure the distance between two distributions. In particular, we can choose to learn a binary discriminator f : Z × Y → Â that achieves the minimum binary classification error when discriminating between points sampled from the two distributions. In practice, we use the cross-entropy loss as a convex surrogate loss.
Formally, we are going to consider the following minimax game between g and f:

min_{f ∈ F} max_g CE_D(A ‖ f(g(X), Y)).   (1)

Next we show that for the above equation, the optimal feature transformation g corresponds to the one that induces invariant conditional feature distributions. Theorem 3.4. Consider the minimax game in (1). The equilibrium (g*, f*) of the game is attained when 1) Z = g*(X) is independent of A conditioned on Y; 2) f*(Z, Y) = D(A = 1 | Y, Z). Since in the equilibrium of the game Z is independent of A conditioned on Y, the optimal f*(Z, Y) could also be equivalently written as f*(Z, Y) = D(A = 1 | Y), i.e., the only useful information for the discriminator in the equilibrium is through the external information Y. In Theorem 3.4, the minimum cross-entropy loss that the discriminator (at the equilibrium of the game) can achieve is H(A | Z, Y) (see Proposition A.1 in Appendix A). By the basic property of conditional entropy, we have:

min_{f ∈ F} CE_D(A ‖ f(g(X), Y)) = H(A | Z, Y) = H(A | Y) − I(A; Z | Y).

We know that H(A | Y) is a constant given the data distribution. The maximization over g in (1) is thus equivalent to the minimization min_{Z = g(X)} I(A; Z | Y), and it follows that the optimal strategy for the transformation g is the one that induces conditionally invariant features, i.e., I(A; Z | Y) = 0. Formally, we arrive at the following minimax problem:

min_{h,g} max_{f ∈ F} MSE_D(h(g(X)), Y) − λ · CE_D(A ‖ f(g(X), Y)).   (2)

In the above formulation, the first term corresponds to the minimization of the prediction loss of the target task, and the second term is the loss incurred by the adversary f. As a whole, the minimax optimization problem expresses a trade-off (controlled by the hyper-parameter λ > 0) between accuracy and accuracy disparity through the representation learning function g. Wasserstein Variant Similarly, if we choose to align the joint distributions via minimizing the Wasserstein distance, the following theorem holds. Theorem 3.5. Let g* := argmin_g W₁(D0(g(X), Y), D1(g(X), Y)); then D_0^Y(Z = g*(X)) = D_1^Y(Z = g*(X)) almost surely. One notable advantage of using the Wasserstein distance instead of the TV distance is that the Wasserstein distance is a continuous functional of both the feature map g and the discriminator f (Arjovsky et al., 2017). Furthermore, if both g and f are continuous functions of their corresponding model parameters (which is the case for the models we are going to use in experiments), the objective function will be continuous in both model parameters. This property of the Wasserstein distance makes it more favorable from an optimization perspective. Using the dual formulation, equivalently, we can learn a Lipschitz function f : Z × Y → R as a witness function:

min_{h, g, Z0 ∼ g♯D0, Z1 ∼ g♯D1} max_{f : ‖f‖_L ≤ 1} MSE_D(h(g(X)), Y) + λ · |f(Z0, Y) − f(Z1, Y)|.   (3)

Game-Theoretic Interpretation To make our algorithms easier to follow, we provide a game-theoretic interpretation of our algorithms in Figure 1b. Consider Alice (encoder) and Bob (discriminator) participating in a two-player game: upon receiving a set of inputs X, Alice applies a transformation to the inputs to generate the corresponding features Z and then sends them to Bob. Besides the features sent by Alice, Bob also has access to the external information Y, which consists of the labels corresponding to the features sent by Alice.
Having both the features Z and the corresponding labels Y from external resources, Bob's goal is to guess the group membership A of each feature sent by Alice and to maximize his correctness as much as possible. On the other hand, Alice's goal is to compete with Bob, i.e., to find a transformation to confuse Bob as much as she can. Different from the traditional game without external information, here, due to the external information Y that Bob has access to, Alice cannot hope to fully fool Bob, since Bob can gain some insight about the group membership A of the features from the external label information. Nevertheless, Theorem 3.4 and Theorem 3.5 both state that when Bob uses a binary discriminator or a Wasserstein discriminator to learn A, the best Alice can do is to learn a transformation g such that the transformed representation Z is insensitive to the values of A conditioned on any value of Y." }, { "heading": "4 EXPERIMENTS", "text": "Inspired by our theoretical results that decompose accuracy disparity into the distance between label populations and the distance between conditional representations, we propose two algorithms to mitigate it. In this section, we conduct experiments to evaluate the effectiveness of our proposed algorithms in reducing accuracy disparity." }, { "heading": "4.1 EXPERIMENTAL SETUP", "text": "Datasets We conduct experiments on four real-world benchmark datasets: the Adult dataset (Dua & Graff, 2017), the COMPAS dataset (Dieterich et al., 2016), the Law School dataset (Wightman & Ramsey, 1998), and the Communities and Crime dataset (Dua & Graff, 2017). All datasets contain binary sensitive attributes (e.g., male/female, white/non-white). We refer readers to Appendix B for detailed descriptions of the datasets and the data pre-processing pipelines. Methods We term our two proposed algorithms CENET and WASSERSTEINNET, respectively. For each dataset, we perform controlled experiments by fixing the regression neural network architecture to be the same. We train the regression nets via the mean squared loss. Note that although the Adult dataset and the COMPAS dataset are for binary classification tasks, we can still treat them as regression tasks with two distinct ordinal values. To the best of our knowledge, no previous study aims to minimize accuracy disparity in regression using representation learning. However, there are other similar fairness notions and mitigation techniques proposed for regression, and we add them as our baselines: (1) Bounded group loss (BGL) (Agarwal et al., 2019), which asks the prediction errors for all groups to remain below a pre-defined level ε; (2) Coefficient of determination (COD) (Komiyama et al., 2018), which asks the coefficient of determination between the sensitive attributes and the predictions to remain below a pre-defined level ε.

Among all methods, we vary the trade-off parameter (i.e., λ in CENET and WASSERSTEINNET and ε in BGL and COD) and report the corresponding R² scores and error gap values. For each experiment, we average the results over ten random seeds. We refer readers to Appendix B for the detailed parameter and hyper-parameter settings in our experiments. We also defer the additional experimental results and analyses on how the trade-off parameters λ and ε affect the performance of different algorithms to Appendix C. A minimal sketch of one CENET training step is given below."
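To make the training procedure concrete, here is a hypothetical PyTorch sketch of one CENET update for objective (2); the module names (encoder g, regressor h, discriminator f) and the alternating update scheme are illustrative assumptions, not the authors' released implementation. The WASSERSTEINNET variant would swap the cross-entropy term for the Wasserstein critic of objective (3).

import torch
import torch.nn as nn
import torch.nn.functional as F

def cenet_step(g, h, f, opt_gh, opt_f, x, y, a, lam):
    # One alternating update of the minimax game in Eq. (2).
    # x: (B, d) inputs, y: (B, 1) targets, a: (B,) float group labels in {0, 1}.
    bce = nn.BCEWithLogitsLoss()
    # 1) Discriminator step: minimize CE_D(A || f(g(X), Y)) with the encoder frozen.
    z = g(x).detach()
    d_loss = bce(f(torch.cat([z, y], dim=1)).squeeze(1), a)
    opt_f.zero_grad()
    d_loss.backward()
    opt_f.step()
    # 2) Encoder/regressor step: minimize MSE minus lambda times the adversary's
    #    loss, i.e., learn conditionally group-invariant representations.
    z = g(x)
    mse = F.mse_loss(h(z).squeeze(1), y.squeeze(1))
    adv = bce(f(torch.cat([z, y], dim=1)).squeeze(1), a)
    opt_gh.zero_grad()
    (mse - lam * adv).backward()
    opt_gh.step()
    return mse.item(), adv.item()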
}, { "heading": "4.2 RESULTS AND ANALYSES", "text": "The overall results are visualized in Figure 2.1 The following summarizes our observations and analyses: (1) Overall, trade-offs exist between the predictive power of the regressors and accuracy parity: for each method we test, the general trend is that with the decrease of the values of error gaps, the values of R2 also decrease. The exception is CENET in the Adult dataset and Crime dataset since training CENET is unstable when λ is large and we will provide more details in Appendix C; (2) Our proposed methods WASSERSTEINNET and CENET are effective in reducing the error gaps while keeping the R2 scores relatively high in the Adult, COMPAS and Crime dataset. In the Law dataset, the error gaps decrease with high utility losses in our proposed methods; (3) Among our\n1COD cannot be implemented on the Adult dataset since the size of the Adult dataset is large and the QCQP optimization algorithm to solve COD needs a quadratic memory usage of the dataset size.\nproposed methods, WASSERSTEINNET achieves better accuracy and accuracy disparity trade-offs while CENET suffers significant accuracy loss and may fail to decrease the error gaps in the Adult and Crime dataset. The reason behind it is that the minimax optimization in the training of CENET could lead to an unstable training process under the presence of a noisy approximation to the optimal discriminator (Arjovsky & Bottou, 2017; Arjovsky et al., 2017); (4) Compared to our proposed methods, BGL and COD can also decrease error gaps to a certain extent. This is because: (i) BGL aims to keep errors remaining relatively low in each group, which helps to reduce accuracy disparity; (ii) CoD aims to reduce the correlation between the sensitive attributes and the predictions (or the inputs) in the feature space, which might somehow reduce the dependency between the distributions of these two variables. In comparison, our proposed methods do better in mitigating the error gaps." }, { "heading": "5 RELATED WORK", "text": "Algorithmic Fairness In the literature, two main notions of fairness, i.e., group fairness and individual fairness, has been widely studied (Dwork et al., 2012; Zemel et al., 2013; Feldman et al., 2015; Hardt et al., 2016; Zafar et al., 2017b; Madras et al., 2019; Khani & Liang, 2019). In particular, Chen et al. (2018) analyzed the impact of data collection on discrimination (e.g., false positive rate, false negative rate, and zero-one loss) from the perspectives of bias-variance-noise decomposition, and they suggested collecting more training examples and collect additional variable to reduce discrimination. In comparison, our work precisely characterizes the disparate predictive accuracy in terms of the distance between label populations and the distance between conditional representation and propose algorithms to reduce accuracy disparity across groups in regression. Fair Regression A series of work focus on fairness under the regression problems (Calders et al., 2013; Johnson et al., 2016; Berk et al., 2018; Komiyama et al., 2018; Chzhen et al., 2020; Bigot, 2020; Zink & Rose, 2020; Mary et al., 2019; Narasimhan et al., 2020). To the best of our knowledge, no previous study aimed to minimize accuracy disparity in regression from representation learning. However, there are other similar fairness notions proposed for regression: Agarwal et al. 
(2019) proposed fair regression with bounded group loss (i.e., it asks that the prediction error for any protected group remain below some pre-defined level) and used an exponentiated-gradient approach to satisfy BGL; Komiyama et al. (2018) aimed to reduce the coefficient of determination between the sensitive attributes and the predictions to some pre-defined level and used an off-the-shelf convex optimizer to solve the problem. In contrast, we trace the root of accuracy disparity through the lens of information theory and reduce it via distributional alignment in a minimax game. Fair Representation A line of work focuses on building fair algorithmic decision-making systems using adversarial techniques to learn fair representations (Edwards & Storkey, 2015; Beutel et al., 2017; Adel et al., 2019; Zhao et al., 2019). The main idea is to learn a good representation of the data so that the data owner can maximize accuracy while removing the information related to the sensitive attribute. Madras et al. (2018) proposed a generalized framework to learn adversarially fair and transferable representations and suggested using the label information in the adversary to learn equalized-odds or equal-opportunity representations in the classification setting. Apart from adversarial representations, recent work has also proposed to use distance metrics, e.g., the maximum mean discrepancy (Louizos et al., 2015) and the Wasserstein distance (Jiang et al., 2019), to remove group-related information. Compared to their work, we propose to align (conditional) distributions across groups to reduce accuracy disparity using minimax optimization, and we analyze the game-theoretic optima of the minimax game in the regression setting." }, { "heading": "6 CONCLUSION", "text": "In this paper, we theoretically and empirically study accuracy disparity in regression problems. Specifically, we prove an information-theoretic lower bound on the joint error and a complementary upper bound on the error gap across groups to depict the feasible region of group-wise errors. Our theoretical results indicate that accuracy disparity occurs inevitably when the label distributions differ across groups. To reduce such disparity, we further propose to achieve accuracy parity by learning conditionally group-invariant representations using statistical distances. The game-theoretic optima of the objective functions in our proposed methods are achieved when the accuracy disparity is minimized. Our empirical results on four real-world datasets demonstrate that our proposed algorithms effectively reduce accuracy disparity. We believe our results take an important step towards better understanding accuracy disparity in machine learning models." }, { "heading": "A MISSING PROOFS", "text": "Lemma 3.1. Let Ŷ = h(X) ∈ R; then for a ∈ {0, 1}, W_1\big(\mathcal{D}_a(Y), h_{\sharp}\mathcal{D}_a\big) \le \sqrt{\mathrm{Err}_{\mathcal{D}_a}(h)}.\nProof. The prediction error conditioned on a ∈ {0, 1} satisfies\n\mathrm{Err}_{\mathcal{D}_a}(h) = \mathbb{E}\big[(Y - h(X))^2 \mid A = a\big] \ge \mathbb{E}^2\big[|Y - h(X)| \mid A = a\big] \ge \Big( \inf_{\Gamma(\mathcal{D}_a(Y), \mathcal{D}_a(h(X)))} \mathbb{E}\big[|Y - h(X)|\big] \Big)^2 = W_1^2\big(\mathcal{D}_a(Y), h_{\sharp}\mathcal{D}_a\big).\nTaking the square root on both sides then completes the proof. Theorem 3.1. Let Ŷ = h(X) ∈ R. We have\n\mathrm{Err}_{\mathcal{D}_0}(h) + \mathrm{Err}_{\mathcal{D}_1}(h) \ge \frac{1}{2} \Big[ \big( W_1(\mathcal{D}_0(Y), \mathcal{D}_1(Y)) - W_1(h_{\sharp}\mathcal{D}_0, h_{\sharp}\mathcal{D}_1) \big)_{+} \Big]^2.\nProof.
Since W1(·, ·) is a distance metric, the result follows immediately the triangle inequality and Lemma 3.1:\nW1(D0(Y ),D1(Y )) ≤ √ ErrD0(h) +W1(h]D0, h]D1) + √ ErrD1(h).\nRearrange the equation above and by AM-GM inequality, we have W1(D0(Y ),D1(Y ))−W1(h]D0, h]D1) ≤ √ ErrD0(h) + √ ErrD1(h) ≤ √ 2(ErrD0(h) + ErrD1(h)).\nTaking square at both sides then completes the proof.\nCorollary 3.1. Let Ŷ = h(X) ∈ R and α = D(A = 0) ∈ [0, 1], we have ErrD(h) ≥ 12 min{α, 1− α} · [( W1(D0(Y ),D1(Y ))−W1(h]D0, h]D1) ) + ]2 .\nProof. The joint error is\nErrD(h)\n= αErrD0(h) + (1− α) ErrD1(h) ≥ min{α, 1− α} ( ErrD0(h) + ErrD1(h) ) ≥ 1\n2 min{α, 1− α}[\n( W1(D0(Y ),D1(Y ))−W1(h]D0, h]D1) ) + ]2. (Theorem 3.1)\nLemma A.1. If Assumption 2.1 holds, then the following inequality holds: |ED0 [(h(X) − ED1 [Y |X])2]− ED1 [(h(X)− ED1 [Y |X])2]| ≤ 8M2dTV(D0(X),D1(X)).\nProof. First, we know that ‖h(X) − EDa [Y |X]‖∞ ≤ 2M , ∀a ∈ {0, 1}, since ‖h‖∞ ≤ M and |Y | ≤M . Now it suffices to bound:\n|ED0 [(h(X)− ED1 [Y |X])2]− ED1 [(h(X)− ED1 [Y |X])2]| = |〈(h(X)− ED1 [Y |X])2, dD0 − dD1〉| ≤ ‖h(X)− ED1 [Y |X]‖2∞‖dD0 − dD1‖1 (Hölder’s inequality) ≤ 4M2 ‖dD0 − dD1‖1 (Assumption 2.1) = 8M2 dTV(D0(X),D1(X)).\nNote that the last equation follows the definition of total variation distance.\nLemma A.2. If Assumption 2.1 holds, then the following inequality holds: |ED0 [(h(X) − ED0 [Y |X])]2 − ED0 [(h(X)− ED1 [Y |X])]2| ≤ 4M ED0 [|ED0 [Y |X]− ED1 [Y |X]|].\nProof.\n|ED0 [(h(X)− ED0 [Y |X])]2 − ED0 [(h(X)− ED1 [Y |X])]2| = |ED0 [h2(X)− 2h(X)ED0 [Y |X] + E2D0 [Y |X]− h\n2(X) + 2h(X)ED1 [Y |X]− E2D1 [Y |X]]| ≤ 2M ED0 [|ED0 [Y |X]− ED1 [Y |X]|] + 2M ED0 [|ED0 [Y |X]− ED1 [Y |X]|] (Assumption 2.1) = 4M ED0 [|ED0 [Y |X]− ED1 [Y |X]|].\nTheorem 3.2. For any hypothesisH 3 h : X → Y , if the Assumption 2.1 holds, then:\n∆Err(h) ≤ 8M2 dTV(D0(X),D1(X)) + |ED0 [VarD0 [Y |X]]− ED1 [VarD1 [Y |X]]| + 4M min{ED0 [|ED0(Y |X)− ED1(Y |X)|], ED1 [|ED0(Y |X)− ED1(Y |X)|]}.\nProof. First, we show that for a ∈ {0, 1},\nErrDa(h)\n= EDa [(h(X)− Y )2] = EDa [(h(X)− EDa [Y |X] + EDa [Y |X]− Y )2] = EDa [(h(X)− EDa [Y |X])2] + EDa [(Y − EDa [Y |X])2] − 2EDa [(h(X)− EDa [Y |X])(Y − EDa [Y |X])]\n= EDa [(h(X)− EDa [Y |X])2] + EDa [(Y − EDa [Y |X])2].\nNote that the last equation holds since\nEDa [(h(X)− EDa [Y |X])(Y − EDa [Y |X])] = EDa(X)[EDa(Y |X)[(h(X)− EDa [Y |X])(Y − EDa [Y |X])|X]] = EDa(X)[(h(X)− EDa [Y |X])EDa(Y |X)(Y − EDa [Y |X]|X)] = EDa(X)[(h(X)− EDa [Y |X])(EDa [Y |X]− EDa [Y |X])] = 0.\nNext we bound the error gap:\n|ErrD0(h)− ErrD1(h)| = |ED0 [(h(X)− ED0 [Y |X])2]− ED1 [(h(X)− ED1 [Y |X])2]\n+ ED0 [(Y − ED0 [Y |X])2]− ED1 [(Y − ED1 [Y |X])2]| ≤ |ED0 [(h(X)− ED0 [Y |X])2]− ED1 [(h(X)− ED1 [Y |X])2]| (Triangle inequality)\n+ |ED0 [VarD0 [Y |X]]− ED1 [VarD1 [Y |X]]|.\nNow it suffices to bound:\n|ED0 [(h(X)− ED0 [Y |X])2]− ED1 [(h(X)− ED1 [Y |X])2]| = |ED0 [(h(X)− ED0 [Y |X])2]− ED0 [(h(X)− ED1 [Y |X])2]\n+ ED0 [(h(X)− ED1 [Y |X])2]− ED1 [(h(X)− ED1 [Y |X])2]| ≤ |ED0 [(h(X)− ED0 [Y |X])2]− ED0 [(h(X)− ED1 [Y |X])2]| (Triangle inequality)\n+ |ED0 [(h(X)− ED1 [Y |X])2]− ED1 [(h(X)− ED1 [Y |X])2]|.\nInvoke Lemma A.1 and Lemma A.2 to bound the above two terms:\n|ED0 [(h(X)− ED0 [Y |X])2]− ED0 [(h(X)− ED1 [Y |X])2]| + |ED0 [(h(X)− ED1 [Y |X])2]− ED1 [(h(X)− ED1 [Y |X])2]|\n≤ 4M ED0 [|ED0 [Y |X]− ED1 [Y |X]|] + 8M2dTV(D0(X),D1(X)). (Lemma A.1 & Lemma A.2)\nBy symmetry, we also have:\n|ED0 [(h(X)− ED0 [Y |X])2]− ED0 [(h(X)− ED1 [Y |X])2]| + |ED0 [(h(X)− ED1 [Y |X])2]− ED1 [(h(X)− ED1 [Y |X])2]|\n≤ 4M ED1 [|ED0 [Y |X]− ED1 [Y |X]|] + 8M2dTV(D0(X),D1(X)). 
(Lemma A.1 & Lemma A.2)\nCombining the two inequalities above together, we have:\n|ED0(h(X)− ED0 [Y |X])2 − ED0(h(X)− ED1 [Y |X])2| + |ED0(h(X)− ED1 [Y |X])2 − ED1(h(X)− ED1 [Y |X])2|\n≤ 8M2dTV(D0(X),D1(X)) + 4M min{ED0 [|ED0 [Y |X]− ED1 [Y |X]|],ED1 [|ED0 [Y |X]− ED1 [Y |X]|]}.\nIncorporating the two variance terms back to the above inequality then completes the proof.\nTheorem 3.3. If Assumption 2.1 holds, then for ∀h ∈ H, let Ŷ = h(X), the following inequality holds:\n∆Err(h) ≤ 8M2dTV(D0(Y ),D1(Y )) + 3M min{ED0 [|EDy0 [Ŷ ]− EDy1 [Ŷ ]|], ED1 [|EDy0 [Ŷ ]− EDy1 [Ŷ ]|]}.\nProof. First, we show that for a ∈ {0, 1}: ErrDa(h) = EDa [(h(X)− Y )2] = EDa [h2(X)− 2Y h(X) + Y 2] = EDa [h2(X)− 2Y h(X)] + EDa [Y 2]. Next, we bound the error gap:\n|ErrD0(h)− ErrD1(h)| = |ED0 [h2(X)− 2Y h(X)] + ED0 [Y 2]− ED1 [h2(X)− 2Y h(X)]− ED1 [Y 2]| ≤ |ED0 [h2(X)− 2Y h(X)]− ED1 [h2(X)− 2Y h(X)]|+ |ED0 [Y 2]− ED1 [Y 2]|. (Triangle inequality)\nFor the second term, we can easily prove that\n|ED0 [Y 2]−ED1 [Y 2]| = |〈Y 2, dD0− dD1〉| ≤ ‖Y ‖2∞‖dD0− dD1‖1 ≤ 2M2dTV(D0(Y ),D1(Y )), where the second equation follows Hölder’s inequality and the last equation follow the definition of total variation distance. Now it suffices to bound the remaining term:\n|ED0 [h2(X)− 2Y h(X)]− ED1 [h2(X)− 2Y h(X)]|\n= ∣∣∣∣ ∫ h(x)(h(x)− 2y) dµ0(x, y)− ∫ h(x)(h(x)− 2y) dµ1(x, y)∣∣∣∣ ≤ ∣∣∣∣ ∫∫ h(x)(h(x)− 2y) dµ0(x|y)dµ0(y)− ∫∫ h(x)(h(x)− 2y) dµ0(x|y)dµ1(y)∣∣∣∣ (Triangle inequality) +\n∣∣∣∣ ∫∫ h(x)(h(x)− 2y) dµ1(x|y)dµ1(y)− ∫∫ h(x)(h(x)− 2y) dµ0(x|y)dµ1(y)∣∣∣∣. We upper bound the first term:∣∣∣∣ ∫∫ h(x)(h(x)− 2y) dµ0(x|y) dµ0(y)− ∫∫ h(x)(h(x)− 2y) dµ0(x|y) dµ1(y)∣∣∣∣ ≤ ∫∫ ∣∣h(x)(h(x)− 2y)(dµ0(y)− dµ1(y))∣∣dµ0(x|y)\n≤ ∫ ∣∣ dµ0(y)− dµ1(y)∣∣ ∫ ∣∣ sup\nx h(x) ∣∣∣∣h(x)− 2y∣∣dµ0(x|y) ≤M ∫ ED0 [|h(X)− 2Y ||Y = y]\n∣∣dµ0(y)− dµ1(y)∣∣ (Assumption 2.1) ≤ 3M2\n∫ ∣∣dµ0(y)− dµ1(y)∣∣ (Assumption 2.1) ≤ 6M2dTV(D0(Y ),D1(Y )).\nNote that the last equation follows the definition of total variation distance. For the second term, we have:∣∣∣∣ ∫∫ h(x)(h(x)− 2y) dµ1(x|y) dµ1(y)− ∫∫ h(x)(h(x)− 2y) dµ0(x|y) dµ1(y)∣∣∣∣ ≤ ∣∣∣∣ ∫∫ h2(x)(dµ1(x|y)− dµ0(x|y)) dµ1(y)∣∣∣∣+ ∣∣∣∣ ∫∫ 2y h(x)(dµ1(x|y)− dµ0(x|y)) dµ1(y)∣∣∣∣ (Triangle inequality)\n≤ 3M ED1 [|EDy0 [Ŷ ]− EDy1 [Ŷ ]|]. (Assumption 2.1) To prove the last equation, we first see that:∣∣∣∣ ∫∫ h2(x)(dµ1(x|y)− dµ0(x|y)) dµ1(y)∣∣∣∣ ≤ ∣∣∣∣ ∫∫ ( sup\nx h(x)\n) h(x)(dµ1(x|y)− dµ0(x|y)) dµ1(y) ∣∣∣∣ ≤M\n∫ ∣∣ED0 [h(X)|Y = y]− ED1 [h(X)|Y = y]∣∣dµ1(y) (Assumption 2.1) = M ED1 [|EDy0 [Ŷ ]− EDy1 [Ŷ ]|].\nSimilarly, we also have:∣∣∣∣ ∫∫ 2y h(x)(dµ1(x|y)− dµ0(x|y)) dµ1(y)∣∣∣∣ ≤ 2\n∣∣∣∣ ∫∫ (sup y)h(x)(dµ1(x|y)− dµ0(x|y)) dµ1(y)∣∣∣∣ ≤ 2M\n∫ ∣∣ED0 [h(X)|Y = y]− ED1 [h(X)|Y = y]∣∣ dµ1(y) (Assumption 2.1) = 2M ED1 [|EDy0 [Ŷ ]− EDy1 [Ŷ ]|].\nBy symmetry, we can also see that:\n|ED0 [h2(X)− 2Y h(X)]− ED1 [h2(X)− 2Y h(X)]| ≤ 6M2dTV(D0(Y ),D1(Y )) + 3M ED1 [|EDy0 [Ŷ ]− EDy1 [Ŷ ]|].\nCombine the above two equations yielding:\n|ED0 [h2(X)− 2Y h(X)]− ED1 [h2(X)− 2Y h(X)]| ≤ 6M2dTV(D0(Y ),D1(Y )) + 3M min{ED0 [|EDy0 [Ŷ ]− EDy1 [Ŷ ]|],ED1 [|EDy0 [Ŷ ]− EDy1 [Ŷ ]|]}.\nIncorporating the terms back to the upper bound of the error gap then completes the proof.\nTheorem 3.4. Consider the minimax game in (1). The equilibrium (g∗, f∗) of the game is attained when 1). Z = g∗(X) is independent of A conditioned on Y ; 2). f∗(Z, Y ) = D(A = 1 | Y,Z).\nProof. To prove Theorem 3.4, we first give Proposition A.1.\nProposition A.1. 
For any feature map g : X → Z , assume that F contains all the randomized binary classifiers and F 3 f : Z × Y → A, then minf∈F CED(A ‖ f(g(X), Y )) = H(A | Z, Y ).\nProof. By the definition of cross-entropy loss, we have: CED(A ‖ f) = −ED [I(A = 0) log(1− f(g(X), Y )) + I(A = 1) log(f(g(X), Y ))]\n= −Eg]D [I(A = 0) log(1− f(Z, Y )) + I(A = 1) log(f(Z, Y ))] = −EZ,Y EA|Z,Y [I(A = 0) log(1− f(Z, Y )) + I(A = 1) log(f(Z, Y ))] = −EZ,Y [D(A = 0 | Z, Y ) log(1− f(Z, Y )) +D(A = 1 | Z, Y ) log(f(Z, Y ))] = EZ,Y [DKL(D(A | Z, Y ) ‖ f(Z, Y ))] +H(A | Z, Y ) ≥ H(A | Z, Y ),\nwhere DKL(·‖·) denotes the KL divergence between two distributions. From the above inequality, it is also clear that the minimum value of the cross-entropy loss is achieved when f(Z, Y ) equals the conditional probability D(A = 1 | Z, Y ), i.e., f∗(Z, Y ) = D(A = 1 | Z = g(X), Y ).\nProposition A.1 states that the minimum cross-entropy loss that the discriminator can achieve is H(A | Z, Y ) when f is the conditional distribution D(A = 1 | Z = g(X), Y ). By the basic property of conditional entropy, we have:\nmin f∈F\nCED(A ‖ f(g(X), Y )) = H(A | Z, Y ) = H(A | Y )− I(A;Z | Y ).\nNote that H(A | Y ) is a constant given the distribution D, so the maximization of g is equivalent to the minimization of minZ=g(X) I(A;Z | Y ), and it follows that the optimal strategy for the transformation g is the one that induces conditionally invariant features, e.g., I(A;Z | Y ) = 0. On the other hand, if g∗ plays optimally, then the optimal response of the discriminator f is given by\nf∗(Z, Y ) = D(A = 1 | Z = g∗(X), Y ) = D(A = 1 | Y ).\nTheorem 3.5. Let g∗ := arg mingW1(D0(g(X), Y ),D1(g(X), Y )), then DY0 (Z = g∗(X)) = DY1 (Z = g∗(X)) almost surely.\nProof. By the definition of Wasstertein distance, we have:\nW1(D0(Z, Y ),D1(Z, Y )) = inf γ∈Γ(D0,D1)\n∫ d((z0, y0), (z1, y1)) dγ((z0, y0), (z1, y1))\n= inf γ∈Γ(D0,D1)\n∫∫ d((z0, y0), (z1, y1)) dγ(z0, z1 | y0, y1) dγ(y0, y1)\n= inf γ∈Γ(D0,D1)\n∫∫ ‖z0 − z1‖1 + |y0 − y1|dγ(z0, z1 | y0, y1) dγ(y0, y1)\n≥ inf γ∈Γ(D0,D1)\n∫∫ |y0 − y1|dγ(y0, y1) dγ(z0, z1 | y0, y1)\n= inf γ∈Γ(D0(Y ),D1(Y ))\n∫ |y0 − y1|dγ(y0, y1)\n= W1(D0(Y ),D1(Y )).\nTo finish the proof, next we prove the lower bound is achieved when DY0 (Z = g∗(X)) = DY1 (Z = g∗(X)): it is easy to see W1(DY0 (Z),DY0 (Z)) = ∫ ‖z0 − z1‖1 dγ(z0, z1 | y0, y1) = 0 when the conditional distributions are equal. In this case, when the Wasserstein distance is minimized, then Z is conditionally independent of A given Y ." }, { "heading": "B EXPERIMENTAL DETAILS", "text": "Adult The Adult dataset contains 48,842 examples for income prediction. The task is to predict whether the annual income of an individual is greater or less than 50K/year based on the attributes of the individual, such as education level, age, occupation, etc. In our experiment, we use gender (binary) as the sensitive attribute. The target variable (income) is an ordinal binary variable: 0 if < 50K/year otherwise 1. After data pre-processing, the dataset contains 30,162/15,060 training/test instances where the input dimension of each instance is 113. We show the data distributions for different demographic subgroups in Table 1. To preprocess the dataset, we first filter out the data records that contain the missing values. We then remove the sensitive attribute from the input features and normalize the input features with its means and standard deviations. Note that we use one-hot encoding for the categorical attributes. 
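As a concrete illustration of this pipeline, below is a minimal pandas/scikit-learn sketch of the preprocessing described above (drop records with missing values, remove the sensitive attribute, one-hot encode categorical columns, and normalize). The column names and positive-class strings are placeholders rather than the exact values in our code.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler

def preprocess_adult(df, sensitive="sex", target="income",
                     positive_label=">50K", positive_group="Male"):
    df = df.dropna()                                  # filter out missing values
    y = (df[target] == positive_label).astype(int)    # ordinal binary target
    a = (df[sensitive] == positive_group).astype(int) # binary sensitive attribute
    x = df.drop(columns=[sensitive, target])          # remove sensitive attribute
    x = pd.get_dummies(x)                             # one-hot encode categoricals
    x = StandardScaler().fit_transform(x)             # normalize by mean / std dev
    return x, y.to_numpy(), a.to_numpy()
```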
For our proposed methods, we use a three-layer neural network with ReLU as the activation function of the hidden layers and the sigmoid function as the output function for the prediction task (we take the first two layers as the feature mapping). The number of neurons in the hidden layers is 60. We train the neural networks with the ADADELTA algorithm with the learning rate 0.1 and a batch size of 512. The models are trained in 50 epochs. For the adversary networks in CENET and WASSERSTEINNET, we use a two-layer neural network with ReLU as the activation function. The number of neurons in the hidden layers of the adversary networks is 60. The adversary network in CENET also use sigmoid function as the output function. The weight clipping norm in the adversary\nnetwork of WASSERSTEINNET is 0.005. We use the gradient reversal layer (Ganin et al., 2016) to implement the gradient descent ascent (GDA) algorithm for optimization of the minimax problem since it makes the training process more stable (Daskalakis & Panageas, 2018). For the rest of the datasets we used in our experiments, we also use gradient reversal layer to implement our algorithms. We use the Fairlearn toolkit (Bird et al., 2020) to implement BGL: we use the exponentiated-gradient algorithm with the default setting as the mitigator and vary the upper bound ∈ {0.07, 0.1, 0.2, 0.5} of the bounded group loss constraint. For each value of , we run ten random seeds and compute the means and standard deviations.\nCOMPAS The COMPAS dataset 6,172 instances to predict whether a criminal defendant will recidivate within two years or not. It contains attribute such as age, race, etc. In our experiment, we use race (white or non-white) as the sensitive attribute and recidivism as the target variable. We split the dataset into training and test set with the ratio 7/3. We show the data distributions for different demographic subgroups in Table 2. For all methods, we use a two-layer neural network with ReLU as the activation function of the hidden layers and the sigmoid function as the output function for the prediction task (we take the first layer as the feature mapping). The number of neurons in the hidden layers is 60. We train the neural networks with the ADADELTA algorithm with the learning rate 1.0 and a batch size of 512. The models are trained in 50 epochs. For the adversary networks in CENET and WASSERSTEINNET, we use a two-layer neural network with ReLU as the activation function. The number of neurons in the hidden layers of the adversary networks is 10. The adversary network in CENET also use sigmoid function as the output function. The weight clipping norm in the adversary network of WASSERSTEINNET is 0.05. We use the Fairlearn toolkit to implement BGL: we use the exponentiated-gradient algorithm with the default setting as the mitigator and vary the upper bound ∈ {0.1, 0.2, 0.3, 0.5} of the bounded group loss constraint. For each value of , we run ten random seeds and compute the means and standard deviations. As for COD, we follow the source implementation.2 We use the same hyper-parameter settings as (Komiyama et al., 2018): We use the kernelized optimization with the random Fourier features and the RBF kernel (we vary hyper-parameter of the RBF kernel γ ∈ {0.1, 1.0, 10, 100}) and report the best results with minimal MSE loss for each time we change the fairness budget . 
We also vary ∈ {0.01, 0.1, 0.5, 1.0} and run ten random seeds and compute the means and standard deviations.\nCommunities and Crime The Communities and Crime dataset contains 1,994 examples of socioeconomic, law enforcement, and crime data about communities in the United States. The task is to predict the number of violent crimes per 100K population. All attributes in the dataset have been curated and normalized to [0, 1]. In our experiment, we use race (binary) as the sensitive attribute: 1 if the population percentage of the white is greater or equal to 80% otherwise 0. After data pre-processing, the dataset contains 1,595/399 training/test instances where the input dimension of each instance is 96. We visualize the data distributions for different demographic subgroups in Figure 3b. To preprocess the dataset, we first remove the non-predictive attributes and sensitive attributes from the input features. Note that all features in the dataset have already been normalized in [0, 1] so that we do not perform additional normalization to the features. We then replace the missing values with the mean values of the corresponding attributes. For all methods, we use a two-layer neural network with ReLU as the activation function of the hidden layers and the sigmoid function as the output function for the prediction task (we take the first\n2https://github.com/jkomiyama/fairregresion\nlayer as the feature mapping). The number of neurons in the hidden layers is 50. We train the neural networks with the ADADELTA algorithm with the learning rate 0.1 and a batch size of 256. The models are trained in 100 epochs. For the adversary networks in CENET and WASSERSTEINNET, we use a two-layer neural network with ReLU as the activation function. The number of neurons in the hidden layers of the adversary networks is 100. The adversary network in CENET also use sigmoid function as the output function. The weight clipping norm in the adversary network of WASSERSTEINNET is 0.002. We use the Fairlearn toolkit to implement BGL: we use the exponentiated-gradient algorithm with the default setting as the mitigator and vary the upper bound ∈ {0.01, 0.02, 0.03, 0.05} of the bounded group loss constraint. For each value of , we run ten random seeds and compute the means and standard deviations. As for COD, we follow the same hyper-parameter settings as (Komiyama et al., 2018): We use the kernelized optimization with the random Fourier features and the RBF kernel (we vary hyperparameter of the RBF kernel γ ∈ {0.1, 1.0, 10, 100}) and report the best results with minimal MSE loss for each time we change the fairness budget . The hyper-parameter settings follow from (Komiyama et al., 2018). We also vary ∈ {0.01, 0.1, 0.5, 1.0} and run ten random seeds and compute the means and standard deviations.\nLaw School The Law School dataset contains 1,823 records for law students who took the bar passage study for Law School Admission3. The features in the dataset include variables such as undergraduate GPA, LSAT score, full-time status, family income, gender, etc. In our experiment, we use gender as the sensitive attribute and undergraduate GPA as the target variable. We split the dataset into training and test set with the ratio 8/2. We show the data distributions for different demographic subgroups in Figure 3a. 
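Across the per-dataset setups in this appendix, the minimax problems are optimized via a gradient reversal layer (Ganin et al., 2016), as noted in the Adult setup above. A minimal PyTorch sketch of such a layer follows; it is an illustration, and the released implementation may differ in details.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; scales gradients by -lam in the
    backward pass, so a single backward call trains the discriminator to
    minimize its loss while training the encoder to maximize it."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# Usage: feed reversed features to the adversary, e.g.
#   adv_logits = discriminator(grad_reverse(features, lam))
```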
For all methods, we use a two-layer neural network with ReLU as the activation function of the hidden layers and the sigmoid function as the output function for the prediction task (we take the first layer as the feature mapping). The number of neurons in the hidden layers is 10. We train the neural networks with the ADADELTA algorithm with the learning rate 0.1 and a batch size of 256. The models are trained in 100 epochs. For the adversary networks in CENET and WASSERSTEINNET, we use a two-layer neural network with ReLU as the activation function. The number of neurons in the hidden layers of the adversary networks is 10. The adversary network in CENET also use sigmoid function as the output function. The weight clipping norm in the adversary network of WASSERSTEINNET is 0.2. We use the Fairlearn toolkit to implement BGL: we use the exponentiated-gradient algorithm with the default setting as the mitigator and vary the upper bound ∈ {0.01, 0.02, 0.03, 0.05} of the bounded group loss constraint. For each value of , we run ten random seeds and compute the means and standard deviations.\n3We use the edited public version of the dataset which can be download here: https://github.com/ algowatchpenn/GerryFair/blob/master/dataset/lawschool.csv\nAs for COD, we follow the same hyper-parameter settings as (Komiyama et al., 2018): We use the kernelized optimization with the random Fourier features and the RBF kernel (we vary hyperparameter of the RBF kernel γ ∈ {0.1, 1.0, 10, 100}) and report the best results with minimal MSE loss for each time we change the fairness budget . The hyper-parameter settings follow from (Komiyama et al., 2018). We also vary ∈ {0.01, 0.1, 0.5, 1.0} and run ten random seeds and compute the means and standard deviations." }, { "heading": "C ADDITIONAL EXPERIMENTAL RESULTS AND ANALYSES", "text": "In this section, we provide additional experimental results and analyses.\nC.1 IMPACT OF FAIRNESS TRADE-OFF PARAMETERS\nWe present additional experimental results and analyses to gain more insights into how the fairness trade-off parameters (e.g., λ and ) affect the performance of the model predictive performance and accuracy disparity in each methods.\nTable 3 shows R2 regression scores and error gaps when λ changes in CENET and WASSERSTEINNET. We see that the error gap gradually decreases with the increase of the trade-off parameter λ in most scenarios with small accuracy loss (except for CENET in Adult dataset and Crime dataset when λ is large), which demonstrates the overall effectiveness of our proposed algorithms. Plus, the increase of λ generally leads to the instability of training processes with larger variances of both values of R2 and error gap. In contrast to WASSERSTEINNET, CENET outperforms in mitigating the accuracy disparity while achieving similar or better accuracy in COMPAS and Law dataset. In Adult and Crime dataset, when λ is small, CENET also does better in reducing the error gap than WASSERSTEINNET with similar accuracy loss. The results follow the fact that minimizing total variation distance between two continuous distributions ensures the minimization of Wasserstein distance (Gibbs & Su, 2002). However, when λ increases, WASSERSTEINNET achieves better accuracy and performance disparity trade-off while CENET suffers significant accuracy loss and may fail to decrease the error gap. It is not surprising since the estimation of total variation in minimax optimization could lead to an unstable training process (Arjovsky & Bottou, 2017; Arjovsky et al., 2017). 
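For reference, the two quantities reported in Tables 3-5 (and throughout Section 4) can be computed per experiment as in the short sketch below, assuming numpy arrays for the targets, predictions, and binary group labels.

```python
import numpy as np
from sklearn.metrics import r2_score

def r2_and_error_gap(y_true, y_pred, a):
    """R^2 on the full population and the error gap
    |Err_D0(h) - Err_D1(h)| between the two groups."""
    mse0 = np.mean((y_true[a == 0] - y_pred[a == 0]) ** 2)
    mse1 = np.mean((y_true[a == 1] - y_pred[a == 1]) ** 2)
    return r2_score(y_true, y_pred), abs(mse0 - mse1)
```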
Table 4 shows R2 regression scores and error gaps when ε changes in BGL. We see that with the decrease of the trade-off parameter ε, both the values of R2 and the error gaps decrease. This is because when the upper bound ε of BGL is small, the accuracy disparity is also mitigated. When ε is above/below a certain threshold, the values of R2 and the error gaps then increase/decrease. It is also worth noting that the exponentiated-gradient approach used to solve BGL does not introduce randomness during optimization.\nTable 5 shows R2 regression scores and error gaps when ε changes in COD. We see that with the decrease of the trade-off parameter ε, both the values of R2 and the error gaps decrease. It is worth noting that the optimization of the QCQP used to solve COD does not introduce randomness; the only randomness on the COMPAS dataset comes from the random Fourier features, whose use in prediction achieves the best performance on COMPAS.\nC.2 VISUALIZATION OF TRAINING PROCESSES\nWe visualize the training processes of our proposed methods CENET and WASSERSTEINNET on the Adult dataset and the COMPAS dataset in Figure 4 and Figure 5, respectively. We also compare their training dynamics with those of a model that solely minimizes the MSE loss (i.e., λ = 0), which we term NO DEBIAS.\nIn Figure 4 and Figure 5, we can see that as training progresses, the MSE losses on both datasets decrease and finally converge. However, the training dynamics of the error gaps are much more complex, even in the NO DEBIAS case. Before convergence, the training dynamics of the error gaps differ across datasets. Our methods push the models to converge to points where the error gaps are smaller while preserving the models' predictive performance. It is also worth noting that the minimax optimization makes the training processes somewhat unstable." } ]
2020
null
SP:e1b814eef558840aef2fba9092482c1b09b1ef30
[ "This paper works on the problem of collaborative learning while preserving both confidentiality and privacy of the data points. It combines techniques from secure multi-party computation and differential privacy for the same, and improves on confidential inference and PATE in the process. The new technique is called CaPC. Finally, it states empirical results as evidence for the improved accuracy.", "The authors combine several cryptographic techniques to create a federated systems that allows several entities to run classification against all the model held be the participants without revealing information in the process. In particular, the sample to be classified is not revealed to any other party, and differential privacy is used to protect the training data that was used to train the models. A central semi-honest coordinator is used to aggregate the results and add the differential privacy without learning any private information." ]
Machine learning benefits from large training datasets, which may not always be possible to collect by any single entity, especially when using privacy-sensitive data. In many contexts, such as healthcare and finance, separate parties may wish to collaborate and learn from each other’s data but are prevented from doing so due to privacy regulations. Some regulations prevent explicit sharing of data between parties by joining datasets in a central location (confidentiality). Others also limit implicit sharing of data, e.g., through model predictions (privacy). There is currently no method that enables machine learning in such a setting, where both confidentiality and privacy need to be preserved, to prevent both explicit and implicit sharing of data. Federated learning only provides confidentiality, not privacy, since gradients shared still contain private information. Differentially private learning assumes unreasonably large datasets. Furthermore, both of these learning paradigms produce a central model whose architecture was previously agreed upon by all parties rather than enabling collaborative learning where each party learns and improves their own local model. We introduce Confidential and Private Collaborative (CaPC) learning, the first method provably achieving both confidentiality and privacy in a collaborative setting. We leverage secure multiparty computation (MPC), homomorphic encryption (HE), and other techniques in combination with privately aggregated teacher models. We demonstrate how CaPC allows participants to collaborate without having to explicitly join their training sets or train a central model. Each party is able to improve the accuracy and fairness of their model, even in settings where each party has a model that performs well on their own dataset or when datasets are not IID and model architectures are heterogeneous across parties.1
[ { "affiliations": [], "name": "Christopher A. Choquette-Choo" }, { "affiliations": [], "name": "Natalie Dullerud" }, { "affiliations": [], "name": "Adam Dziedzic" }, { "affiliations": [], "name": "Yunxiang Zhang" }, { "affiliations": [], "name": "Somesh Jha" }, { "affiliations": [], "name": "Xiao Wang" } ]
[ { "authors": [ "Martin Abadi", "Andy Chu", "Ian Goodfellow", "H Brendan McMahan", "Ilya Mironov", "Kunal Talwar", "Li Zhang" ], "title": "Deep learning with differential privacy", "venue": "In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security,", "year": 2016 }, { "authors": [ "Fabian Boemer", "Yixing Lao", "Rosario Cammarota", "Casimir Wierzynski" ], "title": "Ngraph-he: A graph compiler for deep learning on homomorphically encrypted data", "venue": "In Proceedings of the 16th ACM International Conference on Computing Frontiers,", "year": 2019 }, { "authors": [ "Fabian Boemer", "Rosario Cammarota", "Daniel Demmler", "Thomas Schneider", "Hossein" ], "title": "Yalame. MP2ML: a mixed-protocol machine learning framework for private inference", "venue": "ARES 2020: The 15th International Conference on Availability, Reliability and Security, Virtual Event,", "year": 2020 }, { "authors": [ "Zvika Brakerski", "Craig Gentry", "Vinod Vaikuntanathan" ], "title": "leveled) fully homomorphic encryption without bootstrapping", "venue": "ACM Transactions on Computation Theory (TOCT),", "year": 2014 }, { "authors": [ "Nicholas Carlini", "Chang Liu", "Úlfar Erlingsson", "Jernej Kos", "Dawn Song" ], "title": "The secret sharer: Evaluating and testing unintended memorization in neural networks", "venue": "In 28th {USENIX} Security Symposium ({USENIX} Security", "year": 2019 }, { "authors": [ "Nitesh V Chawla", "Kevin W Bowyer", "Lawrence O Hall", "W Philip Kegelmeyer" ], "title": "Smote: synthetic minority over-sampling technique", "venue": "Journal of artificial intelligence research,", "year": 2002 }, { "authors": [ "Jung Hee Cheon", "Andrey Kim", "Miran Kim", "Yongsoo Song" ], "title": "Homomorphic encryption for arithmetic of approximate numbers", "venue": "In International Conference on the Theory and Application of Cryptology and Information Security,", "year": 2017 }, { "authors": [ "Cynthia Dwork", "Frank McSherry", "Kobbi Nissim", "Adam Smith" ], "title": "Calibrating noise to sensitivity in private data analysis", "venue": "In Theory of cryptography conference,", "year": 2006 }, { "authors": [ "Cynthia Dwork", "Guy N Rothblum", "Salil Vadhan" ], "title": "Boosting and differential privacy", "venue": "IEEE 51st Annual Symposium on Foundations of Computer Science,", "year": 2010 }, { "authors": [ "David Evans", "Yan Huang", "Jonathan Katz", "Lior Malka" ], "title": "Efficient privacy-preserving biometric identification", "venue": "In Proceedings of the 17th conference Network and Distributed System Security Symposium, NDSS,", "year": 2011 }, { "authors": [ "Reza Zanjirani Farahani", "Masoud Hekmatfar" ], "title": "Facility location: concepts, models, algorithms and case studies", "venue": null, "year": 2009 }, { "authors": [ "Craig Gentry" ], "title": "A fully homomorphic encryption scheme, volume 20", "venue": "Stanford university Stanford,", "year": 2009 }, { "authors": [ "Ran Gilad-Bachrach", "Nathan Dowlin", "Kim Laine", "Kristin Lauter", "Michael Naehrig", "John Wernsing" ], "title": "Cryptonets: Applying neural networks to encrypted data with high throughput and accuracy", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Moritz Hardt", "Eric Price", "Nati Srebro" ], "title": "Equality of opportunity in supervised learning", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Yangsibo Huang", "Zhao Song", "Danqi Chen", "Kai Li", "Sanjeev Arora" ], "title": "Texthide: 
Tackling data privacy in language understanding tasks", "venue": "arXiv preprint 2010.06053,", "year": 2020 }, { "authors": [ "Yangsibo Huang", "Zhao Song", "Kai Li", "Sanjeev Arora" ], "title": "Instahide: Instance-hiding schemes for private distributed learning", "venue": "arXiv preprint 2010.02772,", "year": 2020 }, { "authors": [ "Yangsibo Huang", "Yushan Su", "Sachin Ravi", "Zhao Song", "Sanjeev Arora", "Kai Li" ], "title": "Privacy-preserving learning via deep net pruning", "venue": "arXiv preprint 2003.01876,", "year": 2020 }, { "authors": [ "Yuval Ishai", "Joe Kilian", "Kobbi Nissim", "Erez Petrank" ], "title": "Extending oblivious transfers efficiently", "venue": "In Annual International Cryptology Conference,", "year": 2003 }, { "authors": [ "Chiraag Juvekar", "Vinod Vaikuntanathan", "Anantha Chandrakasan" ], "title": "Gazelle: A low latency framework for secure neural network inference", "venue": "In 27th USENIX Security Symposium (USENIX Security", "year": 2018 }, { "authors": [ "Jakub Konečnỳ", "H Brendan McMahan", "Felix X Yu", "Peter Richtárik", "Ananda Theertha Suresh", "Dave Bacon" ], "title": "Federated learning: Strategies for improving communication efficiency", "venue": "arXiv preprint arXiv:1610.05492,", "year": 2016 }, { "authors": [ "David D. Lewis", "William A. Gale" ], "title": "A sequential algorithm for training text classifiers", "venue": "In Proceedings of the 17th Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval. Dublin, Ireland,", "year": 1994 }, { "authors": [ "Dahlia Malkhi", "Noam Nisan", "Benny Pinkas", "Yaron Sella" ], "title": "Fairplay—a secure two-party computation system", "venue": "In Proceedings of the 13th Conference on USENIX Security Symposium - Volume 13,", "year": 2004 }, { "authors": [ "Alessandro Mantelero" ], "title": "The eu proposal for a general data protection regulation and the roots of the ‘right to be forgotten", "venue": "Computer Law & Security Review,", "year": 2013 }, { "authors": [ "H Brendan McMahan", "Daniel Ramage", "Kunal Talwar", "Li Zhang" ], "title": "Learning differentially private recurrent language models", "venue": "arXiv preprint arXiv:1710.06963,", "year": 2017 }, { "authors": [ "Ilya Mironov" ], "title": "Rényi differential privacy", "venue": "IEEE 30th Computer Security Foundations Symposium (CSF),", "year": 2017 }, { "authors": [ "Pratyush Mishra", "Ryan Lehmkuhl", "Akshayaram Srinivasan", "Wenting Zheng", "Raluca Ada Popa" ], "title": "Delphi: A cryptographic inference service for neural networks", "venue": "In 29th USENIX Security Symposium (USENIX Security", "year": 2020 }, { "authors": [ "Payman Mohassel", "Yupeng Zhang" ], "title": "Secureml: A system for scalable privacy-preserving machine learning", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2017 }, { "authors": [ "Nicolas Papernot", "Martín Abadi", "Úlfar Erlingsson", "Ian J. 
Goodfellow", "Kunal Talwar" ], "title": "Semisupervised knowledge transfer for deep learning from private training data", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Nicolas Papernot", "Shuang Song", "Ilya Mironov", "Ananth Raghunathan", "Kunal Talwar", "Úlfar Erlingsson" ], "title": "Scalable private learning with PATE", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Benny Pinkas", "Mike Rosulek", "Ni Trieu", "Avishay Yanai" ], "title": "Psi from paxos: Fast, malicious private set intersection", "venue": "In Annual International Conference on the Theory and Applications of Cryptographic Techniques,", "year": 2020 }, { "authors": [ "Michael O. Rabin" ], "title": "How to exchange secrets with oblivious transfer", "venue": "Cryptology ePrint Archive,", "year": 2005 }, { "authors": [ "Tobias Scheffer", "Christian Decomain", "Stefan Wrobel" ], "title": "Active hidden markov models for information extraction", "venue": "In International Symposium on Intelligent Data Analysis,", "year": 2001 }, { "authors": [ "Ozan Sener", "Silvio Savarese" ], "title": "Active learning for convolutional neural networks: A core-set approach", "venue": "arXiv preprint arXiv:1708.00489,", "year": 2017 }, { "authors": [ "Burr Settles" ], "title": "Active learning literature survey", "venue": "Computer Sciences Technical Report 1648,", "year": 2009 }, { "authors": [ "Adi Shamir" ], "title": "How to share a secret", "venue": "Communications of the ACM,", "year": 1979 }, { "authors": [ "Claude E Shannon" ], "title": "A mathematical theory of communication", "venue": "Bell system technical journal,", "year": 1948 }, { "authors": [ "Micah J. Sheller", "Brandon Edwards", "G. Anthony Reina", "Jason Martin", "Sarthak Pati", "Aikaterini Kotrotsou", "Mikhail Milchenko", "Weilin Xu", "Daniel Marcus", "Rivka R. Colen", "Spyridon Bakas" ], "title": "Federated learning in medicine: facilitating multi-institutional collaborations without sharing patient data", "venue": "Scientific Reports,", "year": 2020 }, { "authors": [ "Reza Shokri", "Marco Stronati", "Congzheng Song", "Vitaly Shmatikov" ], "title": "Membership inference attacks against machine learning models", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2017 }, { "authors": [ "Xiao Wang", "Alex J. Malozemoff", "Jonathan Katz" ], "title": "EMP-toolkit: Efficient MultiParty computation toolkit", "venue": "https://github.com/emp-toolkit,", "year": 2016 }, { "authors": [ "Andrew Chi-Chih Yao" ], "title": "How to generate and exchange secrets (extended", "venue": "Annual Symposium on Foundations of Computer Science,", "year": 1986 }, { "authors": [ "2020). Gazelle" ], "title": "Delphi and MP2ML largely support non-polynomial activation functions encountered in convolutional neural networks, such as maximum pooling and rectified linear unit (ReLU) operations. Gazelle introduced several improvements over previous methods for secure NN inference primarily relating to latency and confidentiality", "venue": null, "year": 2020 }, { "authors": [ "logE[exp(λCM(aux" ], "title": "d′))] is obtained by taking the logarithm of the privacy loss random variable. As a natural relaxation to the conventional (ε, δ)-differential privacy, Rényi differential privacy (RDP) (Mironov, 2017) provides a more convenient and accurate approach to estimating privacy loss under heterogeneous composition", "venue": null, "year": 2017 } ]
[ { "heading": null, "text": "Machine learning benefits from large training datasets, which may not always be possible to collect by any single entity, especially when using privacy-sensitive data. In many contexts, such as healthcare and finance, separate parties may wish to collaborate and learn from each other’s data but are prevented from doing so due to privacy regulations. Some regulations prevent explicit sharing of data between parties by joining datasets in a central location (confidentiality). Others also limit implicit sharing of data, e.g., through model predictions (privacy). There is currently no method that enables machine learning in such a setting, where both confidentiality and privacy need to be preserved, to prevent both explicit and implicit sharing of data. Federated learning only provides confidentiality, not privacy, since gradients shared still contain private information. Differentially private learning assumes unreasonably large datasets. Furthermore, both of these learning paradigms produce a central model whose architecture was previously agreed upon by all parties rather than enabling collaborative learning where each party learns and improves their own local model. We introduce Confidential and Private Collaborative (CaPC) learning, the first method provably achieving both confidentiality and privacy in a collaborative setting. We leverage secure multiparty computation (MPC), homomorphic encryption (HE), and other techniques in combination with privately aggregated teacher models. We demonstrate how CaPC allows participants to collaborate without having to explicitly join their training sets or train a central model. Each party is able to improve the accuracy and fairness of their model, even in settings where each party has a model that performs well on their own dataset or when datasets are not IID and model architectures are heterogeneous across parties.1" }, { "heading": "1 INTRODUCTION", "text": "The predictions of machine learning (ML) systems often reveal private information contained in their training data (Shokri et al., 2017; Carlini et al., 2019) or test inputs. Because of these limitations, legislation increasingly regulates the use of personal data (Mantelero, 2013). The relevant ethical\n∗Equal contributions, authors ordered alphabetically. †Work done while the author was at Vector Institute. ‡Equal contributions, authors ordered alphabetically. 1Code is available at: https://github.com/cleverhans-lab/capc-iclr.\nto evaluate Enc(q) onMi and outputs encrypted logits Enc(ri). 1b Each answering party, Pi, generates a random vector r̂i, and sends Enc(ri − r̂i) to the querying party, Pi∗ , who decrypts to get ri − r̂i. 1c Each answering party Pi runs Yao’s garbled circuit protocol (Yi) with querying party Pi∗ to get si for Pi∗ and ŝi for Pi s.t. si + ŝi is the one-hot encoding of argmax of logits. 2 Each answering party sends ŝi to the privacy guardian (PG). The PG sums ŝi from each Pi and adds Laplacian or Gaussian noise for DP. The querying party sums si from each Yi computation. 3 The PG and the querying party run Yao’s garbled circuit Ys to obtain argmax of querying party and PG’s noisy share. 
The label is output to the querying party.\nconcerns prompted researchers to invent ML algorithms that protect the privacy of training data and confidentiality of test inputs (Abadi et al., 2016; Konečnỳ et al., 2016; Juvekar et al., 2018).\nYet, these algorithms require a large dataset stored either in a single location or distributed amongst billions of participants. This is the case for example with federated learning (McMahan et al., 2017). Prior algorithms also assume that all parties are collectively training a single model with a fixed architecture. These requirements are often too restrictive in practice. For instance, a hospital may want to improve a medical diagnosis for a patient using data and models from other hospitals. In this case, the data is stored in multiple locations, and there are only a few parties collaborating. Further, each party may also want to train models with different architectures that best serve their own priorities.\nWe propose a new strategy that lets fewer heterogeneous parties learn from each other collaboratively, enabling each party to improve their own local models while protecting the confidentiality and privacy of their data. We call this Confidential and Private Collaborative (CaPC) learning.\nOur strategy improves on confidential inference (Boemer, 2020) and PATE, the private aggregation of teacher ensembles (Papernot et al., 2017). Through structured applications of these two techniques, we design a strategy for inference that enables participants to operate an ensemble of heterogeneous models, i.e. the teachers, without having to explicitly join each party’s data or teacher model at a single location. This also gives each party control at inference, because inference requires the agreement and participation of each party. In addition, our strategy provides measurable confidentiality and privacy guarantees, which we formally prove. We use the running example of a network of hospitals to illustrate our approach. The hospitals participating in CaPC protocol need guarantees on both confidentiality (i.e., data from a hospital can only be read by said hospital) and privacy (i.e., no hospital can infer private information about other hospitals’ data by observing their predictions).\nFirst, one hospital queries all the other parties over homomorphic encryption (HE), asking them to label an encrypted input using their own teacher models. This can prevent the other hospitals from reading the input (Boemer et al., 2019), an improvement over PATE, and allows the answering hospitals to provide a prediction to the querying hospital without sharing their teacher models.\nThe answering hospitals use multi-party computation (MPC) to compute an aggregated label, and add noise during the aggregation to obtain differential privacy guarantees (Dwork et al., 2014). This is achieved by a privacy guardian (PG), which then relays the aggregated label to the querying hospital. The PG only needs to be semi-trusted: we operate under the honest-but-curious assumption. The use of MPC ensures that the PG cannot decipher each teacher model’s individual prediction, and the noise added via noisy argmax mechanism gives differential privacy even when there are few participants.\nThis is a significant advantage over prior decentralized approaches like federated learning, which require billions of participants to achieve differential privacy, because the sensitivity of the histogram used in our aggregation is lower than that of the gradients aggregated in federated learning. 
Unlike our approach, prior efforts involving few participants thus had to prioritize model utility over privacy and only guarantee confidentiality (Sheller et al., 2020).\nFinally, the querying hospital can learn from this confidential and private label to improve their local model. Since the shared information is a label rather than a gradient, as used by federated learning, CaPC participants do not need to share a common model architecture; in fact, their architectures can vary throughout the participation in the protocol. This favors model development to a degree which is not possible in prior efforts such as federated learning.\nWe show how participants can instantiate various forms of active and online learning with the labels returned by our protocol: each party participating in the CaPC protocol may (a) identify deficiencies of its model throughout its deployment and (b) finetune the model with labels obtained by interacting with other parties. Intuitively, we achieve the analog of a doctor querying colleagues for a second opinion on a difficult diagnostic, without having to reveal the patient’s medical condition. This protocol leads to improvements in both the accuracy and fairness (when there is a skew in the data distribution of each participating hospital) of model predictions for each of the CaPC participants.\nTo summarize, our contributions are the following:\n• We introduce CaPC learning: a confidential and private collaborative learning platform that provides both confidentiality and privacy while remaining agnostic to ML techniques. • Through a structured application of homomorphic encryption, secure MPC, and private\naggregation, we design a protocol for CaPC. We use two-party deep learning inference and design an implementation of the noisy argmax mechanism with garbled circuits. • Our experiments on SVHN and CIFAR10 demonstrate that CaPC enables participants to\ncollaborate and improve the utility of their models, even in the heterogeneous setting where the architectures of their local models differ, and when there are only a few participants. • Further, when the distribution of data drifts across participating parties, we show that CaPC\nsignificantly improves fairness metrics because querying parties benefit from knowledge learned by other parties on different data distributions, which is distilled in their predictions. • We release the source code for reproducing all our experiments." }, { "heading": "2 BACKGROUND", "text": "Before introducing CaPC, we first go over elements of cryptography and differential privacy that are required to understand it. Detailed treatment of these topics can be found in Appendices A and B." }, { "heading": "2.1 CRYPTOGRAPHIC PRELIMINARIES FOR CONFIDENTIALITY", "text": "The main cryptographic tool used in CaPC is secure multi-party computation (MPC) (Yao, 1986). MPC allows a set of distrusting parties to jointly evaluate a function on their input without revealing anything beyond the output. In general, most practical MPC protocols can be classified into two categories: 1) generic MPC protocols that can compute any function with the above security goal (Malkhi et al., 2004); and 2) specialized MPC protocols that can be used to compute only selected functions (e.g., private set intersection (Pinkas et al., 2020), secure machine learning (Mohassel & Zhang, 2017)). Although specialized MPC protocols are less general, they are often more efficient in execution time. 
Protocols in both categories use similar cryptographic building blocks, including (fully) homomorphic encryption (Gentry, 2009), secret sharing (Shamir, 1979), oblivious transfer (Rabin, 2005), garbled circuits (Yao, 1986). To understand our protocol, it is not necessary to know all details about these cryptographic building blocks and thus we describe them in Appendix A.1. Our work uses these cryptographic preliminaries for secure computation at prediction time, unlike recent approaches, which explore new methods to achieving confidentiality at training time (Huang et al., 2020a;b).\nThe cryptographic protocol designed in this paper uses a specialized MPC protocol for securely evaluating a private ML model on private data, and a generic two-party computation protocol to compute an argmax in different forms. For the generic two-party computation, we use a classical Yao’s\ngarbled-circuit protocol that can compute any function in Boolean circuit. For secure classification of neural networks, our protocol design is flexible to work with most existing protocols (Boemer et al., 2020; 2019; Gilad-Bachrach et al., 2016; Mishra et al., 2020). Most existing protocols are different in how they handle linear layers (e.g. convolution) and non-linear layers (e.g. ReLU). For instance, one can perform all computations using a fully homomorphic encryption scheme resulting in low communication but very high computation, or using classical MPC techniques with more communication but less computation. Other works (Juvekar et al., 2018) use a hybrid of both and thus enjoy further improvement in performance (Mishra et al., 2020). We discuss it in more details in Appendix A.2." }, { "heading": "2.2 DIFFERENTIAL PRIVACY", "text": "Differential privacy is the established framework for measuring the privacy leakage of a randomized algorithm (Dwork et al., 2006). In the context of machine learning, it requires the training algorithm to produce statistically indistinguishable outputs on any pair of datasets that only differ by one data point. This implies that an adversary observing the outputs of the training algorithm (e.g., the model’s parameters, or its predictions) can improve its guess at most by a bounded probability when inferring properties of the training data points. Formally, we have the following definition. Definition 1 (Differential Privacy). A randomized mechanism M with domain D and range R satisfies (ε, δ)-differential privacy if for any subset S ⊆ R and any adjacent datasets d, d′ ∈ D, i.e. ‖d− d′‖1 ≤ 1, the following inequality holds:\nPr [M(d) ∈ S] ≤ eεPr [M(d′) ∈ S] + δ (1)\nIn our work, we obtain differential privacy by post-processing the outputs of an ensemble of models with the noisy argmax mechanism of Dwork et al. (2014) (for more details on differential privacy, please refer to Appendix B), à la PATE (Papernot et al., 2017). We apply the improved analysis of PATE (Papernot et al., 2018) to compute the privacy guarantees obtained (i.e., a bound on ε). Our technique differs from PATE in that each of the teacher models is trained by different parties whereas PATE assumes a centralized learning setting where all of the training and inference is performed by a single party. Note that our technique is used at inference time, which differs from recent works in differential privacy that compare neuron pruning during training with mechanisms satisfying differential privacy (Huang et al., 2020c). We use cryptography to securely decentralize computations." 
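For intuition, the following is a plaintext sketch of the noisy argmax aggregation borrowed from PATE: `votes` collects one one-hot prediction per teacher, and `sigma` plays the role of the Gaussian noise scale used in our experiments. The cryptographic protocol of the next section computes the same quantity without ever materializing the individual votes in the clear.

```python
import numpy as np

def noisy_argmax(votes, sigma, rng=None):
    """votes: (num_teachers, num_classes) array of one-hot teacher predictions.
    Returns the label of the noisy vote histogram (Gaussian mechanism)."""
    if rng is None:
        rng = np.random.default_rng()
    hist = votes.sum(axis=0)                       # per-class vote counts
    hist = hist + rng.normal(0.0, sigma, size=hist.shape)
    return int(np.argmax(hist))

# Example: 250 teachers, 10 classes, sigma = 40 as in our SVHN experiments.
rng = np.random.default_rng(0)
preds = rng.integers(0, 10, size=250)              # simulated teacher labels
votes = np.eye(10)[preds]
print(noisy_argmax(votes, sigma=40.0, rng=rng))
```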
}, { "heading": "3 THE CAPC PROTOCOL", "text": "We now introduce our protocol for achieving both confidentiality and privacy in collaborative (CaPC) learning. To do so, we formalize and generalize our example of collaborating hospitals from Section 1." }, { "heading": "3.1 PROBLEM DESCRIPTION", "text": "A small number of parties {Pi}i∈[1,K], each holding a private dataset Di = {(xj , yj or∅)j∈[1,Ni]} and capable of fitting a predictive modelMi to it, wish to improve the utility of their individual models via collaboration. Due to the private nature of the datasets in question, they cannot directly share data or by-products of data (e.g., model weights) with each other. Instead, they will collaborate by querying each other for labels of the inputs about which they are uncertain. In the active learning paradigm, one party Pi∗ poses queries in the form of data samples x and all the other parties {Pi}i 6=i∗ together provide answers in the form of predicted labels ŷ. Each model {Mi}i∈[1,K] can be exploited in both the querying phase and the answering phase, with the querying party alternating between different participants {Pi}i∈[1,K] in the protocol.\nThreat Model. To obtain the strong confidentiality and privacy guarantees that we described, we require a semi-trusted third party called the privacy guardian (PG). We assume that the PG does not collude with any party and that the adversary can corrupt any subset of C parties {Pi}i∈[1,C]. When more than one party gets corrupted, this has no impact on the confidentiality guarantee, but the privacy budget obtained will degrade by a factor proportional to C because the sensitivity of the aggregation mechanism increases (see Section 3.3). We work in the honest-but-curious setting, a commonly adopted assumption in cryptography which requires the adversary to follow the protocol description correctly but will try to infer information from the protocol transcript." }, { "heading": "3.2 CAPC PROTOCOL DESCRIPTION", "text": "Our protocol introduces a novel formulation of the private aggregation of teachers, which implements two-party confidential inference and secret sharing to improve upon the work of Papernot et al. (2017) and guarantee confidentiality. Recall that the querying party Pi∗ initiates the protocol by sending an encrypted input x to all answering parties Pi, i 6= i∗. We use sk and pk to denote the secret and public keys owned by party Pi∗ . The proposed protocol consists of the following steps:\n1. For each i 6= i∗, Pi (with model parametersMi as its input) and Pi∗ (with x, sk, pk as its input) run a secure two-party protocol. As the outcome, Pi obtains ŝi and Pi∗ obtains si such that si + ŝi = OneHot(arg max(ri)) where ri are the predicted logits. This step could be achieved by the following:\na) Pi∗ and Pi run a secure two-party ML classification protocol such that Pi∗ learns nothing while Pi learns Encpk(ri), where ri are the predicted logits. b) Pi generates a random vector r̂i , performs the following computation on the encrypted data Encpk(ri)− Encpk(r̂i) = Encpk(ri − r̂i), and sends the encrypted difference to Pi∗ , who decrypts and obtains (ri − r̂i). c) Pi (with r̂i as input) and Pi∗ (with ri − r̂i as input) engage in Yao’s two-party garbledcircuit protocol to obtain vector si for Pi∗ and vector ŝi for Pi, such that si + ŝi = OneHot(arg max(ri)).\n2. Pi sends ŝi to the PG. 
The PG computes ŝ = ∑i≠i∗ ŝi + DPNoise(·), where DPNoise(·) is element-wise Laplacian or Gaussian noise whose variance is calibrated to obtain a desired differential privacy guarantee ε; whereas Pi∗ computes s = ∑i≠i∗ si.\n3. The PG and Pi∗ engage in Yao’s two-party garbled-circuit protocol for computing the argmax: Pi∗ gets argmax(ŝ + s) and the PG gets nothing.\nNext, we elaborate on the confidentiality and privacy guarantees achieved by CaPC." }, { "heading": "3.3 CONFIDENTIALITY AND DIFFERENTIAL PRIVACY GUARANTEES", "text": "Confidentiality Analysis. We prove in Appendix E that the above protocol reveals nothing to Pi or the PG and only reveals the final noisy results to Pi∗. The protocol is secure against a semi-honest adversary corrupting any subset of parties. Intuitively, the proof can be derived directly from the security of the underlying components, including the two-party classification protocol, secret sharing, and Yao’s garbled-circuit protocol. As discussed in Section 4.1 and Appendix A.1, for secret sharing of unbounded integers, we need to make sure the random padding is picked from a domain much larger than the maximum possible value being shared. Given the above, a corrupted Pi∗ cannot learn anything about Mi of the honest party due to the confidentiality guarantee of the secure classification protocol; similarly, the confidentiality of x against a corrupted Pi is also protected. Intermediate values are all secretly shared (and only recovered within garbled circuits), so they are not visible to any party.\nDifferential Privacy Analysis. Here, any potential privacy leakage in terms of differential privacy is incurred by the answering parties {Pi}i≠i∗ for their datasets {Di}i≠i∗, because these parties share the predictions of their models. Before sharing these predictions with Pi∗, we follow the PATE protocol: we compute the histogram of label counts ŷ, then add Laplacian or Gaussian noise using a sensitivity of 1, and finally return the argmax of the noisy histogram ŷσ to Pi∗. Since Pi∗ only sees this noisily aggregated label, both the data-dependent and data-independent differential privacy analyses of PATE apply to Pi∗ (Papernot et al., 2017; 2018). Thus, when there are enough parties with high consensus, we can obtain a tighter bound on the privacy budget, as the true plurality will more likely be returned (refer to Appendix B for more details on how this is achieved in PATE). This setup assumes that only one answering party can be corrupted. If instead C parties are corrupted, the sensitivity of the noisy aggregation mechanism will be scaled by C and the privacy guarantee will deteriorate. There is no privacy leakage to the PG; it does not receive any part of the predictions from {Pi}i≠i∗." }, { "heading": "4 EXPERIMENTS", "text": "CaPC aims to improve the model utility of collaborating parties by providing them with new labelled data for training their respective local models. Since we designed the CaPC protocol with techniques for confidentiality (i.e., confidential inference and secret sharing) and differential privacy (i.e., private aggregation), our experiments consider the following three major dimensions:\n1. How well does collaboration improve the model utility of all participating parties?\n2. What requirements are there to achieve privacy and how can these be relaxed under different circumstances? What is the trade-off between the privacy and utility provided by CaPC?\n3. What is the resulting computational cost for ensuring confidentiality?"
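To make the secret-shared aggregation in steps 2–3 concrete, the following plaintext mock shows what the cryptographic machinery computes; in the real protocol the shares live in a prime field and are never combined in the clear, and all names and constants here are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
C, K = 10, 50  # number of classes and of answering parties (illustrative)

# Each answering party holds a one-hot vote split into additive shares:
# s_i + s_hat_i = OneHot(argmax(r_i)). Here we fabricate the votes directly.
votes = np.eye(C)[rng.integers(0, C, size=K)]
s_hat = rng.normal(size=(K, C))   # shares forwarded to the PG
s = votes - s_hat                 # complementary shares held by the querier

# Step 2: the PG sums its shares and adds the calibrated DP noise,
# while the querying party sums its own shares.
pg_sum = s_hat.sum(axis=0) + rng.normal(0.0, 40.0, size=C)
query_sum = s.sum(axis=0)

# Step 3: a garbled circuit would reveal only this argmax to the querier.
label = int(np.argmax(pg_sum + query_sum))
```

Because pg_sum + query_sum equals the vote histogram plus noise, the querying party learns exactly the noisy-argmax output and nothing else.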
}, { "heading": "4.1 IMPLEMENTATION", "text": "We use the HE-transformer library with MPC (MP2ML) by Boemer (2020) in step 1a of our protocol for confidential two-party deep learning inference. To make our protocol flexible to any private inference library, not just those that return the label predicted by the model (HE-transformer only returns logits), we incorporate steps 1b and 1c of the protocol outside of the private inference library. The EMP toolkit (Wang et al., 2016) for generic two-party computation is used to compute the operations including argmax and sum via garbled circuits. To secret share the encrypted values, we first convert them into integers over a prime field according to the CKKS parameters, and then perform secret sharing on that domain to obtain perfect secret sharing. We use the single largest logit value for each Mi, obtained on its training set Di in plain text, to calculate the necessary noise." }, { "heading": "4.2 EVALUATION SETUP", "text": "Collaboration. We use the following for experiments unless otherwise noted. We uniformly sample from the training set in use², without replacement, to create disjoint partitions, Di, of equal size and identical data distribution for each party. We select K = 50 and K = 250 as the number of parties for CIFAR10 and SVHN, respectively (the number is larger for SVHN because we have more data). We select Q = 3 querying parties, Pi∗, and similarly divide part of the test set into Q separate private pools for each Pi∗ to select queries from, until their privacy budget of ε is reached (using Gaussian noise with σ = 40 on SVHN and 7 on CIFAR10). We are left with 1,000 and 16,032 evaluation data points from the test sets of CIFAR10 and SVHN, respectively. We fix ε = 2 and ε = 20 for SVHN and CIFAR10, respectively (which leads to ≈ 550 queries per party), and report accuracy on the evaluation set. Querying models are retrained on their Di plus the newly labelled data; the difference in accuracies is their accuracy improvement.\nWe use shallower variants of VGG, namely VGG-5 and VGG-7 for CIFAR10 and SVHN, respectively, to accommodate the small size of each party’s private dataset. We instantiate VGG-7 with 6 convolutional layers and one final fully-connected layer, so there are 7 functional layers overall. Similarly, VGG-5 has 4 convolutional layers followed by a fully-connected layer. The ResNet-10 architecture starts with a single convolutional layer, followed by 4 basic blocks with 2 convolutional layers in each block, and ends with a fully-connected layer, giving 10 functional layers in total. The ResNet-8 architecture that we use excludes the last basic block and increases the number of neurons in the last (fully-connected) layer. We present more details on architectures in Appendix F.2.\nWe first train local models for all parties using their non-overlapping private datasets. Next, we run the CaPC protocol to generate query-answer pairs for each querying party. Finally, we retrain the local model of each querying party using the combination of their original private dataset and the newly obtained query-answer pairs. We report the mean accuracy and class-specific accuracy averaged over 5 runs for all retrained models, where each run uses a different random seed.\nHeterogeneity and Data Skew. Where noted, our heterogeneous experiments (recall that this is a newly applicable setting that CaPC enables) use VGG-7, ResNet-8, and ResNet-10 architectures for K/3 parties each.
One model of each architecture is used for each of the Q = 3 querying parties. Our data skew experiments use 80% fewer data samples for the classes ‘horse’, ‘ship’, and ‘truck’ on CIFAR10 and 90% less data for classes 1 and 2 on SVHN. In turn, models trained on this skewed data perform worse on these specific classes, leading to worse balanced accuracy (see Appendix D). We adopt balanced accuracy instead of other fairness metrics because the datasets we use have no sensitive attributes, making those metrics inapplicable. We employ margin, entropy, and greedy k-center active learning strategies (described in Appendix C) to encourage ML algorithms to sample more queries from regimes that have been underrepresented and to improve their fairness performance.\n²For the SVHN dataset, we combine its original training set and extra set to get a larger training set." }, { "heading": "4.3 COLLABORATION ANALYSIS", "text": "We first investigate the benefits of collaboration for improving each party’s model performance in several different settings, namely: homogeneous and heterogeneous model architectures across querying and answering parties, and uniform and non-uniform data sampling for training data. From these experiments, we observe increased accuracy in both homogeneous and heterogeneous settings across all model architectures (Section 4.3.1) and improved balanced accuracy when there is data skew between parties, i.e., non-uniform private data (Section 4.3.2)." }, { "heading": "4.3.1 UNIFORMLY SAMPLED PRIVATE DATA", "text": "The first setting we consider is a uniform distribution of data amongst the parties—there is no data drift among parties. Our setup for the uniform data distribution experiments is detailed in Section 4.2. We evaluate the per-class and overall accuracy before and after CaPC in both homogeneous and heterogeneous settings on the CIFAR10 and SVHN datasets.\nIn Figure 2, we see there is a consistent increase in accuracy for each class and overall in terms of mean accuracy across all parties on the test sets. We observe these improvements in both the homogeneous and heterogeneous settings for both datasets tested. As demonstrated in Figure 2, there is a greater climb in mean accuracy for the heterogeneous setting than the homogeneous setting on SVHN. Figures 5, 6, and 7 provide a breakdown of the benefits obtained by each querying party. We can see from these figures that all querying parties observe an increase in overall accuracy in heterogeneous and homogeneous settings with both datasets; additionally, the jump in accuracy is largely constant between different model architectures. In only 6.67% of all cases were any class-specific accuracies degraded, but these cases still showed a net increase in overall model accuracy." }, { "heading": "4.3.2 NON-UNIFORMLY SAMPLED PRIVATE DATA", "text": "In this section, we focus our analysis on two types of data skew between parties: varying size of data per class and varying total size of data provided; the setup is described in Section 4.2. To analyze data skew, we explore the balanced accuracy (which measures mean recall on a per-class basis; see Appendix D). We use balanced accuracy in order to investigate the aggregate fairness gains offered by CaPC. Random sampling from non-uniform distributions leads to certain pitfalls: e.g., underrepresented classes are not specifically targeted in sampling.
Thus, we additionally utilize active learning techniques, namely entropy, margin, and greedy-k-center (see Definitions 6–8 in Appendix C), and analyze balanced accuracy with each strategy.\nIn Figure 3, we see that CaPC has a significant impact on the balanced accuracy when there is data skew between the private data of participating parties. Even random sampling can drastically improve balanced accuracy. Leveraging active learning techniques, we can achieve additional benefits in balanced accuracy. In particular, we observe that entropy and margin sampling achieve the greatest improvement over random sampling in per-class accuracy for the less represented classes ‘horse’, ‘ship’, and ‘truck’ on CIFAR10 and classes 1 and 2 on SVHN. These enhancements can be explained by the underlying mechanisms of margin and entropy sampling, because the less-represented classes have a higher margin/entropy; the queries per class for each method are shown in Figure 9. Through these experiments, we show that in data skew settings, the CaPC protocol can significantly improve the fair performance of models (as measured by balanced accuracy), especially when combined with active learning techniques. Note that we see similar trends with (normal) accuracy as well." }, { "heading": "4.4 PRIVACY VERSUS UTILITY", "text": "We now study the trade-off between privacy and utility of our obtained models. Recall that we add Gaussian (or Laplacian) noise to the aggregate of predicted labels of all parties. Under the uniform setting, we choose the standard deviation σ by performing a (random) grid search and choosing the highest noise before a significant loss in accuracy is observed. In doing so, each query uses minimal ε while maximizing utility. Figure 11 in Appendix F shows a sample plot for K = 250 models. For more details on how ε is calculated, please refer to Appendix B.\nAs we increase the number of parties, we can issue more queries for a given privacy budget (ε), which leads to a higher accuracy gain. In Figure 4, we report the accuracy gain achieved using CaPC with various numbers of parties, K. With a fixed total dataset size, increasing the number of parties decreases their training data size, leading to worse-performing models. These models see the largest benefit from CaPC but, importantly, we always see a net improvement across all values of K.\nNumber of parties | 150 | 200 | 250 | 300 | 400\nAccuracy gain (%) | 0.62 | 1.45 | 2.39 | 3.07 | 3.87\nBest ε | 3.50 | 3.32 | 2.60 | 2.40 | 1.91" }, { "heading": "4.5 COMPUTATIONAL COSTS OF CONFIDENTIALITY", "text": "The incorporation of confidentiality in CaPC increases computational costs. We segment the analysis of the computational overhead of CaPC into three parts corresponding to sequential steps in the protocol: (1) inference, (2) secret sharing between each querying and answering party, and (3) secret sharing between the querying party and the PG. Each of these steps is analyzed in terms of wall-clock time (in seconds). We use the default encryption setting in HE-transformer and vary the modulus range, N, which denotes the maximum value of a given plaintext number, to increase the maximum security level possible. HE-transformer only supports inference on CPUs and is used in step (1).\nStep (1), with neural network inference using MPC, incurs the highest CPU and network costs (see Table 1 and Figure 13 in Appendix F). Even the base level of security increases computational cost by 100X, and high security levels see increases up to 1000X, in comparison to non-encrypted inference on CPU.
Compared to step (1), the rest of the CaPC protocol incurs a negligible overhead to perform secret sharing. Overall, CaPC incurs only a low additional cost over the underlying MP2ML framework, as shown in Figure 13, which enables applicability and scalability as these tools progress." }, { "heading": "5 DISCUSSION AND CONCLUSIONS", "text": "CaPC is a secure and private protocol that protects both the confidentiality of test data and the privacy of training data, which are desired in applications like healthcare and finance. Our framework facilitates collaborative learning using heterogeneous model architectures and separate private datasets, even if the number of parties involved is small. It offers notable advantages over recent methods for learning with multiple participants, such as federated learning, which assumes training of a single fixed model architecture. CaPC does not assume a homogeneous model architecture and allows parties to separately and collaboratively train different models optimized for their own purposes. Federated learning also requires a large number of parties, while CaPC provides gains in accuracy with significantly fewer participants, even in contexts where each party already has a model with high accuracy. Notably, CaPC incurs low overhead on top of the underlying tools used for secure neural network inference.\nThrough our experiments, we also demonstrate that CaPC facilitates collaborative learning even when there exists non-i.i.d. (highly skewed) private data among parties. Our experiments show that CaPC improves the fair performance of participating querying models, as indicated by improvements in balanced accuracy, a common fairness metric. Further, we observe a significant increase in per-class accuracy on less-represented classes on all datasets tested. Notably, CaPC is easily configured to leverage active learning techniques to achieve additional fairness gains or to learn from other heterogeneous models trained with fairness techniques, e.g., with synthetic minority oversampling (Chawla et al., 2002). In future work, we plan to analyze the fairness implications of CaPC in contexts where there is discrimination over a private dataset’s sensitive attributes, not just class labels. In these cases, other fairness metrics like equalized odds and equal opportunity (see Appendix D) can be explored.\nWe note some limitations of the proposed protocol. HE-transformer does not prevent leaking certain aspects of the model architecture, such as the type of non-linear activation functions and the presence of MaxPooling layers. CaPC improves upon existing methods in terms of the necessary number of parties; however, it would be favorable to see this number decreased below 50 for better flexibility and applicability in practice.\nIn the face of this last limitation, when there are few physical parties, we can generate a larger number of virtual parties for CaPC, where each physical party subdivides their private dataset into disjoint partitions and trains multiple local models. This would allow CaPC to tolerate more noise injected during aggregation and provide better privacy guarantees. Note that each physical party could select queries using a dedicated strong model instead of the weak models used for answering queries in CaPC. This setting is desirable in cases where separate models are required within a single physical party, for example, in a multi-national organization with per-country models."
}, { "heading": "ACKNOWLEDGMENTS", "text": "We would like to acknowledge our sponsors, who support our research with financial and in-kind contributions: Microsoft, Intel, CIFAR through the Canada CIFAR AI Chair and AI catalyst programs, NFRF through an Exploration grant, and NSERC COHESA Strategic Alliance. Resources used in preparing this research were provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute www.vectorinstitute.ai/partners. Finally, we would like to thank members of CleverHans Lab for their feedback, especially: Tejumade Afonja, Varun Chandrasekaran, Stephan Rabanser, and Jonas Guan." }, { "heading": "A MORE BACKGROUND ON CRYPTOGRAPHY", "text": "A.1 CRYPTOGRAPHIC BUILDING BLOCKS\nHomomorphic encryption. Homomorphic encryption defines an encryption scheme such that the encryption and decryption functions are homomorphic between the plaintext and ciphertext spaces. Although it is known that fully homomorphic encryption can be constructed based on lattice-based assumptions, most applications only require a weaker version with a bounded number of multiplications on each ciphertext. Schemes with this constraint are much more practical, including, for example, BGV (Brakerski et al., 2014), CKKS (Cheon et al., 2017), etc.\nSecret sharing. Secret sharing denotes a scheme in which a datum, the secret, is shared amongst a group of parties by dividing the secret into parts such that each party only has one part, or ‘share’, of the secret. The secret can only be recovered if a certain number of parties combine their shares. It is easy to construct secret sharing modulo a positive integer. If the application does not allow modular operations, one can still achieve statistically secure secret sharing by using random shares that are much larger than the secret being shared (Evans et al., 2011).\nOblivious transfer. Oblivious transfer involves two parties: the sending party and the receiving party. The sending party has two pieces of information, s0 and s1, and the receiver wants to receive sb, where b ∈ {0, 1}, such that the sending party cannot learn b and the receiving party cannot learn s¬b. In general, oblivious transfer requires public-key operations; however, it is possible to execute a large number of oblivious transfers with only a very small number of public-key operations based on oblivious transfer extension (Ishai et al., 2003).\nGarbled circuits. In Yao’s garbled circuit protocol for two-party computation, each of the two parties assumes a role, that of garbler or that of evaluator. The function f to be computed on the two parties’ inputs is described as a Boolean circuit. The garbler randomly generates aliases (termed labels) representing 0 and 1 for each wire in the Boolean circuit describing f and replaces the binary values with the generated labels. At each gate in the circuit, which can be viewed as a truth table, the garbler uses the labels of each possible combination of inputs to encrypt the corresponding outputs, and permutes the rows of the truth table. The garbler then uses the generated labels for 0 and 1 to encode their own input data and sends these labels and the garbled Boolean circuit to the evaluator. The evaluator now converts their binary input data to the corresponding labels through a 1-out-of-2 oblivious transfer protocol with the garbler.
After receiving the labels for their input, the evaluator evaluates the garbled circuit by trying to decrypt each row of the permuted truth table at each gate using the input labels; only one row will be decryptable at each gate, and it yields the output label for the gate’s outgoing wire. The evaluator eventually finishes evaluating the garbled circuit and obtains the label for the output of the function f computed on the garbler’s and the evaluator’s inputs. The garbler then must provide the true value for the output label so that both parties can get the output.\nA.2 PROTECTING CONFIDENTIALITY USING MPC\nNeural networks present a challenge to secure multi-party computation protocols due to their unique structure, which combines linear computations and non-linear activation functions. Cryptographic inference with neural networks can be considered as a two-party computation in which one party has a confidential input on which it wishes to obtain the output of a model, and the other party holds the model; in many cases the party holding the model also wishes that the model remain secure.\nConfidential learning and inference with neural networks typically use homomorphic encryption (HE) or secure multi-party computation (MPC) methods. Many libraries support pure HE or MPC protocols for secure inference of neural networks; a comprehensive list can be viewed in (Boemer et al., 2020). Notably, libraries such as nGraph-HE (Boemer et al., 2019) and CryptoNets (Gilad-Bachrach et al., 2016) provide pure homomorphic encryption solutions to secure neural network inference. nGraph-HE, an extension of the graph compiler nGraph, allows secure inference of DNNs through linear computations at each layer using the CKKS homomorphic encryption scheme (Cheon et al., 2017; Boemer et al., 2019). CryptoNets similarly permits confidential neural network inference using another leveled homomorphic encryption scheme, YASHE’ (Gilad-Bachrach et al., 2016). On the other hand, several libraries employing primarily MPC methods in secure NN inference frameworks rely on ABY, a tool providing support for common non-polynomial activation functions in NNs through the use of both Yao’s GC and GMW.\nIn DL contexts, while pure homomorphic encryption methods maintain model security, their failure to support common non-polynomial activation functions leads to leaking of pre-activation values (feature maps at hidden layers). Tools that use solely MPC protocols avoid leaking pre-activation values, as they can guarantee data confidentiality on non-polynomial activation functions, but may compromise the security of the model architecture by leaking activation functions or model structure.\nRecent works on secure NN inference propose hybrid protocols that combine homomorphic encryption schemes and MPC methods to build frameworks that try to reduce the leakages common in pure HE and MPC protocols. Among recent works that use hybrid protocols and do not rely on trusted third parties are Gazelle (Juvekar et al., 2018), Delphi (Mishra et al., 2020), and MP2ML (Boemer et al., 2020).\nGazelle, Delphi, and MP2ML largely support the non-polynomial activation functions encountered in convolutional neural networks, such as maximum pooling and rectified linear unit (ReLU) operations. Gazelle introduced several improvements over previous methods for secure NN inference, primarily relating to latency and confidentiality.
In particular, the Gazelle framework provides homomorphic encryption libraries with low-latency implementations of algorithms for single instruction multiple data (SIMD) operations, ciphertext permutation, and homomorphic matrix and convolutional operations pertinent to convolutional neural networks. Gazelle utilizes kernel methods to evaluate homomorphic operations for the linear components of networks, garbled circuits to compute non-linear activation functions confidentially, and additive secret sharing to quickly switch between these cryptographic protocols. Delphi builds on Gazelle, optimizing both linear and non-linear computations in CNNs by secret sharing model weights in the pre-processing stage to speed up linear computations later, and by approximating certain activation functions such as ReLU with polynomials. MP2ML employs nGraph-HE for homomorphic encryption and the ABY framework for the evaluation of non-linear functions using garbled circuits." }, { "heading": "B MORE BACKGROUND ON DIFFERENTIAL PRIVACY", "text": "One of the compelling properties of differential privacy is that it permits the analysis and control of cumulative privacy cost over multiple consecutive computations. For instance, the strong composition theorem (Dwork et al., 2010) gives a tight estimate of the privacy cost associated with a sequence of adaptive mechanisms {Mi}i∈I. Theorem 1 (Strong Composition). For ε, δ, δ′ ≥ 0, the class of (ε, δ)-differentially private mechanisms satisfies (ε′, kδ + δ′)-differential privacy under k-fold adaptive composition for:\nε′ = ε√(2k log(1/δ′)) + kε(eε − 1) (2)\nTo facilitate the evaluation of the privacy leakage resulting from a randomized mechanism M, it is helpful to explicitly define its corresponding privacy loss cM and privacy loss random variable CM. In particular, the fact that M is (ε, δ)-differentially private is equivalent to a certain tail bound on CM. Definition 2 (Privacy Loss). Given a pair of adjacent datasets d, d′ ∈ D and an auxiliary input aux, the privacy loss cM of a randomized mechanism M evaluated at an outcome o ∈ R is defined as:\ncM(o | aux, d, d′) ≜ log( Pr[M(aux, d) = o] / Pr[M(aux, d′) = o] ) (3)\nFor an outcome o ∈ R sampled from M(d), CM(aux, d, d′) takes the value cM(o | aux, d, d′).\nBased on the definition of privacy loss, Abadi et al. (Abadi et al., 2016) introduced the moments accountant to track higher-order moments of the privacy loss random variable and achieved even tighter privacy bounds for k-fold adaptive mechanisms. Definition 3 (Moments Accountant). Given any adjacent datasets d, d′ ∈ D and any auxiliary input aux, the moments accountant of a randomized mechanism M is defined as:\nαM(λ) ≜ max_{aux,d,d′} αM(λ | aux, d, d′) (4)\nwhere αM(λ | aux, d, d′) ≜ log E[exp(λ CM(aux, d, d′))] is obtained by taking the logarithm of the moment-generating function of the privacy loss random variable.\nAs a natural relaxation of the conventional (ε, δ)-differential privacy, Rényi differential privacy (RDP) (Mironov, 2017) provides a more convenient and accurate approach to estimating privacy loss under heterogeneous composition. Definition 4 (Rényi Divergence). For two probability distributions P and Q defined over R, the Rényi divergence of order λ > 1 between them is defined as:\nDλ(P ‖ Q) ≜ (1/(λ − 1)) log E_{x∼Q}[(P(x)/Q(x))^λ] = (1/(λ − 1)) log E_{x∼P}[(P(x)/Q(x))^{λ−1}] (5)\nDefinition 5 (Rényi Differential Privacy).
A randomized mechanism M is said to satisfy ε-Rényi differential privacy of order λ, or (λ, ε)-RDP for short, if for any adjacent datasets d, d′ ∈ D:\nDλ(M(d) ‖ M(d′)) = (1/(λ − 1)) log E_{x∼M(d)}[(Pr[M(d) = x] / Pr[M(d′) = x])^{λ−1}] ≤ ε (6)\nTheorem 2 (From RDP to DP). If a randomized mechanism M guarantees (λ, ε)-RDP, then it also satisfies (ε + log(1/δ)/(λ − 1), δ)-differential privacy for any δ ∈ (0, 1).\nBuilding upon the moments accountant and RDP techniques, Private Aggregation of Teacher Ensembles (PATE) (Papernot et al., 2017) provides a flexible approach to training machine learning models with strong privacy guarantees. Precisely, rather than directly learning from labeled private data, the model that gets released instead learns from unlabeled public data by querying a teacher ensemble for predicted labels. Models in the ensemble are themselves trained on disjoint partitions of the private dataset, while privacy guarantees are enabled by applying the Laplace mechanism to the ensemble’s aggregated label counts. Coupled with data-dependent privacy analysis, PATE achieves a tighter estimate of the privacy loss associated with label queries, especially when the consensus among teacher models is strong. Given this motivation, the follow-up work on PATE (Papernot et al., 2018) further improves the privacy bound both by leveraging a more concentrated noise distribution to strengthen consensus and by rejecting queries that lack consensus." }, { "heading": "C MORE BACKGROUND ON ACTIVE LEARNING", "text": "Active learning, sometimes referred to as query learning, exploits the intuition that machine learning algorithms will be able to learn more efficiently if they can actively select the data from which they learn. For certain supervised learning tasks, this insight has particularly important implications, as labeled data rarely exists in abundance and data labeling can be very demanding (Settles, 2009).\nIn order to pick queries that will most likely contribute to model learning, various pool sampling methods have been proposed to estimate the informativeness of unlabeled samples. Uncertainty-based approaches (Lewis & Gale, 1994), such as margin sampling and entropy sampling, typically achieve a satisfactory trade-off between sample utility and computational efficiency. We also explore a core-set approach to active learning using greedy-k-center sampling (Sener & Savarese, 2017). Definition 6 (Margin Sampling (Scheffer et al., 2001)). Given an unlabeled dataset d and a classification model with conditional label distribution Pθ(y | x), margin sampling outputs the most informative sample:\nx∗ = arg min_{x∈d} [Pθ(ŷ1 | x) − Pθ(ŷ2 | x)] (7)\nwhere ŷ1 and ŷ2 stand for the most and second most probable labels for x, according to the model. Definition 7 (Entropy Sampling). Using the setting and notation of Definition 6, margin sampling can be generalized by using entropy (Shannon, 1948) as an uncertainty measure as follows:\nx∗ = arg max_{x∈d} −∑_i Pθ(yi | x) log Pθ(yi | x) (8)\nwhere yi ranges over all possible labels. Definition 8 (Greedy-K-center Sampling). We aim to solve the k-center problem defined by Farahani & Hekmatfar (2009), which is, intuitively, the problem of picking k center points that minimize the largest distance between a data point and its nearest center. Formally, this goal is defined as\nmin_{S : |S∪D| ≤ k} max_i min_{j∈S∪D} Δ(xi, xj) (9)\nwhere D is the current training set and S is the set of newly chosen center points. This problem can be solved greedily as shown in (Sener & Savarese, 2017).
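The three query strategies above are easy to prototype; below is a minimal sketch (our own illustrative implementation, not the paper's code; `probs` is assumed to be an n-by-C array of softmax outputs, and `X_pool`/`X_train` are feature matrices):

```python
import numpy as np

def margin_scores(probs):
    # Definition 6: gap between the two most probable labels (smaller = more informative).
    top2 = np.sort(probs, axis=1)[:, -2:]
    return top2[:, 1] - top2[:, 0]

def entropy_scores(probs):
    # Definition 7: Shannon entropy of the predicted distribution (larger = more informative).
    return -(probs * np.log(probs + 1e-12)).sum(axis=1)

def greedy_k_center(X_pool, X_train, k):
    # Definition 8: repeatedly pick the pool point farthest from all chosen centers.
    dists = np.linalg.norm(X_pool[:, None] - X_train[None], axis=-1).min(axis=1)
    chosen = []
    for _ in range(k):
        i = int(np.argmax(dists))
        chosen.append(i)
        dists = np.minimum(dists, np.linalg.norm(X_pool - X_pool[i], axis=1))
    return chosen
```

A querying party would rank its private pool with one of these scores and spend its privacy budget on the top-ranked samples first.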
}, { "heading": "D MORE BACKGROUND ON FAIRNESS", "text": "Due to imbalance in sample quantity and learning complexity, machine learning models may have disparate predictive performance over different classes or demographic groups, resulting in unfair treatment of certain populations. To better capture this phenomenon and introduce tractable countermeasures, various fairness-related criteria have been proposed, including balanced accuracy, demographic parity, equalized odds (Hardt et al., 2016), etc. Definition 9 (Balanced Accuracy). Balanced accuracy captures model utility in terms of both accuracy and fairness. It is defined as the average of the recall scores obtained on all classes.\nAmong the criteria that aim to alleviate discrimination against certain protected attributes, equalized odds and equal opportunity (Hardt et al., 2016) are of particular research interest. Definition 10 (Equalized Odds). A machine learning model is said to guarantee equalized odds with respect to protected attribute A and ground truth label Y if its prediction Ŷ and A are conditionally independent given Y. In the case of binary random variables A, Y, Ŷ, this is equivalent to:\nPr[Ŷ = 1 | A = 0, Y = y] = Pr[Ŷ = 1 | A = 1, Y = y], y ∈ {0, 1} (10)\nTo put it another way, equalized odds requires the model to have equal true positive rates and equal false positive rates across the two demographic groups A = 0 and A = 1.\nDefinition 11 (Equal Opportunity). Equal opportunity is a relaxation of equalized odds that requires non-discrimination only within a specific outcome group, often referred to as the advantaged group. Using the previous notation, the binary case with advantaged group Y = 1 is equivalent to:\nPr[Ŷ = 1 | A = 0, Y = 1] = Pr[Ŷ = 1 | A = 1, Y = 1] (11)" }, { "heading": "E PROOF OF CONFIDENTIALITY", "text": "Here we prove that our protocol described in the main body does not reveal anything except the final noised result to Pi∗. It can be proven in the standard real-world/ideal-world paradigm, where the ideal functionality takes inputs from all parties and sends the final result to Pi∗. We use A to denote the set of corrupted parties. Below, we describe the simulator (namely S). The simulator strategy depends on whether i∗ is corrupted.\nIf i∗ ∈ A, our simulator works as below:\n1.a) The simulator simulates what honest parties would do.\n1.b) For each i ∉ A, S sends a fresh encryption of a random ri to Pi∗.\n1.c) For each i ∉ A, S sends a random si to Pi∗ on behalf of the 2PC functionality between Pi and Pi∗.\n2-3) S sends the output of the whole computation to Pi∗ on behalf of the 2PC functionality between the PG and Pi∗.\nIf i∗ ∉ A, our simulator works as below:\n1.a) For each i ∈ A, S computes a fresh encryption of zero and sends it to Pi on behalf of Pi∗.\n1.b) The simulator simulates what honest parties would do.\n1.c) For each i ∈ A, S sends a random ŝi to Pi on behalf of the 2PC functionality between Pi and Pi∗.\n2-3) The simulator simulates what honest parties would do.\nAssuming that the underlying encryption scheme is CPA secure and that the 2PC protocols used in steps 1, 2, and 3 are secure with respect to the standard definitions (i.e., reveal nothing beyond the outputs), our simulation is perfect." }, { "heading": "F DETAILS ON EXPERIMENTAL SETUP", "text": "F.1 MNIST AND FASHION-MNIST\nWe use the same setup as for the CIFAR10 and SVHN datasets with the following adjustments. We select K = 250 as the default number of parties.
For the imbalanced classes we select classes 1 and 2 for MNIST as well as Trouser and Pullover for Fashion-MNIST. We use Gaussian noise with σ = 40 (similarly to SVHN). We are left with 1,000 evaluation data points from the test set (similarly to CIFAR10). We fix the default value of ε = 2.35 for MNIST and ε = 3.89 for Fashion-MNIST. We use a variant of the LeNet architecture.\nF.2 DETAILS ON ARCHITECTURES\nTo train the private models on subsets of datasets, we downsize the standard architectures, such as VGG-16 or ResNet-18. Below is the detailed list of layers in each of the architectures used (generated using torchsummary). The diagram for ResNet-10 also includes skip connections and convolutional layers for adjusting the sizes of feature maps.\nVGG-7 for SVHN:\n----------------------------------------------------------------\nLayer (type) Output Shape Param #\n================================================================\nConv2d-1 [-1, 64, 32, 32] 1,728\nBatchNorm2d-2 [-1, 64, 32, 32] 128\nReLU-3 [-1, 64, 32, 32] 0\nMaxPool2d-4 [-1, 64, 16, 16] 0\nConv2d-5 [-1, 128, 16, 16] 73,728\nBatchNorm2d-6 [-1, 128, 16, 16] 256\nReLU-7 [-1, 128, 16, 16] 0\nMaxPool2d-8 [-1, 128, 8, 8] 0\nConv2d-9 [-1, 256, 8, 8] 294,912\nBatchNorm2d-10 [-1, 256, 8, 8] 512\nReLU-11 [-1, 256, 8, 8] 0\nConv2d-12 [-1, 256, 8, 8] 589,824\nBatchNorm2d-13 [-1, 256, 8, 8] 512\nReLU-14 [-1, 256, 8, 8] 0\nMaxPool2d-15 [-1, 256, 4, 4] 0\nConv2d-16 [-1, 512, 4, 4] 1,179,648\nBatchNorm2d-17 [-1, 512, 4, 4] 1,024\nReLU-18 [-1, 512, 4, 4] 0\nConv2d-19 [-1, 512, 4, 4] 2,359,296\nBatchNorm2d-20 [-1, 512, 4, 4] 1,024\nReLU-21 [-1, 512, 4, 4] 0\nLinear-22 [-1, 10] 5,130\n================================================================\nTotal params: 4,507,722\nParams size (MB): 17.20\n----------------------------------------------------------------\nResNet-10:\n----------------------------------------------------------------\nLayer (type) Output Shape Param #\n================================================================\nConv2d-1 [-1, 64, 32, 32] 1,728\nBatchNorm2d-2 [-1, 64, 32, 32] 128\nConv2d-3 [-1, 64, 32, 32] 36,864\nBatchNorm2d-4 [-1, 64, 32, 32] 128\nConv2d-5 [-1, 64, 32, 32] 36,864\nBatchNorm2d-6 [-1, 64, 32, 32] 128\nBasicBlock-7 [-1, 64, 32, 32] 0\nConv2d-8 [-1, 128, 16, 16] 73,728\nBatchNorm2d-9 [-1, 128, 16, 16] 256\nConv2d-10 [-1, 128, 16, 16] 147,456\nBatchNorm2d-11 [-1, 128, 16, 16] 256\nConv2d-12 [-1, 128, 16, 16] 8,192\nBatchNorm2d-13 [-1, 128, 16, 16] 256\nBasicBlock-14 [-1, 128, 16, 16] 0\nConv2d-15 [-1, 256, 8, 8] 294,912\nBatchNorm2d-16 [-1, 256, 8, 8] 512\nConv2d-17 [-1, 256, 8, 8] 589,824\nBatchNorm2d-18 [-1, 256, 8, 8] 512\nConv2d-19 [-1, 256, 8, 8] 32,768\nBatchNorm2d-20 [-1, 256, 8, 8] 512\nBasicBlock-21 [-1, 256, 8, 8] 0\nConv2d-22 [-1, 512, 4, 4] 1,179,648\nBatchNorm2d-23 [-1, 512, 4, 4] 1,024\nConv2d-24 [-1, 512, 4, 4] 2,359,296\nBatchNorm2d-25 [-1, 512, 4, 4] 1,024\nConv2d-26 [-1, 512, 4, 4] 131,072\nBatchNorm2d-27 [-1, 512, 4, 4] 1,024\nBasicBlock-28 [-1, 512, 4, 4] 0\nLinear-29 [-1, 10] 5,130\n================================================================\nTotal params: 4,903,242\nParams size (MB): 18.70\n----------------------------------------------------------------\nLeNet-style architecture for MNIST:\n----------------------------------------------------------------\nLayer (type) Output Shape Param #\n================================================================\nConv2d-1 [-1, 20, 24, 24] 520\nMaxPool2d-2 [-1, 20, 12, 12] 0\nConv2d-3 [-1, 50, 8, 8] 25,050\nMaxPool2d-4 [-1, 50, 4, 4] 0\nLinear-5 [-1, 500] 400,500\nReLU-6 [-1, 500] 0\nLinear-7 [-1, 10]
5,010\n================================================================\nTotal params: 431,080\nTrainable params: 431,080\nNon-trainable params: 0\n----------------------------------------------------------------\nInput size (MB): 0.00\nForward/backward pass size (MB): 0.12\nParams size (MB): 1.64\nEstimated Total Size (MB): 1.76\n----------------------------------------------------------------\nG ADDITIONAL EXPERIMENTS AND FIGURES\nNumber of parties | 150 | 200 | 250 | 300 | 400\nAccuracy gain (%) | 4.11 | 3.33 | 4.50 | 4.69 | 8.39\nBest ε | 4.50 | 2.50 | 2.35 | 2.00 | 1.63" }
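For reference, the LeNet-style summary above corresponds to the following PyTorch module (a reconstruction we infer from the listed shapes and parameter counts; the class name and layer grouping are our own):

```python
import torch.nn as nn

class LeNetStyle(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 20, kernel_size=5),   # 28x28 -> 24x24, 520 params
            nn.MaxPool2d(2),                   # 24x24 -> 12x12
            nn.Conv2d(20, 50, kernel_size=5),  # 12x12 -> 8x8, 25,050 params
            nn.MaxPool2d(2),                   # 8x8 -> 4x4
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(50 * 4 * 4, 500),        # 400,500 params
            nn.ReLU(),
            nn.Linear(500, num_classes),       # 5,010 params
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# from torchsummary import summary
# summary(LeNetStyle(), (1, 28, 28))  # reproduces the per-layer table above
```

The parameter counts sum to the 431,080 total reported in the listing.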
2021
null
SP:aabb111652a2063c12c8faf92abc12e446d5d377
[ "This paper delves into a stability analysis of reweighting and resampling for overcoming imbalanced data in supervised learning. Reweighting employs importance ratios to modify a sample's weight during training, in turn changing the effective distribution. There are several resampling procedures which all have a similar effect in the analysis, and the authors consider several algorithms for resampling in their experiments. The reweighting approach, while convenient, leads to poorer stability under simplifying assumptions. While this is interesting in its own right, they show that under certain distributions of the data reweighting will actually not converge to the optimal minima, while resampling will. This is motivated by a large collection of work developing resampling methods for imbalanced data, which all come to similar conclusions (i.e. that following a resampling procedure outperforms a reweighting procedure in many, but not all, settings). They follow up with an SDE analysis in another toy problem, which they then extend to more realistic assumptions.", "This paper provides an analysis of why resampling can be better than reweighting in some cases. By observing the behaviour of resampling and reweighting in simple optimizations with SGD, the theoretical results show that resampling tends to be more stable. The general analysis is based on SDE approximation. Experiments on classification and off-policy evaluation show that resampling can be better in some cases." ]
A data set sampled from a certain population is biased if the subgroups of the population are sampled at proportions that are significantly different from their underlying proportions. Training machine learning models on biased data sets requires correction techniques to compensate for the bias. We consider two commonly-used techniques, resampling and reweighting, that rebalance the proportions of the subgroups to maintain the desired objective function. Though statistically equivalent, it has been observed that resampling outperforms reweighting when combined with stochastic gradient algorithms. By analyzing illustrative examples, we explain the reason behind this phenomenon using tools from dynamical stability and stochastic asymptotics. We also present experiments from regression, classification, and off-policy prediction to demonstrate that this is a general phenomenon. We argue that it is imperative to consider the objective function design and the optimization algorithm together while addressing the sampling bias.
[ { "affiliations": [], "name": "Jing An" }, { "affiliations": [], "name": "Lexing Ying" }, { "affiliations": [], "name": "Yuhua Zhu" } ]
[ { "authors": [ "Alexander Amini", "Ava P Soleimany", "Wilko Schwarting", "Sangeeta N Bhatia", "Daniela Rus" ], "title": "Uncovering and mitigating algorithmic bias through learned latent structure", "venue": "In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society,", "year": 2019 }, { "authors": [ "Nils Berglund" ], "title": "Kramers’ law: Validity, derivations and generalisations", "venue": "Markov Processes Relat. Fields,", "year": 2011 }, { "authors": [ "Albert Bifet", "Ricard Gavaldà" ], "title": "Adaptive learning from evolving data streams", "venue": "In International Symposium on Intelligent Data Analysis,", "year": 2009 }, { "authors": [ "Christopher M. Bishop" ], "title": "Mixture density networks", "venue": "Technical report,", "year": 1994 }, { "authors": [ "Tolga Bolukbasi", "Kai-Wei Chang", "James Y Zou", "Venkatesh Saligrama", "Adam T Kalai" ], "title": "Man is to computer programmer as woman is to homemaker? Debiasing word embeddings", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Anton Bovier", "Michael Eckhoff", "Véronique Gayrard", "Markus Klein" ], "title": "Metastability in reversible diffusion processes i: Sharp asymptotics for capacities and exit times", "venue": "Journal of the European Mathematical Society,", "year": 2004 }, { "authors": [ "Anton Bovier", "Véronique Gayrard", "Markus Klein" ], "title": "Metastability in reversible diffusion processes ii: Precise asymptotics for small eigenvalues", "venue": "Journal of the European Mathematical Society,", "year": 2005 }, { "authors": [ "Aylin Caliskan", "Joanna J Bryson", "Arvind Narayanan" ], "title": "Semantics derived automatically from language corpora contain human-like biases", "venue": "Science,", "year": 2017 }, { "authors": [ "Flavio Calmon", "Dennis Wei", "Bhanukiran Vinzamuri", "Karthikeyan Natesan Ramamurthy", "Kush R Varshney" ], "title": "Optimized pre-processing for discrimination prevention", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Haw-Shiuan Chang", "Erik Learned-Miller", "Andrew McCallum" ], "title": "Active bias: Training more accurate neural networks by emphasizing high variance samples", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Mikel Galar", "Alberto Fernandez", "Edurne Barrenechea", "Humberto Bustince", "Francisco Herrera" ], "title": "A review on ensembles for the class imbalance problem: bagging-, boosting-, and hybrid-based approaches", "venue": "IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews),", "year": 2011 }, { "authors": [ "Haixiang Guo", "Yijing Li", "Jennifer Shang", "Mingyun Gu", "Yuanyue Huang", "Bing Gong" ], "title": "Learning from class-imbalanced data: Review of methods and applications", "venue": "Expert Systems with Applications,", "year": 2017 }, { "authors": [ "Haibo He", "Edwardo A Garcia" ], "title": "Learning from imbalanced data", "venue": "IEEE Transactions on knowledge and data engineering,", "year": 2009 }, { "authors": [ "Haibo He", "Yunqian Ma" ], "title": "Imbalanced learning: foundations, algorithms, and applications", "venue": null, "year": 2013 }, { "authors": [ "Faisal Kamiran", "Toon Calders" ], "title": "Data preprocessing techniques for classification without discrimination", "venue": "Knowledge and Information Systems,", "year": 2012 }, { "authors": [ "Matthew Kay", "Cynthia Matuszek", "Sean A Munson" ], "title": "Unequal representation and gender stereotypes in image search results for occupations", "venue": "In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems,", "year": 2015 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint
arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Bartosz Krawczyk" ], "title": "Learning from imbalanced data: open challenges and future directions", "venue": "Progress in Artificial Intelligence,", "year": 2016 }, { "authors": [ "M Pawan Kumar", "Benjamin Packer", "Daphne Koller" ], "title": "Self-paced learning for latent variable models", "venue": "In Advances in neural information processing systems,", "year": 2010 }, { "authors": [ "Qianxiao Li", "Cheng Tai", "E Weinan" ], "title": "Stochastic modified equations and adaptive stochastic gradient algorithms", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Qianxiao Li", "Cheng Tai", "E Weinan" ], "title": "Stochastic modified equations and dynamics of stochastic gradient algorithms i: Mathematical foundations", "venue": "J. Mach. Learn. Res.,", "year": 2019 }, { "authors": [ "Victoria López", "Alberto Fernández", "Salvador García", "Vasile Palade", "Francisco Herrera" ], "title": "An insight into classification with imbalanced data: Empirical results and current trends on using data intrinsic characteristics", "venue": "Information sciences,", "year": 2013 }, { "authors": [ "Tomasz Maciejewski", "Jerzy Stefanowski" ], "title": "Local neighbourhood extension of SMOTE for mining imbalanced data", "venue": "IEEE symposium on computational intelligence and data mining (CIDM),", "year": 2011 }, { "authors": [ "Tomasz Malisiewicz", "Abhinav Gupta", "Alexei A Efros" ], "title": "Ensemble of exemplar-SVMs for object detection and beyond", "venue": "In 2011 International conference on computer vision,", "year": 2011 }, { "authors": [ "Inderjeet Mani", "I Zhang" ], "title": "kNN approach to unbalanced data distributions: a case study involving information extraction", "venue": "In Proceedings of workshop on learning from imbalanced datasets,", "year": 2003 }, { "authors": [ "Sachit Menon", "Alexandru Damian", "Shijia Hu", "Nikhil Ravi", "Cynthia Rudin" ], "title": "Pulse: Self-supervised photo upsampling via latent space exploration of generative models", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Sérgio Moro", "Paulo Cortez", "Paulo Rita" ], "title": "A data-driven approach to predict the success of bank telemarketing", "venue": "Decision Support Systems,", "year": 2014 }, { "authors": [ "Thomas Müller-Gronbach", "Larisa Yaroslavtseva" ], "title": "On the performance of the Euler–Maruyama scheme for SDEs with discontinuous drift coefficient", "venue": "In Annales de l’Institut Henri Poincaré, Probabilités et Statistiques,", "year": 2020 }, { "authors": [ "Grant Rotskoff", "Eric Vanden-Eijnden" ], "title": "Parameters as interacting particles: long time convergence and asymptotic error scaling of neural networks", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Matthew Schlegel", "Wesley Chung", "Daniel Graves", "Jian Qian", "Martha White" ], "title": "Importance resampling for off-policy prediction", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Chris Seiffert", "Taghi M Khoshgoftaar", "Jason Van Hulse", "Amri Napolitano" ], "title": "Resampling or reweighting: A comparison of boosting implementations", "venue": "In 2008 20th IEEE International Conference on Tools with Artificial Intelligence,", "year": 2008 }, { "authors": [ "Bin Shi", "Simon S Du", "Weijie Su", "Michael I Jordan" ], "title": "Acceleration via
symplectic discretization of high-resolution differential equations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Latanya Sweeney" ], "title": "Discrimination in online ad delivery", "venue": null, "year": 2013 }, { "authors": [ "Lei Wu", "Chao Ma", "Weinan E" ], "title": "How sgd selects the global minima in over-parameterized learning: A dynamical stability perspective", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Han Zhao", "Amanda Coston", "Tameem Adel", "Geoffrey J Gordon" ], "title": "Conditional learning of fair representations", "venue": "In International Conference on Learning Representations,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "A data set sampled from a certain population is called biased if the subgroups of the population are sampled at proportions that are significantly different from their underlying population proportions. Applying machine learning algorithms naively to biased training data can raise serious concerns and lead to controversial results (Sweeney, 2013; Kay et al., 2015; Menon et al., 2020). In many domains, such as demographic surveys, fraud detection, identification of rare diseases, and natural disaster prediction, a model trained from biased data tends to favor oversampled subgroups by achieving high accuracy there while sacrificing performance on undersampled subgroups. Although one can improve by diversifying and balancing during the data collection process, it is often hard or impossible to eliminate the sampling bias due to historical and operational issues.\nIn order to mitigate the biases and discrimination against the undersampled subgroups, a common technique is to preprocess the data set by compensating for the mismatch between the population proportion and the sampling proportion. Among various approaches, two commonly-used choices are reweighting and resampling. In reweighting, one multiplies each sample by a ratio equal to its population proportion over its sampling proportion. In resampling, on the other hand, one corrects the proportion mismatch by either generating new samples for the undersampled subgroups or selecting a subset of samples for the oversampled subgroups. Both methods result in statistically equivalent models in terms of the loss function (see details in Section 2). However, it has been observed in practice that resampling often outperforms reweighting significantly, for example in boosting algorithms for classification (Galar et al., 2011; Seiffert et al., 2008) and in off-policy prediction in reinforcement learning (Schlegel et al., 2019). The obvious question is why.\nMain contributions. Our main contribution is to provide an answer to this question: resampling outperforms reweighting because of the stochastic gradient-type algorithms used for training. To the best of our knowledge, our explanation is the first theoretical quantitative analysis of this phenomenon. With stochastic gradient descent (SGD) being the dominant method for model training, our analysis is based on some recent developments for understanding SGD. We show via simple and explicitly analyzable examples why resampling generates expected results while reweighting performs undesirably. Our theoretical analysis is based on two points of view, one from the dynamical stability perspective and the other from stochastic asymptotics.\nIn addition to the theoretical analysis, we present experimental examples from three distinct categories (classification, regression, and off-policy prediction) to demonstrate that resampling outperforms reweighting in practice. This empirical study illustrates that this is a quite general phenomenon when models are trained using stochastic gradient-type algorithms.\nOur theoretical analysis and experiments show clearly that adjusting only the loss functions is not sufficient for fixing the biased data problem. The output can be disastrous if one overlooks the optimization algorithm used in the training.
In fact, recent understanding has shown that objective function design and optimization algorithm are closely related; for example, optimization algorithms such as SGD play a key role in the generalizability of deep neural networks. Therefore, in order to address the biased data issue, we advocate for considering data, model, and optimization as an integrated system.\nRelated work. In a broader scope, resampling and reweighting can be considered as instances of preprocessing the training data to tackle biases of machine learning algorithms. Though there are many well-developed resampling (Mani & Zhang, 2003; He & Garcia, 2009; Maciejewski & Stefanowski, 2011) and reweighting (Kumar et al., 2010; Malisiewicz et al., 2011; Chang et al., 2017) techniques, we only focus on the reweighting approaches that do not change the optimization problem. It is well known that training algorithms using disparate data can lead to algorithmic discrimination (Bolukbasi et al., 2016; Caliskan et al., 2017), and over the years there have been growing efforts to mitigate such biases, for example see (Amini et al., 2019; Kamiran & Calders, 2012; Calmon et al., 2017; Zhao et al., 2019; López et al., 2013). We also refer to (Guo et al., 2017; He & Ma, 2013; Krawczyk, 2016) for a comprehensive review of this growing research field.\nOur approaches for understanding the dynamics of resampling and reweighting under SGD are based on tools from numerical analysis for stochastic systems. Connections between numerical analysis and stochastic algorithms have been rapidly developing in recent years. The dynamical stability perspective has been used in (Wu et al., 2018) to show the impact of learning rate and batch size in minima selection. The stochastic differential equations (SDE) approach for approximating stochastic optimization methods can be traced in the line of work (Li et al., 2017; 2019; Rotskoff & Vanden-Eijnden, 2018; Shi et al., 2019), just to mention a few." }, { "heading": "2 PROBLEM SETUP", "text": "Let us consider a population comprised of two different groups, where a proportion a1 of the population belongs to the first group, and the rest with the proportion a2 = 1 − a1 belongs to the second (i.e., a1, a2 > 0 and a1 + a2 = 1). In what follows, we shall call a1 and a2 the population proportions. Consider an optimization problem for this population over a parameter θ. For simplicity, we assume that each individual from the first group experiences a loss function V1(θ), while each individual from the second group has a loss function of type V2(θ). Here the loss function V1(θ) is assumed to be identical across all members of the first group, and the same holds for V2(θ) across the second group; however, it is possible to extend the formulation to allow for loss function variation within each group. Based on this setup, a minimization problem over the whole population is to find\nθ∗ = arg min_θ V(θ), where V(θ) ≡ a1V1(θ) + a2V2(θ). (1)\nFor a given set Ω of N individuals sampled uniformly from the population, the empirical minimization problem is\nθ∗ = arg min_θ (1/N) ∑_{r∈Ω} V_{ir}(θ), (2)\nwhere ir ∈ {1, 2} denotes which group an individual r belongs to. When N grows, the empirical loss in (2) is consistent with the population loss in (1), as there is approximately an a1 fraction of samples from the first group and an a2 fraction of samples from the second.\nHowever, the sampling can be far from uniformly random in reality.
Let n1 and n2 with n1 + n2 = N denote the number of samples from the first and the second group, respectively. It is convenient to define fi, i = 1, 2, as the sampling proportions for each group, i.e., f1 = n1/N and f2 = n2/N with f1 + f2 = 1. The data set is biased when the sampling proportions f1 and f2 are different from the population proportions a1 and a2. In such a case, the empirical loss is f1V1(θ) + f2V2(θ), which is clearly wrong when compared with (1).\nLet us consider two basic strategies to adjust the model: reweighting and resampling. In reweighting, one assigns to each sample r ∈ Ω a weight a_{ir}/f_{ir} and the reweighting loss function is\nVw(θ) ≡ (1/N) ∑_{r∈Ω} (a_{ir}/f_{ir}) V_{ir}(θ) = a1V1(θ) + a2V2(θ). (3)\nIn resampling, one either adds samples to the minority group (i.e., oversampling) or removes samples from the majority group (i.e., undersampling). Although the actual implementation of oversampling and undersampling could be quite sophisticated in order to avoid overfitting or loss of information, mathematically we interpret resampling as constructing a new set of samples of size M, among which a1M samples are of the first group and a2M samples of the second. The resampling loss function is\nVs(θ) ≡ (1/M) ∑_s V_{is}(θ) = a1V1(θ) + a2V2(θ). (4)\nNotice that both Vw(θ) and Vs(θ) are consistent with the population loss function V(θ). This means that, under mild conditions on V1(θ) and V2(θ), a deterministic gradient descent algorithm from a generic initial condition converges to similar solutions for Vw(θ) and Vs(θ). For a stochastic gradient descent algorithm, the expectations of the stochastic gradients of Vw(θ) and Vs(θ) also agree at any θ value. However, as we shall explain below, the training behavior can be drastically different for a stochastic gradient algorithm. The key reason is that the variances experienced for Vw(θ) and Vs(θ) can be drastically different: computing the variances of the gradients for resampling and reweighting reveals that\nVar[∇V̂s(θ)] = a1∇V1(θ)∇V1(θ)ᵀ + a2∇V2(θ)∇V2(θ)ᵀ − (E[∇V̂s(θ)])²,\nVar[∇V̂w(θ)] = (a1²/f1)∇V1(θ)∇V1(θ)ᵀ + (a2²/f2)∇V2(θ)∇V2(θ)ᵀ − (E[∇V̂w(θ)])². (5)\nThese formulas indicate that, when f1/f2 is significantly misaligned with a1/a2, the variance of reweighting can be much larger. Without knowing the optimal learning rates a priori, it is difficult to select an efficient learning rate for reliable and stable performance on stiff problems when only reweighting is used. In comparison, resampling is more favorable, especially when the choice of learning rates is restrictive." }, { "heading": "3 STABILITY ANALYSIS", "text": "Let us use a simple example to illustrate why resampling outperforms reweighting under SGD, from the viewpoint of stability. Consider two loss functions V1 and V2 with disjoint supports,\nV1(θ) = { (1/2)(θ + 1)² − 1/2, θ ≤ 0; 0, θ > 0 }, V2(θ) = { 0, θ ≤ 0; (1/2)(θ − 1)² − 1/2, θ > 0 }, (6)\neach of which is quadratic on its support. The population loss function is V(θ) = a1V1(θ) + a2V2(θ), with two local minima at θ = −1 and θ = 1. The gradients of V1 and V2 are\n∇V1(θ) = { θ + 1, θ ≤ 0; 0, θ > 0 }, ∇V2(θ) = { 0, θ ≤ 0; θ − 1, θ > 0 }.\nSuppose that the population proportions satisfy a2 > a1; then θ = 1 is the global minimizer and it is desired that SGD should be stable near it.
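The contrast between the two schemes is easy to reproduce numerically; below is a minimal sketch (our own illustrative code, not the authors'; the learning rate, proportions, and initialization are chosen only to make the instability visible — the lemmas that follow give the precise stability conditions):

```python
import numpy as np

def grad_V1(t): return t + 1 if t <= 0 else 0.0
def grad_V2(t): return t - 1 if t > 0 else 0.0

def sgd(a1, f1, eta, steps, theta0, reweight, rng):
    theta = theta0
    for _ in range(steps):
        if reweight:  # draw at the biased proportions, scale gradient by a_i/f_i
            one = rng.random() < f1
            w = a1 / f1 if one else (1 - a1) / (1 - f1)
        else:         # resampling: draw at the population proportions, weight 1
            one = rng.random() < a1
            w = 1.0
        theta -= eta * w * (grad_V1(theta) if one else grad_V2(theta))
    return theta

rng = np.random.default_rng(0)
# a1/a2 = 0.4/0.6, f1/f2 = 0.9/0.1, starting near the global minimizer theta = 1
print(sgd(0.4, 0.9, 0.5, 2000, 0.9, reweight=False, rng=rng))  # stays near +1
print(sgd(0.4, 0.9, 0.5, 2000, 0.9, reweight=True, rng=rng))   # escapes to -1
```

With these values, a group-2 draw under reweighting multiplies the deviation from θ = 1 by 1 − η(a2/f2) = −2, so the iterate is repelled from the global minimizer, exactly as the stability conditions below predict.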
However, as shown in Figure 1, when the sampling proportion f2 is significantly less than the population proportion a2, for reweighting θ = 1 can easily become unstable: even if one starts near the global minimizer θ = 1, the trajectories for reweighting\nalways gear towards θ = −1 after a few steps (see Figure 1(1)). On the other hand, for resampling θ = 1 is quite stable (see Figure 1(2)).\nThe expectations of the stochastic gradient are the same for both methods. It is the difference in the second moment that explains why trajectories near the two minima exhibit different behaviors. Our explanation is based on the stability analysis framework used in (Wu et al., 2018). By definition, a stationary point θ∗ is stochastically stable if there exists a uniform constant 0 < C ≤ 1 such that E[‖θk − θ∗‖2] ≤ C‖θ0 − θ∗‖2, where θk is the k-th iterate of SGD. The stability conditions for resampling and reweighting are stated in the following two lemmas, in which we use η to denote the learning rate. Lemma 1. For resampling, the conditions for the SGD to be stochastically stable around θ = −1 and θ = 1 are respectively\n(1− ηa1)2 + η2a1a2 ≤ 1, (1− ηa2)2 + η2a1a2 ≤ 1. Lemma 2. For reweighting, the condition for the SGD to be stochastically stable around θ = −1 and θ = 1 are respectively\n(1− ηa1)2 + η2f1f2 ( a1 f1 )2 ≤ 1, (1− ηa2)2 + η2f1f2 ( a2 f2 )2 ≤ 1.\nNote that the stability conditions for resampling are independent of the sampling proportions (f1, f2), while the ones for reweighting clearly depend on (f1, f2). We defer the detailed computations to Appendix A.\nLemma 2 shows that reweighting can incur a more stringent stability criterion. Let us consider the case a1 = 12 − , a2 = 1 2 + with a small constant > 0 and f2/f1 1. For reweighting, the global minimum θ = 1 is stochastically stable only if η(1 + f1/f2) ≤ 4 + O( ). This condition becomes rather stringent in terms of the learning rate η since f1/f2 1. On the other hand, the local minimizer θ = −1 is stable if η(1 + f2/f1) ≤ 4 +O( ), which could be satisfied for a broader range of η because f2/f1 1. In other words, for a fixed learning rate η, when the ratio f2/f1 between the sampling proportions is sufficiently small, the desired minimizer θ = 1 is no longer statistically stable with respect to SGD." }, { "heading": "4 SDE ANALYSIS", "text": "The stability analysis can only be carried for a learning rate η of a finite size. However, even for a small learning rate η, one can show that the reweighting method is still unreliable from a different perspective. This section applies stochastic differential equation analysis to demonstrate it.\nLet us again use a simple example to illustrate the main idea. Consider the following two loss functions,\nV1(θ) = { |θ + 1| − 1, θ ≤ 0 θ, θ > 0 , V2(θ) = { − θ, θ ≤ 0 |θ − 1| − 1, θ > 0 ,\nwith 0 < 1. The population loss function is V (θ) = a1V1(θ) + a2V2(θ) with local minimizers θ = −1 and θ = 1. Note that theO( ) terms are necessary. Without it, if the SGD starts in (−∞, 0), all iterates will stay in this region because there is no drift from V2(θ). Similarly, if the SGD starts in (0,∞), no iterates will move to (−∞, 0). That means the result of SGD only depends on the initialization when O( ) term is absent.\nIn Figure 2, we present numerical simulations of the resampling and reweighting methods for the designed loss function V (θ). If a2 > a1, then the global minimizer of V (θ) is θ = 1 (see the Figure 2(1)). 
Consider a setup with population proportions a1/a2 = 0.4/0.6 along sampling proportions f1/f2 = 0.9/0.1, which are quite different. Figures 2(2) and (3) show the dynamics under the reweighting and resampling methods, respectively. The plots show that, while the trajectory for resampling is stable across time, the trajectory for reweighting quickly escapes to the (non-global) local minimizer θ = −1 even when it starts near the global minimizer θ = 1.\nWhen the learning rate is sufficiently small, one can approximate the SGD by an SDE, which in this piece-wise linear loss example is approximately a Langevin dynamics with a piecewise constant mobility. In particular when the dynamics reaches equilibrium, the stationary distribution of the stochastic process is approximated by a Gibbs distribution, which gives the probability densities at the stationary points. Let us denote ps(θ) and pw(θ) as the stationary distribution over θ under resampling and reweighting, respectively. Following lemmas quantitatively summarize the results. Lemma 3. When a2 > a1, V (1) < V (−1). The stationary distribution for resampling satisfies the relationship\nps(1)\nps(−1) = exp ( − 2 a1a2η (V (1)− V (−1)) ) +O ( ) > 1.\nLemma 4. With a2 > a1, V (1) < V (−1) < 0. Under the condition f2f1 ≤ a2 a1 √ V (−1) V (1) for the sampling proportions, the stationary distribution for reweighting satisfies the relationship\npw(1) pw(−1) = a21/f 2 1 a22/f 2 2 exp\n( −2f2/f1\na22η V (1) + 2f1/f2 a21η\nV (−1) ) +O( ) < 1.\nThe proofs of the above two lemmas can be found in Appendix B. Lemma 3 shows that for resampling it is always more likely to find θ at the global minimizer 1 than at the local minimizer −1. Lemma 4 states that for reweighting it is more likely to find θ at the local minimizer −1 when f2 f1 ≤ a2a1 √ V (−1) V (1) . Together, they explain the phenomenon shown in Figure 2.\nTo better understand the condition in Lemma 4, let us consider the case a1 = 12 − , a2 = 1 2 + with a small constant > 0. Under this setup, V (−1)/V (1) ≈ 1. Whenever the ratio of the sampling proportions f2/f1 is significantly less than the ratio of the population proportions a2/a1 ≈ 1, reweighting will lead to the undesired behavior. The smaller the ratio f2/f1 is, the less likely the global minimizer will be visited.\nThe reason for constructing the above piecewise linear loss function is to obtain an approximately explicitly solvable SDE with a constant coefficient for the noise. One can further extend the results in 1D for piecewise strictly convex function with two local minima (See Lemmas 9 and 10 in Appendix B.3). Here we present the most general results in 1D, that is, piecewise strictly convex function with finite number of local minima. One may consider the population loss function V (θ) = ∑k i=1 aiVi(θ) with Vi(θ) = hi(θ) for θi−1 < θ ≤ θi and Vi(θ) = O( ) otherwise, where hi(θ) are strictly convex functions and continuously differentiable, O( ) term is sufficiently small and smooth. Here {θi}k−1i=1 are k − 1 disjoint points, and θ0 = −∞, θk =∞. We assume that V (θ) has k local minimizers θ∗i for θ ∗ i ∈ (θi−1, θi). We present the following two lemmas with suitable assumptions (See Appendix B.3 for details of assumptions, the proof and follow-up discussions). Lemma 5. The stationary distribution for resampling at any two local minizers θ∗p, θ∗q with p > q satisfies the relationship\nps(θ ∗ p) ps(θ∗q ) = exp\n[ 2\nη ∫ θp θ∗p 1 h′p(θ) dθ ( 1 1− ap − 1 1− aq )] +O( ) = { > 1, if ap > aq; < 1, if ap < aq,\nLemma 6. 
The stationary distribution for reweighting at any two local minizers θ∗p, θ∗q with p > q satisfies the relationship\npw(θ ∗ p) pw(θ∗q ) = exp\n[ 2\nη ∫ θp θ∗p 1 h′p(θ) dθ ( fp ap(1− fp) − fq aq(1− fq) )] +O( ).\nMulti-dimensional results. Let us now consider the minimization of V (θ) = a1V1(θ) + a2V2(θ) for more general V1, V2 and also θ in high dimensions. It is in fact not clear how to extend the above stochastic analysis to more general functions V (θ). Instead we focus on the transition time from one stationary point to another in order to understand the behavior of resampling and reweighting. For this purpose, we again resort to the SDE approximation of the SGD in the continuous time limit.\nSuch a SDE approximation, first introduced in (Li et al., 2017), involves a data-dependent covariance coefficient for the diffusion term and is justified in the weak sense with an error of order O( √ η). More specifically, the dynamics can be approximated by dΘ = −∇V (Θ)dt+√ηΣ(Θ)1/2dB, (7) where Θ(t = kη) ≈ θk for the step k parameter θk, η is the learning rate, and Σ(Θ) is the covariance of the stochastic gradient at location Θ. In the SDE theory, the drift term∇V (·) is usually assumed to be Lipschitz. However, in machine learning (for example neural network training with non-smooth activation functions), it is common to encounter non-Lipschitz gradients of loss functions (as in the example presented in Section 3). To fill this gap, we provide in Appendix C a justification of SDE approximation for the drift with jump discontinuities, based on the proof presented in (MüllerGronbach et al., 2020). The following two lemmas summarize the transition times between the two local minimizers. Lemma 7. Assume that there are only two local minimizers θ∗1 , θ∗2 for the objective function V (θ). Let τθ∗1→θ∗2 be the transition time for Θ(t) in (7) from the -neighborhood of θ ∗ 1 (a closed ball of radius centered at θ∗1) to the -neighborhood of θ ∗ 2 and τθ∗2→θ∗1 be the transition time in the opposite direction. Then\nE[τθ∗1→θ∗2 ] E[τθ∗2→θ∗1 ] = √ det(∇2L(θ∗2)) det(∇2L(θ∗1)) exp ( 2 η ( δV (θ∗1) Σ(θ∗1) − δV (θ ∗ 2) Σ(θ∗2) )) +O( √ ).\nHere det(∇2L(θ∗1)) and det(∇2L(θ∗2)) are the determinants of the Hessians at θ∗1 and θ∗2 , respectively. δV (θ∗k) ≡ V (θ◦)− V (θ∗k) for k = 1, 2 where θ◦ is the saddle point between θ∗1 , θ∗2 .1\n1The formal definition of θ◦: Let θ(t) be a path with θ(0) = θ∗1 , θ(1) = θ∗2 , then θ̂(t) = argminθ(t) supt∈[0,1] V (θ(t)) is the path with minimal saddle point height among all continuous paths. θ◦ = supt∈(0,1) θ̂(t) is the saddle point of this path.\nThis lemma is known in the diffusion process literature as the Eyring-Kramers formula; see, e.g., (Berglund, 2011; Bovier et al., 2004; 2005). Using the above lemma, we obtain the following result for the transition times for resampling and reweighting. Lemma 8. Assume that there are only two local minimizers θ∗1 , θ∗2 for the objective function V (θ). Also assume that the loss function V1(·) for the first group is O( ) in the -neighborhood of θ∗2 and the loss function V2(·) for the second group is O( ) in the -neighborhood of θ∗1 . In addition, assume that the determinants of the Hessian at two local minimizers are the same. 
Then the ratio of the transition times between the two local minimizers for resampling is\nE[τsθ∗1→θ∗2 ] E[τsθ∗2→θ∗1 ] = exp\n( 2\nη\n( δV (θ∗1)\na1∇V1(θ∗1)∇V1(θ∗1)> − δV (θ\n∗ 2)\na2∇V2(θ∗2)∇V2(θ∗2)>\n)) +O( √ )\nand the ratio for reweighting is E[τwθ∗1→θ∗2 ] E[τwθ∗2→θ∗1 ] = exp ( 2 η ( f1δV (θ ∗ 1) a21∇V1(θ∗1)∇V1(θ∗1)> − f2δV (θ ∗ 2) a22∇V2(θ∗2)∇V2(θ∗2)> )) +O( √ ).\nSee Appendix B for the proof. When the ratio is larger than 1, it means that θ∗1 is more stable than θ∗2 . This result shows that for reweighting the relative stability of the two minimizers highly depends on the sampling proportions (f1, f2). On the other hand, for resampling it is independent of (f1, f2), which is precisely the desired result for a bias correction procedure. To see how the sampling proportions affect the behavior of reweighting, let us consider a simple case where θ∗1 is the global minimizer, ∇V1(θ∗1)∇V1(θ∗1)> = ∇V2(θ∗2)∇V2(θ∗2)>, a1 = 12 + , a2 = 1 2 − , and f1 f2. This ensures that δV (θ∗1) > δV (θ∗2) and the above ratio for resampling is larger than 1, which is the desired result. However, f1 f2 implies that f1a21 1, f2 a22 1, and the above ratio for reweighting is much smaller than 1, which means that the local minimizer θ∗2 is more stable than the global minimizer θ∗1 ." }, { "heading": "5 EXPERIMENTS", "text": "This section examines the empirical performance of resampling and reweighting for problems from classification, regression, and reinforcement learning. As mentioned in the previous sections, the noise of stochastic gradient algorithms makes optimal learning rate selections much more restrictive for reweighting, when the data sampling is highly biased. In order to achieve good learning efficiency and reasonable performance in a neural network training, adaptive stochastic gradient methods such as Adam (Kingma & Ba, 2014) are applied in the first two experiments. We observe that resampling consistently outperforms reweighting with various sampling ratios when combined with these adaptive learning methods.\nClassification. This experiment uses the Bank Marketing data set from (Moro et al., 2014) to predict if a client will subscribe a term deposit. After preprocessing, the provided data distribution over the variable “y” that indicates the subscription, is highly skewed: the ratio of “yes” and “no” is f1/f2 = 4640/36548 ≈ 1/7.88. We assume that the underlying population distribution is a1/a2 = 1. We setup a 3-layer neural network with the binary cross-entropy loss function and train with the default Adam optimizer. The training and testing data set is obtained using train test split provided in sklearn2. The training takes 5 epochs with the batch-size equal to 100. The performance is compared among the baseline (i.e. trained without using either resampling or reweighting), resampling (oversample the minority group uses the sample with replacement), and reweighting. We run the experiments 10 times for each case, and then compute and plot results by averaging.\nTo estimate the performance, rather than using the classification accuracy that can be misleading for biased data, we use the metric that computes the area under the receiver operating characteristic curve (ROC-AUC) from the prediction scores. The ROC curves plots the true positive rate on the y-axis versus the false positive rate on the x-axis. As a result, a larger area under the curve indicates a better performance of a classifier. From both Table 1 and Figure 3, we see that the oversampling has the best performance compared to others. 
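For reference, a schematic version of the oversampling pipeline is given below. The helper name, hidden layer sizes, and label encoding are our own placeholders rather than the exact architecture used in the experiments; `X, y` are assumed to be the preprocessed Bank Marketing arrays.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score
from sklearn.utils import resample

def train_with_oversampling(X, y, seed=0):
    """X, y: preprocessed Bank Marketing features/labels (not built here)."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=seed)
    X_min, y_min = X_tr[y_tr == 1], y_tr[y_tr == 1]   # minority ("yes")
    X_maj, y_maj = X_tr[y_tr == 0], y_tr[y_tr == 0]   # majority ("no")
    # oversample the minority group with replacement up to parity
    X_up, y_up = resample(X_min, y_min, replace=True,
                          n_samples=len(X_maj), random_state=seed)
    X_bal = np.vstack([X_maj, X_up])
    y_bal = np.concatenate([y_maj, y_up])
    # A reweighting variant would instead keep the biased data and assign
    # each sample the weight a_i/f_i via a sample-weight-aware loss.
    clf = MLPClassifier(hidden_layer_sizes=(64, 32), solver="adam",
                        batch_size=100, max_iter=5)   # 5 epochs, batch 100
    clf.fit(X_bal, y_bal)
    return roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
```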
We choose oversampling rather than undersampling for the resampling method, because if we naively down sample the majority group, we throw away many information that could be useful for the prediction.\n2https://scikit-learn.org/stable\nNonlinear Regression. This experiment uses the California Housing Prices data set3 to predict the median house values. The target median house values, ranging from 15k to 500k, are distributed quite non-uniformly. We select subgroups with median house values > 400k (1726 in total) and < 200k (11767 in total) and combine them to make our dataset. In the preprocessing step, we drop the “ocean proximity” feature and randomly set 30% of the data to be the test data. The remaining training data set with 8 features is fed into a 3-layer neural network. The population proportion of two subgroups is assumed to be a1/a2 ≈ 1, while resampling and reweighting are tested with various sampling ratios f1/f2 near 11767/1726. Their performance of is compared also with the baseline. In each test, the mean squared error (MSE) is chosen as the loss function and Adam\nis used as the optimizer in the model. The batch-size is 32 and the number of epochs is 400 for each case. As shown in Table 2, resampling significantly outperforms reweighting for all sampling ratios in terms of a lower averaged MSE, and its good stability is reflected in its lowest standard deviation for multiple runs.\nOff-policy prediction. In the off-policy prediction problem in reinforcement learning, the objective is to find the value function of policy π using the trajectory {(at, st, st+1)}Tt=1 generated by a behavior policy µ. To achieve this, the standard approach is to update the value function based on the behavior policy’s temporal difference (TD) error δ(st) = R(st) + γV (st+1) − V (st) with an importance weight Eπ[δ|st = s] = ∑ a∈A π(a|s) µ(a|s)E[δ|st = s, at = a]µ(a|s), where the summation is taken over the action space A. The resulting reweighting TD learning for policy π is\nVt+1(st) = Vt(st) + η π(at|st) µ(at|st) (R(st) + γVt(st+1)− Vt(st)),\nwhere η is the learning rate. This update rule is an example of reweighting. On the other hand, the expected TD error can also be written in the resampling form, Eπ[δ|st = s] = ∑ a∈A E[δ|st =\n3https://www.kaggle.com/camnugent/california-housing-prices\ns, at = a]π(a|s) = ∑ a∈A ∑π(a|s)N j=1 E[δj |st = s, at = a], where N is the total number of samples for st = s. This results to a resampling TD learning algorithm: at step t, Vt+1(st) = Vt(st) + η(R(sk) + γVt(sk+1)− Vt(sk)), where (ak, sk, sk+1) is randomly chosen from the data set {(aj , sj , sj+1)}sj=st with probability π(ak|st).\nConsider a simple example with discrete state space S = {i}n−1i=0 , action space A = {±1}, discount factor γ = 0.9 and transition dynamics st+1 = mod(st + at, n), where the operator mod (m,n) gives the remainder of m divided by n. Figure 4 shows the results of the off-policy TD learning by these two approaches, with the choice of n = 32 and r(s) = 1 + sin(2πs/n) and learning rate η = 0.1. The target policy is π(ai|s) = 12 while the behavior policy is µ(ai|s) = 1 2 + cai. The difference between the two policies becomes larger as the constant c ∈ [0, 1/2] increases. From the previous analysis, if one group has much fewer samples as it should have, then the minimizer of the reweighting method is highly affected by the sampling bias. 
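The two TD update rules above can be implemented compactly. The following sketch (our own; it uses the stated n = 32, γ = 0.9, η = 0.1 with an illustrative value of c and trajectory length) contrasts reweighting and resampling TD on the ring dynamics:

```python
import numpy as np
from collections import defaultdict

n, gamma, eta, c, T = 32, 0.9, 0.1, 0.3, 50_000
r = lambda s: 1 + np.sin(2 * np.pi * s / n)
rng = np.random.default_rng(2)
mu = {+1: 0.5 + c, -1: 0.5 - c}   # behavior policy
pi = {+1: 0.5, -1: 0.5}           # target policy

# behavior trajectory on the ring s' = (s + a) mod n
traj, s = [], 0
for _ in range(T):
    a = +1 if rng.random() < mu[+1] else -1
    traj.append((s, a, (s + a) % n))
    s = (s + a) % n

# reweighting TD: scale each TD error by pi(a|s)/mu(a|s)
Vw = np.zeros(n)
for s, a, s2 in traj:
    Vw[s] += eta * (pi[a] / mu[a]) * (r(s) + gamma * Vw[s2] - Vw[s])

# resampling TD: at each visit to s, redraw a stored transition with
# action sampled from pi, then apply the unweighted TD update
by_sa = defaultdict(list)
for s, a, s2 in traj:
    by_sa[(s, a)].append(s2)
Vs = np.zeros(n)
for s, _, _ in traj:
    a_pi = +1 if rng.random() < pi[+1] else -1
    if by_sa[(s, a_pi)]:
        s2 = by_sa[(s, a_pi)][rng.integers(len(by_sa[(s, a_pi)]))]
        Vs[s] += eta * (r(s) + gamma * Vs[s2] - Vs[s])
```

Running both updates with increasing c reproduces the sensitivity of reweighting to the mismatch between the two policies.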
This is verified in the plots: as c becomes larger, the performance of reweighting deteriorates, while resampling is rather stable and almost experiences no difference with the on-policy prediction in this example." }, { "heading": "6 DISCUSSIONS", "text": "This paper examines the different behaviors of reweighting and resampling for training on biasedly sampled data with the stochastic gradient descent. From both the dynamical stability and stochastic asymptotics viewpoints, we explain why resampling is numerically more stable and robust than reweighting. Based on this theoretical understanding, we advocate for considering data, model, and optimization as an integrated system, while addressing the bias.\nAn immediate direction for future work is to apply the analysis to more sophisticated stochastic training algorithms and understand their impact on resampling and reweighting. Another direction is to extend our analysis to unsupervised learning problems. For example, in the principal component analysis one computes the dominant eigenvectors of the covariance matrix of a data set. When the data set consists of multiple subgroups sampled with biases and a stochastic algorithm is applied to compute the eigenvectors, then an interesting question is how resampling or reweighting would affect the result." }, { "heading": "ACKNOWLEDGEMENTS", "text": "The work of L.Y. and Y.Z. is partially supported by the U.S. Department of Energy via Scientific Discovery through Advanced Computing (SciDAC) program and also by the National Science Foundation under award DMS-1818449. J.A. is supported by Joe Oliger Fellowship from Stanford University." }, { "heading": "A PROOFS IN SECTION 3", "text": "" }, { "heading": "A.1 PROOF OF LEMMA 1", "text": "Proof. In resampling, near θ = −1 the gradient is θ + 1 with probability a1 and 0 with probability a2. Let us denote the random gradient at each step by W (θ + 1), where W is a Bernoulli random\nvariable with mean E(W ) = a1 and variance V(W ) = a1a2. At the learning rate η, the iteration can be written as (θk+1 + 1) = (1− ηW )(θk + 1). The first and second moments of the iterates are\nE[θk + 1] = (1− ηa1)k(θ0 + 1), E[(θk + 1)2] = ((1− ηa1)2 + η2a1a2)k(θ0 + 1)2.\n(8)\nAccording to the definition of the stochastic stability, SGD is stable around θ = −1 if the multiplicative factor of the second equation is bounded by 1, i.e.\n(1− ηa1)2 + η2a1a2 ≤ 1. (9)\nConsider now the stability around θ = 1, the iteration can be written as\n(θk+1 − 1) = (1− ηW )(θk − 1),\nwhere W is again a Bernoulli random variable with E(W ) = a2 and V(W ) = a1a2. The same computation shows that the second moment follows\nE[(θk − 1)2] = ((1− ηa2)2 + η2a1a2)k(θ0 − 1)2.\nTherefore, the condition for the SGD to be stable around θ = 1 is\n(1− ηa2)2 + η2a1a2 ≤ 1. (10)" }, { "heading": "A.2 PROOF OF LEMMA 2", "text": "Proof. In reweighting, near θ = −1 the gradient is a1f1 (θ + 1) with probability f1 and 0 with probability f2. Let us denote the random gradient at each step by W (θ + 1), where W is a Bernoulli\nrandom variable with E(W ) = a1 and V(W ) = f1f2 ( a1 f1 )2 . At the learning rate η, the iteration can be written as (θk+1 + 1)← (1− ηW )(θk + 1).\nHence the second moments of the iterates are given by\nE[(θk + 1)2] = ((1− ηa1)2 + η2f1f2(a1/f1)2)k(θ0 + 1)2.\nTherefore, the condition for the SGD to be stable around θ = −1 is (1− ηa1)2 + η2f1f2 ( a1 f1 )2 ≤ 1.\nConsider now the stability around θ = 1, the gradient is 0 with probability f1 and a2f2 (θ − 1) with probability f2. 
An analysis similar to the case θ = −1 shows that the condition for the SGD to be stable around θ = 1 is\n(1− ηa2)2 + η2f1f2 ( a2 f2 )2 ≤ 1." }, { "heading": "B PROOFS IN SECTION 4", "text": "" }, { "heading": "B.1 PROOF OF LEMMA 3", "text": "Proof. In resampling, with probability a1 the gradients over the four intervals (−∞,−1), (−1, 0), (0, 1), and (1,∞) are −1, 1, , and . With probability a2, they are − , − , −1, and 1 across these four intervals. The variances of the gradients are a1a2(1 − )2, a1a2(1 + )2, a1a2(1 + )2, a1a2(1− )2, respectively, across the same intervals.\nSince 1, the variance can be written as a1a2+O( ) across all intervals. Then the SGD dynamics with learning rate η can be approximated by\nθk+1 ← θk − η ( V ′(θk) + √ a1a2 +O( )W ) ,\nwhere W ∼ N (0, 1) is a normal random variable. When η is small, one can approximate the dynamics by a stochastic differential equation of form\ndΘ = −V ′(Θ)dt+√η √ a1a2 +O( )dB\nby identifying θk ≈ Θ(t = kη) (see Appendix C for details). The stationary distribution of this stochastic process is\nps(θ) = 1\nZ exp\n( − 2\n(a1a2 +O( ))η V (θ)\n) ,\nwhere Z is a normalization constant. Plugging in θ = −1, 1 results in\nps(1)\nps(−1) = exp\n( − 2\n(a1a2 +O( ))η (V (1)− V (−1))\n) = exp ( − 2 a1a2η (V (1)− V (−1)) +O( ) )\n= exp ( − 2 a1a2η (V (1)− V (−1)) ) +O( ).\nUnder the assumption that 1, the last term is negligible. When a2 > a1, V (θ) is minimized at θ = 1, which implies −(V (1)− V (−1)) > 0. Hence, this ratio is larger than 1." }, { "heading": "B.2 PROOF OF LEMMA 4", "text": "Proof. In reweighting, with probability f1 the gradients are −a1f1 , a1 f1 , a1f1 , and a1 f1 over the four intervals (−∞,−1), (−1, 0), (0, 1), and (1,∞), respectively. With probability f2, they are −a2f2 , −a2f2 , − a2 f2 , and a2f2 . The variances of the gradients are f1f2( a1 f1 − a2f2 ) 2, f1f2(a1f1 + a2 f2 )2, f1f2( a1 f1 + a2f2 ) 2, and f1f2(a1f1 − a2 f2 )2, respectively, across the same intervals.\nSince 1, the variance can be written as f1f2 a 2 1\nf21 +O( ) for θ < 0 and f1f2 a22 f22 +O( ) for θ > 0.\nWith θk ≈ Θ(kη), the approximate SDE for θ < 0 is given by\ndΘ = −V ′(Θ)dt+√η √ f1f2\na21 f21 +O( )dB\nwhile the one for θ > 0 is\ndΘ = −V ′(Θ)dt+√η √ f1f2\na22 f22 +O( )dB\n(see Appendix C for the SDE derivations). The stationary distributions for θ < 0 and θ > 0 are, respectively,\n1\nZ1 exp − 2( f1f2\na21 f21\n+O( ) ) η V (θ) , 1 Z2 exp − 2( f1f2\na22 f22\n+O( ) ) η V (θ) . Plugging in θ = −1, 1 results in\npw(1) pw(−1) = Z1 Z2 exp − 2( f1f2\na22 f22\n+O( ) ) η V (1) + 2( f1f2\na21 f21\n+O( ) ) η V (−1) = Z1 Z2 exp ( −2f2/f1 a22η V (1) + 2f1/f2 a21η V (−1) +O( ) ) .\n(11)\nThe next step is to figure out the relationship between Z1 and Z2. Consider an SDE with non-smooth diffusion dΘ = −V ′(Θ)dt+ σdB. The Kolmogorov equation for the stationary distribution is\n0 = pt =\n( V ′(θ)p+ ( σ2\n2 p ) θ ) θ . (12)\nThis suggests that σ2p is continuous at the discontinuity θ = 0. 
In our setting, since V (0) = 0, this simplifies to (\nf1f2 a21 f21 +O( )\n) η · 1\nZ1 =\n( f1f2\na22 f22 +O( )\n) η · 1\nZ2 .\nThis simplifies to\nZ1 Z2\n= f1f2\na21 f21 +O( )\nf1f2 a22 f22\n+O( ) = a21/f 2 1 a22/f 2 2 +O( ).\nInserting this into (11) results in\npw(1)\npw(−1) =\n( a21/f 2 1\na22/f 2 2\n+O( ) ) exp ( −2f2/f1\na22η V (1) + 2f1/f2 a21η\nV (−1) +O( ) )\n= a21/f 2 1\na22/f 2 2\nexp ( −2f2/f1\na22η V (1) + 2f1/f2 a21η\nV (−1) ) +O( ).\nBy the assumption f2f1 ≤ a2 a1 √ V (−1) V (1) and V (1) < V (−1) < 0, one has ( a1 a2 )2 ( f2 f1 )2 ≤ V (−1)V (1) < 1 and − f2/f1 a22 V (1) ≤ − f1/f2 a21 V (−1). Hence the above ratio is less than 1.\nB.3 EXTENDED RESULTS FOR 1-DIMENSION\nLet us consider the population loss function V (θ) = a1V1(θ) + a2V2(θ) with,\nV1(θ) = { h1(θ), θ ≤ 0 θ, θ > 0 , V2(θ) = { − θ, θ ≤ 0 h2(θ), θ > 0 ,\nwhere h1, h2 are strictly convex functions and continuously differentiable. We assume V (θ) has two local minimizers θ1 < 0, θ2 > 0 and the values are negative at local minima. Therefore, when a2 > a1, θ2 should be the global minimizer. In addition, we assume that the geometries of h1, h2 at two local minimizers are similar, i.e., h1(θ1) = h2(θ2), h′1(θ1) = h ′ 2(θ2); if we set gi(θ) to be the anti-derivative of 1/h′i(θ), then g1(θ1) = g2(θ2). Moreover, we assume that the two disjoint convex functions are smooth at the disjoint point, i.e., h′1(0) = h ′ 2(0) and g1(0) = g2(0). The following two lemmas extend Lemmas 3 and 4 to piecewise strictly convex function based on the above assumptions.\nLemma 9. When a2 > a1, V (θ2) < V (θ1). The stationary distribution for resampling satisfies the relationship\nps(θ2) ps(θ1) = exp\n( 2\nη\n( 1\na1 − 1 a2 )∫ 0 θ1 1 h′1(θ) dθ ) +O( ) > 1.\nProof. In resampling, with probability a1 the gradients in the two intervals (−∞, 0), (0,∞) are h′1(θ), respectively; with probability a2 the gradients are − , h′2(θ) respectively. Therefore, the expectation of the gradients µ(θ) is a1h′1(θ) +O( ) in (−∞, 0) and a2h′2(θ) +O( ) in (0,∞). The variance of the gradients σ(θ) is a1a2h′1(θ)\n2 +O( ) in (−∞, 0) and a1a2h′2(θ)2 +O( ) in (0,∞). The p.d.f ps(t, θ) satisfies\n∂tps = ∂θ ( µps + η\n2 ∂θ(σps)\n) ,\ntherefore, the stationary distribution ps(θ) satisfies( µ+ η\n2 ∂θσ\n) ps + ησ\n2 ∂θps = 0, or equivalently,\n( 2µ\nησ + ∂θσ σ\n) ps + ∂θps = 0,\nwhich implies ps(θ) = 1Z e −F (θ) with normalization constant Z = ∫∞ −∞ e −F (θ), where\nF (θ) = ∫ θ −∞ 2µ(ξ) ησ(ξ) + ∂ξσ(ξ) σ(ξ) dξ = { F1(θ)− F1(−∞), θ ≤ 0, F2(θ)− F2(0) + F1(0)− F1(−∞), θ > 0.\n(13)\nBy inserting µ, σ in different intervals, one has F1(θ) = 2 ηa2 ∫ 1 h′1 dθ + log(a1a2(h ′ 1) 2) +O( ); F2(θ) = 2\nηa1\n∫ 1\nh′2 dθ + log(a1a2(h\n′ 2) 2) +O( ).\nHence, the ratio of the stationary probabiliy at two local minimizers θ1 < 0, θ2 > 0 is\nps(θ1) ps(θ2) = exp(−F (θ1) + F (θ2)) = exp(−F1(θ1) + F2(θ2) + (F1(0)− F2(0)))\n= exp ( − 2 ηa2 g1(θ1) + 2 ηa1 g2(θ2) + log ( h′2(θ2) 2 h′1(θ1) 2 )) ·\nexp\n( 2\nηa2 g1(0)−\n2\nηa1 g2(0) + log\n( h′1(0) 2\nh′2(0) 2\n)) +O( ),\nwhere gi(θ) = ∫\n1 h′i dθ, i = 1, 2. By the assumption that g1(θ1) = g2(θ2) and h′1(θ1) = h ′ 2(θ2),\ng1(0) = g2(0) and h′1(0) = h ′ 2(0) one has,\nps(θ1) ps(θ2) = exp\n( 2\nη (g1(0)− g1(θ1))\n( 1\na2 − 1 a1\n)) +O( ),\nSince a2 > a1 > 0, 1a2 − 1 a1 < 0. Because of the strictly convexity of h1, h′1(θ) > 0 in (θ1, 0), therefore, one has g1(0)− g1(θ1) = ∫ 0 θ1 1 h′1(θ) dθ > 0. Therefore\nps(θ1) ps(θ2) = exp\n( 2\nη (g1(0)− g1(θ1))\n( 1\na2 − 1 a1\n)) +O( ) < 1,\nLemma 10. When a2 > a1, V (θ2) < V (θ1). 
Under the condition f1f2 > √ a1 a2 , the stationary distribution for resampling satisfies the relationship\npw(θ2) pw(θ1) = exp\n( 2\nη ( f2 f1a2 − f1 f2a1 )∫ 0 θ1 1 h′1(θ) dθ ) +O( ) < 1.\nOne sufficient condition such that f1f2 > √ a1 a2 is when f1, f2 is significantly different from a1, a2 in the sense that f1 > f2 when the actually population proportion a1 < a2.\nProof. In reweighting, with probability f1 the gradients over the two intervals (−∞, 0), (0,∞) are a1f1 h ′ 1(θ), a1 f1 respectively; with probability f2 the gradients are −a2f2 , a2 f2 h′2(θ) respectively. Therefore, the expectation of the gradients µ(θ) is a1h′1(θ)+O( ) in (−∞, 0) and a2h′2(θ)+O( ) in (0,∞). The variance of the gradients σ(θ) is f2f1 a 2 1h ′ 1(θ) 2+O( ) in (−∞, 0) and f1f2 a 2 2h ′ 2(θ) 2+O( ) in (0,∞). From the similar analysis as in Lemma 9, the stationary distribution is pw(θ) = 1Z e −F (θ) with the same F (θ) defined in equation 13, but F1, F2 are defined as follows F1(θ) = 2f1 ηf2a1 ∫ 1 h′1 dθ + log ( f2a 2 1 f1 (h′1) 2 ) +O( );\nF2(θ) = 2f2 ηf1a2\n∫ 1\nh′2 dθ + log\n( f1a 2 2\nf2 (h′2) 2\n) +O( ).\nHence, the ratio of the stationary probabiliy at two local minimizers θ1 < 0, θ2 > 0 is pw(θ1)\npw(θ2) = exp(−F1(θ1) + F2(θ2) + (F1(0)− F2(0)))\n= exp ( − 2f1 ηf2a1 g1(θ1) + 2f2 ηf1a2 g2(θ2) + log ( f21a 2 2 f22a 2 1 h′2(θ2) 2 h′1(θ1) 2 )) ·\nexp ( 2f1 ηf2a1 g1(0)− 2f2 ηf1a2 g2(0) + log ( f22a 2 1 f21a 2 2 h′1(0) 2 h′2(0) 2 )) +O( ),\nwhere gi(θ) = ∫\n1 f ′i dθ, i = 1, 2. By the assumption that g1(θ1) = g2(θ2) and h′1(θ1) = h ′ 2(θ2),\ng1(0) = g2(0) and h′1(0) = h ′ 2(0) one has,\npw(θ1) pw(θ2) = exp\n( 2\nη (g1(0)− g1(θ1)) ( f1 f2a1 − f2 f1a2 )) +O( ).\nBecause of the strictly convexity of h1, one has g1(0)− g1(θ1) > 0. By the assumption f1f2 > √ a1 a2 ,\nthen (\nf1 f2a1 − f2f1a2 ) > 0, which gives ps(θ1)ps(θ2) > 1.\nProof of Lemmas 5 and 6 We can further extend the results in 1D for a finite number of local minima as presented in Lemmas 5 and 6. In the same way as in the two local minima case, we assume that hi(θ) has a similar geometry at the minimizers and hi(θ), hi+1(θ) are smooth enough at the disjoint point θi. In order to obtain the ratio of the stationary distribution at two arbitrary local minimizes, we take an additional assumption that gi(θi−1) = gi(θi) for all i, where gi(θ) is the antiderivative of 1/h′i(θ). Intuitively, this assumption requires that each local minimum has an equal barrier on both sides. To be more specific, the assumptions we mentioned above are the following: at all the local minimizers, hi(θ∗i ) = hj(θ ∗ j ) < 0, h ′ i(θ ∗ i ) = h ′ j(θ ∗ j ), let gi(θ) = ∫ 1\nh′i(θ) dθ, then\ngi(θ ∗ i ) = gj(θ ∗ j ) for any i 6= j; at all the disjoint points, h′i(θi) = hi+1(θi), gi(θi−1) = gi(θi) = gi+1(θi) for all i. Lemmas 5 and 6 are under the above assumptions.\nProof of Lemma 5. For resampling, with probability ai, the gradient is h′i(θ) for θ ∈ (θi−1, θi), and O( ) for θ /∈ (θi−1, θi). Therefore, the expectation and variance in (θi−1, θi) are µ = aih′i(θ)+O( ) and σ = ai(1− ai)h′i(θ)2 +O( ). 
The stationary solution is\nps(θ) = 1\nZ e−F (θ), with F (θ) = Fi(θ)− Fi(θi−1) + i−1∑ j=1 Fj(θj)− Fj(θj−1), for θ ∈ (θi−1, θi),\nwhere Z = ∫∞ −∞ e −F (θ) is a normalization constant and\nFi(θ) = 2\nη\n∫ 1\nh′i(θ) dθ + log\n( ai(1− ai)h′i(θ)2 ) +O( ).\nTherefore, the ratio of the stationary probability at any two local minimizers θ∗p, θ ∗ q is\nps(θ ∗ p) ps(θ∗q ) = exp\n− Fp(θ∗p)− Fp(θp−1) + p−1∑\nj=1\nFj(θj)− Fj(θj−1) +\nFq(θ∗q )− Fq(θq−1) + q−1∑ j=1 Fj(θj)− Fj(θj−1) = exp\n−Fp(θ∗p) + Fq(θ∗q ) + q−1∑ j=p Fj(θj)− Fj+1(θj) = exp ( − 2 η(1− ap) gp(θ ∗ p) + 2 η(1− aq) gq(θ ∗ q ) + log ( aq(1− aq)h′q(θ∗q )2 aq(1− ap)h′p(θ∗p)2 )) ·\nexp q−1∑ j=p\n2\nη(1− aj) gj(θj)−\n2\nη(1− aj+1) gj+1(θ\n∗ j ) + log\n( aj(1− aj)h′j(θj)2\naj+1(1− aj+1)h′j+1(θj)2\n)+O( ).\nBy the assumption that gp(θ∗p) = gq(θ ∗ q ), h ′ p(θ ∗ p) = h ′ q(θ ∗ q ) and gi(θi−1) = gi(θi) = gi+1(θi), h ′ i(θi) = h ′ i+1(θi) for all i, then the above ratio can be simplified to\nps(θ ∗ p) ps(θ∗q ) = exp\n[ 2\nη\n( gp(θp)− gp(θ∗p) )( 1 1− ap − 1 1− aq )] +O( ) = { > 1, if ap > aq; < 1, if ap < aq,\nwhere the last inequality can be easily derived from that gp(θp)−gp(θ∗p) = ∫ θp θ∗p 1 h′p(θ)\ndθ > 0 because of the strictly convexity of hp.\nProof of Lemma 6. For reweighting, with probability fi, the gradient is aifi h ′ i(θ) for θ ∈ (θi−1, θi), and O( ) for θ /∈ (θi−1, θi). Therefore, the expectation and variance in (θi−1, θi) are µ = aih′i(θ)+ O( ) and σ = (1−fi)a 2 i\nfi h′i(θ) 2 +O( ). The stationary solution\npw(θ) = 1\nZ e−F (θ), with F (θ) = Fi(θ)−Fi(θi−1) + i−1∑ j=1 Fj(θj)−Fj(θj−1), for θ ∈ (θi−1, θi),\nwhere Z = ∫∞ −∞ e −F (θ) is a normalization constant and\nFi(θ) = 2fi\nηai(1− fi)\n∫ 1\nh′i(θ) dθ + log\n( (1− fi)a2i\nfi h′i(θ) 2\n) +O( )\nTherefore, the ratio of the stationary probability at any two local minimizers θ∗p, θ ∗ q is\npw(θ ∗ p) pw(θ∗q ) = exp −Fp(θ∗p) + Fq(θ∗q ) + q−1∑ j=p Fj(θj)− Fj+1(θj) = exp ( − 2fp ηap(1− fp) gp(θ ∗ p) + 2fq ηaq(1− fq) gq(θ ∗ q ) + log ( fp(1− fq)a2qh′q(θ∗q )2 fq(1− fp)a2ph′p(θ∗p)2 )) ·\nexp q−1∑ j=p\n2\nη(1− aj) gj(θj)−\n2\nη(1− aj+1) gj+1(θ\n∗ j ) + log\n( fj(1− fj)a2jh′j(θj)2\nfj+1(1− fj+1)a2j+1h′j+1(θj)2 )+O( ) By the assumption that gp(θ∗p) = gq(θ ∗ q ), h ′ p(θ ∗ p) = h ′ q(θ ∗ q ) and gi(θi−1) = gi(θi) = gi+1(θi), h ′ i(θi) = h ′ i+1(θi) for all i, then the above ratio can be simplified to\npw(θ ∗ p) pw(θ∗q ) = exp\n[ 2\nη\n( gp(θp)− gp(θ∗p) )( fp ap(1− fp) − fq aq(1− fq) )] +O( )." }, { "heading": "Follow-up discussions of Lemma 5 and 6", "text": "We first note that ∫ θp θ∗p 1 h′p(θ)\ndθ > 0 due to the strictly convexity of hp. Therefore, one can see from Lemma 5 that for resampling, the stationary solution always has the highest probability at the global minimizer. On the other hand, for the stationary solution of reweighting in Lemma 6, let us consider the case when ap > aq . In this case, V (θ∗p) < V (θ ∗ q ), therefore, one expects the above ratio larger than 1, which implies that fpap(1−fp) − fq\naq(1−fq) > 0. Note that if fp = ap, fq = aq , then this term is always larger than 0, but when fp, fq are significantly different from ap, aq in the sense that fp < fq and fp < ap, fq > aq , then fp\nap(1−fp) − fq aq(1−fq) < 0, which will lead to ps(θ\n∗ p)\nps(θ∗q ) < 1, i.e.,\nhigher probability of converging to θ∗q , which is not desirable. To sum up, Lemma 6 shows that for reweighting, the stationary solution won’t have the highest probability at the global minimizer if the empirical proportion is significantly different fron the population proportion." 
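These stationary-distribution predictions can be checked numerically by running long SGD trajectories on the piecewise-linear loss of Section 4 and recording the fraction of time spent near the global minimizer. The sketch below (our own; the parameter values are illustrative, with η chosen large enough that transitions are observable in a short run) does this for both methods:

```python
import numpy as np

eps, a1, a2 = 1e-2, 0.4, 0.6      # population proportions as in Figure 2
f1, f2 = 0.9, 0.1                 # biased sampling proportions
eta, T = 0.2, 200_000
rng = np.random.default_rng(3)

def grad_V1(th):   # V1 = |th+1| - 1 on th <= 0, eps*th on th > 0
    return np.sign(th + 1.0) if th <= 0 else eps

def grad_V2(th):   # V2 = -eps*th on th <= 0, |th-1| - 1 on th > 0
    return -eps if th <= 0 else np.sign(th - 1.0)

def time_near_global(reweight):
    th, hits = 0.9, 0             # start near the global minimizer theta = 1
    for _ in range(T):
        if reweight:
            g = (a1 / f1) * grad_V1(th) if rng.random() < f1 \
                else (a2 / f2) * grad_V2(th)
        else:
            g = grad_V1(th) if rng.random() < a1 else grad_V2(th)
        th -= eta * g
        hits += abs(th - 1.0) < 0.3
    return hits / T

print("resampling : fraction of time near theta=1 ->", time_near_global(False))
print("reweighting: fraction of time near theta=1 ->", time_near_global(True))
```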
}, { "heading": "B.4 PROOF OF LEMMA 8", "text": "Proof. By the variance of the gradients for resampling and reweighting in (5), and given that at the stationary point E[∇V (θ∗1)] = E[∇V (θ∗2)] = 0, one can omit the last term in the variance. In addition, since ∇V1(θ∗2),∇V2(θ∗1) = O( ) ∇V1(θ∗1),∇V2(θ∗2) by assumption, all the higher order terms are included in an O( √ ) term. One can then derive Lemma 8 from Lemma 7." }, { "heading": "C A JUSTIFICATION OF THE SDE APPROXIMATION", "text": "The stochastic differential equation approximation of SGD involving data-dependent covariance coefficient Gaussian noise was first introduced in (Li et al., 2017) and justified in the weak sense. Consider the SDE\ndΘ = b(Θ)dt+ σ(Θ)dB. (14)\nThe Euler-Maruyama discretization with time step η results in\nΘk+1 = Θk + ηb(Θk) + √ ησ(Θk)Zk, Zk ∼ N (0, 1), Θ0 = θ0. (15)\nIn our case, b(·) = −V ′(·). When b satisfies Lipschitz continuity and some technical smoothness conditions, according to (Li et al., 2017) for any function g from a smooth class M, there exists C > 0 and α > 0 such that for all k = 0, 1, 2, · · · , N ,\n|E[g(Θkη)]− E[g(θk)]| ≤ Cηα. However, as the loss function considered in this paper has jump discontinuous in the first derivative, the classical approximation error results for SDE do not apply. In fact, the problem V /∈ C1(Rn) is a common issue in machine learning and deep neural networks, as many loss functions involves non-smooth activation functions such as ReLU and leaky ReLU. In our case, we need to justify the SDE approximation adopted in Section 3. It turns out that strong approximation error can be obtained if\n• the noise coefficient σ is Lipschitz continuous and non-degenerate, and • the drift coefficient b is piece-wise Lipschitz continuous, in the sense that b has finitely\nmany discontinuity points −∞ = ξ0 < ξ1 < · · · < ξm < ξm+1 =∞ and in each interval (ξi−1, ξi), b is Lipschitz continuous.\nUnder these conditions, the following approximation result holds: for all k = 0, 1, 2, · · · , N , there exists C > 0 such that\nE[|Θkη − θk|] ≤ C √ η. (16)\nHere Θkη is the solution to SDE at time kη. The proof strategy closely follows from (MüllerGronbach et al., 2020). The key is to construct a bijective mapping G : R→ R that transforms (14) to SDE with Lipschitz continuous coefficients. With such a bijection G, one can define a stochastic process Z : [0, T ]× Ω→ R by Zt = G(Θt) and the transformed SDE is\ndZt = b̃(Zt)dt+ σ̃dBt, t ∈ [0, T ], Z0 = G(Θ0), (17)\nwith b̃ = (G′ · b+ 1 2 G′′ · σ2) ◦G−1 and σ̃ = (G′ · σ) ◦G−1. (18)\nAs the SGD updates can essentially be viewed as data from the Euler-Maruyama scheme, considering Zk as updates from Euler-Maruyama scheme leads to\nE[|Θkη − θk|] ≤ c1E[|Zkη −G ◦ θk|] = c1E[|Zkη − Zk + Zk −G ◦ θk|] ≤ c2 √ η + c1E[|Zk −G ◦ θk|].\nTo control the second item, we introduce θt := θk + b(θk)(t− kη) + √ t− kησ(θk)Zk,\nwhere t ∈ [0, kη]. Then as shown in (Müller-Gronbach et al., 2020),\nE[|Zk −G ◦ θk|] ≤ c √ η + cE [∣∣∣∣∣ ∫ kη\n0\n1B(θt, θk)dt ∣∣∣∣∣ ] ,\nwith B being the set of pairs (y1, y2) ∈ R2 where the joint Lipschitz estimate |b(y1) − b(y2)| does not apply due to at least one discontinuity. In (Müller-Gronbach et al., 2020), it is estimated by\nE [∣∣∣∣∣ ∫ kη\n0\n1B(θt, θk)dt ∣∣∣∣∣ ] ≤ c√η,\nwhich leads us to (16)." }, { "heading": "D NUMERICAL COMPARISONS WITH DIFFERENT LEARNING RATES", "text": "In this section, we present extensive numerical results to show the effect of learning rates in our toy examples. 
Figure 5 corresponds to the example in Section 3, and Figure 6 to the example in Section 4." } ]
2021
WHY RESAMPLING OUTPERFORMS REWEIGHTING FOR CORRECTING SAMPLING BIAS WITH STOCHASTIC GRADIENTS
SP:a92ce63df0b4384bf0304661c8a8c80553377d57
[ ".** Authors present a novel theoretical framework for assessing the effect of data augmentation (e.g. mini batch SGD), noise addition and the learning rate setup in gradient-based optimization with overparametrized models. Despite the analysis is only performed for linear regression, results extend the well-known Monro-Robbins theorem on rates of convergence. The manuscript is a first step for future analysis of the aforementioned techniques with other type of models and/or loss functions.", "The paper considers stochastic gradient descent with noisy gradients. In contrast to the standard setting (e.g., gradient Langevin dynamics) where additive Gaussian noise is added to the model gradient, this work focuses on additive perturbations of data instances. As a result of this, the optimization objective changes throughout the training process because the data is no longer static/fixed but assumed to be sampled from some distribution governing the perturbation process (see Eq. 3.3)." ]
We present a theoretical framework recasting data augmentation as stochastic optimization for a sequence of time-varying proxy losses. This provides a unified language for understanding techniques commonly thought of as data augmentation, including synthetic noise and label-preserving transformations, as well as more traditional ideas in stochastic optimization such as learning rate and batch size scheduling. We then specialize our framework to study arbitrary augmentations in the context of a simple model (overparameterized linear regression). We extend in this setting the classical Monro-Robbins theorem to include augmentation and obtain rates of convergence, giving conditions on the learning rate and augmentation schedule under which augmented gradient descent converges. Special cases give provably good schedules for augmentation with additive noise, minibatch SGD, and minibatch SGD with noise.
[]
[ { "authors": [ "Francis Bach", "Eric Moulines" ], "title": "Non-strongly-convex smooth stochastic approximation with convergence rate o (1/n)", "venue": "In Advances in neural information processing systems,", "year": 2013 }, { "authors": [ "Peter L Bartlett", "Philip M Long", "Gábor Lugosi", "Alexander Tsigler" ], "title": "Benign overfitting in linear regression", "venue": "Proceedings of the National Academy of Sciences,", "year": 2020 }, { "authors": [ "Chris M Bishop" ], "title": "Training with noise is equivalent to tikhonov regularization", "venue": "Neural computation,", "year": 1995 }, { "authors": [ "Léon Bottou", "Frank E Curtis", "Jorge Nocedal" ], "title": "Optimization methods for large-scale machine learning", "venue": "Siam Review,", "year": 2018 }, { "authors": [ "Olivier Chapelle", "Jason Weston", "Léon Bottou", "Vladimir Vapnik" ], "title": "Vicinal risk minimization", "venue": "Advances in Neural Information Processing Systems", "year": 2001 }, { "authors": [ "Shuxiao Chen", "Edgar Dobriban", "Jane H Lee" ], "title": "Invariance reduces variance: Understanding data augmentation in deep learning and beyond", "venue": null, "year": 2019 }, { "authors": [ "Ekin D Cubuk", "Barret Zoph", "Dandelion Mane", "Vijay Vasudevan", "Quoc V Le" ], "title": "Autoaugment: Learning augmentation strategies from data", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2019 }, { "authors": [ "Ekin D Cubuk", "Barret Zoph", "Jonathon Shlens", "Quoc V Le" ], "title": "Randaugment: Practical automated data augmentation with a reduced search space", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops,", "year": 2020 }, { "authors": [ "Tri Dao", "Albert Gu", "Alexander Ratner", "Virginia Smith", "Chris De Sa", "Christopher Re" ], "title": "A kernel theory of modern data augmentation", "venue": "Proceedings of Machine Learning Research,", "year": 2019 }, { "authors": [ "Alexandre Défossez", "Francis Bach" ], "title": "Averaged least-mean-squares: Bias-variance trade-offs and optimal sampling distributions", "venue": "In Artificial Intelligence and Statistics, pp", "year": 2015 }, { "authors": [ "Terrance DeVries", "Graham W Taylor" ], "title": "Improved regularization of convolutional neural networks with cutout", "venue": "arXiv preprint arXiv:1708.04552,", "year": 2017 }, { "authors": [ "Priya Goyal", "Piotr Dollár", "Ross Girshick", "Pieter Noordhuis", "Lukasz Wesolowski", "Aapo Kyrola", "Andrew Tulloch", "Yangqing Jia", "Kaiming He" ], "title": "Accurate, large minibatch sgd: Training imagenet in 1 hour", "venue": "arXiv preprint arXiv:1706.02677,", "year": 2017 }, { "authors": [ "Yves Grandvalet", "Stéphane Canu" ], "title": "Noise injection for inputs relevance determination", "venue": "In Advances in intelligent systems,", "year": 1997 }, { "authors": [ "Dan Hendrycks", "Norman Mu", "Ekin D. Cubuk", "Barret Zoph", "Justin Gilmer", "Balaji Lakshminarayanan" ], "title": "AugMix: A simple data processing method to improve robustness and uncertainty", "venue": "Proceedings of the International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Daniel Ho", "Eric Liang", "Xi Chen", "Ion Stoica", "Pieter Abbeel" ], "title": "Population based augmentation: Efficient learning of augmentation policy schedules", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "De Huang", "Jonathan Niles-Weed", "Joel A. 
Tropp", "Rachel Ward" ], "title": "Matrix concentration for products, 2020", "venue": null, "year": 2020 }, { "authors": [ "Daniel LeJeune", "Randall Balestriero", "Hamid Javadi", "Richard G Baraniuk" ], "title": "Implicit rugosity regularization via data augmentation", "venue": "arXiv preprint arXiv:1905.11639,", "year": 2019 }, { "authors": [ "Aitor Lewkowycz", "Guy Gur-Ari" ], "title": "On the training dynamics of deep networks with l 2 regularization", "venue": "arXiv preprint arXiv:2006.08643,", "year": 2020 }, { "authors": [ "Frederick Liu", "Amir Najmi", "Mukund Sundararajan" ], "title": "The penalty imposed by ablated data augmentation", "venue": "arXiv preprint arXiv:2006.04769,", "year": 2020 }, { "authors": [ "Siyuan Ma", "Raef Bassily", "Mikhail Belkin" ], "title": "The power of interpolation: Understanding the effectiveness of sgd in modern over-parametrized learning", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Daniel S Park", "William Chan", "Yu Zhang", "Chung-Cheng Chiu", "Barret Zoph", "Ekin D Cubuk", "Quoc V Le" ], "title": "Specaugment: A simple data augmentation method for automatic speech recognition", "venue": "Proc. Interspeech", "year": 2019 }, { "authors": [ "Shashank Rajput", "Zhili Feng", "Zachary Charles", "Po-Ling Loh", "Dimitris Papailiopoulos" ], "title": "Does data augmentation lead to positive margin", "venue": null, "year": 1905 }, { "authors": [ "Herbert Robbins", "Sutton Monro" ], "title": "A stochastic approximation method", "venue": "Ann. Math. Statist.,", "year": 1951 }, { "authors": [ "PY Simard", "D Steinkraus", "JC Platt" ], "title": "Best practices for convolutional neural networks applied to visual document analysis", "venue": "In Seventh International Conference on Document Analysis and Recognition, 2003. Proceedings.,", "year": 2003 }, { "authors": [ "Samuel L Smith", "Quoc V Le" ], "title": "A bayesian perspective on generalization and stochastic gradient descent", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Samuel L Smith", "Pieter-Jan Kindermans", "Chris Ying", "Quoc V Le" ], "title": "Don’t decay the learning rate, increase the batch size", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Stefan Wager", "Sida Wang", "Percy S Liang" ], "title": "Dropout training as adaptive regularization", "venue": "In Advances in neural information processing systems,", "year": 2013 }, { "authors": [ "Denny Wu", "Ji Xu" ], "title": "On the optimal weighted l2 regularization in overparameterized linear regression", "venue": "arXiv preprint arXiv:2006.05800,", "year": 2020 }, { "authors": [ "Lei Wu", "Chao Ma", "E Weinan" ], "title": "How sgd selects the global minima in over-parameterized learning: A dynamical stability perspective", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Sen Wu", "Hongyang R. Zhang", "Gregory Valiant", "Christopher Ré" ], "title": "On the generalization effects of linear transformations in data augmentation, 2020", "venue": null, "year": 2020 }, { "authors": [ "Hongyi Zhang", "Moustapha Cisse", "Yann N Dauphin", "David Lopez-Paz" ], "title": "mixup: Beyond empirical risk minimization", "venue": "arXiv preprint arXiv:1710.09412,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Implementing gradient-based optimization in practice requires many choices. These include setting hyperparameters such as learning rate and batch size as well as specifying a data augmentation scheme, a popular set of techniques in which data is augmented (i.e. modified) at every step of optimization. Trained model quality is highly sensitive to these choices. In practice they are made using methods ranging from a simple grid search to Bayesian optimization and reinforcement learning (Cubuk et al., 2019; 2020; Ho et al., 2019). Such approaches, while effective, are often ad-hoc and computationally expensive due to the need to handle scheduling, in which optimization hyperparameters and augmentation choices and strengths are chosen to change over the course of optimization.\nThese empirical results stand in contrast to theoretically grounded approaches to stochastic optimization which provide both provable guarantees and reliable intuitions. The most extensive work in this direction builds on the seminal article (Robbins & Monro, 1951), which gives provably optimal learning rate schedules for stochastic optimization of strongly convex objectives. While rigorous, these approaches are typically are not sufficiently flexible to address the myriad augmentation types and hyperparameter choices beyond learning rates necessary in practice.\nThis article is a step towards bridging this gap. We provide in §3 a rigorous framework for reinterpreting gradient descent with arbitrary data augmentation as stochastic gradient descent on a time-varying sequence of objectives. This provides a unified language to study traditional stochastic optimization methods such as minibatch SGD together with widely used augmentations such as additive noise (Grandvalet & Canu, 1997), CutOut (DeVries & Taylor, 2017), Mixup (Zhang et al., 2017) and label-preserving transformations (e.g. color jitter, geometric transformations (Simard et al., 2003)). It also opens the door to studying how to schedule and evaluate arbitrary augmentations, an important topic given the recent interest in learned augmentation Cubuk et al. (2019).\nQuantitative results in our framework are difficult to obtain in full generality due to the complex interaction between models and augmentations. To illustrate the utility of our approach and better understand specific augmentations, we present in §3 and §5 results about arbitrary augmentations for overparameterized linear regression and specialize to additive noise and minibatch SGD in §4 and §6. While our results apply directly only to simple quadratic losses, they treat very general augmentations. Treating more complex models is left to future work. Our main contributions are:\n• In Theorem 5.1, we give sufficient conditions under which gradient descent under any augmentation scheme converges in the setting of overparameterized linear regression. Our\nresult extends classical results of Monro-Robbins type and covers schedules for both learning rate and data augmentation scheme. • We complement the asymptotic results of Theorem 5.1 with quantitative rates of conver-\ngence furnished in Theorem 5.2. These rates depend only on the first few moments of the augmented data distribution, underscoring the flexibility of our framework. • In §4, we analyze additive input noise, a popular augmentation strategy for increasing\nmodel robustness. 
We recover the known fact that it is equivalent to stochastic optimization with `2-regularization and find criteria in Theorem 4.1 for jointly scheduling the learning rate and noise level to provably recover the minimal norm solution. • In §6, we analyze minibatch SGD, recovering known results about rates of convergence for\nSGD (Theorem 6.1) and novel results about SGD with noise (Theorem 6.2)." }, { "heading": "2 RELATED WORK", "text": "In addition to the extensive empirical work on data augmentation cited elsewhere in this article, we briefly catalog other theoretical work on data augmentation and learning rate schedules. The latter were first considered in the seminal work Robbins & Monro (1951). This spawned a vast literature on rates of convergence for GD, SGD, and their variants. We mention only the relatively recent articles Bach & Moulines (2013); Défossez & Bach (2015); Bottou et al. (2018); Smith et al. (2018); Ma et al. (2018) and the references therein. The last of these, namely Ma et al. (2018), finds optimal choices of learning rate and batch size for SGD in the overparametrized linear setting.\nA number of articles have also pointed out in various regimes that data augmentation and more general transformations such as feature dropout correspond in part to `2-type regularization on model parameters, features, gradients, and Hessians. The first article of this kind of which we are aware is Bishop (1995), which treats the case of additive Gaussian noise (see §4). More recent work in this direction includes Chapelle et al. (2001); Wager et al. (2013); LeJeune et al. (2019); Liu et al. (2020). There are also several articles investigating optimal choices of `2-regularization for linear models (cf e.g. Wu et al. (2018); Wu & Xu (2020); Bartlett et al. (2020)). These articles focus directly on the generalization effects of ridge-regularized minima but not on the dynamics of optimization. We also point the reader to Lewkowycz & Gur-Ari (2020), which considers optimal choices for the weight decay coefficient empirically in neural networks and analytically in simple models.\nWe also refer the reader to a number of recent attempts to characterize the benefits of data augmentation. In Rajput et al. (2019), for example, the authors quantify how much augmented data, produced via additive noise, is needed to learn positive margin classifiers. Chen et al. (2019), in contrast, focuses on the case of data invariant under the action of a group. Using the group action to generate label-preserving augmentations, the authors prove that the variance of any function depending only on the trained model will decrease. This applies in particular to estimators for the trainable parameters themselves. Dao et al. (2019) shows augmented k-NN classification reduces to a kernel method for augmentations transforming each datapoint to a finite orbit of possibilities. It also gives a second order expansion for the proxy loss of a kernel method under such augmentations and interprets how each term affects generalization. Finally, the article Wu et al. (2020) considers both label preserving and noising augmentations, pointing out the conceptually distinct roles such augmentations play." }, { "heading": "3 DATA AUGMENTATION AS STOCHASTIC OPTIMIZATION", "text": "A common task in modern machine learning is the optimization of an empirical risk\nL(W ;D) = 1 |D| ∑ (xj ,yj)∈D `(f(xj ;W ), yj), (3.1)\nwhere f(x;W ) is a parameterized model for a dataset D of input-response pairs (x, y) and ` is a per-sample loss. 
Optimizing W by vanilla gradient descent on L corresponds to the update equation\nWt+1 = Wt − ηt∇WL(Wt;D). In this context, we define a data augmentation scheme to be any procedure that consists, at every step of optimization, of replacing the dataset D by a randomly augmented variant, which we will denote\nby Dt. Typically, Dt is related to D in some way, but our framework does not explicitly constrain the form of this relationship. Instead, certain conditions on this relationship will be required for our main results Theorems 5.1 and 5.2 to give useful results for a specific augmentation scheme. A data augmentation scheme therefore corresponds to the augmented update equation\nWt+1 = Wt − ηt∇WL(Wt;Dt). (3.2) SinceDt is a stochastic function ofD, it is natural to view the augmented update rule (3.2) as a form of stochastic optimization for the proxy loss at time t\nLt(W ) := EDt [L(W ;Dt)] . (3.3) The update (3.2) corresponds precisely to stochastic optimization for the time-varying objective Lt(W ) in which the unbiased estimate of its gradient is obtained by evaluating the gradient of L(W ;Dt) on a single sampleDt drawn from the augmentation distribution. The connection between data augmentation and this proxy loss was introduced for Gaussian noise in Bishop (1995) and in general in Chapelle et al. (2001), but we now consider it in the context of stochastic optimization.\nDespite being mathematically straightforward, reformulating data augmentation as stochastic optimization provides a unified language for questions about learning rate schedules and general augmentation schemes including SGD. In general, such questions can be challenging to answer, and even evaluating the proxy loss Lt(W ) may require significant ingenuity. While we will return to more sophisticated models in future work, we henceforth analyze general augmentations in the simple context of overparameterized linear regression. Though there are many ways to perform linear regression, we restrict to augmented gradient descent both to gain intuition about specific augmentations and to understand the effect of augmentation on optimization. We therefore consider optimizing the entries of a weight matrix W ∈ Rp×n by gradient descent on\nL(W ;D) = 1 |D| ∑ (x,y)∈D ||y −Wx||2F = 1 N ||Y −WX||2F , (3.4)\nwhere our dataset D is summarized by data matrices X ∈ Rn×N and Y ∈ Rp×N , whose N < n columns consist of inputs xi ∈ Rn and associated labels yi ∈ Rp. Following this notation, a data augmentation scheme is specified by prescribing at each time step an augmented dataset Dt consisting of modified data matrices Xt, Yt, whose columns we denote by xi,t ∈ Rn and yi,t ∈ Rp. Here, the number of columns in Xt and Yt (i.e. the number of datapoints in Dt) may vary. We now give examples of some commonly used augmentations our framework can address.\n• Additive Gaussian noise: This is implemented by setting Xt = X + σt · G and Yt = Y for σt > 0 and G a matrix of i.i.d. standard Gaussians. We analyze this in §4.\n• Mini-batch SGD: To implement mini-batch SGD with batch size Bt, we can take Xt = XAt and Yt = Y At where At ∈ RN×Bt has i.i.d. columns containing a single non-zero entry equal to 1 chosen uniformly at random. We analyze this in detail in §6.\n• Random projection: This is implemented by Xt = ΠtX and Yt = Y , where Πt is an orthogonal projection onto a random subspace. 
For γt = Tr(Πt)/n, the proxy loss is\nLt(W ) = ‖Y − γtWX‖2F + γt(1− γt)n−1 Tr(XXT)‖W‖2F +O(n−1), which adds a data-dependent `2 penalty and applies a Stein shrinkage on input data. • Label-preserving transformations: For a 2-D image viewed as a vector x ∈ Rn, geomet-\nric transforms (with pixel interpolation) or other label-preserving transforms such as color jitter take the form of linear transforms Rn → Rn. We may implement such augmentations in our framework by Xt = AtX and Yt = Y for some random transform matrix At. • Mixup: To implement Mixup, we can takeXt = XAt and Yt = Y At, whereAt ∈ RN×Bt\nhas i.i.d. columns containing with two random non-zero entries equal to 1− ct and ct with mixing coefficient ct drawn from a Beta(αt, αt) distribution for a parameter αt.\nOur main technical results, Theorems 5.1 and 5.2, give sufficient conditions for a learning rate schedule ηt and a schedule for the statistics of Xt, Yt under which optimization with augmented gradient descent will provably converge. We state these general results in §5. Before doing so, we seek to demonstrate both the utility of our framework and the flavor of our results by focusing on the simple but already informative case of additive Gaussian noise." }, { "heading": "4 AUGMENTATION WITH ADDITIVE GAUSSIAN NOISE", "text": "An common augmentation in practice injects input noise as a regularizer (Grandvalet & Canu, 1997):\nDt = {(xi,t, yi,t), i = 1, . . . , N}, xi,t = xi + σtgi,t, yi,t = yi,\nwhere gi,t are i.i.d. standard Gaussian vectors and σt is a strength parameter. This section studies such augmentations using our framework. A direct computation reveals that the proxy loss\nLt(W ) = Lσt(W ) := L(W ;D) + σ2t ||W || 2 F\ncorresponding to additive Gaussian noise adds an `2-penalty to the original loss L. This is simple but useful intuition. It also raises the question: what is the optimal relation between the learning rate ηt and the augmentation strength σt (i.e. the `2-penalty)?\nTo get a sense of what optimal might mean in this context, observe first that if σt = 0, then directly differentiating the loss L yields the following update rule:\nWt+1 = Wt + 2ηt N · (Y −WtX)XT. (4.1)\nThe increment Wt+1 −Wt is therefore contained in the column span\nV‖ := column span of XXT ⊆ Rn (4.2)\nof the model Hessian XXT. Overparameterization implies V‖ 6= Rn. The component Wt,⊥ of Wt that is in the orthogonal complement of V‖ thus remains frozen to its initialized value. Geometrically, this means that there are some directions, namely those in the orthogonal complement to V‖, which gradient descent “cannot see.” Optimization with appropriate step sizes then yields\nlim t→∞\nWt = W0,⊥ +Wmin, Wmin := Y X T(XXT)+,\nwhere Wmin is the minimum norm solution of Y = WX . The original motivation for introducing the `2-regularized losses Lσ is that they provide a mechanism to eliminate the component W0,⊥ for all initializations, not just the special choice W0 = 0, and they can be used to regularize non-linear models as well. Indeed, for σ > 0, the loss Lσ is strictly convex and has a unique minimum\nW ∗σ := Y X T ( XXT + σ2N · Idn×n )−1 ,\nwhich tends to the minimal norm solution in the weak regularization limit limσ→0W ∗σ = Wmin. Geometrically, this is reflected in the fact that `2-penalty yields non-trivial gradient updates\nWt+1,⊥ = Wt,⊥ − ηtσ2Wt,⊥ = (Id− ηtσ2)Wt,⊥ = t∏\ns=1\n(Id− ηsσ2)W0,⊥, (4.3)\nwhich drive this perpendicular component ofWt to zero provided ∑∞ t=1 ηt =∞. 
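The contrast between the two components of Wt is easy to observe numerically. The sketch below (our own; the dimensions, the fixed noise level, and the learning rate are illustrative) simulates the augmented update with additive Gaussian input noise and tracks the perpendicular component Wt,⊥, whose mean decays geometrically as (4.3) predicts:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p, N, sigma = 20, 3, 5, 0.5         # overparameterized: N < n
X = rng.standard_normal((n, N))
Y = rng.standard_normal((p, N))
W = rng.standard_normal((p, n))        # generic initialization W0

# Orthogonal projector onto the complement of V_par = column span of X X^T
U, _, _ = np.linalg.svd(X, full_matrices=False)
Q_perp = np.eye(n) - U @ U.T

eta = 0.01
for t in range(4001):
    Xt = X + sigma * rng.standard_normal(X.shape)   # additive input noise
    W += (2 * eta / N) * (Y - W @ Xt) @ Xt.T        # update rule (4.1)
    if t % 1000 == 0:
        print(t, "||W_perp||_F =", round(float(np.linalg.norm(W @ Q_perp)), 4))
# With sigma and eta fixed, the perpendicular component shrinks geometrically
# but eventually plateaus at a noise floor, motivating joint schedules.
```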
However, for each positive value of σ, the `2-penalty also modifies the gradient descent updates for Wt,‖, ultimately causing Wt to converge to W ∗σ , which is not a minimizer of the original loss L. This downside of ridge regression motivates jointly scheduling the step size ηt and the noise strength σt. We hope that driving σt to 0 at an appropriate rate can guarantee convergence of Wt to Wmin. Namely, we want to retain the regularizing effects of `2-noise to forceWt,⊥ to zero while mitigating its adverse effects which prevent W ∗σ from minimizing L. It turns out that this is indeed possible: Theorem 4.1 (Special case of Theorem 5.1). Suppose σ2t , ηt → 0 with σ2t non-increasing and\n∞∑ t=0 ηtσ 2 t =∞ and\n∞∑ t=0 η2t σ 2 t <∞. (4.4)" }, { "heading": "Then, Wt", "text": "p→ Wmin. Further, if ηt = Θ(t−x) and σ2t = Θ(t−y) with x, y > 0, x + y < 1, and 2x+ y > 1, then for any ∈ (0,min{y, x/2}), we have that\ntmin{y, 1 2x}− ‖Wt −Wmin‖F p→ 0.\nLet us give a few comments on Theorem 4.1. First, although it is stated for additive Gaussian noise, an analogous version holds for arbitrary additive noise with bounded moments, with the only change being a constant multiplicative factor in the second condition of (4.4).\nSecond, that convergence in probability Wt p→ Wmin follows from (4.4) is analogous to a MonroRobbins type theorem (Robbins & Monro, 1951). Indeed, inspecting (4.3), we see that the first condition in (4.4) guarantees that the effective learning rate ηtσ2t in the orthogonal complement to V‖ is sufficiently large that the corresponding component Wt,⊥ of Wt tends to 0, allowing the result of optimization to be independent of the initial condition W0. Further, the second condition in (4.4) guarantees that the variance of the gradients, which at time t scales like η2t σ 2 t is summable. As in the usual Monro-Robbins setup, this means that only a finite amount of noise is injected into the optimization. Further, (4.4) is a direct specialization of (5.5) and (5.6) from Theorem 5.1.\nThird, by optimizing over x, y, we see that fastest rate of convergence guaranteed by Theorem 4.1 is obtained by setting ηt = t−2/3+ , σ2t = t\n−1/3 and results in a O(t−1/3+ ) rate of convergence. It is not evident that this is the best possible rate, however.\nFinally, although we leave systemic study of augmentation in non-linear models to future work, our framework can be applied beyond linear models and quadratic losses. To see this, as noted for kernels in Dao et al. (2019), augmenting inputs of nonlinear feature models correspond to applying different augmentations on the outputs of the feature map. To give a concrete example, consider additive noise for small σt. For any sufficiently smooth function g, Taylor expansion reveals\nE [g(x+ σtG)] = g(x) + σ2t 2 ∆g(x) +O(σ4t ),\nwhere ∆ = ∑ i ∂ 2 i is the Laplacian and G is a standard Gaussian vector. For a general loss of the form (3.1) we have\nLt(W ) = L(W ;D) + σ2t 2|D| ∑\n(x,y∈D)\nTr [ (∇xf)T (Hf `)∇xf ] + (∇f `)T ∆xf +O(σ4t ),\nwhere we have written Hf ` for the Hessian of some convex per-sample loss ` with respect to f and ∇x,∇f for the gradients with respect to x, f , respectively. This is consistent with the similar expansion done in the kernel setting by Dao et al. (2019, Section 4). If σt is small, then the proxy loss Lt will differ significantly from the unaugmented loss L only near the end of training, when we expect ∇f ` to be small and Hf ` to be positive semi-definite. 
Hence, we find heuristically that, neglecting higher order terms in σt, additive noise with small σt corresponds to an `2-regularizer\nTr [ σ2t 2 (∇xf)T (HfL)∇xf ] =: σ2t 2 ||∇xf ||2HfL\nfor the gradients of f with respect to the natural inner product determined by the Hessian of the loss. This is intuitive since penalizing the gradients of f is the same as requiring that f is approximately constant in a neighborhood of every datapoint. However, although the input noise was originally isotropic, the `2-penalty is aligned with the loss Hessian and hence need not be." }, { "heading": "5 TIME-VARYING MONRO-ROBBINS FOR LINEAR MODELS UNDER AUGMENTATION", "text": "In this section, we state two general results, Theorems 5.1 and 5.2, which provide sufficient conditions for jointly scheduling learning rates and general augmentation schemes to guarantee convergence of augmented gradient descent in the overparameterized linear model (3.4)." }, { "heading": "5.1 A GENERAL TIME-VARYING MONRO-ROBBINS THEOREM", "text": "Given an augmentation scheme for the model (3.4), the time t gradient update at learning rate ηt is\nWt+1 := Wt + 2ηt N · (Yt −WtXt)XTt , (5.1)\nwhere Dt = (Xt, Yt) is the augmented dataset at time t. The minimum norm minimizer of the corresponding proxy loss Lt (see 3.3) is\nW ∗t := E[YtXTt ]E[XtXTt ]+, (5.2)\nwhere E[XtXTt ]+ denotes the Moore-Penrose pseudo-inverse. In this section we state a rigorous result, Theorem 5.1, giving sufficient conditions on the learning rate ηt and distributions of the augmented matrices Xt, Yt under which augmented gradient descent converges. In analogy with the case of Gaussian noise, (5.1) shows Wt+1 −Wt is contained in the column span of the Hessian XtX T t of the augmented loss and almost surely belongs to the subspace\nV‖ := column span of E[XtXTt ] ⊆ Rn. (5.3)\nTo ease notation, we assume that V‖ is independent of t. This assumption is valid for additive Gaussian noise, random projection, MixUp, SGD, and their combinations. We explain in Remark B.2 how to generalize Theorems 5.1 and 5.2 to the case where V‖ varies with t.\nLet us denote by Q‖ : Rn → Rn the orthogonal projection onto V‖. At time t, gradient descent leaves the projection Wt(Id−Q‖) of Wt onto the orthogonal complement of V‖ unchanged. In contrast, ||WtQ‖ −W ∗t ||F decreases at a rate governed by the smallest positive eigenvalue\nλmin,V‖ ( E [ XtX T t ]) := λmin ( Q‖E [ XtX T t ] Q‖ )\nof the Hessian for the proxy loss Lt, which is obtained by restricting its full Hessian E [ XtX T t ] to V‖. Moreover, whether and at what rate WtQ‖ −W ∗t converges to 0 must depend on how quickly\nΞ∗t := W ∗ t+1 −W ∗t (5.4)\ntends to zero. Indeed, ||Ξ∗t ||F is the distance between proxy loss optima at different times and hence must tend to zero if ||WtQ‖ −W ∗t ||F converges to zero. Theorem 5.1. Suppose that V‖ is independent of t, that the learning rate satisfies ηt → 0, that the proxy optima satisfy\n∞∑ t=0 ‖Ξ∗t ‖F <∞, (5.5)\nensuring the existence of a limit W ∗∞ := limt→∞W ∗ t , and that\n∞∑ t=0 ηtλmin,V‖(E[XtX T t ]) =∞. (5.6)\nIf either ∞∑ t=0 η2tE [ ‖XtXTt − E[XtXTt ]‖2F + ‖YtXTt − E[YtXTt ]‖2F ] <∞ (5.7)\nor the more refined condition ∞∑ t=0 η2tE [ ‖XtXTt − E[XtXTt ]‖2F + ‖E[Wt](XtXTt − E[XtXTt ])− (YtXTt − E[YtXTt ])‖2F ] <∞ (5.8) hold, then for any initialization W0 we have WtQ‖ p→W ∗∞.\nThe conditions of Theorem 5.1 can be applied to the choice of joint schedule for the learning rate and augmentation scheme applied to gradient descent. 
If the same augmentation is applied with different strength parameters at each step t such as σt for Gaussian noise, they impose conditions on the choice of joint schedule for ηt and these strength parameters. In the example of Theorem 4.1 for Gaussian noise, the condition that σ2t is non-increasing implies (5.5), the first condition of (4.4) implies (5.6), and the second condition of (4.4) implies (5.7).\nIn addition to the conditions Theorem 5.1 imposes on Dt, the proxy optima W ∗t and their limit W ∗∞ are determined by the distribution of Dt. Therefore, for W ∗∞ in Theorem 5.1 to be a desirable set of parameters for the original dataset D, the augmented dataset Dt must have some relation to D.\nWhen the augmentation procedure is static in t, Theorem 5.1 reduces to a standard Monro-Robbins theorem Robbins & Monro (1951) for the (static) proxy loss Lt(W ). As in that setting, condition (5.6) enforces that the learning trajectory travels far enough to reach an optimum. Condition (5.7) implies the weaker condition (5.8); the second summand in (5.8) is the variance of the gradient of the augmented loss L(W ;Dt), meaning (5.8) implies the total variance of the stochastic gradients is summable. Condition (5.5) is new; it enforces that the minimizers W ∗t of the proxy losses Lt(W ) change slowly enough that the augmented optimization procedure can keep pace.\nThough it may be surprising that E[Wt] appears in this condition, it may be interpreted as the gradient descent trajectory for the deterministic sequence of proxy losses Lt(W ). Accounting for the dependence on E[Wt] allows us to give more precise rates using the variance of the stochastic gradient in (5.8); we include both (5.7) and (5.8) to allow a user of our results to separately analyze E[Wt] to obtain stronger conclusions." }, { "heading": "5.2 CONVERGENCE RATES AND SCHEDULING FOR DATA AUGMENTATION", "text": "A more precise analysis of the the proof of Theorem 5.1 allows us to obtain rates of convergence for the projections WtQ‖ of the weights onto V‖ to the limiting optimum W ∗∞. In particular, when the quantities in Theorem 5.1 have power law decay, we obtain the following result. Theorem 5.2 (informal - Special case of Theorem B.4). If V‖ is independent of t, the learning rate satisfies ηt → 0, and for some 0 < α < 1 < β1, β2 and γ > α we have\nηtλmin,V‖(E[XtX T t ]) = Ω(t −α), ‖Ξ∗t ‖F = O(t−β1) (5.9) and η2tE[‖XtXTt − E[XtXTt ]‖22] = O(t−γ) (5.10) and\nη2tE [ ‖E[Wt](XtXTt − E[XtXTt ])− (YtXTt − E[YtXTt ])‖2F ] = O(t−β2), (5.11)\nthen for any initialization W0, we have for any > 0 that\ntmin{β1−1, β2−α 2 }− ‖WtQ‖ −W ∗∞‖F p→ 0.\nTheorem 5.2 measures rates in terms of optimization steps t, but a different measurement of time called the intrinsic time of the optimization will be more suitable for measuring the behavior of optimization quantities. This was introduced for SGD in Smith & Le (2018); Smith et al. (2018), and we now generalize it to our broader setting. For gradient descent on a loss L, the intrinsic time is a quantity which increments by ηλmin(H) for a optimization step with learning rate η at a point where L has Hessian H . When specialized to our setting, it is given by\nτ(t) := t−1∑ s=0 2ηs N λmin,V‖(E[XsX T s ]). 
(5.12)\nNotice that intrinsic time of augmented optimization for the sequence of proxy losses Ls appears in Theorems 5.1 and 5.2, which require via condition (5.6) that the intrinsic time tends to infinity as the number of optimization steps grows.\nIntrinsic time will be a sensible variable in which to measure the behavior of quantities such as the fluctuations of the optimization path f(t) := E[‖(Wt − E[Wt])Q‖‖2F ]. In the proofs of Theorems 5.1 and 5.2, we show that the fluctuations satisfy an inequality of the form\nf(t+ 1) ≤ f(t)(1− a(t))2 + b(t) (5.13)\nfor a(t) := 2ηt 1N λmin,V‖(E[XtX T t ]) and b(t) := Var[||ηt∇WL(Wt)||F ] so that τ(t) = ∑t−1 s=0 a(s). Iterating the recursion (5.13) shows that\nf(t) ≤ f(0) t−1∏ s=0 (1− a(s))2 + t−1∑ s=0 b(s) t−1∏ r=s+1 (1− a(r))2\n≤ e−2τ(t)f(0) + t−1∑ s=0 b(s) a(s) e2τ(s+1)−2τ(t)(τ(s+ 1)− τ(s)).\nFor τ := τ(t) and changes of variable A(τ), B(τ), and F (τ) such that A(τ(t)) = a(t), B(τ(t)) = b(t), and F (τ(t)) = f(t), we find by replacing a right Riemann sum by an integral that\nF (τ) - e−2τ [ F (0) + ∫ τ 0 B(σ) A(σ) e2σdσ ] . (5.14)\nIn order for the result of optimization to be independent of the starting point, by (5.14) we must have τ → ∞ to remove the dependence on F (0); this provides one explanation for the appearance of τ in condition (5.6). Further, (5.14) implies that the fluctuations at an intrinsic time are bounded by an integral against the function B(σ)A(σ) which depends only on the ratio of A(σ) and B(σ). In the case of minibatch SGD, we compute this ratio in (6.2) and recover the commonly used “linear scaling” rule for learning rate.\nIn Section 6, we specialize Theorem 5.2 to obtain rates of convergence for specific augmentations. Optimizing the learning rate and augmentation parameter schedules in Theorem 5.2 allows us to derive power law schedules with convergence rate guarantees in these settings." }, { "heading": "6 IMPLICATIONS FOR MINI-BATCH STOCHASTIC GRADIENT DESCENT (SGD)", "text": "We now apply our framework to study mini-batch stochastic gradient descent (SGD) with the potential presence of additive noise. Though data augmentation commonly refers to techniques aside from SGD, we will see that our framework handles it uniformly with other augmentations." }, { "heading": "6.1 MINI-BATCH SGD", "text": "In mini-batch stochastic gradient descent, Dt is obtained by choosing a random subset Bt of D of prescribed batch size Bt = |Bt|. Each datapoint in Bt is chosen uniformly with replacement from D, and the resulting data matrices Xt and Yt are scaled so that Lt(W ) = L(W ;D). Concretely, this means that for the normalizing factor ct := √ N/Bt we have\nXt = ctXAt and Yt = ctY At,\nwhereAt ∈ RN×Bt has i.i.d. columnsAt,i with a single non-zero entry equal to 1 chosen uniformly at random. In this setting the minimum norm optimum for each t are the same and given by\nW ∗t = W ∗ ∞ = Y X T(XXT)+,\nwhich coincides with the minimum norm optimum for the unaugmented loss. Our main result for standard SGD is the following theorem, whose proof is given in Appendix D.1. Theorem 6.1. If the learning rate satisfies ηt → 0 and\n∞∑ t=0 ηt =∞, (6.1)\nthen for any initialization W0, we have WtQ‖ p→ W ∗∞. If further we have that ηt = Θ(t−x) with 0 < x < 1, then for some C > 0 we have\neCt 1−x ‖WtQ‖ −W ∗∞‖F p→ 0.\nTheorem 6.1 recovers the exponential convergence rate for SGD, which has been extensively studied through both empirical and theoretical means (Bottou et al., 2018; Ma et al., 2018). 
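As an illustration of this rate, the following NumPy sketch (ours; sizes and constants are again arbitrary) implements the scaled batch matrices X_t = c_t X A_t, Y_t = c_t Y A_t by sampling columns with replacement, and tracks the component of W_t that Theorem 6.1 controls.

# Minimal sketch of mini-batch SGD on the overparameterized linear model.
import numpy as np

rng = np.random.default_rng(1)
n, p, N, B = 50, 3, 10, 4
X = rng.standard_normal((n, N)) / np.sqrt(n)
Y = rng.standard_normal((p, N))
W = rng.standard_normal((p, n))
W_star = Y @ X.T @ np.linalg.pinv(X @ X.T)   # W*_inf = Y X^T (X X^T)^+
Q_par = X @ np.linalg.pinv(X)                # projection onto V_par = colspan(X)

for t in range(1, 50_001):
    eta = 0.5 * t ** -0.5                    # eta_t = Theta(t^{-x}) with x = 1/2
    idx = rng.integers(0, N, size=B)         # batch sampled with replacement
    c = np.sqrt(N / B)                       # scaling so that L_t(W) = L(W; D)
    Xt, Yt = c * X[:, idx], c * Y[:, idx]
    W -= eta * (2.0 / N) * (W @ Xt - Yt) @ Xt.T

print(np.linalg.norm((W - W_star) @ Q_par))  # should be close to zero

The printed norm should be near zero, while the orthogonal component W_t(Id − Q_par) stays frozen at its initialization, exactly as in the noiseless analysis of §4; the batch size B_t enters the updates only through the scaling c_t.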
Because 1 ≤ Bt ≤ N for all t, it does not affect the asymptotic results in Theorem 6.1. In practice, however, the number of optimization steps t is often small enough that BtN is of order t\n−α for some α > 0, meaning the choice of Bt can affect rates in this non-asymptotic regime. Though we do not attempt to push our generic analysis to this granularity, this is done in Ma et al. (2018) to derive optimal batch sizes and learning rates in the overparametrized setting.\nOur proof of Theorem 6.1 shows the intrinsic time is τ(t) = ∑t−1 s=0 2ηs 1 N λmin,V‖(XX T) and the ratio b(t)a(t) in (5.14) is by (D.4) bounded uniformly for a constant C > 0 by\nb(t) a(t) ≤ C · ηt Bt . (6.2)\nThus, keeping b(t)a(t) fixed as a function of τ suggests the “linear scaling” ηt ∝ Bt used empirically in Goyal et al. (2017) and proposed via an heuristic SDE limit in Smith et al. (2018)." }, { "heading": "6.2 MINI-BATCH SGD WITH ADDITIVE SYNTHETIC NOISE", "text": "In addition to handling synthetic noise and SGD separately, our results and framework also cover the hybrid case of mini-batch SGD with batch size Bt and additive noise at level σt. Here,\nXt = ct(XAt + σtGt) and Yt = ctY At,\nwhere ct and At are as in Section 6.1 and Gt ∈ Rn×Bt has i.i.d. Gaussian entries. The proxy loss is\nLt(W ) := 1 N E [ ‖ctY At − ctWXAt − ctσtWGt‖2F ] = 1 N ‖Y −WX‖2F + σ2t ‖W‖2F ,\nwith ridge minimizer W ∗t = Y X T(XXT + σ2tN · Idn×n)−1. Like with synthetic noise but unlike noiseless SGD, the optima W ∗t converge to the minimal norm interpolant Wmin = Y X T(XXT)+. Theorem 6.2. Suppose σ2t → 0 is decreasing, ηt → 0, and for any C > 0 we have ∞∑ t=0 (ηtσ 2 t − Cη2t ) =∞ and ∞∑ t=0 η2t σ 2 t <∞. (6.3)\nThen we have Wt p→ Wmin. If we further have ηt = Θ(t−x) and σ2t = Θ(t−y) with x, y > 0 and 0 < x+ y < 1 < 2x+ y, we have for any > 0 that\ntmin{y, 1 2x}− ‖Wt −Wmin‖F p→ 0.\nTheorem 6.2 provides an example where our framework can handle the composition of two augmentations, namely additive noise and SGD. It reveals a qualitative difference between SGD with and without additive noise. For polynomially decaying ηt the convergence of noiseless SGD in Theorem 6.1 is exponential in t, while the bound from Theorem 6.2 is polynomial in t. This is unavoidable. Indeed, for components of Wt orthogonal to colspan(X), convergence requires that ∑∞ t=0 ηtσ 2 t =∞ (see (4.3)). This occurs only if σt has power law decay, causing the ||W ∗t −Wmin||F to have at most power law decay as well. Finally, the Monro-Robbins conditions (6.3) are more restrictive than the analogous conditions in the pure noise setting (see (4.4)), as the latter allow for large ηt schedules in which ∑∞ t=0 η 2 t diverges but ∑∞ t=0 η 2 t σ 2 t does not." }, { "heading": "7 DISCUSSION", "text": "We have presented a theoretical framework to rigorously analyze the effect of data augmentation. As can be seen in our main results, our framework applies to completely general augmentations and relies only on analyzing the first few moments of the augmented dataset. This allows us to handle augmentations as diverse as additive noise and mini-batch SGD as well as their composition in a uniform manner. We have analyzed some representative examples in detail in this work, but many other commonly used augmentations may be handled similarly: label-preserving transformations (e.g. color jitter, geometric transformations), random projections (DeVries & Taylor, 2017; Park et al., 2019), and Mixup (Zhang et al., 2017), among many others. 
Another line of investigation left to future work is to compare different methods of combining augmentations such as mixing, alternating, or composing, which often improve performance in the empirical literature (Hendrycks et al., 2020).\nThough our results provide a rigorous baseline to compare to more complex settings, the restriction of the present work to linear models is of course a significant constraint. In future work, we hope to extend our general analysis to models closer to those used in practice. Most importantly, we intend to consider more complex models such as kernels (including the neural tangent kernel) and neural networks by making similar connections to stochastic optimization. In an orthogonal direction, our analysis currently focuses on the mean square loss for regression, and we aim to extend it to other losses such as the cross-entropy loss. Finally, our study has thus far been restricted to the effect of data augmentation on optimization, and it would be of interest to derive consequences for generalization with more complex models. We hope our framework can provide the theoretical underpinnings for a more principled understanding of the effect and practice of data augmentation." }, { "heading": "A ANALYTIC LEMMAS", "text": "In this section, we present several basic lemmas concerning convergence for certain matrix-valued recursions that will be needed to establish our main results. For clarity, we first collect some matrix notations used in this section and throughout the paper." }, { "heading": "A.1 MATRIX NOTATIONS", "text": "Let M ∈ Rm×n be a matrix. We denote its Frobenius norm by ‖M‖F and its spectral norm by ‖M‖2. Ifm = n so thatM is square, we denote by diag(M) the diagonal matrix with diag(M)ii = Mii. For matrices A,B,C of the appropriate shapes, define\nA ◦ (B ⊗ C) := BAC (A.1)\nand Var(A) := E[AT ⊗A]− E[AT]⊗ E[A]. (A.2)\nNotice in particular that Tr[Id ◦Var(A)] = E[‖A− E[A]‖2F ]." }, { "heading": "A.2 ONE- AND TWO-SIDED DECAY", "text": "Definition A.1. Let At ∈ Rn×n be a sequence of independent random non-negative definite matrices with\nsup t ||At|| ≤ 2 almost surely,\nlet Bt ∈ Rp×n be a sequence of arbitrary matrices, and let Ct ∈ Rn×n be a sequence of nonnegative definite matrices. We say that the sequence of matrices Xt ∈ Rp×n has one-sided decay of type ({At}, {Bt}) if it satisfies\nXt+1 = Xt(Id−E[At]) +Bt. (A.3) We say that a sequence of non-negative definite matrices Zt ∈ Rn×n has two-sided decay of type ({At}, {Ct}) if it satisfies\nZt+1 = E[(Id−At)Zt(Id−At)] + Ct. (A.4)\nIntuitively, if a sequence of matrices Xt (resp. Zt) satisfies one decay of type ({At}, {Bt}) (resp. two-sided decay of type ({At}, {Ct})), then in those directions u ∈ Rn for which ||Atu|| does not decay too quickly in t we expect that Xt (resp. Zt) will converge to 0 provided Bt (resp. Ct) are not too large. More formally, let us define\nV‖ := ∞⋂ t=0 ker [ ∞∏ s=t (Id−E[As]) ] = { u ∈ Rn ∣∣∣∣ limT→∞ T∏ s=t (Id−E[As])u = 0, ∀t ≥ 1 } ,\nand let Q‖ be the orthogonal projection onto V‖. It is on the space V‖ that that we expect Xt, Zt to tend to zero if they satisfy one or two-side decay, and the precise results follows." }, { "heading": "A.3 LEMMAS ON CONVERGENCE FOR MATRICES WITH ONE AND TWO-SIDED DECAY", "text": "We state here several results that underpin the proofs of our main results. We begin by giving in Lemmas A.2 and A.3 two slight variations of the same simple argument that matrices with one or two-sided decay converge to zero. Lemma A.2. 
If a sequence {Xt} has one-sided decay of type ({At}, {Bt}) with\n∞∑ t=0 ‖Bt‖F <∞, (A.5)\nthen limt→∞XtQ‖ = 0. Proof. For any > 0, choose T1 so that ∑∞ t=T1 ‖Bt‖F < 2 and T2 so that for t > T2 we have∥∥∥∥∥( t∏ s=T1 (Id−E[As]) ) Q‖ ∥∥∥∥∥ 2 < 2 1 ‖X0‖F + ∑T1−1 s=0 ‖Bs‖F .\nBy (A.3), we find that\nXt+1 = X0 t∏ s=0 (Id−E[As]) + t∑ s=0 Bs t∏ r=s+1 (Id−E[Ar]),\nwhich implies for t > T2 that\n‖Xt+1Q‖‖F ≤ ‖X0‖F ∥∥∥∥∥( t∏\ns=0\n(Id−E[As]) ) Q‖ ∥∥∥∥∥ 2 + t∑ s=0 ‖Bs‖F ∥∥∥∥∥( t∏ r=s+1 (Id−E[Ar]) ) Q‖ ∥∥∥∥∥ 2 .\n(A.6)\nOur assumption that ||At|| ≤ 2 almost surely implies that for any T ≤ t∥∥∥∥∥( t∏\ns=0\n(Id−E[As]) ) Q‖ ∥∥∥∥∥ 2 ≤ ∥∥∥∥∥( T∏ s=0 (Id−E[As]) ) Q‖ ∥∥∥∥∥ 2\nsince each term in the product is non-negative-definite. Thus, we find\n‖Xt+1Q‖‖F ≤ [ ‖X0‖F +\nT1−1∑ s=0 ‖Bs‖F ]∥∥∥∥∥( t∏\ns=T1\n(Id−E[As]) ) Q‖ ∥∥∥∥∥ 2 + t∑ s=T1 ‖Bs‖F < .\nTaking t→∞ and then → 0 implies that limt→∞XtQ‖ = 0, as desired.\nLemma A.3. If a sequence {Zt} has two-sided decay of type ({At}, {Ct}) with\nlim T→∞\nE ∥∥∥∥∥( T∏ s=t (Id−As) ) Q‖ ∥∥∥∥∥ 2\n2 = 0 for all t ≥ 0 (A.7) and\n∞∑ t=0 Tr(Ct) <∞, (A.8)\nthen limt→∞QT‖ZtQ‖ = 0.\nProof. The proof is essentially identical to that of Lemma A.2. That is, for > 0, choose T1 so that∑∞ t=T1 Tr(Ct) < 2 and choose T2 by (A.7) so that for t > T2 we have\nE ∥∥∥∥∥( t∏\ns=T1\n(Id−As) ) Q‖ ∥∥∥∥∥ 2\n2\n < 2\n1 Tr(Z0) + ∑T1−1 s=0 Tr(Cs) .\nConjugating (A.4) by Q‖, we have that\nQT‖Zt+1Q‖ = E [ QT‖ ( t∏ s=0 (Id−As) )T Z0 ( t∏ s=0 (Id−As) ) Q‖ ]\n+ t∑ s=0 E [ QT‖ ( t∏ r=s+1 (Id−Ar) )T Cs ( t∏ r=s+1 (Id−Ar) ) Q‖ ] .\nOur assumption that ||At|| ≤ 2 almost surely implies that for any T ≤ t∥∥∥∥∥( t∏\ns=0\n(Id−As) ) Q ∥∥∥∥∥ 2 ≤ ∥∥∥∥∥( T∏ s=0 (Id−As) ) Q ∥∥∥∥∥ 2 .\nFor t > T2, this implies by taking trace of both sides that\nTr(QT‖Zt+1Q‖) ≤ Tr(Z0)E ∥∥∥∥∥( t∏\ns=0\n(Id−As) ) Q‖ ∥∥∥∥∥ 2\n2\n+ t∑ s=0 Tr(Cs)E ∥∥∥∥∥( t∏ r=s+1 (Id−Ar) ) Q‖ ∥∥∥∥∥ 2\n2 (A.9)\n≤ [ Tr(Z0) +\nT1−1∑ s=0 Tr(Cs)\n] E ∥∥∥∥∥( t∏\ns=T1\n(Id−As) ) Q‖ ∥∥∥∥∥ 2\n2\n+ t∑ s=T1 Tr(Cs)\n< ,\nwhich implies that limt→∞QT‖ZtQ‖ = 0.\nThe preceding Lemmas will be used to provide sufficient conditions for augmented gradient descent to converge as in Theorem B.1 below. Since we are also interested in obtaining rates of convergence, we record here two quantitative refinements of the Lemmas above that will be used in the proof of Theorem B.4.\nLemma A.4. Suppose {Xt} has one-sided decay of type ({At}, {Bt}). Assume also that for some X ≥ 0 and C > 0, we have\nlog ∥∥∥∥∥( t∏\nr=s\n(Id−E[Ar]) ) Q‖ ∥∥∥∥∥ 2 < X − C ∫ t+1 s r−αdr\nand ‖Bt‖F = O(t−β) for some 0 < α < 1 < β. Then, ‖XtQ‖‖F = O(tα−β). Proof. Denote γs,t := ∫ t s r−αdr. By (A.6), we have for some constants C1, C2 > 0 that\n‖Xt+1Q‖‖F < C1e−Cγ1,t+1 + C2eX t∑\ns=1\n(1 + s)−βe−Cγs+1,t+1 . (A.10)\nThe first term on the right hand side is exponentially decaying in t since γ1,t+1 grows polynomially in t. To bound the second term, observe that the function\nf(s) := Cγs+1,t+1 − β log(s+ 1)\nsatisfies\nf ′(s) ≥ 0 ⇔ C(s+ 1)−α − β 1 + s\n≥ 0 ⇔ s ≥ ( β\nC\n)1/(1−α) =: K.\nHence, the summands are monotonically increasing for s greater than a fixed constant K depending only on α, β, C. Note that\nK∑ s=1 (1 + s)−βe−Cγs+1,t+1 ≤ Ke−CγK+1,t+1 ≤ Ke−C ′t1−α\nfor some C ′ depending only on α and K, and hence sum is exponentially decaying in t. Further, using an integral comparison, we find\nt∑ s=K+1 (1 + s)−βe−Cγs+1,t+1 ≤ ∫ t K (1 + s)−βe− C 1−α ((t+1) 1−α−(s+1)1−α)ds. 
(A.11)\nChanging variables using u = (1 + s)1−α/(1− α), the last integral has the form\ne−Cgt(1− α)−ξ ∫ gt gK u−ξeCudu, gx := (1 + x)1−α 1− α , ξ := β − α 1− α . (A.12)\nIntegrating by parts, we have∫ gt gK u−ξeudu = C−1ξ ∫ gt gK u−ξ−1eCudu+ (u−ξeCu)|gtgK\nFurther, since on the range gK ≤ u ≤ gt the integrand is increasing, we have\ne−Cgtξ ∫ gt gK u−ξ−1eCudu ≤ ξg−ξt .\nHence, e−Cgt times the integral in (A.12) is bounded above by\nO(g−ξt ) + e −Cgt(u−ξeCu)|gtgK = O(g −ξ t ).\nUsing (A.11) and substituting the previous line into (A.12) yields the estimate\nt∑ s=K+1 (1 + s)−βe−Cγs+1,t+1 ≤ (1 + t)−β+α,\nwhich completes the proof.\nLemma A.5. Suppose {Zt} has two-sided decay of type ({At}, {Ct}). Assume also that for some X ≥ 0 and C > 0, we have\nlogE ∥∥∥∥∥( t∏\nr=s\n(Id−Ar) ) Q‖ ∥∥∥∥∥ 2\n2\n < X − C ∫ t+1 s r−αdr\nas well as Tr(Ct) = O(t−β) for some 0 < α < 1 < β. Then Tr(QT‖ ZtQ‖) = O(t α−β).\nProof. This argument is identical to the proof of Lemma A.4. Indeed, using (A.9) we have that\nTr ( QT‖ ZtQ‖ ) ≤ C1e−Cγ1,t+1 + C2eX t∑ s=1 (1 + s)−βe−Cγs+1,t+1 .\nThe right hand side of this inequality coincides with the expression on the right hand side of (A.10), which we already bounded by O(tβ−α) in the proof of Lemma A.4.\nIn what follows, we will use a concentration result for products of matrices from Huang et al. (2020). Let Y1, . . . , Yn ∈ RN×N be independent random matrices. Suppose that\n‖E[Yi]‖2 ≤ ai and E [ ‖Yi − E[Yi]‖22 ] ≤ b2i a2i\nfor some a1, . . . , an and b1, . . . , bn. We will use the following result, which is a specialization of (Huang et al., 2020, Theorem 5.1) for p = q = 2. Theorem A.6 ((Huang et al., 2020, Theorem 5.1)). For Z0 ∈ RN×n, the product Zn = YnYn−1 · · ·Y1Z0 satisfies\nE [ ‖Zn‖22 ] ≤ e ∑n i=1 b 2 i n∏ i=1 a2i · ‖Z0‖22\nE [ ‖Zn − E[Zn]‖22 ] ≤ ( e ∑n i=1 b 2 i − 1 ) a2i · ‖Z0‖22.\nFinally, we collect two simple analytic lemmas for later use. Lemma A.7. For any matrix M ∈ Rm×n, we have that\nE[‖M‖22] ≥ ‖E[M ]‖22.\nProof. We find by Cauchy-Schwartz and the convexity of the spectral norm that\nE[‖M‖22] ≥ E[‖M‖2]2 ≥ ‖E[M ]‖22. Lemma A.8. For bounded at ≥ 0, if we have ∑∞ t=0 at =∞, then for any C > 0 we have\n∞∑ t=0 ate −C ∑t s=0 as <∞.\nProof. Define bt := ∑t s=0 as so that\nS := ∞∑ t=0 ate −C ∑t s=0 as = ∞∑ t=0 (bt − bt−1)e−Cbt ≤ ∫ ∞ 0 e−Cxdx <∞,\nwhere we use ∫∞ 0 e−Cxdx to upper bound its right Riemann sum." }, { "heading": "B ANALYSIS OF DATA AUGMENTATION AS STOCHASTIC OPTIMIZATION", "text": "In this section, we prove generalizations of our main theoretical results Theorems 5.1 and 5.2 giving Monro-Robbins type conditions for convergence and rates for augmented gradient descent in the linear setting." }, { "heading": "B.1 MONRO-ROBBINS TYPE RESULTS", "text": "To state our general Monro-Robbins type convergence results, let us briefly recall the notation. We consider overparameterized linear regression with loss\nL(W ;D) = 1 N ||WX − Y ||2F ,\nwhere the dataset D of size N consists of data matrices X,Y that each have N columns xi ∈ Rn, yi ∈ Rp with n > N. We optimize L(W ;D) by augmented gradient descent, which means that at each time t we replace D = (X,Y ) by a random dataset Dt = (Xt, Yt). We then take a step\nWt+1 = Wt − ηt∇WL(Wt;Dt) of gradient descent on the resulting randomly augmented lossL(W ;Dt) with learning rate ηt. Recall that we set V‖ := column span of E[XtXTt ] and denoted by Q‖ the orthogonal projection onto V‖. 
As noted in §5, on V‖ the proxy loss\nLt = E [L(W ;Dt)] is strictly convex and has a unique minimum, which is\nW ∗t = E [ YtX T t ] (Q||E [ XtX T t ] Q||) −1.\nThe change from one step of augmented GD to the next in these proxy optima is captured by Ξ∗t := W ∗ t+1 −W ∗t .\nWith this notation, we are ready to state Theorems B.1, which gives two different sets of timevarying Monro-Robbins type conditions under which the optimization trajectory Wt converges for large t. In Theorem B.4, we refine the analysis to additionally give rates of convergence. Theorem B.1. Suppose that V‖ is independent of t, that the learning rate satisfies ηt → 0, that the proxy optima satisfy\n∞∑ t=0 ‖Ξ∗t ‖F <∞, (B.1)\nensuring the existence of a limit W ∗∞ := limt→∞W ∗ t and that\n∞∑ t=0 ηtλmin,V‖(E[XtX T t ]) =∞. (B.2)\nThen if either ∞∑ t=0 η2tE [ ‖XtXTt − E[XtXTt ]‖2F + ‖YtXTt − E[YtXTt ]‖2F ] <∞ (B.3) or ∞∑ t=0 η2tE [ ‖XtXTt − E[XtXTt ]‖2F\n+ ∥∥∥E[Wt](XtXTt − E[XtXTt ])− (YtXTt − E[YtXTt ])∥∥∥2\nF\n] <∞ (B.4)\nhold, then for any initialization W0, we have WtQ‖ p→W ∗∞.\nRemark B.2. In the general case, the column span V|| of E[XtXTt ] may vary with t. This means that some directions in Rn may only have non-zero overlap with colspan(E[XtXTt ]) for some positive but finite collection of values of t. In this case, only finitely many steps of the optimization would moveWt in this direction, meaning that we must define a smaller space for convergence. The correct definition of this subspace turns out to be the following\nV‖ := ∞⋂ t=0 ker [ ∞∏ s=t ( Id−2ηs N E[XsXTs ] )] (B.5)\n= ∞⋂ t=0\n{ u ∈ Rn ∣∣∣∣ limT→∞ T∏ s=t ( Id−2ηs N E[XsXTs ] ) u = 0 } .\nWith this re-definition of V|| and with Q‖ still denoting the orthogonal projection to V‖, Theorem B.1 holds verbatim and with the same proof. Note that if ηt → 0, V||colspan(E[XtXTt ]) is fixed in t, and (B.2) holds, this definition of V‖ reduces to that defined in (5.3).\nRemark B.3. The condition (B.4) can be written in a more conceptual way as ∞∑ t=0 [ ‖XtXTt − E[XtXTt ]‖2F + η2t Tr [ Id ◦Var ( (E[Wt]Xt − Yt)XTt )]] <∞,\nwhere we recognize that (E[Wt]Xt − Yt)XTt is precisely the stochastic gradient estimate at time t for the proxy loss Lt, evaluated at E [Wt], which is the location at time t for vanilla GD on Lt since taking expectations in the GD update equation (5.1) coincides with GD for Lt. Moreover, condition (B.4) actually implies condition (B.3) (see (B.12) below). The reason we state Theorem B.1 with both conditions, however, is that (B.4) makes explicit reference to the average E [Wt] of the augmented trajectory. Thus, when applying Theorem B.1 with this weaker condition, one must separately estimate the behavior of this quantity.\nTheorem B.1 gave conditions on joint learning rate and data augmentation schedules under which augmented optimization is guaranteed to converge. Our next result proves rates for this convergence.\nTheorem B.4. Suppose that ηt → 0 and that for some 0 < α < 1 < β1, β2 and C1, C2 > 0, we have\nlogE ∥∥∥∥∥( t∏\nr=s\n( Id−2ηr\nN XrX\nT r )) Q‖ ∥∥∥∥∥ 2\n2\n < C1 − C2 ∫ t+1 s r−αdr (B.6)\nas well as ‖Ξ∗t ‖F = O(t−β1) (B.7)\nand\nη2t Tr [ Id ◦Var(E[Wt]XtXTt − YtXTt )] = O(t−β2). (B.8)\nThen, for any initialization W0, we have for any > 0 that\ntmin{β1−1, β2−α 2 }− ‖WtQ‖ −W ∗∞‖F p→ 0.\nRemark B.5. 
To reduce Theorem 5.2 to Theorem B.4, we notice that (5.9) and (5.10) mean that Theorem A.6 applies to Yt = Id−2ηt XtX T t N with at = 1 − Ω(t −α) and and b2t = O(t\n−γ), thus implying (B.6).\nThe first step in proving both Theorem B.1 and Theorem B.4 is to obtain recursions for the mean and variance of the difference Wt − W ∗t between the time t proxy optimum and the augmented optimization trajectory at time t. We will then complete the proof of Theorem B.1 in §B.3 and the proof of Theorem B.4 in §B.4." }, { "heading": "B.2 RECURSION RELATIONS FOR PARAMETER MOMENTS", "text": "The following proposition shows that difference between the mean augmented dynamics E[Wt] and the time−t optimum W ∗t satisfies, in the sense of Definition A.1, one-sided decay of type ({At}, {Bt}) with\nAt = 2ηt N XtX T t , Bt = −Ξ∗t .\nIt also shows that the variance of this difference, which is non-negative definite, satisfies two-sided decay of type ({At}, {Ct}) with At as before and\nCt = 4η2t N2\n[ Id ◦Var ( E[Wt]XtXTt − YtXTt )] .\nIn terms of the notations of Appendix A.1, we have the following recursions.\nProposition B.6. The quantity E[Wt]−W ∗t satisfies E[Wt+1]−W ∗t+1 = (E[Wt]−W ∗t ) (\nId−2ηt N\nE[XtXTt ] ) − Ξ∗t (B.9)\nand Zt := E[(Wt − E[Wt])T(Wt − E[Wt])] satisfies Zt+1 = E [ (Id−2ηt\nN XtX\nT t )Zt(Id− 2ηt N XtX T t )\n] +\n4η2t N2\n[ Id ◦Var ( E[Wt]XtXTt − YtXTt )] .\n(B.10)\nProof. Notice that E[XtXTt ]u = 0 if and only if XTt u = 0 almost surely, which implies that\nW ∗t E[XtXTt ] = E[YtXTt ]E[XtXTt ]+E[XtXTt ] = E[YtXTt ]. Thus, the learning dynamics (5.1) yield\nE[Wt+1] = E[Wt]− 2ηt N\n( E[Wt]E[XtXTt ]− E[YtXTt ] ) = E[Wt]−\n2ηt N (E[Wt]−W ∗t )E[XtXTt ].\nSubtracting W ∗t+1 from both sides yields (B.9). We now analyze the fluctuations. Writing Sym(A) := A+AT, we have\nE[Wt+1]TE[Wt+1] = E[Wt]TE[Wt] + 2ηt N\nSym ( E[Wt]TE[YtXTt ]− E[Wt]TE[Wt]E[XtXTt ] ) +\n4η2t N2\n( E[XtXTt ]E[Wt]TE[Wt]E[XtXTt ]+E[XtY Tt ]E[YtXTt ]−Sym(E[XtXTt ]E[Wt]TE[YtXTt ]) ) .\nSimilarly, we have that\nE[WTt+1Wt+1] = E[WTt Wt] + 2ηt N Sym(E[WTt YtXTt −WTt WtXtXTt ])\n+ 4η2t N2 E[XtXTt WTt WtXtXTt − Sym(XtXTt WTt YtXTt ) +XtY Tt YtXTt ].\nNoting that Xt and Yt are independent of Wt and subtracting yields the desired." }, { "heading": "B.3 PROOF OF THEOREM B.1", "text": "First, by Proposition B.6, we see that E[Wt]−W ∗t has one-sided decay with\nAt = 2ηt XtX\nT t\nN and Bt = −Ξ∗t .\nThus, by Lemma A.2 and (B.1), we find that\nlim t→∞\n(E[Wt]Q‖ −W ∗t ) = 0, (B.11)\nwhich gives convergence in expectation.\nFor the second moment, by Proposition B.6, we see that Zt has two-sided decay with\nAt = 2ηt XtX\nT t\nN and Ct = 4η2t N2\n[ Id ◦Var ( E[Wt]XtXTt − YtXTt )] .\nWe now verify (A.7) and (A.8) in order to apply Lemma A.3.\nFor (A.7), for any > 0, notice that\nE[‖As − E[As]‖2F ] = η2sE[‖XsXTs − E[XsXTs ]‖2F ] so by either (B.3) or (B.4) we may choose T1 > t so that ∑∞ s=T1\nE[‖As − E[As]‖2F ] < 2 . 
Now choose T2 > T1 so that for T > T2, we have∥∥∥∥∥( T∏ r=T1 E[Id−Ar] ) Q‖ ∥∥∥∥∥ 2\n2\n<\n2\n1 ‖ ∏T1−1 s=t E[Id−As]‖2F + ∑T1−1 s=t E[‖As − E[As]‖2F ] .\nFor T > T2, we then have\nE ∥∥∥∥∥( T∏ s=t (Id−As) ) Q‖ ∥∥∥∥∥ 2\n2 ≤\n∥∥∥∥∥( T∏ s=t E[Id−As] ) Q‖ ∥∥∥∥∥ 2 + T∑ s=t E ∥∥∥∥∥ s∏ r=t (Id−Ar) T∏ r=s+1 (Id−E[Ar])Q‖ ∥∥∥∥∥ 2\nF\n− ∥∥∥∥∥ s−1∏ r=t (Id−Ar) T∏ r=s (Id−E[Ar])Q‖ ∥∥∥∥∥ 2\nF =\n∥∥∥∥∥( T∏ s=t E[Id−As] ) Q‖ ∥∥∥∥∥ 2\nF\n+ T∑ s=t E ∥∥∥∥∥ s−1∏ r=t (Id−Ar)(As − E[As]) T∏ r=s+1 (Id−E[Ar])Q‖ ∥∥∥∥∥ 2\nF ≤\n∥∥∥∥∥ T1−1∏ s=t E[Id−As] ∥∥∥∥∥ 2 ∥∥∥∥∥F( T∏ r=T1 E[Id−Ar] ) Q‖ ∥∥∥∥∥ 2\n2\n+ T∑ s=t E[‖As − E[As]‖2F ] ∥∥∥∥∥( T∏ r=s+1 E[Id−Ar] ) Q‖ ∥∥∥∥∥ 2\n2 ≤ (∥∥∥∥∥ T1−1∏ s=t E[Id−As] ∥∥∥∥∥ 2\nF\n+ T1−1∑ s=t E[‖As − E[As]‖2F ] )∥∥∥∥∥( T∏ r=T1 E[Id−Ar] ) Q‖ ∥∥∥∥∥ 2\n2\n+ T∑ s=T1 E[‖As − E[As]‖2F ]\n< ,\nwhich implies (A.7). Condition (A.8) follows from either (B.4) or (B.3) and the bounds\nTr(Ct) ≤ 8η2t N2\n( ‖E[Wt](XtXTt − E[XtXTt ])‖2F + ‖YtXTt − E[YtXTt ]‖2F ) (B.12)\n≤ 8η 2 t N2 ( ‖E[Wt]‖2‖XtXTt − E[XtXTt ]‖2F + ‖YtXTt − E[YtXTt ]‖2F ) ,\nwhere in the first inequality we use the fact that ‖M1−M2‖2F ≤ 2(‖M1‖2F +‖M2‖2F ). Furthermore, iterating (B.9) yields ‖E[Wt]−W ∗t ‖F ≤ ‖W0−W ∗0 ‖F+ ∑∞ t=0 ‖Ξ∗t ‖F , which combined with (B.12) and either (B.3) or (B.4) therefore implies (A.8). We conclude by Lemma A.3 that\nlim t→∞ QT‖ZtQ‖ = limt→∞ E[QT‖ (Wt − E[Wt]) T(Wt − E[Wt])Q‖] = 0. (B.13)\nTogether, (B.11) and (B.13) imply that WtQ‖ −W ∗t p→ 0. The conclusion then follows from the fact that limt→0W ∗t = W ∗ ∞. This complete the proof of Theorem B.1." }, { "heading": "B.4 PROOF OF THEOREM B.4", "text": "By Proposition B.6, E[Wt]−W ∗t has one-sided decay with\nAt = 2ηt N XtX T t , Bt = −Ξ∗t .\nBy Lemma A.7 and (B.6), E[At] satisfies\nlog ∥∥∥∥∥ t∏\nr=s\n( Id−2ηr 1\nN E[XrXTr ]\n) Q‖ ∥∥∥∥∥ 2 ≤ 1 2 logE ∥∥∥∥∥( t∏ r=s ( Id−2ηr XrX T r N )) Q‖ ∥∥∥∥∥ 2 2 < C1 2 − C2 2 ∫ t+1 s r−αdr.\nApplying Lemma A.4 using this bound and (B.7), we find that\n‖E[Wt]Q‖ −W ∗t ‖F = O(tα−β1).\nMoreover, because ‖Ξ∗t ‖F = O(t−β1), we also find that ‖W ∗t −W ∗∞‖F = O(t−β1+1), and hence\n‖E[Wt]Q‖ −W ∗∞‖F = O(t−β1+1).\nFurther, by Proposition B.6, E[(Wt − E[Wt])T(Wt − E[Wt])] has two-sided decay with\nAt = 2ηt N XtX T t , Ct = 4η2t N2\n[ Id ◦Var ( E[Wt]XtXTt − YtXTt )] .\nApplying Lemma A.5 with (B.6) and (B.8), we find that E [ ‖(Wt − E[Wt])Q‖‖2F ] = O(tα−β2).\nBy Chebyshev’s inequality, for any x > 0 we have P ( ‖WtQ‖ −W ∗∞‖F ≥ O(t−β1+1) + x ·O(t α−β2 2 ) ) ≤ x−2.\nFor any > 0, choosing x = tδ for small 0 < δ < we find as desired that\ntmin{β1−1, β2−α 2 }− ‖WtQ‖ −W ∗∞‖F p→ 0,\nthus completing the proof of Theorem B.4." }, { "heading": "C ANALYSIS OF NOISING AUGMENTATIONS", "text": "In this section, we give a full analysis of the noising augmentations presented in Section 4. Let us briefly recall the notation. As before, we consider overparameterized linear regression with loss\nL(W ;D) = 1 N ||WX − Y ||2F ,\nwhere the dataset D of size N consists of data matrices X,Y that each have N columns xi ∈ Rn, yi ∈ Rp with n > N. We optimize L(W ;D) by augmented gradient descent with additive Gaussian noise, which means that at each time t we replace D = (X,Y ) by a random dataset Dt = (Xt, Y ), where the columns xi,t of Xt are\nxi,t = xi + σtGi, Gi ∼ N (0, 1) i.i.d.\nWe then take a step Wt+1 = Wt − ηt∇WL(Wt;Dt)\nof gradient descent on the resulting randomly augmented loss L(W ;Dt) with learning rate ηt. A direct computation shows that the proxy loss\nLt = E [L(W ;Dt)] = L(W ;D) + σ2tN ||W || 2 F ,\nwhich is strictly convex. 
Thus, the space\nV‖ := column span of E[XtXTt ]\nis simply all of Rn. Moreover, the proxy loss has a unique minimum, which is\nW ∗t = Y X T (σ2tN Idn×n +XX T )−1." }, { "heading": "C.1 PROOF OF THEOREM 4.1", "text": "We first show convergence. For this, we seek to show that if σ2t , ηt → 0 with σ2t non-increasing and\n∞∑ t=0 ηtσ 2 t =∞ and\n∞∑ t=0 η2t σ 2 t <∞, (C.1)\nthen, Wt p→ Wmin. We will do this by applying Theorem 5.1, so we check that our assumptions imply the hypotheses of these theorems. For Theorem 5.1, we directly compute\nE[YtXTt ] = Y XT and E[XtXTt ] = XXT + σ2tN · Idn×n\nand\nE[XtXTt Xt] = XXTX + σ2t (N + n+ 1)X E[XtXTt XtXTt ] = XXTXXT + σ2t ( (2N + n+ 2)XXT + Tr(XXT) Idn×n ) + σ4tN(N + n+ 1) Idn×n .\nWe also find that ‖Ξ∗t ‖F = |σ2t − σ2t+1|N ∥∥∥∥Y XT(XXT + σ2tN · Idn×n )−1(XXT + σ2t+1N · Idn×n )−1∥∥∥∥\nF\n≤ |σ2t − σ2t+1|N‖Y XT[(XXT)+]2‖F .\nThus, because σ2t is decreasing, we see that the hypothesis (5.5) of Theorem 5.1 indeed holds. Further, we note that\n∞∑ t=0 η2tE [ ‖XtXTt − E[XtXTt ]‖2F + ‖YtXTt − E[YtXTt ]‖2F ] =\n∞∑ t=0 η2t σ 2 t ( 2(n+ 1)‖X‖2F +N‖Y ‖2F + σ2tNn(n+ 1) ) = O ( ∞∑ t=0 η2t σ 2 t ) ,\nwhich by (C.1) implies (B.3). Theorem 5.1 and the fact that limt→∞W ∗t = Wmin therefore yield that Wt\np→Wmin. For the rate of convergence, we aim to show that if ηt = Θ(t−x) and σ2t = Θ(t\n−y) with x, y > 0, x+ y < 1, and 2x+ y > 1, then for any > 0, we have that\ntmin{β, 1 2α}− ‖Wt −Wmin‖F p→ 0.\nWe now check the hypotheses for and apply Theorem B.4. For (B.6), notice that Yr = Id−2ηr XrX T r\nN\nsatisfies the hypotheses of Theorem A.6 with ar = 1 − 2ηrσ2r and b2r = η2rσ 2 r\na2r\n( 2(n + 1)‖X‖2F +\nσ2rNn(n + 1) ) . Thus, by Theorem A.6 and the fact that ηt = Θ(t−x) and σ2t = Θ(t −y), we find\nfor some C1, C2 > 0 that\nlogE ∥∥∥∥∥ t∏\nr=s\n(Id−2ηr XrX\nT r\nN ) ∥∥∥∥∥ 2\n2\n ≤ t∑ r=s b2r + 2 t∑ r=s log(1− 2ηrσ2r)\n≤ C1 − C2 ∫ t+1 s r−x−ydr.\nFor (B.7), we find that\n‖Ξ∗t ‖F ≤ |σ2t − σ2t+1|N‖Y XT[(XXT)+]2‖F = O(t−y−1).\nFinally, for (B.8), we find that η2t Tr [ Id ◦Var ( E[Wt]XtXTt − YtXTt )] = O(t−2x−y).\nNoting finally that ‖W ∗t −Wmin‖F = O(σ2t ) = O(t−y), we apply Theorem B.4 with α = x + y, β1 = y + 1, and β2 = 2x+ y to obtain the desired estimates. This concludes the proof of Theorem 4.1." }, { "heading": "D ANALYSIS OF SGD", "text": "This section gives the full analysis of the results for stochastic gradient descent with and without additive synthetic noise presented in Sections 6.1 and 6.2. Let us briefly recall the notation. As before, we consider overparameterized linear regression with loss\nL(W ;D) = 1 N ||WX − Y ||2F ,\nwhere the dataset D of size N consists of data matrices X,Y that each have N columns xi ∈ Rn, yi ∈ Rp with n > N.We optimize L(W ;D) by augmented SGD either with or without additive Gaussian noise. In the former case, this means that at each time t we replace D = (X,Y ) by a random batch Bt = (Xt, Yt) given by a prescribed batch size Bt = |Bt| in which each datapoint\nin Bt is chosen uniformly with replacement from D, and the resulting data matrices Xt and Yt are scaled so that Lt(W ) = L(W ;D). Concretely, this means that for the normalizing factor ct := √ N/Bt we have\nXt = ctXAt and Yt = ctY At, (D.1)\nwhereAt ∈ RN×Bt has i.i.d. columnsAt,i with a single non-zero entry equal to 1 chosen uniformly at random. 
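As a quick numerical sanity check (ours, not part of the proof), the first moment identity for A_t recorded in Lemma D.1 below, E[A_t A_t^T] = (B_t/N) Id_{N×N}, can be verified by Monte Carlo:

# Monte Carlo check of E[A A^T] = (B / N) Id for the SGD sampling matrix.
import numpy as np

rng = np.random.default_rng(2)
N, B, trials = 6, 3, 100_000
acc = np.zeros((N, N))
for _ in range(trials):
    A = np.zeros((N, B))
    A[rng.integers(0, N, size=B), np.arange(B)] = 1.0  # i.i.d. one-hot columns
    acc += A @ A.T
print(np.round(acc / trials, 2))  # approximately (B / N) * Id = 0.5 * Id

The off-diagonal entries vanish exactly, since each column of A has a single non-zero entry, and the diagonal entries concentrate around B/N.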
In this setting the minimum norm optimum for each t are the same and given by\nW ∗t = W ∗ ∞ = Y X T(XXT)+,\nwhich coincides with the minimum norm optimum for the unaugmented loss.\nIn the setting of SGD with additive noise at level σt, we take instead\nXt = ct(XAt + σtGt) and Yt = ctY At,\nwhere ct and At are as before and Gt ∈ Rn×Bt has i.i.d. Gaussian entries. In this setting, the proxy loss is\nLt(W ) := 1 N E [ ‖ctY At − ctWXAt − ctσtWGt‖2F ] = 1 N ‖Y −WX‖2F + σ2t ‖W‖2F ,\nwhich has ridge minimizer W ∗t = Y X T(XXT + σ2tN · Idn×n)−1.\nWe begin in §D.1 by treating the case of noiseless SGD. We then do the analysis in the presence of noise in §D.2." }, { "heading": "D.1 PROOF OF THEOREM 6.1", "text": "In order to apply Theorems B.1 and B.4, we begin by computing the moments of At as follows. Recall the notation diag(M) from Appendix A.1.\nLemma D.1. For any Z ∈ RN×N , we have that\nE[AtATt ] = Bt N IdN×N and E[AtATt ZAtATt ] = Bt N diag(Z) + Bt(Bt − 1) N2 Z.\nProof. We have that\nE[AtATt ] = Bt∑ i=1 E[Ai,tATi,t] = Bt N IdN×N .\nSimilarly, we find that\nE[AtATt ZAtATt ] = Bt∑ i,j=1 E[Ai,tATi,tZAj,tATj,t]\n= Bt∑ i=1 E[Ai,tATi,tZAi,tATi,t] + 2 ∑\n1≤i<j≤Bt\nE[Ai,tATi,tZAj,tATj,t]\n= Bt N diag(Z) + Bt(Bt − 1) N2 Z,\nwhich completes the proof.\nLet us first check convergence in mean:\nE[Wt]Q‖ →W ∗∞.\nTo see this, note that Lemma D.1 implies\nE[YtXTt ] = Y XT E[XtXTt ] = XXT,\nwhich yields that W ∗t = Y X T[XXT]+ = W ∗∞ (D.2)\nfor all t. We now prove convergence. Since all W ∗t are equal to W ∗ ∞, we find that Ξ ∗ t = 0. By (B.9) and Lemma D.1 we have\nE[Wt+1]−W ∗∞ = (E[Wt]−W ∗∞) (\nId−2ηt N XXT\n) ,\nwhich implies since 2ηtN < λmax(XX T)−1 for large t that for some C > 0 we have\n‖E[Wt]Q‖ −W ∗∞‖F ≤ ‖W0Q‖ −W ∗∞‖F t−1∏ s=0 ∥∥∥∥Q‖ − 2ηsN XXT ∥∥∥∥ 2\n≤ C‖W0Q‖ −W ∗∞‖F exp ( − t−1∑ s=0 2ηs N λmin,V‖(XX T) ) . (D.3)\nFrom this we readily conclude using (6.1) the desired convergence in mean E[Wt]Q‖ →W ∗∞.\nLet us now prove that the variance tends to zero. By Proposition B.6, we find that Zt = E[(Wt − E[Wt])T(Wt − E[Wt])] has two-sided decay of type ({At}, {Ct}) with\nAt = 2ηt N XtX T t , Ct = 4η2t N2\n[ Id ◦Var((E[Wt]Xt − Yt)XTt ) ] .\nTo understand the resulting rating of convergence, let us first obtain a bound on Tr(Ct). To do this, note that for any matrix A, we have\nTr (Id ◦Var[A]) = Tr ( E [ ATA ] − E [A]T E [A] ) .\nMoreover, using the definition (D.1) of the matrix At and writing\nMt := E [Wt]X − Y, we find (\n(E [Wt]Xt − Yt)XTt )T (E [Wt]Xt − Yt)XTt = XAtATtMTt MtAtATt XT\nas well as E [( (E[Wt]Xt − Yt)XTt )]T E [(E[Wt]Xt − Yt)XTt ] = XE [AtATt ]MTt MtE [AtATt ]XT.\nHence, using the expression from Lemma D.1 for the moments ofAt and recalling the scaling factor ct = (N/Bt) 1/2, we find\nTr(Ct) = 4η2t Bt Tr\n( X { diag ( MTt Mt ) − 1 N MTt Mt } XT ) .\nNext, writing ∆t := E[Wt]−W ∗∞\nand recalling (D.2), we see that Mt = ∆tX.\nThus, applying the estimates (D.3) about exponential convergence of the mean, we obtain\nTr(Ct) ≤ 8η2t Bt ∣∣∣∣∆tQ||∣∣∣∣22 ∣∣∣∣XXT ∣∣∣∣22 ≤ C 8η 2 t\nBt\n∣∣∣∣XXT ∣∣∣∣2 2 ‖∆0Q‖‖2F exp ( − t−1∑ s=0 4ηs N λmin,V‖(XX T) ) . (D.4)\nNotice now that Yr = Q‖ − Ar satisfies the conditions of Theorem A.6 with ar = 1 − 2ηr 1 N λmin,V‖(XX T) and b2r = 4η2r Bra2rN Tr ( X diag(XTX)X − 1NXX TXXT )\n. By Theorem A.6 we then obtain for any t > s > 0 that\nE ∥∥∥∥∥ t∏\nr=s+1\n(Q‖ −Ar) ∥∥∥∥∥ 2\n2\n ≤ e∑tr=s+1 b2r t∏ r=s+1 ( 1− 2ηr 1 N λmin,V‖(XX T) )2 . 
(D.5)\nBy two-sided decay of Zt, we find by (D.4), (D.5), and (A.9) that\nE[‖WtQ‖ − E[Wt]Q‖‖2F ] = Tr(Q‖ZtQ‖)\n≤ e− 4 N λmin,V‖ (XX\nT) ∑t−1 s=0 ηs\n‖XXT‖22 N2 ‖∆0Q‖‖2FC t−1∑ s=0 8η2s Bs/N e 4ηs N λmin,V‖ (XX T)+ ∑t r=s+1 b 2 r . (D.6)\nSince ηs → 0, we find that ηs NBs e 4ηs N λmin,V‖ (XX T) is uniformly bounded and that b2r ≤ 4 N λmin,V‖(XX T)ηr for sufficiently large r. We therefore find that for some C ′ > 0,\nE[‖WtQ‖ − E[Wt]Q‖‖2F ] ≤ C ′ t−1∑ s=0 ηse − 4N λmin,V‖ (XX T) ∑s r=0 ηr ,\nhence limt→∞ E[‖WtQ‖ − E[Wt]Q‖‖2F ] = 0 by Lemma A.8. Combined with the fact that E[Wt]Q‖ →W ∗∞, this implies that WtQ‖ p→W ∗∞.\nTo obtain a rate of convergence, observe that by (D.3) and the fact that ηt = Θ(t−x), for some C1, C2 > 0 we have ‖E[Wt]Q‖ −W ∗∞‖F ≤ C1 exp ( − C2t1−x ) . (D.7)\nSimilarly, by (D.6) and the fact that ηsBs/N <∞ uniformly, for some C3, C4, C5 > 0 we have E[‖WtQ‖ − E[Wt]Q‖‖2F ] ≤ C3 exp ( − C4t1−x ) t1−x\nWe conclude by Chebyshev’s inequality that for any a > 0 we have P ( ‖WtQ‖ −W ∗∞‖F ≥ C1 exp ( − C2t1−x ) + a · √ C3t 1 2− x 2 e−C4t 1−x/2 ) ≤ a−2.\nTaking a = t, we conclude as desired that for some C > 0, we have\neCt 1−x ‖WtQ‖ −W ∗∞‖F p→ 0. This completes the proof of Theorem 6.1." }, { "heading": "D.2 PROOF OF THEOREM 6.2", "text": "We now complete our analysis of SGD with Gaussian noise. We will directly check that the optimization trajectory Wt converges at large t to the minimal norm interpolant W ∗∞ with the rates claimed in Theorem 6.2. We will deduce this from Theorem B.4. To check the hypotheses of this theorem, we will need expressions for its moments, which we record in the following lemma. Lemma D.2. We have\nE[YtXTt ] = Y XT and E[XtXTt ] = XXT + σ2tN Idn×n . (D.8)" }, { "heading": "Moreover,", "text": "E[YtXTt XtY Tt ] = c4tE[Y AtATt XTXAtATt Y T + σ2t Y AtGTt GtATt Y T]\n= N\nBt Y diag(XTX)Y T + Bt − 1 Bt Y XTXY T + σ2tNY Y T\nE[YtXTt XtXTt ] = c4tE[Y AtATt XTXAtATt XT + σ2t Y AtGTt GtATt XT\n+ σ2t Y AtG T t XAtG T t + σ 2 t Y AtA T t X TGtG T t ]\n= N\nBt Y diag(XTX)XT + Bt − 1 Bt Y XTXXT + σ2t (N + n+ 1 Bt/N )Y XT\nE[XtXTt XtXTt ] = c4tE[XAtATt XTXAtATt XT + σ2tGtGTt XAtATt XT + σ2tXAtGTt GtATt XT\n+ σ2tXAtA T t X TGtG T t + σ 2 tGtA T t X TGtA T t X T + σ2tXAtG T t XAtG T t\n+ σ2tGtA T t X TXAtG T t + σ 4 tGtG T t GtG T t ]\n= N\nBt X diag(XTX)XT + Bt − 1 Bt XXTXXT + σ2t (2N + n+ 2 Bt/N )XXT\n+ σ2t N\nBt Tr(XXT) Idn×n +σ 4 tN(N +\nn+ 1 Bt/N ) Idn×n .\nProof. All these formulas are obtained by direct, if slightly tedious, computation.\nWith these expressions in hand, we can readily check the of conditions Theorem B.4. First, we find using the Sherman-Morrison-Woodbury matrix inversion formula that\n‖Ξ∗t ‖F = |σ2tN − σ2t+1N | ∥∥Y XT(XXT + σ2tN · Idn×n)−1(XXT + σ2t+1N · Idn×n)−1∥∥F\n(D.9) ≤ N |σ2t − σ2t+1| ∥∥Y XT[(XXT)+]2∥∥ F .\nHence, assuming that σ2t = Θ(t −y), we see that condition (B.7) of Theorem B.4 holds with\nβ1 = −y − 1.\nNext, let us verify that the condition (B.6) holds for an appropriate α. For this, we need to bound\nlogE ∣∣∣∣∣ ∣∣∣∣∣ t∏\nr=s\n( Id−2ηr\nN XrX\nT r )∣∣∣∣∣ ∣∣∣∣∣ 2\n2\n,\nwhich we will do using Theorem A.6. In order to apply this result, we find by direct inspection of the formula E[XrXTr ] = XXT + σ2rN Idn×n that ∣∣∣∣∣∣∣∣E [Id−2ηrN XrXTr ]∣∣∣∣∣∣∣∣ 2 = 1− 2ηrσ2r := ar. 
Moreover, we have\nE [∣∣∣∣∣∣∣∣Id−2ηrN XrXTr − E [ Id−2ηr N XrX T r ]∣∣∣∣∣∣∣∣2 2 ] = 4η2r N2 E [∣∣∣∣XrXTr − E [XrXTr ]∣∣∣∣22] .\nUsing the exact expressions for the resulting moments from Lemma D.2, we find\n4η2r N2 E [∣∣∣∣XrXTr − E [XrXTr ]∣∣∣∣22] =\n4η2r N2\n[ 1 Bt Tr ( X(N diag(XTX)−XTX)XT ) + 2σ2t n+ 1 Bt/N Tr(XXT) + σ4t Nn(n+ 1) Bt/N ] ≤ Cη2r .\nThus, applying Theorem A.6, we find that\nlogE ∣∣∣∣∣ ∣∣∣∣∣ t∏\nr=s\n( Id−2ηr\nN XrX\nT r )∣∣∣∣∣ ∣∣∣∣∣ 2\n2\n≤ t∑\nr=s\nCη2r log\n( t∏\nr=s\n( 1− 2ηrσ2r )) ≤ t∑ r=s Cη2r − 2ηrσ2r .\nRecall that, in the notation of Theorem 6.2, we have\nηr = Θ(r −x), σ2r = Θ(r −y).\nHence, since under out hypotheses we have x < 2y, we conclude that condition (B.6) holds with α = x+ y. Moreover, exactly as in Proposition B.6, we have\n∆′t+1 = ∆ ′ t\n( Id−2ηt N E [ XtX T t ]) + 2 N Ξ∗t , ∆ ′ t := E [Wt −W ∗t ] .\nSince ||Ξ∗t ||F = O(t −y−1) and we already saw that ∣∣∣∣∣∣∣∣Id−2ηtN E [XtXTt ] ∣∣∣∣∣∣∣∣ 2 = 1− 2ηtσ2t , we may use the single sided decay estimates Lemma A.4 to conclude that\n||∆′t||F = O(t x−1).\nFinally, it remains to bound η2t Tr [ Id ◦Var(E[Wt]XtXTt − YtXTt )] .\nA direct computation using Lemma D.2 shows\nE [ ‖YtXTt − E[YtXTt ]‖2F ] = 1 Bt Tr ( Y (N diag(XTX)−XTX)Y T ) + σ2tN Tr(Y Y T).\nHence, again using D.2, we find η2t Tr [ Id ◦Var(E[Wt]XtXTt − YtXTt ) ] = η2t Tr ( 1 Bt E[Wt]X(N diag(XTX)−XTX)XTE[Wt]T\n+ 2σ2t n+ 1\nBt/N E[Wt]XXTE[Wt]T + (σ2t\nN Bt Tr(XXT) + σ4tN n+ 1 Bt/N )E[Wt]E[Wt]T ) − 2η2t Tr ( 1 Bt Y (N diag(XTX)−XTX)XTE[Wt]T + σ2t n+ 1 Bt/N Y XTE[Wt]T\n) + η2t Tr ( 1 Bt Y (N diag(XTX)−XTX)Y T + σ2tNY Y T ) .\nTo make sense of this term, note that W ∗∞X = Y.\nHence, we find after some rearrangement that η2t Tr [ Id ◦Var(E[Wt]XtXTt − YtXTt ) ] ≤ Cη2t (σ2t + ||∆t|| 2 F ),\nwhere we set ∆t := E [Wt −W ∗∞] .\nFinally, we have\n∆t ≤ ∆′t + ||W ∗t −W ∗∞||F = O(t x−1) + Θ(t−y) = Θ(t−y)\nsince we assumed that x+ y < 1. Therefore, we obtain η2t Tr [ Id ◦Var(E[Wt]XtXTt − YtXTt ) ] ≤ Cη2t σ2t = Θ(t−2x−y),\nshowing that condition (B.8) holds with β2 = 2x+ y. Applying Theorem B.4 completes the proof." } ]
2020
null
SP:140100004dc307efd67790ef58f67929d3403c67
[ "This paper proposed a new approach for modeling multi-domain dialogue state tracking by incorporating domain-slot relationship using a pre-trained language encoder. The proposed approach are based on using special tokens to mode l such relationship. Two kinds of special tokens are proposed to represent domain-slot pair, DS_merge token for each specific pair, and tokens for every domain and slots separately ", "In this paper, the authors proposed a multidomain state-tracking model that leverages the relationship among different domain-slot pairs. This is done by leveraging the full-attention step over the [CLS] special token and by providing all the domain-slot pairs as a special token to a pre-trained language model (Figure 2 is very clear). To predict the value of the slot $D_{i,j}$, the author concatenates the representation of the [CLS] token, share among all the domain-slots, and the $D_{i,j}$, provided as input, and use a gating mechanism, by only using $D_{i,j}$ representation, to decide whether require as value (i.e., prediction) or not (e.g. None). \\" ]
Dialogue state tracking for multi-domain dialogues is challenging because the model should be able to track dialogue states across multiple domains and slots. Past studies were limited in that they did not factor in the relationships among different domain-slot pairs. Although recent approaches did support relationship modeling among the domain-slot pairs, they did not leverage a pre-trained language model, which has improved the performance of numerous natural language tasks, in the encoding process. Our approach fills the gap between these previous studies. We propose a model for multi-domain dialogue state tracking that effectively models the relationship among domain-slot pairs using a pre-trained language encoder. Inspired by the way the special [CLS] token in BERT is used to aggregate the information of the whole sequence, we use multiple special tokens, one for each domain-slot pair, each encoding information corresponding to its domain and slot. The special tokens are run together with the dialogue context through the pre-trained language encoder, which effectively models the relationship among different domain-slot pairs. Our experimental results show that our model achieves state-of-the-art performance on the MultiWOZ-2.1 and MultiWOZ-2.2 datasets.
[]
[ { "authors": [ "Paweł Budzianowski", "Tsung-Hsien Wen", "Bo-Hsiang Tseng", "Iñigo Casanueva", "Stefan Ultes", "Osman Ramadan", "Milica Gasic" ], "title": "Multiwoz-a large-scale multi-domain wizard-of-oz dataset for taskoriented dialogue modelling", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Guan-Lin Chao", "Ian Lane" ], "title": "Bert-dst: Scalable end-to-end dialogue state tracking with bidirectional encoder representations from transformer", "venue": "arXiv preprint arXiv:1907.03040,", "year": 2019 }, { "authors": [ "Lu Chen", "Boer Lv", "Chi Wang", "Su Zhu", "Bowen Tan", "Kai Yu" ], "title": "Schema-guided multi-domain dialogue state tracking with graph attention neural networks", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": null, "year": 2019 }, { "authors": [ "Mihail Eric", "Rahul Goel", "Shachi Paul", "Abhishek Sethi", "Sanchit Agarwal", "Shuyag Gao", "Dilek Hakkani-Tur" ], "title": "Multiwoz 2.1: Multi-domain dialogue state corrections and state tracking baselines", "venue": null, "year": 1907 }, { "authors": [ "Shuyang Gao", "Abhishek Sethi", "Sanchit Agarwal", "Tagyoung Chung", "Dilek Hakkani-Tur" ], "title": "Dialog state tracking: A neural reading comprehension approach", "venue": "In Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue,", "year": 2019 }, { "authors": [ "Rahul Goel", "Shachi Paul", "Dilek Hakkani-Tür" ], "title": "Hyst: A hybrid approach for flexible and accurate dialogue state tracking", "venue": "Proc. Interspeech 2019,", "year": 2019 }, { "authors": [ "Michael Heck", "Carel van Niekerk", "Nurul Lubis", "Christian Geishauser", "Hsien-Chin Lin", "Marco Moresi", "Milica" ], "title": "Gašić. 
Trippy: A triple copy strategy for value independent neural dialog state tracking", "venue": null, "year": 2005 }, { "authors": [ "Ehsan Hosseini-Asl", "Bryan McCann", "Chien-Sheng Wu", "Semih Yavuz", "Richard Socher" ], "title": "A simple language model for task-oriented dialogue", "venue": "arXiv preprint arXiv:2005.00796,", "year": 2020 }, { "authors": [ "Zhenzhong Lan", "Mingda Chen", "Sebastian Goodman", "Kevin Gimpel", "Piyush Sharma", "Radu Soricut" ], "title": "Albert: A lite bert for self-supervised learning of language representations", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Hung Le", "Richard Socher", "Steven CH Hoi" ], "title": "Non-autoregressive dialog state tracking", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Hwaran Lee", "Jinsik Lee", "Tae-Yoon Kim" ], "title": "Sumbt: Slot-utterance matching for universal and scalable belief tracking", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Yinhan Liu", "Myle Ott", "Naman Goyal", "Jingfei Du", "Mandar Joshi", "Danqi Chen", "Omer Levy", "Mike Lewis", "Luke Zettlemoyer", "Veselin Stoyanov" ], "title": "Roberta: A robustly optimized bert pretraining approach", "venue": null, "year": 1907 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "Decoupled weight decay regularization", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Shikib Mehri", "Mihail Eric", "Dilek Hakkani-Tur" ], "title": "Dialoglue: A natural language understanding benchmark for task-oriented dialogue", "venue": "arXiv preprint arXiv:2009.13570,", "year": 2020 }, { "authors": [ "Elnaz Nouri", "Ehsan Hosseini-Asl" ], "title": "Toward scalable neural dialogue state tracking model", "venue": "arXiv preprint arXiv:1812.00899,", "year": 2018 }, { "authors": [ "Adam Paszke", "Sam Gross", "Francisco Massa", "Adam Lerer", "James Bradbury", "Gregory Chanan", "Trevor Killeen", "Zeming Lin", "Natalia Gimelshein", "Luca Antiga" ], "title": "Pytorch: An imperative style, highperformance deep learning library", "venue": "In Advances in neural information processing systems,", "year": 2019 }, { "authors": [ "Alec Radford", "Jeff Wu", "Rewon Child", "David Luan", "Dario Amodei", "Ilya Sutskever" ], "title": "Language models are unsupervised multitask learners", "venue": null, "year": 2019 }, { "authors": [ "Osman Ramadan", "Paweł Budzianowski", "Milica Gašić" ], "title": "Large-scale multi-domain belief tracking with knowledge sharing", "venue": "arXiv preprint arXiv:1807.06517,", "year": 2018 }, { "authors": [ "Abhinav Rastogi", "Xiaoxue Zang", "Srinivas Sunkara", "Raghav Gupta", "Pranav Khaitan" ], "title": "Towards scalable multi-domain conversational agents: The schema-guided dialogue dataset", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Yong Shan", "Zekang Li", "Jinchao Zhang", "Fandong Meng", "Yang Feng", "Cheng Niu", "Jie Zhou" ], "title": "A contextual hierarchical attention network with adaptive objective for dialogue state tracking", "venue": null, "year": 2006 }, { "authors": [ "Petar Veličković", "Guillem Cucurull", "Arantxa Casanova", "Adriana Romero", "Pietro Liò", "Yoshua Bengio" ], "title": "Graph attention networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Thomas Wolf", 
"Lysandre Debut", "Victor Sanh", "Julien Chaumond", "Clement Delangue", "Anthony Moi", "Pierric Cistac", "Tim Rault", "Rémi Louf", "Morgan Funtowicz", "Joe Davison", "Sam Shleifer", "Patrick von Platen", "Clara Ma", "Yacine Jernite", "Julien Plu", "Canwen Xu", "Teven Le Scao", "Sylvain Gugger", "Mariama Drame", "Quentin Lhoest", "Alexander M. Rush" ], "title": "Huggingface’s transformers: Stateof-the-art natural language processing", "venue": "ArXiv, abs/1910.03771,", "year": 2019 }, { "authors": [ "Chien-Sheng Wu", "Andrea Madotto", "Ehsan Hosseini-Asl", "Caiming Xiong", "Richard Socher", "Pascale Fung" ], "title": "Transferable multi-domain state generator for task-oriented dialogue systems", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Xiaoxue Zang", "Abhinav Rastogi", "Srinivas Sunkara", "Raghav Gupta", "Jianguo Zhang", "Jindong Chen" ], "title": "Multiwoz 2.2: A dialogue dataset with additional annotation corrections and state tracking baselines", "venue": "ACL 2020,", "year": 2020 }, { "authors": [ "Jian-Guo Zhang", "Kazuma Hashimoto", "Chien-Sheng Wu", "Yao Wan", "Philip S Yu", "Richard Socher", "Caiming Xiong" ], "title": "Find or classify? dual strategy for slot-value predictions on multi-domain dialog state tracking", "venue": null, "year": 1910 }, { "authors": [ "Victor Zhong", "Caiming Xiong", "Richard Socher" ], "title": "Global-locally self-attentive encoder for dialogue state tracking", "venue": "In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2018 }, { "authors": [ "Li Zhou", "Kevin Small" ], "title": "Multi-domain dialogue state tracking as dynamic knowledge graph enhanced question answering", "venue": "arXiv preprint arXiv:1911.06192,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "A task-oriented dialogue system is designed to help humans solve tasks by understanding their needs and providing relevant information accordingly. For example, such a system may assist its user with making a reservation at an appropriate restaurant by understanding the user’s needs for having a nice dinner. It can also recommend an attraction site to a travelling user, accommodating the user’s specific preferences. Dialogue State Tracking (DST) is a core component of these taskoriented dialogue systems, which aims to identify the state of the dialogue between the user and the system. DST represents the dialogue state with triplets of the following items: a domain, a slot, a value. A set of {restaurant, price range, cheap}, or of {train, arrive-by, 7:00 pm} are examples of such triplets. Fig. 1 illustrates an example case of the dialogue state during the course of the conversation between the user and the system. Since a dialogue continues for multiple turns of utterances, the DST model should successfully predict the dialogue state at each turn as the conversation proceeds. For multi-domain conversations, the DST model should be able to track dialogue states across different domains and slots.\nPast research on multi-domain conversations used a placeholder in the model to represent domainslot pairs. A domain-slot pair is inserted into the placeholder in each run, and the model runs repeatedly until it covers all types of the domain-slot pairs. (Wu et al., 2019; Zhang et al., 2019; Lee et al., 2019). A DST model generally uses an encoder to extract information from the dialogue context that is relevant to the dialogue state. A typical input for a multi-domain DST model comprises a sequence of the user’s and the system’s utterances up to the turn t, Xt, and the domain-slot information for domain i and slot j, DiSj . In each run, the model feeds the input for a given domain-slot pair through the encoder.\nfencoder(Xt, DiSj) for i = 1, · · · , n, j = 1, · · · ,m, (1)\nwhere n and m is the number of domains and slots, respectively. However, because each domain-slot pair is modeled independently, the relationship among the domain-slot pairs can not be learned. For example, if the user first asked for a hotel in a certain place and later asked for a restaurant near that hotel, sharing the information between {hotel, area} and {restaurant, area} would help the model recognize that the restaurant should be in the same area as the hotel.\nRecent approaches address these issues by modeling the dialogue state of every domain-slot pair in a single run, given a dialogue context (Chen et al., 2020; Le et al., 2019). This approach can be represented as follows:\nfencoder(Xt, D1S1, · · · , DnSm). (2)\nBecause the encoder receives all of the domain-slot pairs, the model can factor in the relationship among the domain-slot pairs through the encoding process. For the encoder, these studies used models that are trained from scratch, without pre-training. However, since DST involves natural language text for the dialogue context, using a pre-trained language model can help improve the encoding process. Several studies used BERT (Devlin et al., 2019), a pre-trained bidirectional language model, for encoding the dialogue context (Zhang et al., 2019; Lee et al., 2019; Chao & Lane, 2019; Gao et al., 2019), but did not model the dependencies among different domain-slot pairs. Our approach fills the gap between these previous studies. 
In this work, we propose a model for multi-domain dialogue state tracking that effectively models the relationship among domain-slot pairs using a pre-trained language encoder. We modify the input structure of BERT, specifically the special token part of it, to adjust it for multi-domain DST.\nThe [CLS] token of BERT (Devlin et al., 2019) is expected to encode the aggregate sequence representation as it runs through BERT, which is used for various downstream tasks such as sentence classification or question answering. This [CLS] token can also be used as an aggregate representation for a given dialogue context. However, in a multi-domain dialogue, a single [CLS] token has to store information for different domain-slot pairs at the same time. In this respect, we propose to use multiple special tokens, one for each domain-slot pair. Using a separate special token for each domain-slot pair is more effective in storing information for different domains and slots since each token can concentrate on its corresponding domain and slot. We consider two different ways to represent such tokens: DS-merge and DS-split. DS-merge employs a single token to represent a single domain-slot pair. For example, to represent a domain-slot pair of {restaurant, area}, we use a special token DS(restaurant,area). DS-split, on the other hand, employs tokens separately for the domain and slot and then merges them into one to represent a domain-slot pair. For {restaurant, area}, the domain token Drestaurant and the slot token Sarea. is computed separately and then merged. We use {DS}merge and {DS}split to represent the special tokens for DS-merge or DS-split, respectively. Unless it is absolutely necessary to specify whether the tokens are from DS-merge or DS-split, we’ll refer to the DS-produced tokens as {DS} tokens, without special distinction, in our descriptions forward. The {DS} tokens, after being encoded by the pre-trained language encoder along with the dialogue context, is used to predict its corresponding domain-slot value for a given dialogue context." }, { "heading": "2 RELATED WORKS", "text": "Recent work on dialogue state tracking can be largely divided into two groups according to how the slot-values are predicted: fixed-vocabulary and open-vocabulary. The fixed-vocabulary approach, also known as the picklisted-based approach, uses a classification module to predict the dialogue state for each slot from a pre-defined set of candidate values (Zhong et al., 2018; Nouri & HosseiniAsl, 2018; Ramadan et al., 2018; Eric et al., 2019; Lee et al., 2019; Chen et al., 2020). The openvocabulary approach generates the dialogue state for each domain-slot pair either by using a generative decoder to generate text (Wu et al., 2019; Hosseini-Asl et al., 2020) or by extracting text spans from the dialogue history (Gao et al., 2019; Goel et al., 2019; Heck et al., 2020). There is also an approach to use both picklist-based and span-based methods according to the slot type (Zhang et al., 2019).\nFor models that deal with multi-domain dialogue, how they deal with different domain-slot pairs is another way to divide them. The first approach encodes the dialogue context independent of the domain-slot pairs and uses separate modules for each domain-slot pair (Eric et al., 2019; Gao et al., 2019; Goel et al., 2019; Heck et al., 2020). The second approach encodes the dialogue context using the domain-slot pair information as the prefix and run the encoder multiple times (Nouri & Hosseini-Asl, 2018; Wu et al., 2019). 
Other approaches encode the dialogue context independently but merges it with domain-slot pair information later with a separate fusion module (Zhong et al., 2018; Ramadan et al., 2018; Lee et al., 2019). However, none of these models are able to model the relationship among different domain-slot pairs because there is no module that enables the interaction between them.\n(Le et al., 2019) and (Chen et al., 2020) directly models the relationship among different domainslot pairs. (Le et al., 2019) uses a Fertility decoder to learn potential dependencies across domainslot pairs, but without using a pre-trained language model. Also, their model requires additional data such as system action and delexicalized system responses for its performance. (Chen et al., 2020) also explicitly models the relationship among different domain-slot pairs by using a Graph Attention Network (GAT) (Veličković et al., 2018). Schema graphs, which is the relation graph between domains and slots, are utilized for connecting edges in the GAT. Our work is different from these works in that we leverage the power of a pre-trained language encoder for directly modeling the dependencies among different domain-slot pairs.\n(Hosseini-Asl et al., 2020) takes a different approach from the others by using multi-task learning that encompasses DST as well as action and response generation with a generative language model GPT-2 (Radford et al., 2019). However, since our work is focused on DST, we consider the model that is trained on DST only. In the decoding process, dialogue states for different domain-slot pairs are sequentially generated." }, { "heading": "3 PROPOSED METHOD", "text": "Our model is composed of three parts. The first is the domain-slot-context (DSC) encoder, which encodes the dialogue context along with the special tokens representing domain-slot pairs. Next is slot-gate classifier, which is a preliminary classifier that predicts whether each domain-slot pair is relevant to the dialogue context. The adopted the concept of the slot-gate classifier from (Wu et al., 2019) and made adjustments to apply to our model. The last is the slot value classifier for predicting the value for each domain-slot pair among the candidate values.\nIn the following descriptions, we assume a dialogue context with a total of T turns. The task is to predict the dialogue state, which are {domain, slot, value} triplets for all domain-slot pairs, for every turn t = 1, · · · , T , using the dialogue context until each turn. Section 3 show the overview of our proposed model." }, { "heading": "3.1 DOMAIN-SLOT-CONTEXT ENCODER", "text": "The main structure of our model is the DSC encoder, which uses a pre-trained language to encode the dialogue context along with {DS} tokens. For the pre-trained language encoder, we used ALBERT (Lan et al., 2019) due to its strong performance on numerous natural language understanding tasks while having fewer parameters compared to other BERT-style encoders. {DS} tokens work like\nthe [CLS] token for BERT, encoding information corresponding to its domain-slot pair (DS-merge) or domain and slot (DS-split). The set of special tokens for each layout are shown in Eq. (3) and Eq. (4), respectively. In DS-merge, we used special tokens for each individual domain-slot pair. If there are many domain-slot pairs, using this layout can increase the number of special tokens as each domain-slot pair requires a separate special token. In DS-split, we used separate tokens for the domain and slot. 
To represent a domain-slot pair, we merged the corresponding tokens from each domain and slot by concatenating them. This promotes modeling compositionality, since the same slot token can be used for different domains. These {DS} tokens and the dialogue context are processed through the DSC encoder, which results in each token in {DS} being encoded with contextualized representations according to its domain and slot.\n{DS}merge = {DS(domain(1),slot(1)), · · · , DS(domain(n),slot(m))} (3) {DS}split = {Ddomain(1) , · · · , Ddomain(n) , Sslot(1) , · · · , Sslot(m)} (4)\nFig. 3 shows the input representation of the DSC encoder. The sequence begins with {DS} tokens. The special token [CLS] follows, which encodes the overall information of the dialogue context. For the dialogue context, we added a special token [SEPu] to separate each user or system utterance, which is added at the end of each utterance from the user or system. The input ends with a special token [SEP ] as the end-of-sequence token.\n4 types of embeddings are summed up to represent each token embedding. We used the pre-trained word embedding of ALBERT, except for the {DS} tokens, which are randomly initialized. We introduced the token type embedding to differentiate the {DS} tokens, user utterances tokens, and system utterances tokens. For DS-merge, we used a single token type embedding to represent a domain-slot pair, whereas for DS-split, we used two token type embeddings, one for the domain and the other for the slot. We did not apply this embedding for the [CLS] token. Position embeddings are also employed from ALBERT, but the index of the positional embedding starts from the [CLS] token. We did not use the positional embedding for the {DS} tokens as the order within those tokens is meaningless. Lastly, the segment embedding from ALBERT was used to represent the whole sequence as a single segment, which is the default segment embedding of ALBERT.\nDSC encoder encodes contextualized embeddings for every input token. However, for the slotgate classifier and slot-value classifier, we only use the special token outputs of the DSC encoder ([CLS] token and {DS} tokens). This is formally defined as follows for DS-merge and DS-split, respectively, for turn t:\nD̂S(1,1), · · · , D̂S(n,m), ĈLS = DSCencoder([{DS}merge, CLS,Xt, SEP ]), (5)\nD̂1, · · · , D̂n, Ŝ1 · · · , Ŝm, ĈLS = DSCencoder([{DS}split, CLS,Xt, SEP ]), (6)\nwhere Xt represents the dialogue context of (S1, SEPuU1, SEPu, · · · , St, SEPu, U t, SEPu). U t and St represents the utterance for the tth turn for the user and system respectively. The {DS}\ntokens and [CLS] token with the hat notation ̂ represents the encoded output of the DSC encoder for those special tokens. They are vectors of Rd, where d is the hidden dimension of ALBERT." }, { "heading": "3.2 SLOT-GATE CLASSIFIER", "text": "For the slot-gate classifier, we use the DSC encoder output of the {DS} tokens for each domainslot pair to predict whether it is relevant to the dialogue or not. In previous methods, gating used categories of {prediction, dontcare, none}, where prediction means a slot value is not dontcare or none and dontcare means that the predicted slot value is dontcare and none means that the domain-slot is non-relevant. The label for slot-gates are made from the slot-values. However, the performance for the dontcare category was far inferior to the other two categories, so we dismissed the dontcare category and only used {prediction, none}. 
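Before moving on, the input layout of Section 3.1 (Fig. 3, Eqs. (3)-(6)) can be made concrete with the following sketch. The special-token strings and the `build_input` helper are our own illustrative naming choices, not names from the paper.

```python
# Illustrative sketch of the DSC encoder input layout (Section 3.1, Fig. 3,
# Eqs. (3)-(6)). The special-token strings are our own naming, not the paper's.

domains = ["restaurant", "hotel"]
slots = ["area", "price range"]

# DS-merge: one special token per domain-slot pair (Eq. (3)).
ds_merge = [f"[DS_{d}_{s.replace(' ', '_')}]" for d in domains for s in slots]

# DS-split: separate domain and slot tokens (Eq. (4)); a pair is represented
# later by concatenating the two encoded vectors.
ds_split = [f"[D_{d}]" for d in domains] + [f"[S_{s.replace(' ', '_')}]" for s in slots]

def build_input(ds_tokens, turns):
    """turns: list of (system_utterance, user_utterance) up to turn t.
    Returns the layout [{DS}, [CLS], X_t, [SEP]], where every utterance
    is terminated by [SEPu] (Eqs. (5)-(6))."""
    context = []
    for sys_utt, usr_utt in turns:
        context += [sys_utt, "[SEPu]", usr_utt, "[SEPu]"]
    return ds_tokens + ["[CLS]"] + context + ["[SEP]"]

turns = [("Which area would you like?", "Somewhere in the centre, please.")]
print(build_input(ds_merge, turns))
```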
In our preliminary models with ALBERT large-v2, the precision and recall for dontcare were 48.87% and 17.21%, respectively. The precision and recall were 98.91% and 99.45% for none, and 96.16% and 94.93% for prediction, respectively. In this setting, the dontcare category is included in prediction. For DS-merge, the slot-gate classifier predicts the value using the domain-slot pair special token. For the domain-slot pair of domain i and slot j, the slot-gate classifier output for DS-merge is
$$\mathrm{Gate}_{D_iS_j} = \mathrm{sigmoid}\big( W^{G}_{DS_{(i,j)}} \widehat{DS}_{(i,j)} \big), \quad (7)$$
where $W^{G}_{DS_{(i,j)}} \in \mathbb{R}^{1 \times d}$. For DS-split, the slot-gate classifier uses the concatenated output of the corresponding domain and slot tokens. Similarly, for the same domain-slot pair, the slot-gate classifier output for DS-split is
$$\mathrm{Gate}_{D_iS_j} = \mathrm{sigmoid}\big( W^{G}_{(D_i,S_j)} \big[ \hat{D}_i | \hat{S}_j \big] \big), \quad (8)$$
where $|$ represents concatenation of vectors and $W^{G}_{(D_i,S_j)} \in \mathbb{R}^{1 \times 2d}$. The loss objective for the gate classification is as follows:
$$\mathcal{L}_{gate} = \sum_{(i,j) \in DS} \mathrm{BinaryCrossEntropy}\big( y^{gate}_{D_iS_j}, \mathrm{Gate}_{D_iS_j} \big), \quad (9)$$
where DS refers to the set of all domain-slot pairs and $y^{gate}_{D_iS_j}$ is the binary slot-gate label for domain i and slot j. If a domain-slot pair is predicted to be none, the corresponding output of the slot-value classifier is changed to none regardless of the slot-value classifier's prediction." }, { "heading": "3.3 SLOT-VALUE CLASSIFIER", "text": "We employ the fixed-vocabulary-based classification method for predicting slot values. As in (Zhang et al., 2019), the candidate-value list for each domain-slot pair was constructed by using the values from the training dataset, rather than using the incomplete ontology from the dataset. The [CLS] token is concatenated with each token from {DS} and used as the input to the slot-value classifier for each domain-slot pair. The slot-value classifier output of domain i and slot j for DS-merge is as follows:
$$\mathrm{Value}_{D_iS_j} = \mathrm{softmax}\big( W^{V}_{DS_{(i,j)}} \big[ \widehat{DS}_{(i,j)} | \widehat{CLS} \big] \big), \quad (10)$$
where $W^{V}_{DS_{(i,j)}} \in \mathbb{R}^{n_{D_iS_j} \times 2d}$ and $n_{D_iS_j}$ is the number of candidate values for domain i and slot j. Similarly, for DS-split, the slot-value classifier output is
$$\mathrm{Value}_{D_iS_j} = \mathrm{softmax}\big( W^{V}_{(D_i,S_j)} \big[ \hat{D}_i | \hat{S}_j | \widehat{CLS} \big] \big), \quad (11)$$
where $W^{V}_{(D_i,S_j)} \in \mathbb{R}^{n_{D_iS_j} \times 3d}$. The loss objective for the slot-value classification is as follows:
$$\mathcal{L}_{value} = \sum_{(i,j) \in DS} \mathrm{CrossEntropy}\big( y^{value}_{D_iS_j}, \mathrm{Value}_{D_iS_j} \big), \quad (12)$$
where $y^{value}_{D_iS_j}$ is the label for domain i and slot j." }, { "heading": "3.4 TOTAL OBJECTIVE FUNCTION", "text": "The DSC encoder, slot-gate classifier, and slot-value classifier are jointly trained under the total objective function below:
$$\mathcal{L}_{total} = \mathcal{L}_{gate} + \mathcal{L}_{value} \quad (13)$$" }, { "heading": "4 EXPERIMENT SETUP AND RESULTS", "text": "We evaluate our model using the joint goal accuracy, which considers a model prediction to be correct only when the prediction jointly matches the ground-truth values for all domain-slot pairs, given a dialogue context." }, { "heading": "4.1 DATASET", "text": "We use the MultiWOZ-2.1 (Eric et al., 2019) and MultiWOZ-2.2 (Zang et al., 2020) datasets, both of which fixed noisy annotations and dialogue utterances of the MultiWOZ 2.0 dataset (Budzianowski et al., 2018). The dataset contains 7 domains and over 10,000 dialogues. We follow the previous studies and use 5 domains (train, restaurant, hotel, taxi, attraction) with 30 domain-slot pairs. The other two domains (police, hospital) have little data and do not appear in the test dataset. For MultiWOZ-2.1, we follow the pre-processing explained in (Wu et al., 2019). For MultiWOZ-2.2, we use the raw data as given without any pre-processing."
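Putting Sections 3.2-3.4 together, the two classification heads are small linear layers on top of the encoded special tokens. The following PyTorch sketch mirrors Eqs. (7), (10), and (13) for the DS-merge layout; the module name and shapes are our own illustrative choices, not the authors' released code.

```python
import torch
import torch.nn as nn

class DSMergeHeads(nn.Module):
    """Illustrative slot-gate (Eq. (7)) and slot-value (Eq. (10)) heads for
    one domain-slot pair under DS-merge; not the authors' released code."""
    def __init__(self, hidden_dim: int, num_values: int):
        super().__init__()
        self.gate = nn.Linear(hidden_dim, 1)                 # W^G in Eq. (7)
        self.value = nn.Linear(2 * hidden_dim, num_values)   # W^V in Eq. (10)

    def forward(self, ds_vec, cls_vec):
        gate_prob = torch.sigmoid(self.gate(ds_vec))         # P(pair is relevant)
        value_logits = self.value(torch.cat([ds_vec, cls_vec], dim=-1))
        return gate_prob, value_logits

head = DSMergeHeads(hidden_dim=768, num_values=7)
ds_vec, cls_vec = torch.randn(4, 768), torch.randn(4, 768)   # batch of 4
gate_prob, value_logits = head(ds_vec, cls_vec)

# Joint objective of Eq. (13): binary gate loss plus cross-entropy value loss.
gate_label = torch.randint(0, 2, (4, 1)).float()
value_label = torch.randint(0, 7, (4,))
loss = nn.functional.binary_cross_entropy(gate_prob, gate_label) \
     + nn.functional.cross_entropy(value_logits, value_label)
```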
}, { "heading": "4.2 SETUP", "text": "For the pre-trained language encoder, we used ALBERT(Lan et al., 2019) from HuggingFace (Wolf et al., 2019) in Pytorch (Paszke et al., 2019). We used the xxlarge-v2 version of ALBERT for the main experiment and compare other versions (base-v2, large-v2) in the analysis section. We also compared RoBERTa (Liu et al., 2019) to generalizability of our model. The optimizer was AdamW (Loshchilov & Hutter, 2018) with a learning rate of 1e−5 for ALBERT-xlarge-v2, ALBERT-xxlargev2 and RoBERTa-large and 5e−5 for ALBERT-base-v2, ALBERT-large-v2 and RoBERTa-base. We applied linear warm-up followed by linear decay for the learning rate. We trained all models with the effective batch size of 32, using gradient accumulation for bigger ALBERT models. Models were\nselected based on their joint goal accuracy on the validation data split. Only the training data was used to build the labels for each domain-slot pair. We used two NVIDIA V100 for our training. The original ALBERT was pre-trained with a sequence length of up to 512 tokens. However, dialogues that are longer than 512 tokens exists in the data. Usually, the standard procedure for this situation is to truncate the sequence up to 512 tokens and discard the remaining tokens. However, to cover dialogues longer than 512 tokens that are in the dataset, we resized the positional embedding to cover a maximum length of the dialogue. We preserved the original pre-trained position embedding for positions indices up to 512 and randomly initialized the remaining position indices. This method showed better results than limiting the maximum sequence length to 512. We plan to release our code on Github." }, { "heading": "4.3 RESULTS", "text": "Table 1 shows the joint goal accuracy of our model compared to previous methods. Both of our models show better performance among models without any additional supervision other than the dialogue context and domain-slot pair labels. Especially, the DS-split, ALBERT-xxlarge-v2 version of our proposed model achieves state-of-the-art result on the MultiWOZ-2.1 and MultiWOZ-2.2 dataset, without any form of extra supervision. However, in smaller models, The model with DSsplit shows better results than the model with DS-merge. This shows that in models with enough capacity, the slot-sharing of DS-split was more effective. However, this was not the case for smaller ALBERT models, which is explained in Section 4.4.2. This is important in that scalability is much better for DS-split than DS-merge, as many slots can be shared across different domains, reducing the number of special tokens to be used. We show the individual domain-slot accuracy in Appendix A.2, Table 4." }, { "heading": "4.4 ANALYSIS", "text": "In this section, we show that relationship modeling among different domain-slot pairs is indeed the key factor of our proposed model by running ablation studies. Also, we compare the effect of the size and type of the pre-trained language encoder in terms of performance." }, { "heading": "4.4.1 RELATIONSHIP MODELING AMONG DIFFERENT DOMAIN-SLOT PAIRS", "text": "First, we did not use any {DS} tokens and only used the CLS token. Because there are no dedicated special tokens for each domain-slot pair, the performance is very poor as shown in ’None’ row in Table 2. This shows that our approach to introduce {DS} is effective. 
Next, to evaluate the effect of relationship modeling among different domain-slot pairs, we blocked the attention among different {DS} tokens during the encoding process, which restricts direct interaction among {DS} tokens. Table 2 shows that without the relationship modeling, our model performance deteriorates by a substantial amount. This validates our idea that relationship modeling is the crucial factor for our approach.\nIn the Appendix A.1, we show some examples of wrong predictions that models without direct relationship modeling has made." }, { "heading": "4.4.2 SIZE AND TYPE OF THE PRE-TRAINED LANGUAGE ENCODER", "text": "We compared ALBERT and RoBERTa (Liu et al., 2019) and various model sizes within those pretrained language encoders. Table 3 shows the result for different versions of the pre-trained language encoders. For ALBERT, a bigger language model shows better results as is shown in various downstream tasks that ALBERT was evaluated on (Lan et al., 2019). Except for ALBERT-xx-large, all other configurations show that DS-merge shows better performance than DS-split. Based on the drastic increase in performance with xx-large, we presume that the high model complexity of ALBERTxx-large enabled {DS}split tokens to effectively encode information and make slot-sharing to work. In smaller models, this slot-sharing might not have been as effective due to their smaller encoding capacity. Also, concatenation, which was used for merging domain and slot embeddings in DS-split, might not have been enough for fully representing the information for the domain-slot pair in smaller models. RoBERTa also shows similar results with bigger models showing stronger performance." }, { "heading": "4.4.3 LEARNING CURVE", "text": "Fig. 4 shows the learning curve of the ALBERT-xxlarge-v2 on the MultiWOZ-2.2 dataset. The joint goal accuracy steadily increases after the slot-value loss plateaus." }, { "heading": "5 CONCLUSION", "text": "In this paper, we propose a model for multi-domain dialogue state tracking that effectively models the relationship among domain-slot pairs using a pre-trained language encoder. We introduced two methods to represent special tokens for each domain-slot pair: DS-merge and DS-split. These tokens work like the [CLS] token for BERT, encoding information corresponding to its domain-slot pair (DS-merge) or domain and slot (DS-split). These special tokens are run together with the dialogue context through the pre-trained language encoder, which enables modeling the relationship among different domain-slot pairs. Experimental results show that our model achieves state-of-the-art performance on the MultiWOZ-2.1 and MultiWOZ-2.2 dataset. The ablation experiments show that the relationship modeling among different domain-slot pairs is the key element of our model. Also, we showed that larger pre-trained language encoders improves performance. We hope to advance our research by finding ways to effectively apply our model towards the open-vocabulary approach, which will enable better generalization for candidate values that are outside of the training data." }, { "heading": "ACKNOWLEDGMENTS", "text": "This research was supported and funded by the Korean National Police Agency. [Pol-Bot Development for Conversational Police Knowledge Services / PR09-01-000-20]" }, { "heading": "A APPENDIX", "text": "A.1 RELATIONSHIP MODELING EXAMPLES\nA.1.1 EXAMPLE 1\nFig. 5 shows an example of a wrong prediction that the model without domain-slot relationship modeling makes. 
The value for {taxi, departure} is not explicitly mentioned in the dialogue context. However, Our full model correctly predicts the value for {taxi, departure}, which can be inferred from the dialogue context and {hotel, name}. However, the model without relationship modeling fails to predict the correct value for {taxi, departure}. User: i am staying in cambridge soon and would like to stay at a and b guest house.\nSystem: sure, how many days and how many people?\nUser: we are staying 6 people for 4 nights starting from tuesday. i need the reference number\nSystem: your booking is successful! your reference number is iigra0mi. do you need anything else?\nUser: yeas, what to recommend if i want to see good architecture in the west part of town?\nSystem: unfortunately there is no good architecture on the west end but i can look in other parts of town if you want\nUser: what about a museum?\nSystem: what part of town there are none in the west.\nUser: there are no museums in the west at all?\nSystem: sorry about that, there are actually 7 in that area.\nUser: great, can i get the postcode, entrance fee and address of 1 of them?\nSystem: cafe jello gallery has a free entrance fee. the address is cafe jello gallery, 13 magdalene street and the post code is cb30af. can i help you with anything else?\nUser: yes please. i need a taxi to commute.\nSystem: when would you like to leave and arrive?\nUser: i would like to get to the gallery by 13:45, please.\nSystem: sure, lookout for a blue volvo the contact number is 07941424083. can i help with anything else?\nUser: that is all for now. thank you so much\nA.1.2 EXAMPLE 2\nFig. 6 also shows an example of a wrong prediction that the model without domain-slot relationship modeling makes. The value for {train, day} is not explicitly mentioned in the dialogue context. In a similar manner from the example above, it can be referred from the {restaurant, book day}. User: i would like to find a particular restaurant in cambridge. the name of the restaurant is restaurant 2 two. could you give me the location?\nSystem: restaurant 2 two is nice french restaurant located at 22 chesterton road chesterton. would like me to book you a table?\nUser: that would be great. i need it for 8 on friday.\nSystem: do you have a time preference?\nUser: yes at 11:15 if that is not available i can do 10:15\nSystem: the booking for 10:15 was successful they will reserve the table for 15 minutes. the reference number is 6b5z7vj5.\nUser: thanks. can you help me find a train, too? i want to leave cambridge some time after 12:15.\nA.2 INDIVIDUAL SLOT ACCURACY\nTable 4 shows the individual domain-slot accuracy for the ALBERT-xxlarge-v2 model on the MultiWOZ-2.2 dataset." } ]
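As a supplement to the ablation in Section 4.4.1, blocking attention among {DS} tokens can be implemented with an attention mask in which DS positions may attend to the dialogue context but not to each other. The following is our own illustrative sketch of such a mask, not code from the paper; whether DS tokens keep self-attention is an assumption on our part.

```python
import torch

def ds_blocking_mask(num_ds: int, seq_len: int) -> torch.Tensor:
    """Boolean attention mask (True = attention allowed) that blocks direct
    attention among the first `num_ds` positions ({DS} tokens) while leaving
    all other attention intact. Illustrative sketch for the Section 4.4.1
    ablation, not the authors' code."""
    mask = torch.ones(seq_len, seq_len, dtype=torch.bool)
    mask[:num_ds, :num_ds] = False                            # no DS -> DS attention
    mask[torch.arange(num_ds), torch.arange(num_ds)] = True   # keep self-attention
    return mask

mask = ds_blocking_mask(num_ds=3, seq_len=8)
print(mask.int())
```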
2020
null
SP:2e548e320d5da211ffed027de7f0c6b78935f205
[ "The paper proposes PEP (Plug-in Embedding Pruning) to reduce the size of embedding table while incurring insignificant drop in accuracy. The related work is well summarized into Embedding Parameter Sharing and Embedding Size Selection methods and the motivation for the current approach is well explained. The paper draws inspiration from Lottery Ticket Hypothesis. The problem formulation of Embedding pruning is done in a crisp way avoiding additional hyper parameter tuning that can be found in other methods. Similar to LTH, the paper shows that the initiation strategy can make the training process faster and stable. The results show an impressive 97-99% parameter pruning via PEP. As for the computation cost, PEP results show an additional 20-30% time cost compare with base models.", "This paper proposed a novel approach to reduce size of the embedding table while not to drop in accuracy and computational optimization. Fixed-size embedding table has two problems, high memory usage cost and overfitting problem for those features that do not require too large representation. This paper recast the problem of embedding-size selection into learning column-wise sparsity, constraint K (eq(7)) and then convert S(V,s) problem (eq(8)). Paper used three benchmark datasets and some classical methods to verify effect." ]
The embedding-based representation learning is commonly used in deep learning recommendation models to map the raw sparse features to dense vectors. The traditional embedding manner that assigns a uniform size to all features has two issues. First, the numerous features inevitably lead to a gigantic embedding table that causes a high memory usage cost. Second, it is likely to cause the over-fitting problem for those features that do not require too large representation capacity. Existing works that try to address the problem always cause a significant drop in recommendation performance or suffer from the limitation of unaffordable training time cost. In this paper, we propose a novel approach, named PEP1 (short for Plug-in Embedding Pruning), to reduce the size of the embedding table while avoiding the drop of recommendation accuracy. PEP prunes embedding parameter where the pruning threshold(s) can be adaptively learned from data. Therefore we can automatically obtain a mixed-dimension embedding-scheme by pruning redundant parameters for each feature. PEP is a general framework that can plug in various base recommendation models. Extensive experiments demonstrate it can efficiently cut down embedding parameters and boost the base model’s performance. Specifically, it achieves strong recommendation performance while reducing 97-99% parameters. As for the computation cost, PEP only brings an additional 20-30% time cost compared with base models.
[ { "affiliations": [], "name": "Siyi Liu" }, { "affiliations": [], "name": "Chen Gao" }, { "affiliations": [], "name": "Yihong Chen" }, { "affiliations": [], "name": "Depeng Jin" }, { "affiliations": [], "name": "Yong Li" } ]
[ { "authors": [ "Martı́n Abadi", "Paul Barham", "Jianmin Chen", "Zhifeng Chen", "Xiaoqiang Zhang" ], "title": "Tensorflow: A system for large-scale machine", "venue": null, "year": 2016 }, { "authors": [ "Amir Beck", "Marc Teboulle" ], "title": "A fast iterative shrinkage-thresholding algorithm for linear inverse problems", "venue": "Siam Journal on Imaging Sciences,", "year": 2009 }, { "authors": [ "Heng Tze Cheng", "Levent Koc", "Jeremiah Harmsen", "Tal Shaked", "Tushar Chandra", "Hrishi Aradhye", "Glen Anderson", "Greg Corrado", "Wei Chai", "Mustafa Ispir" ], "title": "Wide & deep learning for recommender", "venue": null, "year": 2016 }, { "authors": [ "Weiyu Cheng", "Yanyan Shen", "Linpeng Huang" ], "title": "Differentiable neural input search for recommender systems", "venue": "arXiv preprint arXiv:2006.04466,", "year": 2020 }, { "authors": [ "Paul Covington", "Jay Adams", "Emre Sargin" ], "title": "Deep neural networks for youtube recommendations", "venue": "In Proceedings of the 10th ACM conference on recommender systems,", "year": 2016 }, { "authors": [ "Thomas Elsken", "Jan Hendrik Metzen", "Frank Hutter" ], "title": "Neural architecture search: A survey", "venue": "J. Mach. Learn. Res.,", "year": 2019 }, { "authors": [ "Jonathan Frankle", "Michael Carbin" ], "title": "The lottery ticket hypothesis: Finding sparse, trainable neural networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Antonio Ginart", "Maxim Naumov", "Dheevatsa Mudigere", "Jiyan Yang", "James Zou" ], "title": "Mixed dimension embeddings with application to memory-efficient recommendation systems", "venue": null, "year": 1909 }, { "authors": [ "Huifeng Guo", "Ruiming Tang", "Yunming Ye", "Zhenguo Li", "Xiuqiang He" ], "title": "Deepfm: a factorizationmachine based neural network for ctr prediction", "venue": "In Proceedings of the 26th International Joint Conference on Artificial Intelligence,", "year": 2017 }, { "authors": [ "Frank Hutter", "Lars Kotthoff", "Joaquin Vanschoren" ], "title": "Automated machine learning: methods, systems, challenges", "venue": null, "year": 2019 }, { "authors": [ "Prateek Jain", "Ambuj Tewari", "Purushottam Kar" ], "title": "On iterative hard thresholding methods for highdimensional m-estimation", "venue": null, "year": 2014 }, { "authors": [ "Manas R Joglekar", "Cong Li", "Mei Chen", "Taibai Xu", "Xiaoming Wang", "Jay K Adams", "Pranav Khaitan", "Jiahui Liu", "Quoc V Le" ], "title": "Neural input search for large scale recommendation models", "venue": "In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2020 }, { "authors": [ "Wang-Cheng Kang", "Derek Zhiyuan Cheng", "Ting Chen", "Xinyang Yi", "Dong Lin", "Lichan Hong", "Ed H Chi" ], "title": "Learning multi-granular quantized embeddings for large-vocab categorical features in recommender systems", "venue": "In Companion Proceedings of the Web Conference", "year": 2020 }, { "authors": [ "Aditya Kusupati", "Vivek Ramanujan", "Raghav Somani", "Mitchell Wortsman", "Prateek Jain", "Sham Kakade", "Ali Farhadi" ], "title": "Soft threshold weight reparameterization for learnable sparsity", "venue": "In Proceedings of the 37th International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Hanxiao Liu", "Karen Simonyan", "Yiming Yang" ], "title": "Darts: Differentiable architecture search", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Adam Paszke", "Sam 
Gross", "Francisco Massa", "Adam Lerer", "James Bradbury", "Gregory Chanan", "Trevor Killeen", "Zeming Lin", "Natalia Gimelshein", "Luca Antiga" ], "title": "Pytorch: An imperative style, highperformance deep learning library", "venue": "In Advances in neural information processing systems,", "year": 2019 }, { "authors": [ "Steffen Rendle" ], "title": "Factorization machines", "venue": "IEEE International Conference on Data Mining,", "year": 2010 }, { "authors": [ "Hao-Jun Michael Shi", "Dheevatsa Mudigere", "Maxim Naumov", "Jiyan Yang" ], "title": "Compositional embeddings using complementary partitions for memory-efficient recommendation systems", "venue": "In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2020 }, { "authors": [ "Weiping Song", "Chence Shi", "Zhiping Xiao", "Zhijian Duan", "Yewen Xu", "Ming Zhang", "Jian Tang" ], "title": "Autoint: Automatic feature interaction learning via self-attentive neural networks", "venue": "In Proceedings of the 28th ACM International Conference on Information and Knowledge Management,", "year": 2019 }, { "authors": [ "Omid Taheri", "Sergiy A. Vorobyov" ], "title": "Sparse channel estimation with lp-norm and reweighted l1-norm penalized least mean squares", "venue": "In IEEE International Conference on Acoustics,", "year": 2011 }, { "authors": [ "Pauli Virtanen", "Ralf Gommers", "Travis E Oliphant", "Matt Haberland", "Paul Van Mulbregt" ], "title": "Author correction: Scipy 1.0: fundamental algorithms for scientific computing in python", "venue": "Nature Methods,", "year": 2020 }, { "authors": [ "Caojin Zhang", "Yicun Liu", "Yuanpu Xie", "Sofia Ira Ktena", "Alykhan Tejani", "Akshay Gupta", "Pranay Kumar Myana", "Deepak Dilipkumar", "Suvadip Paul", "Ikuhiro Ihara" ], "title": "Model size reduction using frequency based double hashing for recommender systems", "venue": "In Fourteenth ACM Conference on Recommender Systems,", "year": 2020 }, { "authors": [ "Shuai Zhang", "Lina Yao", "Aixin Sun", "Yi Tay" ], "title": "Deep learning based recommender system: A survey and new perspectives", "venue": "ACM Computing Surveys (CSUR),", "year": 2019 }, { "authors": [ "Xiangyu Zhao", "Haochen Liu", "Hui Liu", "Jiliang Tang", "Weiwei Guo", "Jun Shi", "Sida Wang", "Huiji Gao", "Bo Long" ], "title": "Memory-efficient embedding for recommendations", "venue": "arXiv preprint arXiv:2006.14827,", "year": 2020 }, { "authors": [ "Xiangyu Zhao", "Chong Wang", "Ming Chen", "Xudong Zheng", "Xiaobing Liu", "Jiliang Tang" ], "title": "Autoemb: Automated embedding dimensionality search in streaming recommendations", "venue": "arXiv preprint arXiv:2002.11252,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "The success of deep learning-based recommendation models (Zhang et al., 2019) demonstrates their advantage in learning feature representations, especially for the most widely-used categorical features. These models utilize the embedding technique to map these sparse categorical features into real-valued dense vectors to extract users’ preferences and items’ characteristics. The learned vectors are then fed into prediction models, such as the inner product in FM (Rendle, 2010), selfattention networks in AutoInt (Song et al., 2019), to obtain the prediction results. The embedding table could contain a large number of parameters and cost huge amounts of memory since there are always a large number of raw features. Therefore, the embedding table takes the most storage cost.\nA good case in point is the YouTube Recommendation Systems (Covington et al., 2016). It demands tens of millions of parameters for embeddings of the YouTube video IDs. Considering the increasing demand for instant recommendations in today’s service providers, the scale of embedding tables becomes the efficiency bottleneck of deep learning recommendation models. On the other hand, features with uniform embedding size may hard to handle the heterogeneity among different features. For example, some features are more sparse, and assigning too large embedding sizes is likely\n∗Chen Gao is the Corresponding Author. The work is performed when Siyi Liu is an intern in Tsinghua University.\n1Codes are available at: https://github.com/ssui-liu/learnable-embed-sizes-for-RecSys\nto result in over-fitting issues. Consequently, recommendation models tend to be sub-optimal when embedding sizes are uniform for all features.\nThe existing works towards this problem can be divided into two categories. Some works (Zhang et al., 2020; Shi et al., 2020; Kang et al., 2020) proposed that some closely-related features can share parts of embeddings, reducing the whole cost. Some other works (Joglekar et al., 2020; Zhao et al., 2020b;a; Cheng et al., 2020) proposed to assign embeddings with flexible sizes to different features relying on human-designed rules (Ginart et al., 2019) or neural architecture search (Joglekar et al., 2020; Zhao et al., 2020b;a; Cheng et al., 2020). Despite a reduced embedding size table, these methods still cannot perform well on the two most concerned aspects, recommendation performance and computation cost. Specifically, these methods either obtain poor recommendation performance or spend a lot of time and efforts in getting proper embedding sizes.\nIn this paper, to address the limitations of existing works, we proposed a simple yet effective pruning-based framework, named Plug-in Embedding Pruning (PEP), which can plug in various embedding-based recommendation models. Our method adopts a direct manner–pruning those unnecessary embedding parameters in one shot–to reduce parameter number.\nSpecifically, we introduce the learnable threshold(s) that can be jointly trained with embedding parameters via gradient descent. Note that the threshold is utilized to determine the importance of each parameter automatically. Then the elements in the embedding vector that are smaller than the threshold will be pruned. Then the whole embedding table is pruned to make sure each feature has a suitable embedding size. That is, the embedding sizes are flexible. 
After getting the pruned embedding table, we retrain the recommendation model with the inspiration of the Lottery Ticket Hypothesis (LTH) (Frankle & Carbin, 2018), which demonstrates that a sub-network can reach higher accuracy compared with the original network. Based on flexible embedding sizes and the LTH, our PEP can cut down embedding parameters while maintaining and even boosting the model's recommendation performance. Finally, while there is always a trade-off between recommendation performance and parameter number, our PEP can obtain multiple pruned embedding tables by running only once. In other words, our PEP can generate several memory-efficient embedding matrices once-for-all, which can well handle the various demands for performance or memory-efficiency in real-world applications. We conduct extensive experiments on three public benchmark datasets: Criteo, Avazu, and MovieLens-1M. The results demonstrate that our PEP can not only achieve the best performance compared with state-of-the-art baselines but also reduce 97% to 99% of parameter usage. Further studies show that our PEP is quite computationally efficient, requiring only a little additional time for embedding-size learning. Furthermore, visualization and interpretability analyses on the learned embeddings confirm that our PEP can capture features' intrinsic properties, which provides insights for future research." }, { "heading": "2 RELATED WORK", "text": "Existing works try to reduce the embedding table size of recommendation models from two perspectives: embedding parameter sharing and embedding size selection." }, { "heading": "2.1 EMBEDDING PARAMETER SHARING", "text": "The core idea of these methods is to make different features re-use embeddings via parameter sharing. Kang et al. (2020) proposed MGQE, which retrieves embedding fragments from a small set of shared centroid embeddings and then generates the final embedding by concatenating those fragments. Zhang et al. (2020) used the double-hash trick to make low-frequency features share a small embedding table while reducing the likelihood of a hash collision. Shi et al. (2020) tried to yield a unique embedding vector for each feature category from a small embedding table by combining multiple smaller embeddings (called embedding fragments). The combination is usually done through concatenation, addition, or element-wise multiplication of embedding fragments.
However, those methods suffer from two limitations. First, engineers are required to carefully design the parameter-sharing ratio to balance accuracy and memory costs. Second, these rough embedding-sharing strategies cannot find the redundant parts in the embedding tables, and thus they always cause a drop in recommendation performance.
In this work, our method automatically chooses suitable embedding usages by learning from data. Therefore, engineers can be free from massive efforts in designing sharing strategies, and the model performance can be boosted by removing redundant parameters and alleviating the over-fitting issue." }, { "heading": "2.2 EMBEDDING SIZE SELECTION", "text": "The embedding-sharing methods assign uniform embedding sizes to every feature, which may still fail to deal with the heterogeneity among different features. Recently, several methods proposed a new paradigm of mixed-dimension embedding tables. Specifically, different from assigning all features a uniform embedding size, different features can have different embedding sizes. 
MDE (Ginart et al., 2019) proposed a human-defined rule that the embedding size of a feature is proportional to its popularity. However, this rule-based method is too rough and cannot handle those important features with low frequency. Additionally, there are plenty of hyper-parameters in MDE requiring a lot of tuning effort. Some other works (Joglekar et al., 2020; Zhao et al., 2020b;a; Cheng et al., 2020) assigned adaptive embedding sizes to different features, relying on the advances in Neural Architecture Search (NAS) (Elsken et al., 2019), a significant research direction of Automated Machine Learning (AutoML) (Hutter et al., 2019). NIS (Joglekar et al., 2020) used a reinforcement-learning-based algorithm to search for embedding sizes from a candidate set predefined by human experts; a controller is adopted to generate the probability distribution of sizes for specific feature embeddings. This was further extended by DartsEmb (Zhao et al., 2020b), which replaced the reinforcement-learning search algorithm with differentiable search (Liu et al., 2018). AutoDim (Zhao et al., 2020a) allocated different embedding sizes to different feature fields, rather than individual features, in the same way as DartsEmb. DNIS (Cheng et al., 2020) made the candidate embedding sizes continuous without predefined candidate dimensions. However, all these NAS-based methods require extremely high computation costs in the searching procedure. Even for methods that adopt differentiable architecture search algorithms, the searching cost is still not affordable. Moreover, these methods also require great effort in designing proper search spaces.
Different from these works, our pruning-based method can be trained quite efficiently and does not require any human effort in determining the embedding-size candidates." }, { "heading": "3 PROBLEM FORMULATION", "text": "Feature-based recommender systems (also known as click-through rate prediction) are commonly used in today's information services. In general, deep learning recommendation models take various raw features, including users' profiles and items' attributes, as input and predict the probability that a user likes an item. Specifically, models take the combination of the user's profile and the item's attributes, denoted by $\mathbf{x}$, as the input vector, where $\mathbf{x}$ is the concatenation of all fields, defined as follows:
$$\mathbf{x} = [\mathbf{x}_1; \mathbf{x}_2; \ldots; \mathbf{x}_M], \quad (1)$$
where $M$ denotes the total number of feature fields, and $\mathbf{x}_i$ is the feature representation (usually a one-hot vector) of the $i$-th field. Then, for $\mathbf{x}_i$, embedding-based recommendation models generate the corresponding embedding vector $\mathbf{v}_i$ via the following formulation:
$$\mathbf{v}_i = \mathbf{V}_i \mathbf{x}_i, \quad (2)$$
where $\mathbf{V}_i \in \mathbb{R}^{n_i \times d}$ is the embedding matrix of the $i$-th field, $n_i$ denotes the number of features in the $i$-th field, and $d$ denotes the size of the embedding vectors. The model's embedding matrices $\mathbf{V}$ for all fields of features can be formulated as follows,
$$\mathbf{V} = \{\mathbf{V}_1, \mathbf{V}_2, \ldots, \mathbf{V}_M\}. \quad (3)$$
The prediction score can be calculated with $\mathbf{V}$ and the model's other parameters $\Theta$ (mainly the parameters of the prediction model) as follows,
$$\hat{y} = \phi(\mathbf{x} \mid \mathbf{V}, \Theta), \quad (4)$$
where $\hat{y}$ is the predicted probability and $\phi$ represents the prediction model, such as FM (Rendle, 2010) or AutoInt (Song et al., 2019). As for model training, to learn the model parameters, the optimizer minimizes the training loss as follows,
$$\min \mathcal{L}(\mathbf{V}, \Theta, \mathcal{D}), \quad (5)$$
where $\mathcal{D} = \{\mathbf{x}, y\}$ represents the data fed into the model, $\mathbf{x}$ denotes the input features, $y$ denotes the ground-truth label, and $\mathcal{L}$ is the loss function. 
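To ground the formulation of Eqs. (1)-(5), the sketch below assembles field-wise embedding lookups into a simple FM-style predictor. The FM scoring function is just one concrete choice of φ here, and the sketch is illustrative rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class SimpleFMRecommender(nn.Module):
    """Illustrative embedding-based predictor following Eqs. (1)-(4): one
    embedding table V_i per field, combined by a second-order FM-style phi."""
    def __init__(self, field_sizes, emb_dim: int):
        super().__init__()
        self.tables = nn.ModuleList(nn.Embedding(n, emb_dim) for n in field_sizes)

    def forward(self, x):
        # x: LongTensor of shape (batch, num_fields), one feature id per field.
        v = torch.stack([tab(x[:, i]) for i, tab in enumerate(self.tables)], dim=1)
        # FM second-order interaction: 0.5 * ((sum v)^2 - sum v^2), summed over dims.
        sum_sq = v.sum(dim=1).pow(2)
        sq_sum = v.pow(2).sum(dim=1)
        return torch.sigmoid(0.5 * (sum_sq - sq_sum).sum(dim=1))  # y_hat of Eq. (4)

model = SimpleFMRecommender(field_sizes=[100, 50, 20], emb_dim=8)
x = torch.randint(0, 20, (4, 3))  # batch of 4 samples, 3 fields
y_hat = model(x)                  # predicted click probabilities
```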
The Logloss is the most widely-used loss function in recommendation tasks (Rendle, 2010; Guo et al., 2017; Song et al., 2019) and is calculated as follows,
$$\mathcal{L} = -\frac{1}{|\mathcal{D}|} \sum_{j=1}^{|\mathcal{D}|} \big( y_j \log(\hat{y}_j) + (1 - y_j) \log(1 - \hat{y}_j) \big), \quad (6)$$
where $|\mathcal{D}|$ is the total number of training samples and regularization terms are omitted for simplification." }, { "heading": "4 METHODOLOGY", "text": "" }, { "heading": "4.1 LEARNABLE EMBEDDING SIZES THROUGH PRUNING", "text": "As mentioned above, a feasible solution for memory-efficient embedding learning is to automatically assign different embedding sizes $\tilde{d}_i$ to different feature embeddings $\mathbf{v}_i$, which is our goal. However, learning $\tilde{d}_i$ directly is infeasible due to its discreteness and the extremely large optimization space. To address this, we propose the novel idea of enforcing column-wise sparsity on $\mathbf{V}$, which equivalently shrinks the embedding size. For example, as shown in Figure 1, the first value in embedding $\mathbf{v}_1$ is pruned and set to zero, leading to an effective embedding size of $\tilde{d}_1 = d_1 - 1$. Furthermore, some unimportant feature embeddings, like $\mathbf{v}_3$, are dropped by setting all of their values to zero (our PEP benefits from this kind of reduction, as demonstrated in Sections 5.1, 5.3, and 5.4). Thus our method can significantly cut down embedding parameters. Note that the technique of sparse matrix storage helps us to significantly save memory usage (Virtanen et al., 2020).
In such a way, we recast the problem of embedding-size selection into learning column-wise sparsity for the embedding matrix $\mathbf{V}$. To achieve that, we design a sparsity constraint on $\mathbf{V}$ as follows,
$$\min \mathcal{L}, \quad \text{s.t.} \quad \|\mathbf{V}\|_0 \le k, \quad (7)$$
where $\|\cdot\|_0$ denotes the $L_0$-norm, i.e., the number of non-zeros, and $k$ is the parameter budget, that is, the constraint on the total number of embedding parameters.
However, direct optimization of Equation (7) is NP-hard due to the non-convexity of the $L_0$-norm constraint. To solve this problem, the convex relaxation of the $L_0$-norm, called the $L_1$-norm, has been studied for a long time (Taheri & Vorobyov, 2011; Beck & Teboulle, 2009; Jain et al., 2014). For example, Projected Gradient Descent (PGD) (Jain et al., 2014) in particular has been proposed to project parameters onto the $L_1$ ball to make the gradient computable in almost closed form. Note that the $L_1$-ball projection is also known as Soft Thresholding (Kusupati et al., 2020). Nevertheless, such methods still face two major issues. First, the process of projecting the optimization values onto the $L_1$ ball requires too much computation, especially when the recommendation model has millions of parameters. Second, the parameter budget $k$ must be manually set at a global level by human experts; considering that features have various importance for recommendation, such an operation is obviously sub-optimal. To tackle these two challenges, inspired by Soft Threshold Reparameterization (Kusupati et al., 2020), we directly optimize the projection of $\mathbf{V}$ and adaptively prune $\mathbf{V}$ via learnable threshold(s) that can be updated by gradient descent. The re-parameterization of $\mathbf{V}$ can be formulated as follows,
$$\hat{\mathbf{V}} = \mathcal{S}(\mathbf{V}, s) = \mathrm{sign}(\mathbf{V}) \, \mathrm{ReLU}(|\mathbf{V}| - g(s)), \quad (8)$$
where $\hat{\mathbf{V}} \in \mathbb{R}^{N \times d}$ denotes the re-parameterized embedding matrix, and $g(s)$ serves as the pruning threshold value, for which the sigmoid function is a simple yet effective solution (more details on how to choose a suitable $g(s)$ are provided in Appendix A.1). We set the initial value of the trainable parameter $s \in \mathbb{R}$ (called $s_{init}$) to make sure that the threshold $g(s)$ starts close to zero. 
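A small numeric illustration of Eq. (8), assuming the sigmoid choice of g(s) mentioned above: entries of V whose magnitudes fall below the threshold are zeroed, and the surviving entries are shrunk toward zero.

```python
import torch

def soft_threshold(V: torch.Tensor, s: torch.Tensor) -> torch.Tensor:
    """Eq. (8): S(V, s) = sign(V) * ReLU(|V| - g(s)), with g = sigmoid.
    Minimal sketch of PEP's re-parameterization."""
    g = torch.sigmoid(s)                       # learnable pruning threshold
    return torch.sign(V) * torch.relu(V.abs() - g)

V = torch.tensor([[0.9, -0.2, 0.05], [-1.3, 0.4, -0.6]])
s = torch.tensor(0.0)                          # g(0) = 0.5
print(soft_threshold(V, s))
# tensor([[ 0.4000, -0.0000,  0.0000],
#         [-0.8000,  0.0000, -0.1000]])
```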
The sign(·) function converts positive input values to 1 and negative input values to -1, while zero inputs are kept unchanged.
Since $\mathcal{S}(\mathbf{V}, s)$ is applied to each element of $\mathbf{V}$, the optimization problem in Equation (5) can be redefined as follows,
$$\min \mathcal{L}(\mathcal{S}(\mathbf{V}, s), \Theta, \mathcal{D}). \quad (9)$$
Then the trainable pruning parameter $s$ can be jointly optimized with the parameters of the recommendation model $\phi$ through standard back-propagation. Specifically, the gradient descent update equation for $\mathbf{V}$ at the $t$-th step is formulated as follows,
$$\mathbf{V}^{(t+1)} \leftarrow \mathbf{V}^{(t)} - \eta_t \nabla_{\mathcal{S}(\mathbf{V}, s)} \mathcal{L}\big( \mathcal{S}(\mathbf{V}^{(t)}, s), \mathcal{D} \big) \odot \nabla_{\mathbf{V}} \mathcal{S}(\mathbf{V}, s), \quad (10)$$
where $\eta_t$ is the learning rate at the $t$-th step and $\odot$ denotes the Hadamard product. To handle the non-differentiability of $\mathcal{S}(\cdot)$, we use the sub-gradient to reformulate the update equation as follows,
$$\mathbf{V}^{(t+1)} \leftarrow \mathbf{V}^{(t)} - \eta_t \nabla_{\mathcal{S}(\mathbf{V}, s)} \mathcal{L}\big( \mathcal{S}(\mathbf{V}^{(t)}, s), \mathcal{D} \big) \odot \mathbb{1}\big\{ \mathcal{S}(\mathbf{V}^{(t)}, s) \neq 0 \big\}, \quad (11)$$
where $\mathbb{1}\{\cdot\}$ denotes the indicator function. Then, as long as we choose a continuous function $g$ in $\mathcal{S}(\cdot)$, the loss function $\mathcal{L}(\mathcal{S}(\mathbf{V}^{(t)}, s), \mathcal{D})$ is continuous in $s$. Moreover, the sub-gradient of $\mathcal{L}$ with respect to $s$ can be used for gradient descent on $s$ as well. Thanks to automatic differentiation frameworks like TensorFlow (Abadi et al., 2016) and PyTorch (Paszke et al., 2019), we are free from the above complex gradient computation process. Our PEP code can be found in Figure 7 of Appendix A.2. As we can see, it is quite simple to incorporate into existing recommendation models, and there is no need for us to manually design the back-propagation process." }, { "heading": "4.2 RETRAIN WITH LOTTERY TICKET HYPOTHESIS", "text": "After pruning the embedding matrix $\mathbf{V}$ to the target parameter budget $P$, we can create a binary pruning mask $\mathbf{m}$ (of the same shape as $\mathbf{V}$) that determines which parameters remain and which are dropped. Then we retrain the base model with the pruned embedding table. The Lottery Ticket Hypothesis (Frankle & Carbin, 2018) illustrates that a sub-network of a randomly-initialized dense network can match the original network when trained in isolation for the same number of iterations. This sub-network is called the winning ticket. Hence, instead of randomly re-initializing the weights, we retrain the base model while re-initializing the weights back to their original (but now masked) values $\mathbf{m} \odot \mathbf{V}_0$. This initialization strategy can make the training process faster and more stable while keeping the performance consistent, as shown in Appendix A.6." }, { "heading": "4.3 PRUNING WITH FINER GRANULARITY", "text": "The threshold parameter $s$ in Equation (8) is set to a scalar, so the values of every dimension share the same threshold. We name this version global-wise pruning. However, different dimensions in the embedding vector $\mathbf{v}_i$ may have varying importance, and different fields of features may also have highly varying importance. Thus, values in the embedding matrix require different sparsity budgets, and pruning with a global threshold may not be optimal. To better handle the heterogeneity among different features/dimensions in $\mathbf{V}$, we design the following threshold tactics with different granularities (a sketch of the corresponding threshold shapes, together with the retraining step, follows below). (1) Dimension Wise: the threshold parameter is set as a vector $s \in \mathbb{R}^d$, so each value in an embedding is pruned individually. (2) Feature Wise: the threshold parameter is defined as a vector $s \in \mathbb{R}^N$, so pruning is done separately for each feature's embedding. (3) Feature-Dimension Wise: this variant combines the above two types of thresholds to obtain the finest pruning granularity. 
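As referenced above, the following sketch illustrates the threshold shapes of the four granularities together with the Lottery-Ticket-style rewind of Section 4.2 (m ⊙ V0). It is our illustrative sketch, with our own variable names and stand-in weights, not the released implementation.

```python
import torch

N, d = 1000, 16                      # number of features, embedding size
V0 = torch.randn(N, d)               # initial embedding weights (kept for rewind)

# Threshold parameter s at the granularities of Section 4.3:
s_global = torch.zeros(())           # global wise: one scalar threshold
s_dim = torch.zeros(d)               # dimension wise: one threshold per dimension
s_feat = torch.zeros(N, 1)           # feature wise: one threshold per feature
s_fd = torch.zeros(N, d)             # feature-dimension wise: finest granularity

def prune_mask(V: torch.Tensor, s: torch.Tensor) -> torch.Tensor:
    """Binary mask m of entries that survive soft-threshold pruning."""
    return (V.abs() > torch.sigmoid(s)).float()

V_trained = V0 + 0.05 * torch.randn(N, d)   # stand-in for weights after pruning run
m = prune_mask(V_trained, s_fd)

# Lottery-Ticket-style retraining: rewind surviving weights to V0 (m * V0)
# instead of re-initializing them randomly.
V_retrain = m * V0
effective_sizes = m.sum(dim=1)       # per-feature embedding size after pruning
print(f"kept {int(m.sum())} / {N * d} parameters")
```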
Specifically, thresholds are set as a matrix s ∈ RN×d." }, { "heading": "5 EXPERIMENTS", "text": "Dataset. We use three benchmark datasets: MovieLens-1M, Criteo, and Avazu, in our experiments.\nMetric. We adopt AUC (Area Under the ROC Curve) and Logloss to measure the performance of models.\nBaselines and Base Recommendation Models. We compared our PEP with traditional UE (short for Uniform Embedding). We also compare with the recent advances in flexible embedding sizes: MGQE (Kang et al., 2020), MDE (Ginart et al., 2019), and DartsEmb (Zhao et al., 2020b)5. We deploy PEP and all baseline methods to three representative feature-based recommendation models: FM (Rendle, 2010), DeepFM (Guo et al., 2017), and AutoInt (Song et al., 2019), to compare their performance6." }, { "heading": "5.1 RECOMMENDATION ACCURACY AND PARAMETER NUMBER", "text": "We present the curve of recommendation performance and parameter number in Figure 2, 3 and 4, including our method and state-of-the-art baseline methods. Since there is a trade-off between recommendation performance and parameter number, the curves are made of points that have different sparsity demands7.\n• Our method reduces the number of parameters significantly. Our PEP achieves the highest reduce-ratio of parameter number in all experiments, especially in relatively large datasets (Criteo and Avazu). Specifically, in Criteo and Avazu datasets, our PEP-0 can reduce 99.90% parameter usage compared with the best baseline (from the 106 level to the 103 level, which is very significant.). Embedding matrix with such low parameter usage means that only hundreds of embeddings are non-zero. By setting less-important features’ embedding to zero, our PEP can break the limitation in existing methods that minimum embedding size is one rather than zero. We conduct more analysis on the MovieLens dataset in Section 5.3 and 5.4 to help us understand why our method can achieve such an effective parameter decreasing.\n5We do not compare with NIS (Joglekar et al., 2020) since it has not released codes and its reinforcementlearning based search is really slow.\n6More details of implementation and above information could be found in Appendix A.4. 7We report five points of our method, marked from 0 to 4.\n• Our method achieves strong recommendation performance. Our method consistently outperforms the uniform embedding based model and achieves better accuracy than other methods in most cases. Specifically, for the FM model on the Criteo dataset, the relative performance improvement of PEP over UE is 0.59% and over DartsEmb is 0.24% in terms of AUC. Please note that the improvement of AUC or Logloss at such level is still considerable for feature-based recommendation tasks (Cheng et al., 2016; Guo et al., 2017), especially considering that we have reduced a lot of parameters. A similar improvement can also be observed from the experiments on other datasets and other recommendation models. It is worth noting that our method could keep a strong AUC performance under extreme sparsity-regime. For example, when the number of parameters is only in the 103 level (a really small one), the recommendation performance still remarkably outperforms the Linear Regression model (more details can be found in Appendix A.5).\nTo summarize it, with the effectiveness of recommendation accuracy and parameter-size reduction, the PEP forms a frontier curve encompassing all the baselines at all the levels of parameters. 
This verifies that our method can handle different parameter-size budgets well." }, { "heading": "5.2 EFFICIENCY ANALYSIS OF OUR METHOD", "text": "As shown in Section 5.1, learning a suitable parameter budget can yield a higher-accuracy model while reducing the model's parameter number. Nevertheless, it takes additional time to find appropriate sizes for different features. In this section, we study the computational cost and compare the per-epoch training time of PEP and DartsEmb on the Criteo dataset. We implement both models with the same batch size and test them on the same platform.\nThe training time per epoch on three different models is given in Table 2. We observe that PEP's additional computation cost is only 20% to 30% over the base model, which is acceptable. DartsEmb, however, requires nearly double the computation time to search for a good embedding size in its bi-level optimization process. Furthermore, DartsEmb needs to search multiple times to fit different memory budgets, since each budget requires a complete re-run. Unlike DartsEmb, our PEP can obtain several embedding schemes, applicable to different application scenarios, in a single run. As a result, PEP's time cost for embedding-size search can be further reduced in real-world systems.\nFigure 5: Interpretable analysis on MovieLens-1M dataset.\nFigure 6: Correlation between Sparsity and Frequency: (a) PDF of feature frequencies of the MovieLens-1M dataset; (b) sparsity trajectory generated by PEP on FM; (c) sparsity heatmap generated by PEP on FM." }, { "heading": "5.3 INTERPRETABLE ANALYSIS ON PRUNED EMBEDDINGS", "text": "Feature-based recommendation models usually apply the embedding technique to capture second- or higher-order feature interactions. But how does our method act on feature interactions? Does it improve model performance by reducing noisy feature interactions? In this section, we conduct an interpretable analysis by visualizing the feature interaction matrix, calculated as $\mathbf{V}\mathbf{V}^{\top}$. Each value in the matrix is the normalized average of the absolute dot products between the features of two fields; a higher value indicates a stronger correlation between the two fields.\nFigures 5 (a) and 5 (b) illustrate the interaction matrix without and with pruning respectively, and 5 (c) shows the change in matrix values. We can see that PEP prunes the parameters involved in unimportant field interactions while keeping the significant, meaningful field interactions. By denoising these less important feature interactions, PEP can reduce embedding parameters while maintaining or improving accuracy."
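The interaction heatmap described above can be computed in a few lines of PyTorch. This is an illustrative sketch rather than the paper's code; the `field_slices` argument (the index range of each field's features within V) is an assumed helper input.

```python
import torch

def field_interaction_matrix(V, field_slices):
    """Average absolute dot product between the embeddings of each field pair,
    normalized to [0, 1] so it can be plotted as a heatmap (cf. Figure 5)."""
    n = len(field_slices)
    strength = torch.zeros(n, n)
    for i, fi in enumerate(field_slices):
        for j, fj in enumerate(field_slices):
            dots = V[fi] @ V[fj].T  # dot products between all feature pairs
            strength[i, j] = dots.abs().mean()
    return strength / strength.max()
```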
}, { "heading": "5.4 CORRELATION BETWEEN SPARSITY AND FREQUENCY", "text": "As shown in Figure 6 (a), feature frequencies are highly diverse across features. Embeddings with a uniform size may therefore fail to handle this heterogeneity, and frequency plays an important role in embedding size selection. Hence, some recent works (Zhao et al., 2020b; Ginart et al., 2019; Cheng et al., 2020; Kang et al., 2020; Zhang et al., 2020; Joglekar et al., 2020) explicitly utilize feature frequencies. Different from them, our PEP shrinks the parameters in an end-to-end, automatic way, thereby avoiding complex manual tuning. Still, the frequency of a feature is one of the factors that determine whether it is important. We therefore study whether our method can detect the influence of frequency and whether the learned embedding sizes are related to it.\nWe first analyze the sparsity trajectory during training, shown in Figure 6 (b), where colors indicate groups of features divided according to their popularity. For each group, we first calculate each feature's sparsity and then average over all features in the group. Shaded regions represent the within-group variance. We observe that PEP tends to assign high-frequency features larger sizes, ensuring sufficient representation capacity. For low-frequency features, the trend is the opposite. These results accord with the postulation that high-frequency features deserve more embedding parameters, while a few parameters suffice for low-frequency feature embeddings.\nWe then probe the relationship between the sparsity of the pruned embeddings and the frequency of each feature. From Figure 6 (c), we observe that the general relationship is consistent with the above analysis. However, some low-frequency features are assigned rich parameters, and some more popular features are assigned small embedding sizes. This illustrates that simply allocating more parameters to high-frequency features, as most previous works do, cannot capture the complex connection between features and their popularity. Our method performs pruning based on the data, which reflects the features' intrinsic properties, and thus cuts down parameters in a more elegant and efficient way." }, { "heading": "6 CONCLUSION", "text": "In this paper, we address the common problem of fixed-size embedding tables in today's feature-based recommender systems. We propose a general plug-in framework to adaptively learn suitable embedding sizes for different features. The proposed PEP method is efficient and can be easily applied to various recommendation models. Experiments on three state-of-the-art recommendation models and three benchmark datasets verify that PEP achieves strong recommendation performance while significantly reducing the parameter number, and that it can be trained efficiently." }, { "heading": "7 ACKNOWLEDGEMENTS", "text": "This work was supported in part by The National Key Research and Development Program of China under grant 2020AAA0106000, the National Natural Science Foundation of China under U1936217, 61971267, 61972223, 61941117, 61861136003." }, { "heading": "A APPENDIX", "text": "A.1 DESCRIPTION OF g(s)\nFollowing Kusupati et al. (2020), a proper threshold function g(s) should have the following three properties:\n1. $g(s) > 0$, $\lim_{s \to -\infty} g(s) = 0$, and $\lim_{s \to \infty} g(s) = \infty$.\n2. There exists $G \in \mathbb{R}_{++}$ such that $0 < g'(s) \leq G$ for all $s \in \mathbb{R}$.\n3. $g'(s_{\text{init}}) < 1$, which slows the update of s at the start of pruning." }, { "heading": "A.2 PYTORCH CODE OF PEP", "text": "We present the main code of PEP here, since it is easy to use and can be plugged into various embedding-based recommendation models."
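Figure 7 is not reproduced in this text. As a stand-in, here is a hedged sketch of the two-stage pipeline (prune to the budget, then rewind and retrain, summarized as Algorithm 1 in Appendix A.3 below); the `PEPEmbedding` interface with a `pruned()` method and the gradient-mask trick in the retraining stage are our assumptions, not the paper's exact implementation.

```python
import torch

def pep_prune_and_retrain(model, emb, loader, loss_fn, budget, retrain_epochs=5):
    """Stage 1: train with the learnable threshold until at most `budget`
    embedding values are non-zero. Stage 2: rewind to V(0) * m and retrain."""
    v0 = emb.v.detach().clone()  # keep the initial embedding V(0)
    params = list(model.parameters()) + list(emb.parameters())
    opt = torch.optim.Adam(params, lr=1e-3)

    # Algorithm 1, lines 1-3: prune until the budget P is reached.
    while int((emb.pruned() != 0).sum()) > budget:
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(emb(x)), y).backward()
            opt.step()

    # Lines 4-5: binary mask m, rewind the surviving weights to V(0).
    mask = (emb.pruned() != 0).float()
    with torch.no_grad():
        emb.v.copy_(v0 * mask)

    # Lines 6-8: retrain on the raw (masked) table, without the threshold;
    # the gradient hook keeps pruned entries at exactly zero.
    hook = emb.v.register_hook(lambda grad: grad * mask)
    for _ in range(retrain_epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(emb.v[x]), y).backward()
            opt.step()
    hook.remove()
```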
}, { "heading": "A.3 WHOLE PROCESS OF PEP", "text": "We summarizes the pruning and retrain process by Algorithm 1.\nAlgorithm 1 Our PEP Input: Initial embedding V(0), base model φ, and target parameter P . Output: Well trained sparsity embedding V.\n1: while do not reach P do 2: Pruning V through Equation 9. 3: end while 4: Obtain binary pruning mask m = 1{V(t)}. 5: Reset the remaining embedding parameter to initial values. 6: while do not coverage do 7: Minimize the training loss L(V(0) m,D) with SGD. 8: end while" }, { "heading": "A.4 EXPERIMENTAL SETUP", "text": "" }, { "heading": "A.4.1 DATASETS", "text": "We experiment with three public benchmark datasets: MovieLens-1M, Criteo, and Avazu. Table 3 summarizes the statistics of datasets.\n• MovieLens-1M9. It is a widely-used benchmark dataset and contains timestamped user-movie ratings ranging from 1 to 5. Following AutoInt (Song et al., 2019), we treat samples with a rating 1, 2 as negative samples and samples with a rating 4, 5 as positive samples. Other samples will be treat as neutral samples and removed.\n• Criteo10. This is a benchmark dataset for feature-based recommendation task, which contains 26 categorical feature fields and 13 numerical feature fields. It has about 45 million users’ clicking records on displayed ads.\n• Avazu11. Avazu dataset contains 11 days’ user clicking behaviors which are released for the Kaggle challenge, There are 22 categorical feature fields in the dataset, and parts of the fields are anonymous.\nPreprocessing Following the general preprocessing steps (Guo et al., 2017; Song et al., 2019), for numerical feature fields in Criteo, we employ the log transformation of log2(x) if x > 2 proposed by the winner of Criteo Competition12 to normalize the numerical features. Besides, we consider features of which the frequency is less than ten as unknown and treat them as a single feature “unknown” for Criteo and Avazu datasets. For each dataset, all the samples are randomly divided into training, validation, and testing set based on the proportion of 80%, 10%, and 10%." }, { "heading": "A.4.2 PERFORMANCE MEASURES", "text": "We evaluate the performance of PEP with the following two metrics:\n• AUC. The area under the Receiver Operating Characteristic or ROC curve (AUC) means the probability to rank a randomly chosen positive sample higher than a randomly chosen negative sample. A model with higher AUC indicates the better performance of the model.\n9https://grouplens.org/datasets/movielens 10https://www.kaggle.com/c/criteo-display-ad-challenge 11https://www.kaggle.com/c/avazu-ctr-prediction 12https://www.csie.ntu.edu.tw/r01922136/kaggle-2014-criteo.pdf\n• Logloss. As a loss function widely used in the feature-based recommendation, Logloss on test data can straight way evaluate the model’s performance. The lower the model’s Logloss, the better the model’s performance." }, { "heading": "A.4.3 BASELINES", "text": "We compared our proposed method with the following state-of-the-art methods:\n• UE (short for Uniform Embedding). The uniform-embedding manner is commonly accepted in existing recommender systems, of which all features have uniform embedding sizes.\n• MGQE (Kang et al., 2020). This method retrieves embedding fragments from a small size of shared centroid embeddings, and then generates final embedding by concatenating those fragments. MGQE learns embeddings with different capacities for different items. 
This method is the strongest baseline among embedding-parameter-sharing methods.\n• MDE (short for Mixed Dimension Embedding (Ginart et al., 2019)). This method is based on a hand-crafted rule: the embedding size of a feature is proportional to its popularity, so higher-frequency features are assigned larger embedding sizes. It is the state-of-the-art human-rule-based method.\n• DartsEmb (Zhao et al., 2020b). This is the state-of-the-art neural architecture search-based method, which lets features automatically search for their embedding sizes in a given space.\nA.4.4 IMPLEMENTATION DETAILS\nFollowing AutoInt (Song et al., 2019) and DeepFM (Guo et al., 2017), we employ the Adam optimizer with a learning rate of 0.001 to optimize model parameters in both the pruning and re-training stages. For g(s), we use $g(s) = \frac{1}{1+e^{-s}}$ in all experiments and initialize s to −15, −150 and −150 on the MovieLens-1M, Criteo and Avazu datasets respectively. Moreover, the granularity of PEP is set to Dimension-wise for PEP-2, PEP-3, and PEP-4 on the Criteo and Avazu datasets, and to Feature-Dimension-wise otherwise. The base embedding dimension d is set to 64 for all models before pruning. We deploy our method and the baseline methods on three state-of-the-art models, FM (Rendle, 2010), DeepFM (Guo et al., 2017), and AutoInt (Song et al., 2019), to compare their performance. In the retraining stage, we apply early stopping based on the validation loss during training. We use PyTorch (Paszke et al., 2019) to implement our method and train it with mini-batch size 1024 on a single 12GB-memory NVIDIA TITAN V GPU.\nImplementation of Baselines. For Uniform Embedding, we vary the embedding size over [8, 16, 32, 64] for the MovieLens-1M dataset. For the Criteo and Avazu datasets, we vary the embedding size over [4, 8, 16] because performance starts to drop when d > 16.\nFor the other baseline methods, we first tune the hyper-parameters to give the models either the highest recommendation performance or the highest parameter-reduction rate; we then tune the settings that balance these two aspects. We provide the experimental details of our implementations of these baselines below, following the settings of the original papers. For the grid search space of MDE, we search the baseline dimension d over [4, 8, 16, 32], the number of blocks K over [8, 16], and α over [0.1, 0.2, 0.3]. For MGQE, we search the baseline dimension d over [8, 16, 32], the number of subspaces D over [4, 8, 16], and the number of centroids K over [64, 128, 256, 512]. For DartsEmb, we choose three different candidate embedding spaces to meet different memory budgets: {1, 2, 8}, {2, 4, 16} and {4, 8, 32}." }, { "heading": "A.5 COMPARISON BETWEEN PEP-0 AND LINEAR REGRESSION", "text": "The Linear Regression (LR) model is an embedding-free model that makes predictions based only on a linear combination of the raw features. Hence, it is worth comparing our method at the extremely-sparse level (PEP-0) with LR.\nTable 4 shows that our PEP-0 significantly outperforms LR in all cases. This result verifies that PEP-0 does not depend on the LR component in FM and DeepFM to maintain strong recommendation performance.
Therefore, even at an extremely-sparse level, our PEP still has high application value in real-world scenarios.\nIt is worth noting that the AutoInt model does not contain an LR component, so PEP-0 on AutoInt leads to a large performance drop on the Criteo and Avazu datasets. We therefore add LR to PEP-0 in AutoInt and test the performance13. As we can see, the resulting accuracy on Criteo and Avazu outperforms AutoInt without LR, which can be explained by LR helping PEP-0 achieve more stable performance." }, { "heading": "A.6 THE LOTTERY TICKET HYPOTHESIS", "text": "In the retraining stage of Section 4.2, we rely on the Lottery Ticket Hypothesis to reinitialize the pruned embedding table (the winning ticket) to its original initial values. Here we conduct experiments to verify the effectiveness of this operation in PEP. We compare our method with a variant that uses random re-initialization for retraining, to examine the influence of initialization. We also compare standard PEP with the original base recommendation model, to verify the influence of embedding pruning. To evaluate the importance of retraining, we further test the performance of PEP with the pruning stage only. We choose FM as the base recommendation model and use the same settings as in the experiments above.\nWe present the results in Figures 8 and 9. We observe that the winning ticket with its original initialization makes the training procedure faster and achieves higher recommendation accuracy than random re-initialization. This demonstrates the effectiveness of our retraining design. Moreover, the randomly reinitialized winning ticket still outperforms the unpruned model. By reducing the less-important features' embedding parameters, model performance can benefit from the denoising of over-parameterized embeddings. A likely explanation is that over-parameterized embeddings are prone to over-fitting when embedding sizes are uniform.\nFurthermore, the performance of PEP without retraining degrades slightly, but it still outperforms the original models, and the margin between PEP without retraining and the original model is larger than the margin between PEP with and without retraining. These results demonstrate that PEP chiefly benefits from suitable embedding-size selection. We conjecture the benefit of retraining is as follows: during the search stage, less-important elements of the embedding matrix are pruned gradually until training converges. However, in earlier training epochs, before these elements have been pruned, they may have negative effects on the gradient updates of the important elements, making the learning of those important elements suboptimal. A retraining step can eliminate such effects and improve performance." }, { "heading": "A.7 PRUNING WITH FINER GRANULARITY", "text": "In this section, we analyze the four threshold granularities introduced in Section 4.3. The experiments are conducted on the MovieLens-1M dataset with FM as the base model. Figures 10 (a) and (b) show how the number of embedding parameters and the test AUC evolve over training epochs. As we can see, the Feature-Dimension granularity reduces far more embedding parameters than the others. Meanwhile, it achieves the highest performance in the retraining stage compared with the other granularities. With the finest granularity, Feature-Dimension-wise pruning can effectively determine the importance of individual embedding values.
Besides, Dimension-wise pruning can achieve comparable AUC with fewer training epochs. Hence we adopt this granularity for PEP-2, PEP-3, and PEP-4 on the large datasets to save training time.\n13We omit the results of AutoInt with LR on the MovieLens-1M dataset because there is no performance drop for the AutoInt model compared with other models.\nA.8 ABOUT LEARNABLE g(s)\nThe pruning threshold g(s) can be learned from the training data to reduce parameter usage in the embedding matrix. But why can our PEP learn a suitable g(s) from the training data? We deduce that increasing s (and thus g(s)) can decrease the training loss; in other words, PEP updates s during optimization in order to achieve a lower training loss.\nIn Figure 11, we plot FM's training curves with and without PEP on the MovieLens-1M and Criteo datasets to confirm this assumption. Our PEP achieves a much lower training loss while pruning. Besides, the curves show that PEP learns embedding sizes stably.\nThe stability shown in Figure 11 can be explained as follows: PEP reaches a relatively stable number of embedding parameters at the later stage of pruning (e.g., after epoch 30 on the MovieLens dataset), and the embedding parameters are by then well trained, so the training loss curve looks relatively stable. Note that the figure shows a sequence of changing thresholds: the point at which we obtain the embedding table for a given sparsity level is not a converged point for that exact level, which instead requires retraining with a fixed threshold." } ]
2021
LEARNABLE EMBEDDING SIZES FOR RECOMMENDER SYSTEMS
SP:8f5230bf3c19417980b10112488d1c7a8f1177f4
[ "The goal of this work is to enable existing pre-trained transformers (e.g. GPT-2) to operate over long input contexts. This is achieved by breaking the input sequence into segments and processing each segment through the transformers while allowing tokens in the current segment to attend over a summary vector of the tokens in the previous segment. The summary vector is created as a weighted combination of the tokens in the summarized segment. Thus the summary vector introduces recurrence where each segment can use information from the previous segment. These modifications yield a better language model for long input texts. ", "The paper proposed to add a recurrent component to pretrained transformers. The component pools the hidden states of a context window and passes it to the next context window as an additional input to the self-attention layer. The component reduces the memory usage at both training and inference time, and enables the Transformer model to work on a longer sequence. The component is evaluated on two language modeling datasets and outperforms baseline models." ]
Fine-tuning a pretrained transformer for a downstream task has become a standard method in NLP in the last few years. While the results from these models are impressive, applying them can be extremely computationally expensive, as is pretraining new models with the latest architectures. We present a novel method for applying pretrained transformer language models which lowers their memory requirement both at training and inference time. An additional benefit is that our method removes the fixed context size constraint that most transformer models have, allowing for more flexible use. When applied to the GPT-2 language model, we find that our method attains better perplexity than an unmodified GPT-2 model on the PG-19 and WikiText-103 corpora, for a given amount of computation or memory.
[]
[ { "authors": [ "Iz Beltagy", "Kyle Lo", "Arman Cohan" ], "title": "SciBERT: A pretrained language model for scientific text", "venue": "In EMNLP/IJCNLP,", "year": 2019 }, { "authors": [ "Iz Beltagy", "Matthew E. Peters", "Arman Cohan" ], "title": "Longformer: The long-document", "venue": "transformer. arXiv,", "year": 2020 }, { "authors": [ "David M Blei", "Andrew Y Ng", "Michael I Jordan" ], "title": "Latent dirichlet allocation", "venue": "Journal of machine Learning research,", "year": 2003 }, { "authors": [ "Qingqing Cao", "Harsh Trivedi", "Aruna Balasubramanian", "Niranjan Balasubramanian" ], "title": "DeFormer: Decomposing pre-trained transformers for faster question answering", "venue": null, "year": 2005 }, { "authors": [ "Tianqi Chen", "Bing Xu", "Chiyuan Zhang", "Carlos Guestrin" ], "title": "Training deep nets with sublinear memory", "venue": "cost. arXiv,", "year": 2016 }, { "authors": [ "Rewon Child", "Scott Gray", "Alec Radford", "Ilya Sutskever" ], "title": "Generating long sequences with sparse transformers", "venue": "arXiv preprint arXiv:1904.10509,", "year": 2019 }, { "authors": [ "Zihang Dai", "Zhilin Yang", "Yiming Yang", "Jaime Carbonell", "Quoc Le", "Ruslan Salakhutdinov" ], "title": "Transformer-XL: Attentive language models beyond a fixed-length context", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "venue": null, "year": 2019 }, { "authors": [ "Jie Hao", "Xing Wang", "Baosong Yang", "Longyue Wang", "Jinfeng Zhang", "Zhaopeng Tu" ], "title": "Modeling recurrence for transformer", "venue": null, "year": 2019 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Nikita Kitaev", "Lukasz Kaiser", "Anselm Levskaya" ], "title": "Reformer: The efficient transformer", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Hang Le", "Loic Vial", "Jibril Frej", "Vincent Segonne", "Maximin Coavoux", "Benjamin Lecouteux", "Alexandre Allauzen", "Benoı̂t Crabbé", "Laurent Besacier", "Didier Schwab" ], "title": "FlauBERT: Unsupervised language model pre-training for French", "venue": "In LREC,", "year": 2020 }, { "authors": [ "Jinhyuk Lee", "Wonjin Yoon", "Sungdong Kim", "Donghyeon Kim", "Sunkyu Kim", "Chan Ho So", "Jaewoo Kang" ], "title": "BioBERT: a pre-trained biomedical language representation model for biomedical text mining", "venue": null, "year": 2020 }, { "authors": [ "Yinhan Liu", "Myle Ott", "Naman Goyal", "Jingfei Du", "Mandar Joshi", "Danqi Chen", "Omer Levy", "Mike Lewis", "Luke Zettlemoyer", "Veselin Stoyanov" ], "title": "Roberta: A robustly optimized bert pretraining approach", "venue": null, "year": 1907 }, { "authors": [ "Louis Martin", "Benjamin Muller", "Pedro Javier Ortiz Suárez", "Yoann Dupont", "Laurent Romary", "’Eric de la Clergerie", "Djamé Seddah", "Benoı̂t Sagot" ], "title": "CamemBERT: a tasty French language model", "venue": null, "year": 1911 }, { "authors": [ "Stephen Merity", "Caiming Xiong", "James Bradbury", "Richard Socher" ], "title": "Pointer sentinel mixture models", "venue": "arXiv preprint arXiv:1609.07843,", "year": 2016 }, { "authors": [ "Debora Nozza", "Federico Bianchi", "Dirk Hovy" ], "title": "What the 
[MASK]? making sense of languagespecific", "venue": "BERT models. arXiv,", "year": 2020 }, { "authors": [ "Marco Polignano", "Pierpaolo Basile", "Marco Degemmis", "Giovanni Semeraro", "Valerio Basile" ], "title": "AlBERTo: Italian BERT language understanding model for NLP challenging tasks based on tweets", "venue": "CLiC-it,", "year": 2019 }, { "authors": [ "Jiezhong Qiu", "Hao Ma", "Omer Levy", "Scott Wen-tau Yih", "Sinong Wang", "Jie Tang" ], "title": "Blockwise self-attention for long document understanding", "venue": null, "year": 1911 }, { "authors": [ "Alec Radford", "Jeffrey Wu", "Dario Amodei", "Daniela Amodei", "Jack Clark", "Miles Brundage", "Ilya Sutskever" ], "title": "Better language models and their implications", "venue": "OpenAI Blog https://openai. com/blog/better-language-models,", "year": 2019 }, { "authors": [ "Jack W Rae", "Anna Potapenko", "Siddhant M Jayakumar", "Chloe Hillier", "Timothy P Lillicrap" ], "title": "Compressive transformers for long-range sequence modelling", "venue": "arXiv preprint,", "year": 2019 }, { "authors": [ "Laila Rasmy", "Yang Xiang", "Ziqian Xie", "Cui Tao", "Degui Zhi" ], "title": "Med-BERT: pre-trained contextualized embeddings on large-scale structured electronic health records for disease", "venue": "prediction. arXiv,", "year": 2020 }, { "authors": [ "Aurko Roy", "Mohammad Saffar", "Ashish Vaswani", "David Grangier" ], "title": "Efficient content-based sparse attention with routing transformers", "venue": "arXiv preprint arXiv:2003.05997,", "year": 2020 }, { "authors": [ "Victor Sanh", "Lysandre Debut", "Julien Chaumond", "Thomas Wolf" ], "title": "DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter", "venue": null, "year": 1910 }, { "authors": [ "Sainbayar Sukhbaatar", "Edouard Grave", "Piotr Bojanowski", "Armand Joulin" ], "title": "Adaptive attention span in transformers", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Yi Tay", "Dara Bahri", "Donald Metzler", "Da-Cheng Juan", "Zhe Zhao", "Che Zheng" ], "title": "Synthesizer: Rethinking self-attention in transformer models", "venue": "arXiv, abs/2005.00743,", "year": 2020 }, { "authors": [ "Yi Tay", "Mostafa Dehghani", "Dara Bahri", "Donald Metzler" ], "title": "Efficient transformers: A survey", "venue": "arXiv preprint arXiv:2009.06732,", "year": 2020 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Thomas Wolf", "Lysandre Debut", "Victor Sanh", "Julien Chaumond", "Clement Delangue", "Anthony Moi", "Pierric Cistac", "Tim Rault", "Rémi Louf", "Morgan Funtowicz" ], "title": "Huggingface’s transformers: State-of-the-art natural language processing", "venue": null, "year": 1910 }, { "authors": [ "Felix Wu", "Angela Fan", "Alexei Baevski", "Yann Dauphin", "Michael Auli" ], "title": "Pay less attention with lightweight and dynamic convolutions", "venue": null, "year": 1901 }, { "authors": [ "Zhilin Yang", "Peng Qi", "Saizheng Zhang", "Yoshua Bengio", "William W. Cohen", "Ruslan Salakhutdinov", "Christopher D. Manning" ], "title": "HotpotQA: A dataset for diverse, explainable multi-hop question answering", "venue": "In Conference on Empirical Methods in Natural Language Processing (EMNLP),", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Recent progress in NLP has been dominated by large pretrained transformer neural networks (Vaswani et al., 2017), such as BERT (Devlin et al., 2019), and GPT-2 (Radford et al., 2019). However, these models have a memory footprint that is quadratic in input sequence length. Although architectural innovations such as those of Kitaev et al. (2019) and Rae et al. (2019) mitigate this and the issue of a predetermined maximum context size, large pretrained models applying these techniques are not available at this time. Even if large pretrained models of this kind are released in the future, they will likely not cover the wide range of domains that BERT-family models have been published for. For example, there have been BERT-based models trained for other languages such as French (Le et al., 2020; Martin et al., 2020), Italian (Polignano et al., 2019), and many other languages (see Nozza et al. (2020) for an overview) as well as specific domains such as scientific papers (Beltagy et al., 2019), biomedical papers (Lee et al., 2020), and health records (Rasmy et al., 2020). Individuals working with these models may not have the resources to train new models from scratch using the latest tricks, as the computation requirements for pretraining are extremely high. As such, identifying ways that already existing models can be improved could be widely impactful.\nAnother drawback of this family of models is that they have an a priori fixed maximum context size (typically 512 or 1024 tokens for the currently available pretrained models). A typical application of pretrained language models is producing contextual embeddings for a document. If the document is simply chunked into disjoint segments of 512 tokens, tokens at the boundary of a window will have less contextual information than tokens in the center of a window. This can be mitigated by striding the evaluation of the model, and only keeping the embedding for a token which has the largest context—but this adds quite a bit of wasted computation.\nIn this paper, we propose a method for augmenting and fine-tuning pretrained transformer language models to use context without directly attending to it. Our method simultaneously allows for increasing the context size a transformer processes, while allowing a controllable trade-off between computation and perplexity. We accomplish this by adding a small recurrence module that computes a fixed size representation from the transformer hidden states in a window of text. Then, the representation for that window is used during processing of the next window. Shrinking the window size is then a way to reduce the memory footprint of the model, with less loss of performance than would occur with a standard transformer. Our experiments add recurrence GPT-2 language models, and fine-tune them on the PG-19 (Rae et al., 2019) and WikiText-103 corpora (Merity et al., 2016), and require only the same amount of memory used for standard fine-tuning of a pretrained\nlanguage model. We demonstrate improvements in perplexity compared to a baseline model using the same amount of computation. Qualitative analysis shows that our recurrent module propagates certain information from previous windows of text, which can facilitate handling of long-distance dependencies with fixed-size input windows." 
}, { "heading": "2 RELATED WORK", "text": "Many methods have been proposed to lower the memory footprint or computation time of transformer language models, or allow them to be used on larger contexts. The Transformer-XL (Dai et al., 2019) allows a position within an attention window to attend to tokens from the previous windows by introducing relative position embeddings. While that mechanism, like ours, allows information to flow between windows, existing BERT and GPT-2 models do not use relative position embeddings, so training from scratch would be necessary to take advantage of this architecture. Additionally, each layer in the Transformer-XL attends to the previous layer in the previous window, so the maximum attention horizon is finite. Our recurrent method could theoretically pass information across an arbitrary distance, although one would not expect it to exceed the Transformer-XL’s horizon without a much larger scale of data than we experiment with.\nWe list here some other modifications of the transformer architecture, somewhat imprecisely grouping them for brevity. For a more detailed discussion, see Tay et al. (2020b). Child et al. (2019), Qiu et al. (2019), Kitaev et al. (2019), Sukhbaatar et al. (2019), and Roy et al. (2020) introduce sparsity to self-attention in various forms, reducing its memory cost. Rae et al. (2019) and Beltagy et al. (2020)—dynamically and statically respectively—add extra tokens to attend to which allow for global passing of information. Tay et al. (2020a) and Wu et al. (2019) replace dynamically computed self-attention with cheaper alternatives. While the above methods all allow for a reduction in computation, they also all require training from scratch. Our goal is to allow more efficient and powerful use of the wide array of existing pre-trained models that cover many domains.\nCao et al. (2020) propose the DeFormer, which also modifies the execution of a pretrained transformer. However, unlike our method, they decompose a single window into multiple windows by removing the attention interactions between these windows. This is largely orthogonal to our method, as one could both decompose windows of text, and additionally use our method to allow information to be passed between neighboring windows. Similarly, distilled versions of pre-trained models such as DistilBERT (Sanh et al., 2019) provide more computational efficiency, but could be combined with our method to apply them to longer contexts, or reduce the quadratic cost of self-attention.\nHao et al. (2019) apply pre-trained transformers recurrently for machine translation, but do so by using an attention network to embed the document, applying a recurrent encoder to those embeddings, and using the recurrent encoder alongside a typical transformer encoder. This differs from our method as we are fine-tuning language models, which are transformer decoders, and directly modifying the transformer’s computation with a recurrent connection, rather than running an RNN on top of embeddings produced by a transformer." }, { "heading": "3 METHOD", "text": "The main idea of our method is to take a transformer that was pretrained in a fixed context size setting and add recurrence at the level of T -token windows of text. For example, instead of executing the model on one 1000 token window of text, we could instead execute our model with 10 windows of 100 tokens. 
The first window is processed by the transformer model as normal, but for subsequent windows we add a supplementary embedding, which is generated using the hidden states from the preceding window (see Figure 1). The recurrence module is extremely small compared to the size of the transformer language model, so the additional computation required is negligible." }, { "heading": "3.1 ADDING RECURRENCE TO PRETRAINED TRANSFORMERS", "text": "Starting by defining terms, we will consider a pretrained transformer with L layers, a hidden state size of k, and a maximum context size of T tokens. Let $h_i^{(\ell)} \in \mathbb{R}^k$ be the output of the $\ell$-th layer of the pretrained model at position i. To produce a fixed-size representation of tokens $t_1, t_2, \ldots, t_T$, the embeddings produced by the pretrained transformer are mean-pooled as follows:\n$z_1 = \frac{1}{T} \sum_{i=1}^{T} \sum_{\ell=1}^{L} w_\ell h_i^{(\ell)}$ (1)\nwhere the weights $w_\ell$ are softmax-normalized from learned parameters $\alpha_\ell$:\n$w_\ell = \frac{e^{\alpha_\ell}}{\sum_{j=1}^{L} e^{\alpha_j}}$\nThe fixed-size representation, $z_1$, is passed through a feedforward network to produce an embedding $h_{prev,1}$ which represents the tokens processed so far, $t_{1:T}$. Next, instead of evaluating the pretrained transformer without modification on positions T + 1 through 2T, $h_{prev,1}$ is inserted at a single layer (denoted $\ell_{ins}$) of the pretrained model, as an additional embedding that may be used in the computation of attention, as shown in Figure 2. To keep the number of embeddings per layer fixed, this embedding is only used as a key and a value, but not a query, in the self-attention layer. That is, for a window size of 300 tokens, there are 301 inputs to layer $\ell_{ins}$, but still only 300 outputs. The embeddings for positions T + 1 to 2T are then pooled in the same way as Equation 1 to produce $z_2$ and passed through the feedforward network, outputting $h_{prev,2}$. $h_{prev,2}$ is used to modify the execution of the pretrained language model on tokens 2T + 1 through 3T, and so on. Because the model is now being applied recurrently, it is trained end-to-end with backpropagation through time.\nOne could consider more complex recurrence modules, other methods for pooling the previous window's embeddings, or other ways of inserting $h_{prev}$ into the computation for the next window. We experimented with modifications such as max pooling instead of mean pooling, inserting multiple embeddings into the next window, inserting an embedding at all layers of the transformer for the next window, and using fixed key attention as the pooling function. However, during our preliminary experiments we were not successful in finding a significantly higher performing architecture than the one given above, so it is the one we present results for."
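To make the pooling and feedforward steps concrete, a possible PyTorch rendering of the recurrence module is sketched below. The class name and tensor layout are our own conventions, and actually consuming $h_{prev}$ as an extra key/value at layer $\ell_{ins}$ would additionally require a small patch to the pretrained model's attention code, which is not shown.

```python
import torch
import torch.nn as nn

class RecurrenceModule(nn.Module):
    """Pools one window's hidden states into h_prev for the next window."""

    def __init__(self, num_layers, hidden_size, ffn_dim=200):
        super().__init__()
        self.alpha = nn.Parameter(torch.zeros(num_layers))  # logits for w_l
        self.ffn = nn.Sequential(                           # 3 hidden layers of 200
            nn.Linear(hidden_size, ffn_dim), nn.ReLU(),
            nn.Linear(ffn_dim, ffn_dim), nn.ReLU(),
            nn.Linear(ffn_dim, ffn_dim), nn.ReLU(),
            nn.Linear(ffn_dim, hidden_size),
        )

    def forward(self, layer_states):
        # layer_states: (L, T, hidden_size), the outputs of all L layers.
        w = torch.softmax(self.alpha, dim=0)
        # Weighted sum over layers, then mean over positions, as in Eq. (1).
        z = (w[:, None, None] * layer_states).sum(dim=0).mean(dim=0)
        return self.ffn(z)  # h_prev: an extra key/value for layer l_ins
```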
}, { "heading": "3.2 GRADIENT CHECKPOINTING IN NETWORKS WITH BOTTLENECKS", "text": "While our method can reduce the quadratic cost of attention by splitting the input into windows, we can also easily apply it to much longer contexts by use of gradient checkpointing (Chen et al., 2016).\nGradient checkpointing is a method for lowering the peak memory requirement of training large neural networks. This is accomplished by storing only a subset of activations during the forward pass, and recomputing forward from those cached states during the backwards pass. For example, in a 100 layer feedforward network with uniformly wide layers, one could store the output of only every 10th layer. Then, during the backward pass, in order to compute the gradients for the 95th layer, one would re-compute layers 91 through 99 using the stored 90th layer activations. The overall memory cost is reduced to $\sqrt{L}$ at the cost of a single additional forward pass.\nIn a network with variable width, the memory reduction can be even larger. When gradient checkpointing is applied to transformers, the outputs of each layer are usually stored ($k \times L \times T$ values), so that at most one set of self-attention activations is in memory at once. In the case of our recurrent models, we have an even narrower bottleneck: the $z_i$'s and $h_{prev,i}$'s. Storing only these values means that the maximum number of activations present in memory while training on sequences N tokens in length is $M + 2k\lceil N/T \rceil$, where M is the number of activations stored when training the transformer on an individual window of length T. Because k is extremely small compared to M, our model can be applied to very long contexts on any GPU on which the pretrained model can be fine-tuned."
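A sketch of this bottleneck-aware checkpointing in PyTorch follows; the `model_window` signature (a callable mapping one window of tokens plus the previous summary to logits and the next summary) and the zero vector standing in for the missing context of the first window are assumptions about how one would wrap the modified model.

```python
import torch
from torch.utils.checkpoint import checkpoint

def forward_long_sequence(model_window, windows, hidden_size):
    """Run window-by-window, storing only each window's summary h_prev.

    Activations inside each window are recomputed during the backward pass,
    so peak memory is roughly one window's activations plus the summaries."""
    h_prev = torch.zeros(hidden_size)  # stand-in empty context for window 1
    all_logits = []
    for w in windows:
        # use_reentrant=False so parameter gradients flow even when the
        # checkpointed inputs themselves do not require grad.
        logits, h_prev = checkpoint(model_window, w, h_prev, use_reentrant=False)
        all_logits.append(logits)
    return torch.cat(all_logits, dim=0)
```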
}, { "heading": "4 REVISITING THE EVALUATION OF TRANSFORMER LANGUAGE MODELS", "text": "Before describing the empirical evaluation of our method, we discuss how transformer language models are evaluated in related work. The standard way of measuring perplexity uses extra computation in order to make as much context as possible available for each token prediction. This yields low perplexities, but does not reflect how practitioners use transformer language models in applications. In this section, we describe the situation in detail and propose practical solutions that achieve relatively low perplexities while being closer to how transformers are used in practice." }, { "heading": "4.1 POTENTIAL MISALIGNMENT BETWEEN LM EVALUATION AND APPLICATION", "text": "Transformers are often described as having quadratic time complexity in comparison to RNNs, which have linear time complexity. However, this can be somewhat misleading when it comes to the evaluation of perplexity. Given a test set of length N, an RNN requires O(N) time to evaluate, but reaching the best perplexity for a transformer requires $O(NT^2)$, where T is its maximum context size. (These time complexities exclude hidden state size, number of layers, and batch size.) This much higher time complexity is due to the fact that a transformer may be run with its full context size once for each token in the test set, so that the maximum context is available for each prediction. Re-execution of the whole model for each token is required for models with absolute position embeddings, since hidden state reuse is only possible up to the maximum context size of the network. Note that it is possible to achieve smaller wall-clock time by splitting evaluation of a test set over multiple GPUs, but this is not applicable to the generation setting, where outputs depend on prior ones.\nTo illustrate why re-computation is necessary, consider executing GPT-2 (which has 1024 position embeddings) on a test set. Each of the first 1024 tokens of a test set will have been passed into the network using a distinct position embedding. Having exhausted the position embeddings, one option is to start again with the 1025th token being treated as position 1; we will refer to this as disjoint execution, illustrated in Figure 3a. The issue with disjoint execution is that it requires predicting the tokens at the beginning of a window from a very small amount of context.\nThe alternative, which is used for standard test set evaluation, is overlapped execution, as shown in Figure 3b. The position embeddings are advanced by one position for each prediction, meaning that T − 1 tokens are repeated between consecutive evaluations of the transformer, requiring much more computation. The benefit of this method is that it allows a model with T position embeddings to have T tokens of context for each prediction, as opposed to a variable amount between 1 and T.\nStepping a transformer decoder forward one token at a time measures the best that such a model could perform, but it reflects a generative story that does not align with how the models may be used in practice. A perplexity that only measures the ability of GPT-2 to generate the 1024th token given a context of 1023 tokens is not necessarily indicative of the model's performance when generating from a smaller context. For example, the popular website Talk To Transformer1 generates samples from GPT-2, but only provides 150 tokens of output. The evaluation of GPT-2 by stepping forward one token at a time provides little information about the quality of such generations.\nAn example where the discrepancy is length instead of brevity is the GPT-backed text adventure game AI Dungeon.2 In this setting, the number of tokens can easily reach and exceed the full context size GPT-2 was trained on. Using overlapped execution as described above, generating each token would be 1024 times slower than with disjoint execution, so perplexity calculated by overlapped execution does not match this use case either.\nWhile lower perplexity seems to correspond to better generation with shorter contexts in practice (perhaps due to parameter sharing between all sequence positions), there is no reason that this need be the case in principle. To demonstrate an extreme case of the concern being discussed, let F be a transformer model with vocabulary V, which uses the previous 1023 tokens as context, and consider the following generative story for generating token $t_i$:\n$t_i \sim \begin{cases} \text{Uniform}(V) & \text{if } i \leq 1023 \\ F(t_{(i-1023):(i-1)}) & \text{otherwise} \end{cases}$\nClearly the above generative model would not be of any practical use for generation or otherwise. However, because perplexity is calculated per token, increasing the size of the test set will lead to a measured perplexity that approaches that of a standard evaluation of the model F. This example is not representative of the models that are trained in practice, as even generations much shorter than the maximum context size from a GPT-2 model are quite impressive. However, it does demonstrate that the criteria that we use to compare models, or to select the best model during early stopping, place very high weight on the ability of the model to produce text given a full context, and a potentially vanishingly small amount on its ability to generate text using shorter contexts." }, { "heading": "4.2 VARYING OVERLAP FOR EVALUATION", "text": "As we are interested in increasing computational efficiency at evaluation time for pretrained models, we investigate their performance using overlapped execution, but with a reduced degree of overlap between windows. Varying the overlap lets us investigate the connection between the degree of overlap and perplexity. The overlap used in evaluation is defined as the number of tokens from each input window that are repeated in the next window (see Figure 3). For example, consider a window size T = 10 and an overlap of 3. The windows on which the transformer is executed are then $t_{1:10}, t_{8:17}, t_{15:24}, \ldots, t_{1+7n:10+7n}$, where n indexes the window.
These input windows are used to predict the spans of tokens $t_{2:11}, t_{12:18}, t_{19:25}, \ldots, t_{5+7n:11+7n}$. Figure 3c illustrates an intermediate overlap setting with T = 3 and an overlap of 1. The perplexity-minimizing evaluation setting is then the extreme with an overlap of T − 1, and an overlap of 0 corresponds to disjoint execution. While a transformer can be evaluated with any degree of overlap, our augmentation method produces the embedding $h_{prev}$, which is used during training to help predict the first token of a window. If we change the overlap at test time, the alignment of the text represented by $h_{prev}$ and the current window will be different than what the model was trained for, and so performance will degrade. To address this, we use the same overlap that will be used at test time when training the recurrent models.3\n1https://talktotransformer.com/\n2https://aidungeon.io/. Note that AIDungeon now uses the OpenAI GPT-3 API, but a similar project without OpenAI API access would still have to use GPT-2.\n3Evaluating recurrent models trained with no overlap between adjacent windows on a different level of overlap is possible by changing which positions are pooled. We found that it led to a slight increase in perplexity, so we report results with training and evaluation matching."
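Concretely, the window placement for a given overlap can be computed as in the following sketch (0-indexed; the helper name is ours, and the handling of a final partial window is omitted for brevity):

```python
def eval_windows(n_tokens, window_size, overlap):
    """Yield (start, first_scored) for each evaluation window.

    The window covering tokens [start, start + window_size) predicts tokens
    [start + 1, start + window_size]; all but the first window discard their
    first `overlap` predictions, which the previous window already scored.
    """
    stride = window_size - overlap
    for start in range(0, max(n_tokens - window_size, 0) + 1, stride):
        first_scored = 1 if start == 0 else start + overlap + 1
        yield start, first_scored
```

With window_size = 10 and overlap = 3 this reproduces the 1-indexed windows t1:10, t8:17, t15:24 and newly scored spans t2:11, t12:18, t19:25 from the example above.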
}, { "heading": "5 EXPERIMENTS", "text": "We now present experiments comparing our proposed technique to the default usage of transformer language models. We describe experiments on the WikiText-103 corpus and a subset of the PG-19 corpus, using the GPT-2-small language model as the pretrained transformer in our models. We also provide proof-of-concept experiments using RoBERTa (Liu et al., 2019) on the HotpotQA (Yang et al., 2018) question answering dataset, indicating that our method can improve encoder performance for tasks other than language modeling. All of our experiments are based on the Hugging Face Transformers library (Wolf et al., 2019).\nWikiText-103 is a standard language modeling corpus composed of approximately 29,000 documents from English Wikipedia, containing 103 million words. We use the WikiText-103 "raw" corpus, which does not have rare words replaced by "UNK". While GPT-2 uses BPE tokenization, we compute perplexity using the number of words rather than the number of BPE tokens for clarity.\nAlthough WikiText-103 does test long-term dependencies, many of the documents are still shorter than the context size of the models we test. Therefore, we also use PG-19, which consists of books from the Project Gutenberg corpus. The average length of a WikiText-103 document is 3.6K words, while PG-19 documents (i.e. books) average 69K words, which far exceeds the context size of the models we test. However, the full PG-19 dataset is over 20 times larger than WikiText-103, so we use only a subset of it for training due to computational constraints. Specifically, we use only the first (alphabetically by filename) 1250 books of the PG-19 corpus, and use only the first 15000 tokens of each of the books in the validation set for early stopping. We make no modifications to the test set.\nIn all our experiments we use the HuggingFace implementation of the pretrained GPT-2 small model (12 layers, 768-dimensional hidden state). For both the recurrent and baseline models, the GPT-2 model was fine-tuned, not left frozen. We selected learning rates for both our models and the baseline separately, by evaluating on WikiText-103 for the same set of candidate learning rates. We used the same learning rates for the PG-19 experiments without further hyperparameter search. We fine-tune all models for 2 epochs, measuring the validation loss every 2 million tokens. All models were trained with Adam (Kingma & Ba, 2014), warming the learning rate up linearly from 0 to its final value over 100 steps. The feedforward network used to produce $h_{prev,i}$ from window i − 1 consisted of 3 hidden layers with dimension 200. We fixed $\ell_{ins}$ to be 2.⁴\nRecall from Section 4 that we are interested in evaluating the models in a setting similar to how they would be used in practice. To that end, we report separate perplexities for different degrees of overlap between adjacent windows of text, as described in Section 4.2. For our models, we train with the same overlap that we test with, since unlike the baseline models they cannot be trained with no overlap between adjacent windows and then tested with an overlap. This is because the embedding of the previous window of text is expected to represent all tokens up until the first token of the current window; with an overlap of 30, for example, that embedding would need to represent all tokens up until the 30th token of the current window." }, { "heading": "5.1 RESULTS", "text": "We first show that with the same amount of fine-tuning, our method achieves lower perplexity than a baseline GPT-2 model when evaluated using the same window size and degree of overlap between adjacent windows of text.\nIt is important to emphasize that the perplexities we report are based on pretrained models, and so should not be compared to models trained from scratch on these datasets. The GPT-2 models were trained on text from a web crawl from which all Wikipedia documents were removed, but this still leaves open the possibility that quotes from Wikipedia, or text from PG-19, were encountered.\nTable 1 shows the perplexity of our models and the non-recurrent GPT-2 models on the WikiText-103 dataset. The models compared here all use windows of 300 tokens, with varying degrees of overlap. The baseline models can only access information from the previous window of text through the overlapping tokens, while the recurrent models have a fixed-size representation of the longer context. Our addition of recurrence increases the performance of the GPT-2 models in this setting, but by a relatively small amount. Increasing the overlap between each window of text decreases the perplexities of the baseline model as expected, but also decreases the perplexity of the recurrent models.5\n4During our preliminary experiments, we found that setting $\ell_{ins}$ to be one of the final layers in the network gave slightly worse results, but we did not re-tune this hyperparameter for PG-19 or our final architecture.\n5We did not attempt to train recurrent models with extremely high overlaps, as that would greatly increase the required training time.
While we find only small increases in performances on the WikiText-103 dataset, we see larger improvements on PG-19, confirming our prediction that the gains would be larger on a dataset that has a larger context available for each prediction on average. We find that adding our recurrence module leads to a model that gives as good a perplexity with no overlap between adjacent windows as an unmodified model does when evaluated with an overlap of 30 out of 300 tokens in each window. Training the recurrent model with a 5 token overlap gives perplexity lower than the baseline perplexity with an overlap of 50 or even 75. In terms of FLOPs, adding our recurrence module and overlapping adjacent windows of tokens by 50 is less than half as costly as using a non-recurrent model with an overlap of 200." }, { "heading": "5.2 EFFECT OF WINDOW SIZE", "text": "As one of our motivations is to retain performance while decreasing compute requirements, we experiment with varying the window size used by our model and an unmodified GPT-2 model. At smaller window sizes the recurrent model has access to much more information than GPT-2, which can only attend to the current window. Because of this, we expect our augmentation to cause the\nperformance to fall off less rapidly with decreasing window size. The results, shown in Figure 4, confirm this prediction, as the performance gap widens with smaller windows. Figure 5 contains the same points (and additional baseline curves for various overlaps), but in terms of FLOPs rather than window size. All of the results of the recurrent models lie on the Pareto frontier, meaning that to improve perplexity or computational cost, one must worsen the other. The non-monotonicity of the overlap 30 and 50 curves is due to the fact that at smaller window sizes, an overlap represents a higher fraction of the computation being used for positions that predictions were already produced for. Also note that while the baseline with overlap 50 curve has the lowest absolute perplexity in Figure 5, the recurrent models trained with overlaps shown in Table 2 still perform better." }, { "heading": "5.3 WHAT INFORMATION IS BEING PROPAGATED BETWEEN WINDOWS?", "text": "We now discuss some features that our models display in greedily decoded continuations from contexts in the PG-19 validation set, which illustrate types of information that the recurrent module passes (or fails to pass) forward. Samples are included in Tables 4 and 5 in the appendix.\nThe most common phenomenon we identify in these samples is successful propagation of topical information between adjacent windows. For instance, we see in Table 4 a context discussing geography and rivers, followed by a continuation maintaining the same topic, and we see a context discussing the topic of payment, leading to a mention of money in the continuation. We give more rigorous quantitative support of this claim in Section 5.3.1. Beyond passing of topical information, another success case in the generations is passing of certain information about characters between windows—in Table 5 we see that pronouns in the continuations often reflect characters mentioned in the context, and we see an example in which the continuation includes “the two women”, after a context mentioning “the aunts”. 
This behavior was likely learned because PG-19 consists of narratives, where correctly passing character information between windows is quite beneficial.\nHowever, these examples also contain discontinuities between the context and the continuation, in terms of local syntax or facts of the narrative. We see that some sentences are not completed in the expected form (for instance, "There are lots of ways of being" is continued with a new quote rather than a completion of the thought), and new characters are sometimes invented rather than continuing to reference those described in the context. One sample has a closing quotation mark, predicted from the previous window, being interpreted as an opening quotation mark. These are the types of issues that an overlap between adjacent windows easily addresses, a fact that likely accounts in part for the gap between the recurrent model with disjoint and overlapped execution in Table 2. A higher-capacity recurrent module might fix these issues in exchange for additional computation." }, { "heading": "5.3.1 QUANTITATIVE EVALUATION OF TOPIC PROPAGATION", "text": "To verify the trend of topic propagation we identified in continuations generated by our recurrent models, we fit an LDA topic model (Blei et al., 2003) with 20 topics to 5000 books from the PG-19 training set. Given a bag of words, this topic model assigns a distribution over topics, so we can use a statistical distance as a metric for the similarity between the topics of two segments of text.\nWe sampled 8000 contexts of 300 tokens from the PG-19 validation set, and computed argmax-decoded continuations of 30 tokens from the same models used to generate Table 4.⁶ We then computed the Jensen-Shannon divergence (JSD) between the topic distribution of each context and that of the corresponding continuation. This procedure finds that continuations from the recurrent model have an average topic JSD of 0.5331, while those from the baseline model have an average topic JSD of 0.5951. For a given context, the continuation given by the recurrent model has the lower JSD at least 60% of the time (p < 0.00001)."
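The paper does not specify its topic-modeling tooling; the following is a hedged reconstruction of the measurement using gensim's LDA implementation and SciPy (whose jensenshannon returns the square root of the divergence, hence the squaring).

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel
from scipy.spatial.distance import jensenshannon

def fit_topic_model(tokenized_books, num_topics=20):
    """Fit a 20-topic LDA model on bag-of-words versions of the books."""
    dictionary = Dictionary(tokenized_books)
    corpus = [dictionary.doc2bow(doc) for doc in tokenized_books]
    return LdaModel(corpus, num_topics=num_topics, id2word=dictionary), dictionary

def topic_jsd(lda, dictionary, context_words, continuation_words):
    """Jensen-Shannon divergence between two texts' topic distributions."""
    def dist(words):
        probs = [0.0] * lda.num_topics
        for topic, p in lda.get_document_topics(
                dictionary.doc2bow(words), minimum_probability=0.0):
            probs[topic] = p
        return probs
    return jensenshannon(dist(context_words), dist(continuation_words)) ** 2
```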
" }, { "heading": "5.4 QUESTION ANSWERING EXPERIMENTS", "text": "To investigate whether our recurrence method would be helpful in tasks other than language modeling, we ran a small experiment on the HotpotQA extractive question answering task, in the distractor setting. In this setting, 10 paragraphs of context are given which must be used to answer the given question. HotpotQA’s inputs can greatly exceed one 512-token window in length, making it an ideal test of our method. The questions are a mix of span-based and yes/no questions. In order to reduce training time, we use a subset of 30000 randomly sampled questions from the training set.\nWe use the RoBERTa-base model for both the baseline and the recurrently augmented model. To evaluate whether recurrence improves encoder performance on this task, we directly finetune the models to predict answer span start and end tokens, as done for question answering by Devlin et al. (2019), and max-pool the embedding of the [CLS] token across windows, using the result for three-way classification between “span”, “yes”, and “no”. Because the non-recurrent baseline cannot process an entire example at once, we begin each input window with the question, separated from the text by a [SEP] token. We use this input format for both the baseline and recurrent models.\nFor both models, we use a learning rate of 2e-5 and train for 4 epochs. For the recurrent model, we mean-pool the final RoBERTa layer for the previous window, and use a 2-layer MLP with a 768-dimensional hidden layer to produce an embedding which is inserted at the second layer of the next window (i.e., $\ell_{\text{ins}} = 2$).\nTable 3 shows F1 and exact match scores for both models on the HotpotQA dev set. Adding the recurrence module improves both scores by about 1 point, indicating that our method of propagating information between windows can be beneficial for question answering in addition to language modeling. It should be noted that these values are not directly comparable to scores on the HotpotQA leaderboard, as we only used a subset of the training set, in addition to evaluating on the dev set rather than the private test set.⁷ Nonetheless, we find these initial experiments to be highly promising, especially given the lack of hyperparameter tuning." }, { "heading": "6 CONCLUSION AND FUTURE WORK", "text": "We showed that augmenting a pretrained language model with a recurrence module during finetuning can allow increased performance given a fixed computational budget. Our method can be similarly applied to improve the computational efficiency of pretrained models that already exist for many languages and domains, as well as of future models that will be developed. It can also allow their application to longer contexts than they were trained for, increasing their flexibility.\n⁶ The baseline receives one token of context to begin generating from. ⁷ These initial results represent preliminary experiments that were completed prior to the revision deadline; the next version of the paper will have more thorough results, including training on the entire training set." }, { "heading": "A APPENDIX", "text": "Here we provide some example continuations for contexts from the PG-19 validation set. The samples were generated with greedy argmax decoding, which leads to a large amount of repetition; however, we were more concerned with reducing variance and identifying the most likely continuation than with optimizing for sample quality." } ]
2020
null
SP:eaac43a5cb483c71834b394b015d191cb8cbd815
[ "The authors introduce a framework of sufficient conditions for proving the universality of a general class of neural networks that operate on point clouds, taking as input a set of point coordinates and producing as output a feature for each point, such that the network is invariant to joint translation of the coordinates, equivariant to permutation of the points, and equivariant to joint SO(3) transformations of the coordinates and output features of all points. Notably, this class contains Tensor Field Networks (TFN). The authors accomplish this by writing the network as a composition of an equivariant function from a class F_feat followed by a linear pooling layer. When the F_feat class satisfies a “D-spanning” criterion and the pooling layer is universal, the network is universal. For a simple class of networks and for TFNs, the authors prove D-spanning. Linear universality of the pooling layer follows from simple representation theory.", "This paper explores the representational power of invariant point cloud networks from a theoretical perspective. The universal approximation property for equivariant architectures under shape-preserving transformations is discussed. First, the authors derive two sufficient conditions for equivariant architectures to have the universal approximation property. Then, they examine two methods based on the Tensor Field Network and prove that such a property holds for both of them. Finally, the authors propose alternative methods which also satisfy the universal approximation property. " ]
Learning functions on point clouds has applications in many fields, including computer vision, computer graphics, physics, and chemistry. Recently, there has been a growing interest in neural architectures that are invariant or equivariant to all three shape-preserving transformations of point clouds: translation, rotation, and permutation. In this paper, we present the first study of the approximation power of these architectures. We first derive two sufficient conditions for an equivariant architecture to have the universal approximation property, based on a novel characterization of the space of equivariant polynomials. We then use these conditions to show that two recently suggested models (Thomas et al., 2018; Fuchs et al., 2020) are universal, and to devise two other novel universal architectures.
[ { "affiliations": [], "name": "Nadav Dym" }, { "affiliations": [], "name": "Haggai Maron" } ]
[ { "authors": [ "Matan Atzmon", "Haggai Maron", "Yaron Lipman" ], "title": "Point convolutional neural networks by extension operators", "venue": "arXiv preprint arXiv:1803.10091,", "year": 2018 }, { "authors": [ "Alexander Bogatskiy", "Brandon Anderson", "Jan T Offermann", "Marwah Roussi", "David W Miller", "Risi Kondor" ], "title": "Lorentz group equivariant neural network for particle physics", "venue": "arXiv preprint arXiv:2006.04780,", "year": 2020 }, { "authors": [ "Feng Dai", "Yuan Xu" ], "title": "Approximation theory and harmonic analysis on spheres and balls, volume", "venue": null, "year": 2013 }, { "authors": [ "Carlos Esteves", "Christine Allen-Blanchette", "Ameesh Makadia", "Kostas Daniilidis" ], "title": "Learning so (3) equivariant representations with spherical cnns", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Fabian B Fuchs", "Daniel E Worrall", "Volker Fischer", "Max Welling" ], "title": "Se (3)-transformers: 3d rototranslation equivariant attention networks", "venue": "arXiv preprint arXiv:2006.10503,", "year": 2020 }, { "authors": [ "William Fulton", "Joe Harris" ], "title": "Representation theory: a first course, volume 129", "venue": "Springer Science & Business Media,", "year": 2013 }, { "authors": [ "Justin Gilmer", "Samuel S Schoenholz", "Patrick F Riley", "Oriol Vinyals", "George E Dahl" ], "title": "Neural message passing for quantum chemistry", "venue": "arXiv preprint arXiv:1704.01212,", "year": 2017 }, { "authors": [ "Yulan Guo", "Hanyun Wang", "Qingyong Hu", "Hao Liu", "Li Liu", "Mohammed Bennamoun" ], "title": "Deep learning for 3d point clouds: A survey", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2020 }, { "authors": [ "Nicolas Keriven", "Gabriel Peyré" ], "title": "Universal invariant and equivariant graph neural networks", "venue": "CoRR, abs/1905.04943,", "year": 2019 }, { "authors": [ "Risi Kondor" ], "title": "N-body networks: a covariant hierarchical neural network architecture for learning atomic potentials", "venue": "arXiv preprint arXiv:1803.01588,", "year": 2018 }, { "authors": [ "Risi Kondor", "Zhen Lin", "Shubhendu Trivedi" ], "title": "Clebsch–gordan nets: a fully fourier space spherical convolutional neural network", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Risi Kondor", "Hy Truong Son", "Horace Pan", "Brandon Anderson", "Shubhendu Trivedi" ], "title": "Covariant compositional networks for learning graphs", "venue": "arXiv preprint arXiv:1801.02144,", "year": 2018 }, { "authors": [ "Hanspeter Kraft", "Claudio Procesi" ], "title": "Classical invariant theory, a primer", "venue": "Lecture Notes,", "year": 2000 }, { "authors": [ "Jiaxin Li", "Yingcai Bi", "Gim Hee Lee" ], "title": "Discrete rotation equivariance for point cloud recognition", "venue": "In 2019 International Conference on Robotics and Automation (ICRA),", "year": 2019 }, { "authors": [ "Takanori Maehara", "Hoang NT" ], "title": "A simple proof of the universality of invariant/equivariant graph neural networks, 2019", "venue": null, "year": 2019 }, { "authors": [ "Haggai Maron", "Heli Ben-Hamu", "Hadar Serviansky", "Yaron Lipman" ], "title": "Provably powerful graph networks", "venue": "arXiv preprint arXiv:1905.11136,", "year": 2019 }, { "authors": [ "Haggai Maron", "Heli Ben-Hamu", "Nadav Shamir", "Yaron Lipman" ], "title": "Invariant and equivariant graph networks", "venue": "In International Conference on 
Learning Representations,", "year": 2019 }, { "authors": [ "Haggai Maron", "Ethan Fetaya", "Nimrod Segol", "Yaron Lipman" ], "title": "On the universality of invariant networks", "venue": "In International conference on machine learning,", "year": 2019 }, { "authors": [ "Haggai Maron", "Or Litany", "Gal Chechik", "Ethan Fetaya" ], "title": "On learning sets of symmetric elements", "venue": "arXiv preprint arXiv:2002.08599,", "year": 2020 }, { "authors": [ "Christopher Morris", "Martin Ritzert", "Matthias Fey", "William L Hamilton", "Jan Eric Lenssen", "Gaurav Rattan", "Martin Grohe" ], "title": "Weisfeiler and leman go neural: Higher-order graph neural networks", "venue": "arXiv preprint arXiv:1810.02244,", "year": 2018 }, { "authors": [ "Adam Paszke", "Sam Gross", "Soumith Chintala", "Gregory Chanan", "Edward Yang", "Zachary DeVito", "Zeming Lin", "Alban Desmaison", "Luca Antiga", "Adam Lerer" ], "title": "Automatic differentiation in pytorch", "venue": null, "year": 2017 }, { "authors": [ "Adrien Poulenard", "Marie-Julie Rakotosaona", "Yann Ponty", "Maks Ovsjanikov" ], "title": "Effective rotationinvariant point cnn with spherical harmonics kernels", "venue": "In 2019 International Conference on 3D Vision (3DV),", "year": 2019 }, { "authors": [ "Charles R Qi", "Hao Su", "Kaichun Mo", "Leonidas J Guibas" ], "title": "Pointnet: Deep learning on point sets for 3d classification and segmentation", "venue": "Proc. Computer Vision and Pattern Recognition (CVPR), IEEE,", "year": 2017 }, { "authors": [ "Charles Ruizhongtai Qi", "Li Yi", "Hao Su", "Leonidas J Guibas" ], "title": "Pointnet++: Deep hierarchical feature learning on point sets in a metric space", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Raghunathan Ramakrishnan", "Pavlo O Dral", "Matthias Rupp", "O Anatole Von Lilienfeld" ], "title": "Quantum chemistry structures and properties of 134 kilo molecules", "venue": "Scientific data,", "year": 2014 }, { "authors": [ "Siamak Ravanbakhsh" ], "title": "Universal equivariant multilayer perceptrons", "venue": "arXiv preprint arXiv:2002.02912,", "year": 2020 }, { "authors": [ "Nimrod Segol", "Yaron Lipman" ], "title": "On universal equivariant set networks", "venue": "arXiv preprint arXiv:1910.02421,", "year": 2019 }, { "authors": [ "Hadar Serviansky", "Nimrod Segol", "Jonathan Shlomi", "Kyle Cranmer", "Eilam Gross", "Haggai Maron", "Yaron Lipman" ], "title": "Set2graph: Learning graphs from sets", "venue": "arXiv preprint arXiv:2002.08772,", "year": 2020 }, { "authors": [ "Nathaniel Thomas", "Tess Smidt", "Steven Kearnes", "Lusann Yang", "Li Li", "Kai Kohlhoff", "Patrick Riley" ], "title": "Tensor field networks: Rotation-and translation-equivariant neural networks for 3d point clouds", "venue": "arXiv preprint arXiv:1802.08219,", "year": 2018 }, { "authors": [ "Minjie Wang", "Lingfan Yu", "Da Zheng", "Quan Gan", "Yu Gai", "Zihao Ye", "Mufei Li", "Jinjing Zhou", "Qi Huang", "Chao Ma" ], "title": "Deep graph library: Towards efficient and scalable deep learning on graphs", "venue": "arXiv preprint arXiv:1909.01315,", "year": 2019 }, { "authors": [ "Yue Wang", "Yongbin Sun", "Ziwei Liu", "Sanjay E Sarma", "Michael M Bronstein", "Justin M Solomon" ], "title": "Dynamic graph cnn for learning on point clouds", "venue": "Acm Transactions On Graphics (tog),", "year": 2019 }, { "authors": [ "Maurice Weiler", "Mario Geiger", "Max Welling", "Wouter Boomsma", "Taco Cohen" ], "title": "3D Steerable CNNs: Learning Rotationally Equivariant Features in 
Volumetric Data. 2018", "venue": "URL http://arxiv.org/abs/1807.02547", "year": 2018 }, { "authors": [ "Daniel Worrall", "Gabriel Brostow" ], "title": "Cubenet: Equivariance to 3d rotation and translation", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Keyulu Xu", "Weihua Hu", "Jure Leskovec", "Stefanie Jegelka" ], "title": "How powerful are graph neural networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Dmitry Yarotsky" ], "title": "Universal approximations of invariant maps by neural networks", "venue": "arXiv preprint arXiv:1804.10306,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Designing neural networks that respect data symmetry is a powerful approach for obtaining efficient deep models. Prominent examples include convolutional networks, which respect the translational invariance of images; graph neural networks, which respect the permutation invariance of graphs (Gilmer et al., 2017; Maron et al., 2019b); networks such as (Zaheer et al., 2017; Qi et al., 2017a), which respect the permutation invariance of sets; and networks which respect 3D rotational symmetries (Cohen et al., 2018; Weiler et al., 2018; Esteves et al., 2018; Worrall & Brostow, 2018; Kondor et al., 2018a). While the expressive power of equivariant models is reduced by design to include only equivariant functions, a desirable property of equivariant networks is universality: the ability to approximate any continuous equivariant function. This property does not always hold: while convolutional networks and networks for sets are universal (Yarotsky, 2018; Segol & Lipman, 2019), popular graph neural networks are not (Xu et al., 2019; Morris et al., 2018).\nIn this paper, we consider the universality of networks that respect the symmetries of 3D point clouds: translations, rotations, and permutations. Designing such networks has been a popular paradigm in recent years (Thomas et al., 2018; Fuchs et al., 2020; Poulenard et al., 2019; Zhao et al., 2019). While there have been many works on the universality of permutation-invariant networks (Zaheer et al., 2017; Maron et al., 2019c; Keriven & Peyré, 2019), and a recent work discussing the universality of rotation-equivariant networks (Bogatskiy et al., 2020), this is the first paper to discuss the universality of networks which combine rotations, permutations, and translations.\nWe start the paper with a general, architecture-agnostic discussion, and derive two sufficient conditions for universality. These conditions are a result of a novel characterization of equivariant polynomials for the symmetry group of interest. We use these conditions to prove the universality of the prominent Tensor Field Networks (TFN) architecture (Thomas et al., 2018; Fuchs et al., 2020). The following is a weakened and simplified statement of Theorem 2, stated later on in the paper: Theorem (Simplification of Theorem 2). Any continuous equivariant function on point clouds can be approximated uniformly on compact sets by a composition of TFN layers.\nWe use our general discussion to prove the universality of two additional equivariant models: the first is a simple modification of the TFN architecture which allows for universality using only low-dimensional filters. The second is a minimal architecture which is based on tensor product representations, rather than the more commonly used irreducible representations of SO(3). We discuss the advantages and disadvantages of both approaches.\nTo summarize, the contributions of this paper are: (1) a general approach for proving the universality of rotation-equivariant models for point clouds; (2) a proof that two recent equivariant models (Thomas et al., 2018; Fuchs et al., 2020) are universal; (3) two additional simple and novel universal architectures." }, { "heading": "2 PREVIOUS WORK", "text": "Deep learning on point clouds. (Qi et al., 2017a; Zaheer et al., 2017) were the first to apply neural networks directly to raw point cloud data, by using pointwise functions and pooling operations. 
Many subsequent works used local neighborhood information (Qi et al., 2017b; Wang et al., 2019b; Atzmon et al., 2018). We refer the reader to a recent survey for more details (Guo et al., 2020). In contrast with the aforementioned works, which focused solely on permutation invariance, more related to this paper are works that additionally incorporate invariance to rigid motions. (Thomas et al., 2018) proposed Tensor Field Networks (TFN) and showed their efficacy on physics and chemistry tasks. (Kondor et al., 2018b) also suggested an equivariant model for continuous rotations. (Li et al., 2019) suggested models that are equivariant to discrete subgroups of SO(3). (Poulenard et al., 2019) suggested an invariant model based on spherical harmonics. (Fuchs et al., 2020) followed TFN and added an attention mechanism. Recently, (Zhao et al., 2019) proposed a quaternion equivariant point capsule network that also achieves rotation and translation invariance.\nUniversal approximation for invariant networks. Understanding the approximation power of invariant models is a popular research goal. Most of the current results assume that the symmetry group is a permutation group. (Zaheer et al., 2017; Qi et al., 2017a; Segol & Lipman, 2019; Maron et al., 2020; Serviansky et al., 2020) proved universality for several $S_n$-invariant and equivariant models. (Maron et al., 2019b;a; Keriven & Peyré, 2019; Maehara & NT, 2019) studied the approximation power of high-order graph neural networks. (Maron et al., 2019c; Ravanbakhsh, 2020) targeted the universality of networks that use high-order representations of permutation groups. (Yarotsky, 2018) provided several theoretical constructions of universal equivariant neural network models based on polynomial invariants, including an SE(2)-equivariant model. In a recent work, (Bogatskiy et al., 2020) presented a universal approximation theorem for networks that are equivariant to several Lie groups, including SO(3). The main difference from our paper is that we prove a universality theorem for a more complex group that, besides rotations, also includes translations and permutations." }, { "heading": "3 A FRAMEWORK FOR PROVING UNIVERSALITY", "text": "In this section, we describe a framework for proving the universality of equivariant networks. We begin with some mathematical preliminaries:" }, { "heading": "3.1 MATHEMATICAL SETUP", "text": "An action of a group $G$ on a real vector space $W$ is a collection of maps $\rho(g) : W \to W$ defined for any $g \in G$, such that $\rho(g_1) \circ \rho(g_2) = \rho(g_1 g_2)$ for all $g_1, g_2 \in G$, and the identity element of $G$ is mapped to the identity mapping on $W$. We say $\rho$ is a representation of $G$ if $\rho(g)$ is a linear map for every $g \in G$. As is customary, when it does not cause confusion we often say that $W$ itself is a representation of $G$.\nIn this paper, we are interested in functions on point clouds. Point clouds are sets of vectors in $\mathbb{R}^3$ arranged as matrices:\n$X = (x_1, \dots, x_n) \in \mathbb{R}^{3\times n}.$\nMany machine learning tasks on point clouds, such as classification, aim to learn a function which is invariant to rigid motions and relabeling of the points. Put differently, such functions are required to be invariant to the action of $G = \mathbb{R}^3 \rtimes SO(3) \times S_n$ on $\mathbb{R}^{3\times n}$ via\n$\rho_G(t, R, P)(X) = R(X - t1_n^T)P^T,$ (1)\nwhere $t \in \mathbb{R}^3$ defines a translation, $R$ is a rotation, and $P$ is a permutation matrix.
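To make the setup concrete, here is a minimal numpy sketch of the action in equation 1, together with a sanity check that centralizing the point cloud removes the translation component, which is the observation exploited in the proofs (Lemma B.1); the helper names are ours.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def act(X, t, R, P):
    """The action of G = R^3 x| SO(3) x S_n on a point cloud, equation (1):
    rho_G(t, R, P)(X) = R (X - t 1_n^T) P^T."""
    return R @ (X - t[:, None]) @ P.T

def centralize(X):
    """X - (1/n) X 1_n 1_n^T: subtract the centroid from every point."""
    return X - X.mean(axis=1, keepdims=True)

n = 5
X = np.random.randn(3, n)
t, R = np.random.randn(3), Rotation.random().as_matrix()
P = np.eye(n)[np.random.permutation(n)]
# Centralizing commutes with the action and kills the translation:
assert np.allclose(centralize(act(X, t, R, P)), act(centralize(X), np.zeros(3), R, P))
```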
Equivariant functions are generalizations of invariant functions: if $G$ acts on $W_1$ via some action $\rho_1(g)$, and on $W_2$ via some other group action $\rho_2(g)$, we say that a function $f : W_1 \to W_2$ is equivariant if $f(\rho_1(g)w) = \rho_2(g)f(w)$ for all $w \in W_1$ and $g \in G$. Invariant functions correspond to the special case where $\rho_2(g)$ is the identity mapping for all $g \in G$. In some machine learning tasks on point clouds, the functions learned are not invariant but rather equivariant. For example, segmentation tasks assign a discrete label to each point. They are invariant to translations and rotations but equivariant to permutations – in the sense that permuting the input causes a corresponding permutation of the output. Another example is predicting a normal for each point of a point cloud. This task is invariant to translations but equivariant to both rotations and permutations.\nIn this paper, we are interested in learning equivariant functions from point clouds into $W_T^n$, where $W_T$ is some representation of $SO(3)$. The equivariance of these functions is with respect to the action $\rho_G$ on point clouds defined in equation 1, and the action of $G$ on $W_T^n$ defined by applying the rotation action from the left and the permutation action from the right as in equation 1, but ‘ignoring’ the translation component. Thus, $G$-equivariant functions will be translation invariant. This formulation of equivariance includes the normal prediction example by taking $W_T = \mathbb{R}^3$, as well as the segmentation case by setting $W_T = \mathbb{R}$ with the trivial identity representation. We focus on the harder case of functions into $W_T^n$ which are equivariant to permutations, since it easily implies the easier case of permutation-invariant functions to $W_T$.\nNotation. We use the notation $\mathbb{N}_+ = \mathbb{N} \cup \{0\}$ and $\mathbb{N}_+^* = \bigcup_{r \in \mathbb{N}} \mathbb{N}_+^r$. We set $[D] = \{1, \dots, D\}$ and $[D]_0 = \{0, \dots, D\}$. Proofs. Proofs appear in the appendices, arranged according to sections." }, { "heading": "3.2 CONDITIONS FOR UNIVERSALITY", "text": "The semi-lifted approach In general, highly expressive equivariant neural networks can be achieved by using a ‘lifted approach’, where intermediate features in the network belong to high-dimensional representations of the group. In the context of point clouds, where typically $n \gg 3$, many papers, e.g., (Thomas et al., 2018; Kondor, 2018; Bogatskiy et al., 2020), use a ‘semi-lifted’ approach, where hidden layers hold only higher-dimensional representations of $SO(3)$, but not high-order permutation representations. In this subsection, we propose a strategy for achieving universality with the semi-lifted approach.\nWe begin with an axiomatic formulation of the semi-lifted approach (see illustration in inset): we assume that our neural networks are composed of two main components. The first component is a family $\mathcal{F}_{\text{feat}}$ of parametric continuous $G$-equivariant functions $f_{\text{feat}}$ which map the original point cloud $\mathbb{R}^{3\times n}$ to a semi-lifted point cloud $W_{\text{feat}}^n = \oplus_{i=1}^n W_{\text{feat}}$, where $W_{\text{feat}}$ is a lifted (i.e., high-order) representation of $SO(3)$. The second component is a family of parametric linear $SO(3)$-equivariant functions $\mathcal{F}_{\text{pool}}$, which map from the high-order representation $W_{\text{feat}}$ down to the target representation $W_T$. Each such $SO(3)$-equivariant function $\Lambda : W_{\text{feat}} \to W_T$ can be extended to an $SO(3)\times S_n$ equivariant function $\hat\Lambda : W_{\text{feat}}^n \to W_T^n$ by applying $\Lambda$ elementwise. For every positive integer $C$, these two families of functions induce a family of functions $\mathcal{F}_C$ obtained by summing $C$ different compositions of these functions:\n$\mathcal{F}_C(\mathcal{F}_{\text{feat}}, \mathcal{F}_{\text{pool}}) = \{f \mid f(X) = \sum_{c=1}^{C} \hat\Lambda_c(g_c(X)),\ (\Lambda_c, g_c) \in \mathcal{F}_{\text{pool}} \times \mathcal{F}_{\text{feat}}\}.$ (2)\nConditions for universality We now describe two conditions that guarantee universality using the semi-lifted approach. The first step is showing, as in (Yarotsky, 2018), that continuous $G$-equivariant functions $C_G(\mathbb{R}^{3\times n}, W_T^n)$ can be approximated by $G$-equivariant polynomials $P_G(\mathbb{R}^{3\times n}, W_T^n)$.\nLemma 1. Any continuous $G$-equivariant function in $C_G(\mathbb{R}^{3\times n}, W_T^n)$ can be approximated uniformly on compact sets by $G$-equivariant polynomials in $P_G(\mathbb{R}^{3\times n}, W_T^n)$.\nUniversality is now reduced to the approximation of $G$-equivariant polynomials. Next, we provide two conditions which guarantee that $G$-equivariant polynomials of degree $D$ can be expressed by the function spaces $\mathcal{F}_C(\mathcal{F}_{\text{feat}}, \mathcal{F}_{\text{pool}})$ defined in equation 2. The idea behind these conditions is that an explicit characterization of polynomials equivariant to the joint action of translations, rotations and permutations is challenging. However, it is possible to explicitly characterize polynomials equivariant to translations and permutations (but not rotations). The key observation is then that this characterization can be rewritten as a sum of functions to $W_{\text{feat}}^n$, a high-dimensional representation of $SO(3)$, which are equivariant to translations, permutations and rotations, composed with linear maps which are permutation equivariant (but do not respect rotations). Accordingly, our first condition is that $\mathcal{F}_{\text{feat}}$ contains a spanning set of such functions to $W_{\text{feat}}^n$. We call this condition $D$-spanning:\nDefinition 1 ($D$-spanning). For $D \in \mathbb{N}_+$, let $\mathcal{F}_{\text{feat}}$ be a subset of $C_G(\mathbb{R}^{3\times n}, W_{\text{feat}}^n)$. We say that $\mathcal{F}_{\text{feat}}$ is $D$-spanning, if there exist $f_1, \dots, f_K \in \mathcal{F}_{\text{feat}}$, such that every polynomial $p : \mathbb{R}^{3\times n} \to \mathbb{R}^n$ of degree $D$ which is invariant to translations and equivariant to permutations can be written as\n$p(X) = \sum_{k=1}^{K} \hat\Lambda_k(f_k(X)),$ (3)\nwhere the $\Lambda_k : W_{\text{feat}} \to \mathbb{R}$ are linear functionals, and $\hat\Lambda_k : W_{\text{feat}}^n \to \mathbb{R}^n$ are the functions defined by elementwise application of $\Lambda_k$.\nIn Lemma 4 we explicitly construct a $D$-spanning family of functions. This provides us with a concrete condition which implies $D$-spanning for other function families as well.\nThe second condition is that $\mathcal{F}_{\text{pool}}$ contains all $SO(3)$ linear equivariant layers. We call this condition linear universality. Intuitively, taking linear rotation-equivariant $\Lambda_k$ in equation 3 ensures that the resulting function $p$ will be rotation equivariant and thus fully $G$-equivariant, and linear universality guarantees the ability to express all such $G$-equivariant functions.\nDefinition 2 (Linear universality). We say that a collection $\mathcal{F}_{\text{pool}}$ of equivariant linear functions between two representations $W_{\text{feat}}$ and $W_T$ of $SO(3)$ is linearly universal, if it contains all linear $SO(3)$-equivariant mappings between the two representations.\nWhen these two conditions apply, a rather simple symmetrization argument leads to the following theorem:\nTheorem 1. If $\mathcal{F}_{\text{feat}}$ is $D$-spanning and $\mathcal{F}_{\text{pool}}$ is linearly universal, then there exists some $C(D) \in \mathbb{N}$ such that for all $C \ge C(D)$ the function space $\mathcal{F}_C(\mathcal{F}_{\text{feat}}, \mathcal{F}_{\text{pool}})$ contains all $G$-equivariant polynomials of degree $\le D$.\nProof idea. By the $D$-spanning assumption, there exist $f_1, \dots, f_K \in \mathcal{F}_{\text{feat}}$ such that any vector-valued polynomial invariant to translations and equivariant to permutations is of the form\n$p(X) = \sum_{k=1}^{K} \hat\Lambda_k(f_k(X)).$ (4)\nWhile by definition this holds for functions $p$ whose image is $\mathbb{R}^n$, this is easily extended to functions into $W_T^n$ as well.\nIt remains to show that when $p$ is also $SO(3)$-equivariant, we can choose the $\Lambda_k$ to be $SO(3)$-equivariant. This is accomplished by averaging over $SO(3)$.
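The averaging step can be made concrete with a short Monte Carlo sketch. The toy representations below ($\rho_1(R) = R \otimes R$ acting on $\mathbb{R}^9$, $\rho_2$ the standard representation on $\mathbb{R}^3$) are our own choice for illustration, and the Haar integral is approximated by sampling random rotations, so the output is only approximately equivariant.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def symmetrize(L, num_samples=20000):
    """Monte Carlo version of the averaging in the proof idea of Theorem 1:
    <L> = E_R[ rho2(R^{-1}) @ L @ rho1(R) ] over (approximately) Haar-random R,
    here with rho1(R) = kron(R, R) on R^9 and rho2(R) = R on R^3."""
    acc = np.zeros_like(L)
    for _ in range(num_samples):
        R = Rotation.random().as_matrix()
        acc += R.T @ L @ np.kron(R, R)
    return acc / num_samples

L_sym = symmetrize(np.random.randn(3, 9))
R = Rotation.random().as_matrix()
# Equivariance defect rho2(R) L_sym - L_sym rho1(R); shrinks as num_samples grows.
print(np.linalg.norm(R @ L_sym - L_sym @ np.kron(R, R)))
```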
As a result of Theorem 1 and Lemma 1, we obtain our universality result (see inset for illustration):\nCorollary 1. For all $C, D \in \mathbb{N}_+$, let $\mathcal{F}_{C,D}$ denote the function spaces generated, as in equation 2, by a pair of function spaces which are $D$-spanning and linearly universal. Then any continuous $G$-equivariant function in $C_G(\mathbb{R}^{3\times n}, W_T^n)$ can be approximated uniformly on compact sets by equivariant functions in\n$\mathcal{F} = \bigcup_{D \in \mathbb{N}} \mathcal{F}_{C(D), D}.$" }, { "heading": "3.3 UNIVERSALITY CONDITIONS IN ACTION", "text": "In the remainder of the paper, we prove the universality of several $G$-equivariant architectures, based on the framework we discussed in the previous subsection. We discuss two different strategies for achieving universality, which differ mainly in the type of lifted representations of $SO(3)$ they use: (i) the first strategy uses (direct sums of) tensor-product representations; (ii) the second uses (direct sums of) irreducible representations. The main advantage of the first strategy from the perspective of our methodology is that achieving the $D$-spanning property is more straightforward. The advantage of irreducible representations is that they almost automatically guarantee the linear universality property.\nIn Section 4 we discuss universality through tensor product representations, and give an example of a minimal tensor representation network architecture that satisfies universality. In Section 5 we discuss universality through irreducible representations, which is currently the more common strategy. We show that the TFN architecture (Thomas et al., 2018; Fuchs et al., 2020), which follows this strategy, is universal, and describe a simple tweak that achieves universality using only low-order filters, though the representations throughout the network are high dimensional." }, { "heading": "4 UNIVERSALITY WITH TENSOR REPRESENTATIONS", "text": "In this section, we prove universality for models that are based on tensor product representations, as defined below. The main advantage of this approach is that $D$-spanning is achieved rather easily. The main drawbacks are that its data representation is somewhat redundant and that characterizing the linear equivariant layers is more laborious.\nTensor representations We begin by defining tensor representations. For $k \in \mathbb{N}_+$ denote $T_k = \mathbb{R}^{3^k}$. $SO(3)$ acts on $T_k$ by the tensor product representation, i.e., by applying the matrix Kronecker product $k$ times: $\rho_k(R) := R^{\otimes k}$. The inset illustrates the vector spaces and action for $k = 1, 2, 3$. With this action, for any $i_1, \dots, i_k \in [n]$, the map from $\mathbb{R}^{3\times n}$ to $T_k$ defined by\n$(x_1, \dots, x_n) \mapsto x_{i_1} \otimes x_{i_2} \otimes \cdots \otimes x_{i_k}$ (5)\nis $SO(3)$ equivariant.\nA $D$-spanning family We now show that tensor representations can be used to define a finite set of $D$-spanning functions. The lifted representation $W_{\text{feat}}$ will be given by\n$W^T_{\text{feat}} = \bigoplus_{T=0}^{D} T_T.$\nThe $D$-spanning functions are indexed by vectors $\vec r = (r_1, \dots, r_K)$, where each $r_k$ is a non-negative integer. Denoting $T = \|\vec r\|_1$, the functions $Q^{(\vec r)} : \mathbb{R}^{3\times n} \to T_T^n$, $Q^{(\vec r)} = (Q^{(\vec r)}_j)_{j=1}^n$, are defined for fixed $j \in [n]$ by\n$Q^{(\vec r)}_j(X) = \sum_{i_2,\dots,i_K=1}^{n} x_j^{\otimes r_1} \otimes x_{i_2}^{\otimes r_2} \otimes x_{i_3}^{\otimes r_3} \otimes \cdots \otimes x_{i_K}^{\otimes r_K}.$ (6)\nThe functions $Q^{(\vec r)}_j$ are $SO(3)$ equivariant as they are sums of equivariant functions from equation 5. Thus $Q^{(\vec r)}$ is $SO(3)\times S_n$ equivariant. The motivation behind the definition of these functions is that known characterizations of permutation equivariant polynomials (Segol & Lipman, 2019) tell us that the entries of these tensor-valued functions span all permutation equivariant polynomials (see the proof of Lemma 2 for more details).
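As an aside, $Q^{(\vec r)}$ is cheap to evaluate: the sums over $i_2, \dots, i_K$ in equation 6 are independent, so the expression factorizes into an outer product of per-cloud moments. A short numpy sketch (function and variable names are ours):

```python
import numpy as np
from functools import reduce

def tensor_power(x, r):
    """x^{(x)r} as an array of shape (3,) * r; the empty product is the scalar 1."""
    out = np.array(1.0)
    for _ in range(r):
        out = np.multiply.outer(out, x)
    return out

def Q(X, rs):
    """Q^{(r)}(X) from equation (6); X has shape (3, n), rs = (r_1, ..., r_K).
    We precompute the moments S_r = sum_i x_i^{(x)r} once, then take a single
    outer product per point j."""
    n = X.shape[1]
    S = {r: sum(tensor_power(X[:, i], r) for i in range(n)) for r in set(rs[1:])}
    rest = reduce(np.multiply.outer, [S[r] for r in rs[1:]], np.array(1.0))
    return [np.multiply.outer(tensor_power(X[:, j], rs[0]), rest) for j in range(n)]
```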
To account for translation invariance, we compose the functions $Q^{(\vec r)}$ with the centralization operation and define the set of functions\n$Q_D = \{\iota \circ Q^{(\vec r)}(X - \tfrac{1}{n}X1_n1_n^T) \mid \|\vec r\|_1 \le D\},$ (7)\nwhere $\iota$ is the natural embedding that takes each $T_T$ into $W^T_{\text{feat}} = \bigoplus_{T=0}^{D} T_T$. In the following lemma, we prove that this set is $D$-spanning.\nLemma 2. For every $D \in \mathbb{N}_+$, the set $Q_D$ is $D$-spanning.\nProof idea. It is known (Segol & Lipman, 2019) (Theorem 2) that polynomials $p : \mathbb{R}^{3\times n} \to \mathbb{R}^n$ which are $S_n$-equivariant are spanned by polynomials of the form $p_{\vec\alpha} = (p^j_{\vec\alpha})_{j=1}^n$, defined as\n$p^j_{\vec\alpha}(X) = \sum_{i_2,\dots,i_K=1}^{n} x_j^{\alpha_1} x_{i_2}^{\alpha_2} \cdots x_{i_K}^{\alpha_K}$ (8)\nwhere $\vec\alpha = (\alpha_1, \dots, \alpha_K)$ and each $\alpha_k \in \mathbb{N}_+^3$ is a multi-index. We first show that these polynomials can be extracted from $Q^{(\vec r)}$ and then use them to represent $p$.\nA minimal universal architecture Once we have shown that $Q_D$ is $D$-spanning, we can design $D$-spanning architectures by devising architectures that are able to span all elements of $Q_D$. As we will now show, the compositional nature of neural networks allows us to do this in a very clean manner.\nWe define a parametric function $f(X,V|\theta_1,\theta_2)$ which maps $\mathbb{R}^{3\times n} \oplus T_k^n$ to $\mathbb{R}^{3\times n} \oplus T_{k+1}^n$ as follows: for all $j \in [n]$, we have $f_j(X,V) = (x_j, \tilde V_j(X,V))$, where\n$\tilde V_j(X,V|\theta_1,\theta_2) = \theta_1 (x_j \otimes V_j) + \theta_2 \sum_i (x_i \otimes V_i).$ (9)\nWe denote the set of functions $(X,V) \mapsto f(X,V|\theta_1,\theta_2)$ obtained by choosing the parameters $\theta_1, \theta_2 \in \mathbb{R}$ by $\mathcal{F}_{\min}$. While in the hidden layers of our network the data is represented using both coordinates $(X,V)$, the input to the network only contains an $X$ coordinate and the output only contains a $V$ coordinate. To this end, we define the functions\n$\mathrm{ext}(X) = (X, 1_n)$ and $\pi_V(X,V) = V.$ (10)\nWe can achieve $D$-spanning by composing functions in $\mathcal{F}_{\min}$ with these functions and centralizing:\nLemma 3. The function set $Q_D$ is contained in\n$\mathcal{F}_{\text{feat}} = \{\iota \circ \pi_V \circ f^1 \circ f^2 \circ \cdots \circ f^T \circ \mathrm{ext}(X - \tfrac{1}{n}X1_n1_n^T) \mid f^j \in \mathcal{F}_{\min},\ T \le D\}.$ (11)\nThus $\mathcal{F}_{\text{feat}}$ is $D$-spanning.\nProof idea. The proof is technical and follows by induction on $D$.\nTo complete the construction of a universal network, we now need to characterize all linear equivariant functions from $W^T_{\text{feat}}$ to the target representation $W_T$. In Appendix G we show how this can be done for the trivial representation $W_T = \mathbb{R}$ (corresponding to $SO(3)$-invariant functions). This characterization gives us a set of linear functions $\mathcal{F}_{\text{pool}}$, which combined with the $\mathcal{F}_{\text{feat}}$ defined in equation 11 gives us a universal architecture as in Theorem 1. However, the disadvantage of this approach is that the implementation of the linear functions in $\mathcal{F}_{\text{pool}}$ is somewhat cumbersome. In the next section we discuss irreducible representations, which give us a systematic way to address linear equivariant mappings into any $W_T$. Proving $D$-spanning for these networks is accomplished via the $D$-spanning property of tensor representations, through the following lemma:\nLemma 4. If all functions in $Q_D$ can be written as\n$\iota \circ Q^{(\vec r)}(X - \tfrac{1}{n}X1_n1_n^T) = \sum_{k=1}^{K} \hat A_k f_k(X),$\nwhere $f_k \in \mathcal{F}_{\text{feat}}$, $A_k : W_{\text{feat}} \to W^T_{\text{feat}}$, and $\hat A_k : W_{\text{feat}}^n \to (W^T_{\text{feat}})^n$ is defined by elementwise application of $A_k$, then $\mathcal{F}_{\text{feat}}$ is $D$-spanning.\nWe note that, as before, the $A_k$ are not necessarily $SO(3)$-equivariant.\nProof idea. The lemma follows directly from the assumptions.
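To illustrate equation 9 and the surrounding construction, here is a minimal PyTorch sketch of a single $\mathcal{F}_{\min}$ layer; the shapes and the module wrapper are our own choices, and $\mathrm{ext}$ corresponds to initializing the features as torch.ones(n).

```python
import torch

class MinimalLayer(torch.nn.Module):
    """One layer of the minimal tensor-representation architecture (equation 9).
    Maps (X, V), with X of shape (n, 3) and V of shape (n, 3, ..., 3) with k
    trailing factors, to (X, V') where V' has k + 1 factors of dimension 3."""
    def __init__(self):
        super().__init__()
        self.theta1 = torch.nn.Parameter(torch.randn(()))
        self.theta2 = torch.nn.Parameter(torch.randn(()))

    def forward(self, X, V):
        # x_j (x) V_j: outer product of each point with its own feature tensor
        local = torch.einsum('ja,j...->ja...', X, V)
        # sum_i x_i (x) V_i: the same outer products, pooled over the whole cloud
        pooled = local.sum(dim=0, keepdim=True).expand_as(local)
        return X, self.theta1 * local + self.theta2 * pooled
```

Stacking $T \le D$ such layers after centralizing, and reading off the $V$ coordinate, realizes the set in equation 11.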
" }, { "heading": "5 UNIVERSALITY WITH IRREDUCIBLE REPRESENTATIONS", "text": "In this section, we discuss how to achieve universality when using irreducible representations of $SO(3)$. We will begin by defining irreducible representations, and explaining how linear universality is easily achieved by them, while the $D$-spanning properties of tensor representations can be preserved. This discussion can be seen as an interpretation of the choices made in the construction of TFN and similar networks in the literature. We then show that these architectures are indeed universal.\n5.1 IRREDUCIBLE REPRESENTATIONS OF SO(3)\nIn general, any finite-dimensional representation $W$ of a compact group $H$ can be decomposed into irreducible representations: a subspace $W_0 \subseteq W$ is $H$-invariant if $hw \in W_0$ for all $h \in H, w \in W_0$. A representation $W$ is irreducible if it has no non-trivial invariant subspaces. In the case of $SO(3)$, all irreducible real representations are defined by matrices $D^{(\ell)}(R)$, called the real Wigner D-matrices, acting on $W_\ell := \mathbb{R}^{2\ell+1}$ by matrix multiplication. In particular, the representations for $\ell = 0, 1$ are $D^{(0)}(R) = 1$ and $D^{(1)}(R) = R$.\nLinear maps between irreducible representations As mentioned above, one of the main advantages of using irreducible representations is that there is a very simple characterization of all linear equivariant maps between two direct sums of irreducible representations. We use the notation $W_{\mathbf{l}}$ for direct sums of irreducible representations, where $\mathbf{l} = (\ell_1, \dots, \ell_K) \in \mathbb{N}_+^K$ and $W_{\mathbf{l}} = \bigoplus_{k=1}^K W_{\ell_k}$.\nLemma 5. Let $\mathbf{l}^{(1)} = (\ell^{(1)}_1, \dots, \ell^{(1)}_{K_1})$ and $\mathbf{l}^{(2)} = (\ell^{(2)}_1, \dots, \ell^{(2)}_{K_2})$. A function $\Lambda = (\Lambda_1, \dots, \Lambda_{K_2})$ is a linear equivariant mapping between $W_{\mathbf{l}^{(1)}}$ and $W_{\mathbf{l}^{(2)}}$ if and only if there exists a $K_1 \times K_2$ matrix $M$ with $M_{ij} = 0$ whenever $\ell^{(1)}_i \neq \ell^{(2)}_j$, such that\n$\Lambda_j(V) = \sum_{i=1}^{K_1} M_{ij} V_i$ (12)\nwhere $V = (V_i)_{i=1}^{K_1}$ and $V_i \in W_{\ell^{(1)}_i}$ for all $i = 1, \dots, K_1$.\nProof idea. This lemma is a simple generalization of Schur's lemma, a classical tool in representation theory, which asserts that a non-zero linear map between irreducible representations is a scalar multiple of the identity mapping. Lemma 5 was stated in the complex setting in Kondor (2018). While Schur's lemma, and thus Lemma 5, does not always hold for representations over the reals, we observe here that it holds for real irreducible representations of $SO(3)$ since their dimension is always odd.\nClebsch-Gordan decomposition of tensor products As any finite-dimensional representation of $SO(3)$ can be decomposed into a direct sum of irreducible representations, this is true for tensor representations as well. In particular, the Clebsch-Gordan coefficients provide an explicit formula for decomposing the tensor product of two irreducible representations $W_{\ell_1}$ and $W_{\ell_2}$ into a direct sum of irreducible representations. This decomposition can be easily extended to decompose the tensor product $W_{\mathbf{l}_1} \otimes W_{\mathbf{l}_2}$ into a direct sum of irreducible representations, where $\mathbf{l}_1, \mathbf{l}_2$ are now vectors. In matrix notation, this means there is a unitary linear equivariant map $U(\mathbf{l}_1, \mathbf{l}_2)$ of $W_{\mathbf{l}_1} \otimes W_{\mathbf{l}_2}$ onto $W_{\mathbf{l}}$, where the explicit values of $\mathbf{l} = \mathbf{l}(\mathbf{l}_1, \mathbf{l}_2)$ and the matrix $U(\mathbf{l}_1, \mathbf{l}_2)$ can be inferred directly from the case where $\ell_1$ and $\ell_2$ are scalars.\nBy repeatedly taking tensor products and applying Clebsch-Gordan decompositions to the result, TFN and similar architectures can achieve the $D$-spanning property in a manner analogous to tensor representations, and also enjoy linear universality since they maintain irreducible representations throughout the network.
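The sparsity pattern of Lemma 5 translates directly into code. Below is a minimal PyTorch sketch of such an equivariant linear map between lists of irreducible features (class and argument names are ours):

```python
import torch

class IrrepLinear(torch.nn.Module):
    """Equivariant linear map between direct sums of SO(3) irreps (equation 12).
    types_in / types_out list the orders of the input / output irreps; the
    entry M[i, j] is only ever applied when types_in[i] == types_out[j]."""
    def __init__(self, types_in, types_out):
        super().__init__()
        self.types_in, self.types_out = list(types_in), list(types_out)
        self.M = torch.nn.Parameter(torch.randn(len(self.types_in), len(self.types_out)))

    def forward(self, V):
        # V is a list; V[i] has shape (2 * types_in[i] + 1,)
        out = []
        for j, lj in enumerate(self.types_out):
            acc = torch.zeros(2 * lj + 1)
            for i, li in enumerate(self.types_in):
                if li == lj:  # by Schur's lemma, blocks between different orders vanish
                    acc = acc + self.M[i, j] * V[i]
            out.append(acc)
        return out

layer = IrrepLinear([0, 1, 1], [1])  # e.g. pool features of types (0, 1, 1) into one type-1 output
W = layer([torch.randn(1), torch.randn(3), torch.randn(3)])
```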
" }, { "heading": "5.2 TENSOR FIELD NETWORKS", "text": "We now describe the basic layers of the TFN architecture (Thomas et al., 2018), which are based on irreducible representations, and suggest an architecture based on these layers which can approximate $G$-equivariant maps into any representation $W^n_{\mathbf{l}_T}$, $\mathbf{l}_T \in \mathbb{N}_+^*$. There are some superficial differences between our description of TFN and the description in the original paper; for more details see Appendix F.\nWe note that the universality of TFN also implies the universality of Fuchs et al. (2020), which is a generalization of TFN that enables adding an attention mechanism. Assuming the attention mechanism is not restricted to local neighborhoods, this method is at least as expressive as TFN.\nTFNs are composed of three types of layers: (i) convolution, (ii) self-interaction, and (iii) non-linearities. In our architecture, we only use the first two layer types, which we will now describe.¹\nConvolution. Convolutional layers involve taking tensor products of a filter and a feature vector to create a new feature vector, and then decomposing into irreducible representations. Unlike in a standard CNN, a filter here depends on the input, and is a function $F : \mathbb{R}^3 \to W_{\mathbf{l}_D}$, where $\mathbf{l}_D = [0, 1, \dots, D]^T$. The $\ell$-th component of the filter $F(x) = [F^{(0)}(x), \dots, F^{(D)}(x)]$ will be given by\n$F^{(\ell)}_m(x) = R^{(\ell)}(\|x\|) Y^\ell_m(\hat x), \quad m = -\ell, \dots, \ell$ (13)\nwhere $\hat x = x/\|x\|$ if $x \neq 0$ and $\hat x = 0$ otherwise, $Y^\ell_m$ are spherical harmonics, and $R^{(\ell)}$ is any polynomial of degree $\le D$. In Appendix F we show that these polynomial functions can be replaced by fully connected networks, since the latter can approximate all polynomials uniformly.\nThe convolution of an input feature $V \in W^n_{\mathbf{l}_i}$ and a filter $F$ as defined above will give an output feature $\tilde V = (\tilde V_a)_{a=1}^n \in W^n_{\mathbf{l}_o}$, where $\mathbf{l}_o = \mathbf{l}(\mathbf{l}_f, \mathbf{l}_i)$, which is given by\n$\tilde V_a(X,V) = U(\mathbf{l}_f, \mathbf{l}_i)\left(\theta_0 V_a + \sum_{b=1}^{n} F(x_a - x_b) \otimes V_b\right).$ (14)\nMore formally, we will think of a convolutional layer as a function of the form $f(X,V) = (X, \tilde V(X,V))$. These functions are defined by a choice of $D$, a choice of scalar polynomials $R^{(\ell)}$, $\ell = 0, \dots, D$, and a choice of the parameter $\theta_0 \in \mathbb{R}$ in equation 14. We denote the set of all such functions $f$ by $\mathcal{F}_D$.\nSelf-interaction layers. Self-interaction layers are linear functions $\hat\Lambda : W^n_{\mathbf{l}} \to W^n_{\mathbf{l}_T}$ which are obtained from elementwise application of equivariant linear functions $\Lambda : W_{\mathbf{l}} \to W_{\mathbf{l}_T}$. These linear functions can be specified by a choice of a matrix $M$ with the sparsity pattern described in Lemma 5.\nActivation functions. TFN, as well as other papers, proposed several activation functions. We find that these layers are not necessary for universality and thus we do not define them here.\nNetwork architecture. For our universality proof, we suggest a simple architecture which depends on two positive integer parameters $(C,D)$: for given $D$, we define $\mathcal{F}_{\text{feat}}(D)$ as the set of functions obtained by $2D$ recursive convolutions\n$\mathcal{F}_{\text{feat}}(D) = \{\pi_V \circ f^{2D} \circ \cdots \circ f^2 \circ f^1 \circ \mathrm{ext}(X) \mid f^j \in \mathcal{F}_D\},$\nwhere $\mathrm{ext}$ and $\pi_V$ are defined as in equation 10. The output of a function in $\mathcal{F}_{\text{feat}}(D)$ is in $W^n_{\mathbf{l}(D)}$, for some $\mathbf{l}(D)$ which depends on $D$. We then define $\mathcal{F}_{\text{pool}}(D)$ to be the self-interaction layers which map $W^n_{\mathbf{l}(D)}$ to $W^n_{\mathbf{l}_T}$. This choice of $\mathcal{F}_{\text{feat}}(D)$ and $\mathcal{F}_{\text{pool}}(D)$, together with a choice of the number of channels $C$, defines the final network architecture $\mathcal{F}^{\text{TFN}}_{C,D} = \mathcal{F}_C(\mathcal{F}_{\text{feat}}(D), \mathcal{F}_{\text{pool}}(D))$ as in equation 2.\n¹Since convolution layers in TFN are not linear, the non-linearities are formally redundant.
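To make equation 13 concrete for the low orders that recur in the proofs, here is a small numpy sketch of a filter with $\ell \in \{0, 1\}$; up to normalization, $Y^0$ is constant and $Y^1_m(\hat x)$ is proportional to the components of $\hat x$, and the radial parameterization below is our own choice.

```python
import numpy as np

def tfn_filter(x, radial_coeffs):
    """Low-order TFN filter (F^(0)(x), F^(1)(x)) as in equation (13), using the
    real spherical harmonics up to normalization: Y^0 is constant and
    Y^1(x_hat) is proportional to x_hat. radial_coeffs[l] holds the polynomial
    coefficients of R^(l) as a function of ||x||."""
    r = np.linalg.norm(x)
    x_hat = x / r if r > 0 else np.zeros(3)
    R0 = np.polyval(radial_coeffs[0], r)
    R1 = np.polyval(radial_coeffs[1], r)
    return R0 * np.ones(1), R1 * x_hat  # the degree-0 and degree-1 parts of F(x)

F0, F1 = tfn_filter(np.array([1.0, 2.0, 0.0]), [[0.5, 1.0], [1.0, 0.0]])
```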
In the appendix we prove the universality of TFN:\nTheorem 2. For all $n \in \mathbb{N}$, $\mathbf{l}_T \in \mathbb{N}_+^*$,\n1. For $D \in \mathbb{N}_+$, every $G$-equivariant polynomial $p : \mathbb{R}^{3\times n} \to W^n_{\mathbf{l}_T}$ of degree $D$ is in $\mathcal{F}^{\text{TFN}}_{C(D),D}$.\n2. Every continuous $G$-equivariant function can be approximated uniformly on compact sets by functions in $\bigcup_{D \in \mathbb{N}_+} \mathcal{F}^{\text{TFN}}_{C(D),D}$.\nAs discussed previously, the linear universality of $\mathcal{F}_{\text{pool}}$ is guaranteed. Thus proving Theorem 2 amounts to showing that $\mathcal{F}_{\text{feat}}(D)$ is $D$-spanning. This is done using the sufficient condition for $D$-spanning defined in Lemma 4.\nProof idea. The proof is rather technical and involved. A useful observation (see Dai & Xu (2013)) used in the proof is that the filters of orders $\ell = 0, 1, \dots, D$, defined in equation 13, span all polynomial functions of degree $D$ on $\mathbb{R}^3$. This observation is used to show that all functions in $Q_D$ can be expressed by $\mathcal{F}_{\text{feat}}(D)$, and so $\mathcal{F}_{\text{feat}}$ is $D$-spanning, as stated in Lemma 2.\nAlternative architecture The complexity of the TFN network used to construct $G$-equivariant polynomials of degree $D$ can be reduced using a simple modification of the convolutional layer in equation 14: we add two parameters $\theta_1, \theta_2 \in \mathbb{R}$ to the convolutional layer, which is now defined as:\n$\tilde V_a(X,V) = U(\mathbf{l}_f, \mathbf{l}_i)\left(\theta_1 \sum_{b=1}^{n} F(x_a - x_b) \otimes V_b + \theta_2 \sum_{b=1}^{n} F(x_a - x_b) \otimes V_a\right).$ (15)\nWith this simple change, we can show that $\mathcal{F}_{\text{feat}}(D)$ is $D$-spanning even if we only take filters of orders 0 and 1 throughout the network. This is shown in Appendix E." }, { "heading": "6 CONCLUSION", "text": "In this paper, we have presented a new framework for proving the universality of $G$-equivariant point cloud networks. We used this framework for proving the universality of the TFN model (Thomas et al., 2018; Fuchs et al., 2020), and for devising two additional novel simple universal architectures. In the future we hope to extend these simple constructions to operational $G$-equivariant networks with universality guarantees and competitive practical performance.\nOur universal architectures do not require activation functions, and use a single self-interaction layer. In Appendix H we present an experiment indicating that the performance of TFN is not significantly altered by these simplifications. Our architectures also require high-order representations, and our experiments show that using increasingly high-order representations does indeed improve performance. To date, practical TFN implementations have included a relatively small number of layers, and did not use very high-order representations. We believe our theoretical results will inspire interest in stable implementations of larger architectures. On the other hand, an interesting open problem is understanding whether universality can be achieved using only low-dimensional representations.\nFinally, we believe that the framework we developed here will be useful for proving the universality of other $G$-equivariant models for point cloud networks, and other related equivariant models. We note that large parts of our discussion can be easily generalized to symmetry groups of the form $G = \mathbb{R}^d \rtimes H \times S_n$ acting on $\mathbb{R}^{d\times n}$, where $H$ can be any compact topological group.\nAcknowledgments The authors would like to thank Fabian B. Fuchs for making code available and Taco Cohen for helpful discussions. N.D. is supported by THEORINET Simons award 814643." }, { "heading": "A NOTATION", "text": "We introduce some notation for the proofs in the appendices. We use the shortened notation $\bar X = X - \frac{1}{n}X1_n1_n^T$ and denote the columns of $\bar X$ by $(\bar x_1, \dots, \bar x_n)$. 
We denote\nΣT = {~r ∈ N∗+| ‖~r‖1 = T}" }, { "heading": "B PROOFS FOR SECTION 3", "text": "B.1 G-EQUIVARIANT POLYNOMIALS ARE DENSE\nA first step in proving denseness of G-equivariance polynomials, and in the proof used in the next subsection is the following simple lemma, which shows that translation invariance can be dealt with simply by centralizing the point cloud.\nIn the following, ρWT is some representation of SO(3) on a finite dimensional real vector spaceWT . this induces an action ρWT×Sn of SO(3)× Sn on WnT by\nρWT×Sn(R,P )(Y ) = ρWT (R)Y P T\nThis is also the action of G which we consider, ρG = ρWT×Sn , where we have invariance with respect to the translation coordinate. The action of G on R3×n is defined in equation 1. Lemma B.1. A function f : R3×n → WnT is G-equivariant, if and only if there exists a function h which is equivariant with respect to the action of SO(3)× Sn on R3×n, and\nf(X) = h(X − 1 n X1n1 T n ) (16)\nProof. Recall thatG-equivariance means SO(3)×Sn equivariance and translation invariance. Thus if f is G-equivariant then equation 16 holds with h = f .\nOn the other hand, if f satisfies equation 16 then we claim it is G-equivariant. Indeed, for all (t, R, P ) ∈ Rd o SO(3)× Sn , since PT 1n1Tn = 1n1Tn = 1n1TnPT ,\nf (ρG(t, R, P )(X)) =f(R(X + t1n)P T ) = h(R(X + t1n)P T − 1 n R(X + t1)PT 1n1 T n )\n= h(R(X − 1 n X1n1 T n )P T ) = h\n( ρR3×Sn(R,P )(X − 1\nn X1n1\nT n ) ) = ρWT×Sn(R,P )h ( X − 1\nn X1n1\nT n ) = ρG(t, R, P )f(X).\nWe now prove denseness of G-equivariant polynomials in the space of G-invariant continuous functions (Lemma 1). Lemma 1. Any continuous G-equivariant function in CG(R3×n,WnT ) can be approximated uniformly on compact sets by G-equivariant polynomials in PG(R3×n,WnT ).\nProof of Lemma 1. Let K ⊆ R3×n be a compact set. We need to show that continuous Gequivariant functions can be approximated uniformly in K by G-equivariant polynomials. Let K0 denote the compact set which is the image of K under the centralizing map X 7→ X − 1nX1n1 T n . By Lemma B.1, it is sufficient to show that every SO(3)×Sn equivariant continuous function f can be approximated uniformly on K0 by a sequence of SO(3) × Sn equivariant polynomials pk. The argument is now concluded by the following general lemma:\nLemma B.2. Let G be a compact group, Let ρ1 and ρ2 be continuous2representations of G on the Euclidean spaces W1 and W2. Let K ⊆ W1 be a compact set. Then every equivariant function f : W1 → W2 can be approximated uniformly on K by a sequence of equivariant polynomials pk : W1 7→W2.\nLet µ be the Haar probability measure associated with the compact group G. Let K1 denote the compact set obtained as an image of the compact set G×K under the continuous mapping\n(g,X) 7→ ρ1(g)X. Using the Stone-Weierstrass theorem, let pk be a sequence of (not necessarily equivariant) polynomials which approximate f uniformly on K1. 
Every degree D polynomial p : W1 → W2 induces a G-equivariant function\n〈p〉(X) = ∫ G ρ2(g −1)p(ρ1(g)X)dµ(g).\nThis function 〈p〉 is a degree D polynomial as well: This is because 〈p〉 can be approximated uniformly on K1 by “Riemann Sums” of the form ∑N j=1 wjρ2(g −1 j )p(ρ1(gj)X) which are degree D polynomials, and because degree D polynomials are closed in C(K1).\nNow for all X ∈ K1, continuity of the function g 7→ ρ2(g−1) implies that the operator norm of ρ2(g −1) is bounded uniformly by some constant N > 0, and so\n|〈pk〉(X)− f(X)| = ∣∣∣∣∫ G ρ2(g −1)pk(ρ1(g)X)− ρ2(g−1)f(ρ1(g)X)dµ(g) ∣∣∣∣ =\n∣∣∣∣∫ G ρ2(g −1) [pk(ρ1(g)X)− f(ρ1(g)X)] dµ(g) ∣∣∣∣ ≤ N‖f − pk‖C(K1) → 0 B.2 PROOF OF THEOREM 1\nTheorem 1. If Ffeat is D-spanning and Fpool is linearly universal, then there exists some C(D) ∈ N such that for all C ≥ C(D) the function space FC(Ffeat,Fpool) contains all G-equivariant polynomials of degree ≤ D.\nProof. By the D-spanning assumption, there exist f1, . . . , fK ∈ Ffeat such that any vector valued polynomial p : R3×n → Rn invariant to translations and equivariant to permutations is of the form\np(X) = K∑ k=1 Λ̂k(fk(X)), (17)\nwhere Λk are linear functions to R. If p is a matrix valued polynomial mapping R3×n to WnT = Rt×n, which is invariant to translations and equivariant to permutations, then it is of the form p = (pij)i∈[t],j∈[n], and each pi = (pij)j∈[n] is itself invariant to translations and permutation equivariant. It follows that matrix valued p can also be written in the form equation 17, the only difference being that the image of the linear functions Λk is now Rt.\nNow let p : R3×n → WnT be a G-equivariant polynomial of degree ≤ D. It remains to show that we can choose Λk to be SO(3) equivariant. We do this by a symmetrization argument: denote the Haar probability measure on SO(3) by ν, and the action of SO(3) on Wfeat and WT by ρ1 and ρ2 respectively Denote p = (pj)nj=1 and fk = (f j k) n j=1. For every j = 1, . . . , n, we use the SO(3) equivariance of pj and f j k to obtain\npj(X) = ∫ SO(3) ρ2(R −1) ◦ pj(RX)dν(R) = K∑ k=1 ∫ SO(3) ρ2(R −1) ◦ Λk ◦ fkj (RX)dν(R)\n= K∑ k=1 ∫ SO(3) ρ2(R −1) ◦ Λk ( ρ1(R) ◦ f jk(X) ) dν(R) = K∑ k=1 Λ̃k ◦ f jk(X),\n2By this we mean that the maps (g,X) 7→ ρj(g)X, j = 1, 2 are jointly continuous\nwhere Λ̃k stands for the equivariant linear functional from Wfeat to WT , defined for w ∈Wfeat by\nΛ̃k(w) = ∫ SO(3) ρ2(R −1) ◦ Λk (ρ1(R)w) dν(R).\nThus we have shown that p is in FC(Ffeat,Fpool) for C = K, as required." }, { "heading": "C PROOFS FOR SECTION 4", "text": "We prove Lemma 2 Lemma 2. For every D ∈ N+, the set QD is D-spanning.\nProof. It is known (Segol & Lipman, 2019) (Theorem 2) that polynomials p : R3×n → Rn which are Sn-equivariant, are spanned by polynomials of the form p~α = (p j ~α) n j=1, defined as\npj~α(X) = n∑ i2,...,iK=1 xα1j x α2 i2 . . . xαkik (18)\nwhere ~α = (α1, . . . , αK) and each αk ∈ N3+ is a multi-index. It follows that Sn-equivariant polynomials of degree ≤ D are spanned by polynomials of the form pj~α where ∑K k=1 |αk| ≤ D. Denoting rk = |αk|, k = 1, . . .K, the sum of all rk by T , and ~r = (rk)Kk=1, we see that there exists a linear functional Λ~α,~r : TT → R such that\npj~α(X) = Λ~α,~r ◦Q ~r j(X) where we recall that Q~r = ( Q\n(~r) j (X) )n j=1 is defined in equation 6 as\nQ (~r) j (X) = n∑ i2,...,iK=1 x⊗r1j ⊗ x ⊗r2 i2 ⊗ x⊗r3i3 ⊗ . . 
.⊗ x ⊗rK iK .\nThus polynomials p = (pj)nj=1 which are of degree ≤ D, and are Sn equivariant, can be written as pj(X) = ∑ T≤D ∑ ~r∈ΣT ∑ ~α||αk|=rk Λ~α,~r ( Q (~r) j (X) ) = ∑ T≤D ∑ ~r∈ΣT Λ~r ( ι ◦Q(~r)j (X) ) , j = 1, . . . , n,\nwhere Λ~r = ∑ ~α||αk|=rk Λ~α,~r ◦ ι −1 T , and ι −1 T is the left inverse of the embedding ι. If p is also translation invariant, then\np(X) = p(X − 1 n X1n1 T n ) = ∑ T≤D ∑ ~r∈ΣT Λ̂~r ( ι ◦Q(~r)(X − 1 n X1n1 T n ) ) .\nThus QD is D-spanning.\nWe prove Lemma 3 Lemma 3. The function set QD is contained in\nFfeat = {ι ◦ πV ◦ f1 ◦ f2 ◦ . . . ◦ fT ◦ ext(X − 1\nn X1n1\nT n )| f j ∈ Fmin, T ≤ D}. (11)\nThus Ffeat is D-spanning.\nProof. In this proof we make the dependence of Ffeat on D explicit and denote Ffeat(D). We prove the claim by induction onD. AssumeD = 0. ThenQ0 contains only the constant function X 7→ 1n ∈ T n0 , and this is precisely the function πV ◦ ext ∈ Ffeat(0). Now assume the claim holds for all D′ with D − 1 ≥ D′ ≥ 0, and prove the claim for D. Choose ~r = (r1, . . . , rk) ∈ ΣT for some T ≤ D, we need to show that the function Q(~r) is in Ffeat(D). Since Ffeat(D − 1) ⊆ Ffeat(D) we know from the induction hypothesis that this is true if T < D. Now assume T = D. We consider two cases:\n1. If r1 > 0, we set r̃ = (r1 − 1, r2, . . . , rK). We know that ι ◦Q(r̃)(X̄) ∈ Ffeat(D − 1) by the induction hypothesis. So there exist f2, . . . , fD such that\nι ◦ πV ◦ f2 ◦ . . . ◦ fD ◦ ext(X̄) = ι ◦Q(r̃)(X̄). (19)\nNow choose f1 ∈ Fmin to be the function whose V coordinate Ṽ = (Ṽj)nj=1, is given by Ṽj(X,V ) = xj ⊗ Vj , obtained by setting θ1 = 1, θ2 = 0 in equation 9. Then , we have\nṼj(X̄,Q (r̃)(X̄)) = n∑ i2,...,iK=1 x̄j ⊗ x̄⊗(r1−1)j ⊗ x̄ ⊗r2 i2 ⊗ . . .⊗ x̄⊗rKiK\n= Q (~r) j (X̄).\nand so\nι ◦ πV ◦ f1 ◦ f2 ◦ . . . ◦ fD ◦ ext(X − 1\nn X1n1\nT n ) = ι ◦Q(~r)(X̄). (20)\nand ι ◦Q(~r)(X − 1nX1n1 T n ) ∈ Ffeat(D).\n2. If r1 = 0. We assume without loss of generality that r2 > 0. Set r̃ = (r2 − 1, r3, . . . , rK). As before by the induction hypothesis there exist f2, . . . , fD which satisfy equation 19. This time we choose f1 ∈ Fmin to be the function whose V coordinate Ṽ = (Ṽj)nj=1, is given by Ṽj(X,V ) = ∑ j xj ⊗ Vj , obtained by setting θ1 = 0, θ2 = 1 in equation 9. Then\nwe have\nṼj(X̄,Q (r̃)(X̄)) = n∑ j=1 n∑ i3,...,iK=1 x̄j ⊗ x̄⊗(r2−1)j ⊗ x̄ ⊗r2 i3 ⊗ . . .⊗ x̄⊗rKiK\n= n∑ i2,i3,...,iK=1 x̄⊗r2i2 ⊗ x̄ ⊗r2 i3 ⊗ . . .⊗ x̄⊗rKiK\n= Q (~r) j (X̄).\nThus equation 20 holds, and so again we have that ι ◦Q(~r)(X − 1nX1n1 T n ) ∈ Ffeat(D).\nFinally we prove Lemma 4\nLemma 4. If all functions in QD can be written as\nι ◦Q(~r)(X − 1 n X1n1 T n ) = K∑ k=1 Âkfk(X),\nwhere fk ∈ Ffeat, Ak : Wfeat → W Tfeat and Âk : Wnfeat → (W Tfeat)n is defined by elementwise application of Ak, then Ffeat is D-spanning.\nProof. If the conditions in Lemma 4 hold, then sinceQD is D-spanning, every translation invariant and permutation equivariant polynomials p of degree D can be written as\np(X) = ∑\n~r|‖~r‖1≤D\nΛ̂~r\n( ι ◦Q(~r)(X − 1\nn X1n1\nT n )\n) = ∑ ~r|‖~r‖1≤D Λ̂~r ( K~r∑ k=1 ι ◦ Âk,~rfk,~r(X) )\n= ∑\n~r|‖~r‖1≤D\nK~r∑ k=1 Λ̂k,~r (fk,~r(X))\nwhere we denote Λk,~r = Λ~r ◦ ι ◦Ak,~r. Thus we proved Ffeat is D-spanning." }, { "heading": "D PROOFS FOR SECTION 5", "text": "We prove Lemma 5\nLemma 5. Let l(1) = (`(1)1 , . . . , ` (1) K1 ) and l(2) = (`(2)1 , . . . , ` (2) K2\n). A function Λ = (Λ1, . . . 
,ΛK2) is a linear equivariant mapping between Wl(1) and Wl(2) , if and only if there exists a K1 ×K2 matrix M with Mij = 0 whenever ` (1) i 6= ` (2) j , such that\nΛj(V ) = K1∑ i=1 MijVi (12)\nwhere V = (Vi)K1i=1 and Vi ∈W`(1)i for all i = 1, . . . ,K1.\nProof. As mentioned in the main text, this lemma is based on Schur’s lemma. This lemma is typically stated for complex representations, but holds for odd dimensional real representation as well. We recount the lemma and its proof here for completeness (see also (Fulton & Harris, 2013)).\nLemma D.1 (Schur’s Lemma for SO(3)). Let Λ : W`1 → W`2 be a linear equivariant map. If `1 6= `2 then Λ = 0. Otherwise Λ is a scalar multiply of the identity.\nProof. Let Λ : W`1 → W`2 be a linear equivariant map. The image and kernel of Λ are invariant subspaces of W`1 and W`2 , respectively. It follows that if Λ 6= 0 then Λ is a linear isomorphism so necessarily `1 = `2. Now assume `1 = `2. Since the dimension of W`1 is odd, Λ has a real eigenvalue λ. The linear function Λ − λI is equivariant and has a non-trivial kernel, so Λ − λI = 0.\nWe now return to the proof of Lemma 5. Note that each Λj : Wl(1) → W`(2)j is linear and SO(3) equivariant. Next denote the restrictions of each Λj to W`(1)i , i = 1, . . . ,K2 by Λij , and note that\nΛj(V1, . . . , VK1) = K1∑ i=1 Λij(Vi). (21)\nBy considering vectors in Wl(1) of the form (0, . . . , 0, Vi, 0 . . . , 0) we see that each Λij : W`(1)i →\nW ` (2) j\nis linear and SO(3)-equivariant. Thus by Schur’s lemma, if `(1)i = ` (2) j then Λij(Vi) = MijVi\nfor some real Mij , and otherwise Mij = 0. Plugging this into equation 21 we obtain equation 12.\nWe prove Theorem 2 which shows that the TFN network described in the main text is universal:\nTheorem 2. For all n ∈ N, lT ∈ N∗+,\n1. ForD ∈ N+, everyG-equivariant polynomial p : R3×n →WnT of degreeD is inFTFNC(D),D.\n2. Every continuous G-equivariant function can be approximated uniformly on compact sets by functions in ∪D∈N+FTFNC(D),D\nProof. As mentioned in the main text, we only need to show that the function space Ffeat(D) is D-spanning. Recall that Ffeat(D) is obtained by 2D consecutive convolutions with D-filters. In general, we denote the space of functions defined by applying J consecutive convolutions by GJ,D.\nIf Y is a space of functions from R3×n → Y n, we denote by 〈Y, TT 〉 the space of all functions p : R3×n → T nT of the form\np(X) = K∑ k=1 Âkfk(X), (22)\nwhere Ak : Y → TT are linear functions, Âk : Y n → T nT are induced by elementwise application, and fk ∈ Y . This notation is useful because: (i) by Lemma 4 it is sufficient to show that Q(~r)(X̄) is in 〈G2D,D, TT 〉 for all ~r ∈ ΣT and all T ≤ D, and because (ii) it enables comparison of the expressive power of function spaces Y1,Y2 whose elements map to different spaces Y n1 , Y n2 , since the elements in 〈Yi, TT 〉, i = 1, 2 both map to the same space. In particular, note that if for every f ∈ Y2 there is a g ∈ Y1 and a linear map A : Y1 → Y2 such that f(X) = Â ◦ g(X), then 〈Y2, TT 〉 ⊆ 〈Y1, TT 〉. We now use this abstract discussion to prove some useful results: the first is that for the purpose of this lemma, we can ‘forget about’ the multiplication by a unitary matrix in equation 14, used for decomposition into irreducible representations: To see this, denote by G̃J,D the function space obtained by taking J consecutive convolutions withD-filters without multiplying by a unitary matrix in equation 14. 
Since Kronecker products of unitary matrices are unitary matrices, we obtain that the elements of G_{J,D} and G̃_{J,D} differ only by multiplication by a unitary matrix, and thus ⟨G̃_{J,D}, T_T⟩ ⊆ ⟨G_{J,D}, T_T⟩ and ⟨G_{J,D}, T_T⟩ ⊆ ⟨G̃_{J,D}, T_T⟩, so both sets are equal. Next, we prove that adding convolutional layers (enlarging J) or taking higher-order filters (enlarging D) can only increase the expressive power of a network.
Lemma D.2. For all J, D, T ∈ N_+:
1. ⟨G_{J,D}, T_T⟩ ⊆ ⟨G_{J+1,D}, T_T⟩.
2. ⟨G_{J,D}, T_T⟩ ⊆ ⟨G_{J,D+1}, T_T⟩.
Proof. The first claim follows from the fact that every function f in ⟨G_{J,D}, T_T⟩ can be identified with a function in ⟨G_{J+1,D}, T_T⟩ by taking the (J+1)-th convolutional layer in Equation 14 with θ_0 = 1, F = 0.
The second claim follows from the fact that D-filters can be identified with (D+1)-filters whose (D+1)-th entry is 0.
The last preliminary lemma we will need is:
Lemma D.3. For every J, D ∈ N_+ and every t, s ∈ N_+, if p ∈ ⟨G_{J,D}, T_t⟩, then the function q defined by
q_a(X) = Σ_{b=1}^n (x̄_a − x̄_b)^{⊗s} ⊗ p_b(X)
is in ⟨G_{J+1,D}, T_{t+s}⟩.
Proof. This lemma is based on the fact that the space of s-homogeneous polynomials on R³ is spanned by polynomials of the form ‖x‖^{s−ℓ} Y^ℓ_m(x) for ℓ = s, s−2, s−4, ... (Dai & Xu, 2013). For each such ℓ, and for s ≤ D, these polynomials can be realized by filters F^{(ℓ)} by setting R^{(ℓ)}(‖x‖) = ‖x‖^s, so that
F^{(ℓ)}_m(x) = ‖x‖^s Y^ℓ_m(x̂) = ‖x‖^{s−ℓ} Y^ℓ_m(x).
For every D ∈ N and s ≤ D, we can construct a D-filter F^{s,D} = (F^{(0)}, ..., F^{(D)}) where F^{(s)}, F^{(s−2)}, ... are as defined above and the other filters are zero. Since both the entries of F^{s,D}(x) and the entries of x^{⊗s} span the space of s-homogeneous polynomials on R³, it follows that there exists a linear mapping B_s : W_{l_D} → T_s so that
x^{⊗s} = B_s(F^{s,D}(x)), ∀x ∈ R³. (23)
Thus, since p can be written as a sum of compositions of linear mappings with functions in G_{J,D} as in Equation 22, and similarly x^{⊗s} is obtained as a linear image of functions in G_{1,D} as in Equation 23, we deduce that
Σ_{b=1}^n (x_a − x_b)^{⊗s} ⊗ p_b(X) = Σ_{b=1}^n (x̄_a − x̄_b)^{⊗s} ⊗ p_b(X)
is in ⟨G_{J+1,D}, T_{t+s}⟩.
As a final preliminary, we note that D-filters can perform an averaging operation by setting R^{(0)} = 1 and θ_0 = 0, R^{(1)} = ... = R^{(D)} = 0 in Equations 13 and 14. We call this D-filter an averaging filter.
We are now ready to prove our claim: we need to show that for every D, T ∈ N_+ with T ≤ D, and for every ~r ∈ Σ_T, the function Q^{(~r)} is in ⟨G_{2D,D}, T_T⟩. Note that due to the inclusion relations in Lemma D.2 it is sufficient to prove this for the case T = D. We prove this by induction on D. For D = 0, the vector ~r ∈ Σ_0 contains only zeros, and so
Q^{(~r)}(X̄) = 1_n = π_V∘ext(X) ∈ ⟨G_{0,0}, T_0⟩.
We now assume the claim is true for all D′ with D > D′ ≥ 0 and prove the claim is true for D. We need to show that for every ~r ∈ Σ_D the function Q^{(~r)} is in ⟨G_{2D,D}, T_D⟩. We prove this yet again by induction, this time on the value of r_1. First assume that ~r ∈ Σ_D and r_1 = 0. Denote by r̃ the vector in Σ_{D−1} defined by r̃ = (r_2 − 1, r_3, ..., r_K). By the induction assumption on D, we know that Q^{(r̃)}(X̄) ∈ ⟨G_{2(D−1),D−1}, T_{D−1}⟩, and so
q_a(X) = Σ_{b=1}^n (x̄_a − x̄_b) ⊗ Q^{(r̃)}_b(X̄) = Σ_{b=1}^n (x̄_a − x̄_b) ⊗ x̄_b^{⊗(r_2−1)} ⊗ Σ_{i_3,...,i_K=1}^n x̄_{i_3}^{⊗r_3} ⊗ ... ⊗ x̄_{i_K}^{⊗r_K} = ( x̄_a ⊗ Σ_{b=1}^n Q^{(r̃)}_b(X̄) ) − Q^{(~r)}(X̄)
is in ⟨G_{2D−1,D−1}, T_D⟩ by Lemma D.3, which is contained in ⟨G_{2D−1,D}, T_D⟩ by Lemma D.2.
Since x̄_a has zero mean, while Q^{(~r)}_a(X̄) does not depend on a (because r_1 = 0), applying an averaging filter to q_a gives us the constant value −Q^{(~r)}_a(X̄) in each coordinate a ∈ [n], and so Q^{(~r)}(X̄) is in ⟨G_{2D,D}, T_D⟩. Now assume the claim is true for all ~r ∈ Σ_D which sum to D and whose first coordinate is smaller than some r′_1 ≥ 1; we now prove the claim when the first coordinate of ~r is equal to r′_1. The vector r̃ = (r_2, ..., r_K), obtained from ~r by removing the first coordinate, sums to D′ = D − r′_1 < D, and so by the induction hypothesis on D we know that Q^{(r̃)} ∈ ⟨G_{2D′,D′}, T_{D′}⟩. By Lemma D.3 we obtain a function q_a ∈ ⟨G_{2D′+1,D′}, T_D⟩ ⊆ ⟨G_{2D,D}, T_D⟩ defined by
q_a(X) = Σ_{b=1}^n (x̄_a − x̄_b)^{⊗r_1} ⊗ Q^{(r̃)}_b(X̄) = Σ_{b=1}^n (x̄_a − x̄_b)^{⊗r_1} ⊗ x̄_b^{⊗r_2} ⊗ Σ_{i_3,...,i_K=1}^n x̄_{i_3}^{⊗r_3} ⊗ ... ⊗ x̄_{i_K}^{⊗r_K} = Q^{(~r)}_a(X̄) + additional terms,
where the additional terms are linear combinations of functions of the form P_D Q^{(r′)}_a(X̄), where r′ ∈ Σ_D has first coordinate smaller than r′_1, and P_D : T_D → T_D is a permutation. By the induction hypothesis on r_1, each such Q^{(r′)} is in ⟨G_{2D,D}, T_D⟩. It follows that P_D Q^{(r′)}_a(X̄), a = 1, ..., n, and thus Q^{(~r)}(X̄), are in ⟨G_{2D,D}, T_D⟩ as well. This concludes the proof of Theorem 2." }, { "heading": "E ALTERNATIVE TFN ARCHITECTURE", "text": "In this appendix we show that, replacing the standard TFN convolutional layer with the layer defined in Equation 15,
Ṽ_a(X, V) = U(l_f, l_i) ( θ_1 Σ_{b=1}^n F(x_a − x_b) ⊗ V_b + θ_2 Σ_{b=1}^n F(x_a − x_b) ⊗ V_a ),
we can obtain D-spanning networks using 2D consecutive convolutions with 1-filters (that is, filters in W_{l_1}, where l_1 = [0, 1]^T). Our discussion here is somewhat informal, meant to provide the general ideas without delving into the details as we have done for the standard TFN architecture in the proof of Theorem 2. At the end of our discussion we explain what is necessary to make this argument completely rigorous.
We will only need two fixed filters for our argument here. The first is the 1-filter F_Id = (F^{(0)}, F^{(1)}), defined by setting R^{(0)}(‖x‖) = 0 and R^{(1)}(‖x‖) = ‖x‖, to obtain
F_Id(x) = ‖x‖ Y^1(x̂) = ‖x‖ x̂ = x.
The second is the filter F_1, defined by setting R^{(0)}(‖x‖) = 1 and R^{(1)}(‖x‖) = 0, so that
F_1(x) = 1.
We prove our claim by showing that a pair of convolutions with 1-filters can construct any convolutional layer defined in Equation 9 for the D-spanning architecture using tensor representations. The claim then follows from the fact that D convolutions of the latter architecture suffice for achieving D-spanning, as shown in Lemma 3.
Convolutions for tensor representations, defined in Equation 9, are composed of two terms:
Ṽ^{tensor,1}_a(X̄, V) = x̄_a ⊗ V_a and Ṽ^{tensor,2}_a(X̄, V) = Σ_{b=1}^n x̄_b ⊗ V_b.
To obtain the first term Ṽ^{tensor,1}_a, we set θ_1 = 0, θ_2 = 1/n, F = F_Id in Equation 15 and obtain (the decomposition into irreducibles of) Ṽ^{tensor,1}_a(X̄, V) = x̄_a ⊗ V_a. Thus this term can in fact be expressed by a single convolution. We can leave this outcome unchanged by a second convolution, defined by setting θ_1 = 0, θ_2 = 1/n, F = F_1.
To obtain the second term Ṽ^{tensor,2}_a, we apply a first convolution with θ_1 = −1, F = F_Id, θ_2 = 0, to obtain
Σ_{b=1}^n (x_b − x_a) ⊗ V_b = Σ_{b=1}^n (x̄_b − x̄_a) ⊗ V_b = Ṽ^{tensor,2}_a(X̄, V) − x̄_a ⊗ Σ_{b=1}^n V_b.
By applying an additional averaging filter, defined by setting θ_1 = 1/n, F = F_1, θ_2 = 0, we obtain Ṽ^{tensor,2}_a(X̄, V).
This concludes our 'informal proof'.
Our discussion here has been somewhat inaccurate, since in practice F_Id(x) = (0, x) ∈ W_0 ⊕ W_1 and F_1(x) = (1, 0) ∈ W_0 ⊕ W_1. Moreover, in our proof we have glossed over the multiplication by the unitary matrix used to obtain the decomposition into irreducible representations. However, the ideas discussed here can be used to show that 2D convolutions with 1-filters satisfy the sufficient condition for D-spanning defined in Lemma 4. See our treatment of Theorem 2 for more details." }, { "heading": "F COMPARISON WITH ORIGINAL TFN PAPER", "text": "In this appendix we discuss three superficial differences between the presentation of the TFN architecture in Thomas et al. (2018) and our presentation here:
1. We define convolutional layers between features residing in direct sums of irreducible representations, while Thomas et al. (2018) focus on features which inhabit a single irreducible representation. This difference is non-essential, as direct sums of irreducible representations can be represented as multiple channels where each feature inhabits a single irreducible representation.
2. The term θ_0 V_a in Equation 14 appears in Fuchs et al. (2020), but does not appear explicitly in Thomas et al. (2018). However, it can be obtained by concatenating the input of a self-interaction layer to its output and then applying a self-interaction layer.
3. We take the scalar functions R^{(ℓ)} to be polynomials, while Thomas et al. (2018) take them to be fully connected networks composed with radial basis functions. Using polynomial scalar bases is convenient for our presentation here since it enables exact expression of equivariant polynomials. Replacing polynomial bases with fully connected networks, we obtain approximation of equivariant polynomials instead of exact expression. It can be shown that if p is a G-equivariant polynomial which can be expressed by some network F_{C,D} defined with filters coming from a polynomial scalar basis, then p can be approximated on a compact set K, up to an arbitrary error, by a similar network with scalar functions coming from a sufficiently large fully connected network." }, { "heading": "G TENSOR UNIVERSALITY", "text": "In this section we show how to construct the complete set F_pool of linear SO(3)-invariant functionals from W_feat^T = ⊕_{T=0}^D T_T to R. Since each such functional Λ is of the form
Λ(w_0, ..., w_D) = Σ_{T=0}^D Λ_T(w_T),
where each Λ_T is SO(3)-invariant, it is sufficient to characterize all linear SO(3)-invariant functionals Λ : T_D → R. It will be convenient to denote
W = R³ and W^{⊗D} ≅ R^{3^D} = T_D.
We achieve this characterization using the bijective correspondence between linear functionals Λ : W^{⊗D} → R and multi-linear functions Λ̃ : W^D → R: each such Λ corresponds to a unique Λ̃ such that
Λ̃(e_{i_1}, ..., e_{i_D}) = Λ(e_{i_1} ⊗ ... ⊗ e_{i_D}), ∀(i_1, ..., i_D) ∈ [3]^D, (24)
where e_1, e_2, e_3 denote the standard basis elements of R³. We define a spanning set of invariant linear functionals on W^{⊗D} via a corresponding characterization for multi-linear functionals on W^D. Specifically, set K_D = {k ∈ N_+ | D − 3k is even and non-negative}. For k ∈ K_D we define a multi-linear functional
Λ̃_k(w_1, ..., w_D) = det(w_1, w_2, w_3) × ... × det(w_{3k−2}, w_{3k−1}, w_{3k}) × ⟨w_{3k+1}, w_{3k+2}⟩ × ... × ⟨w_{D−1}, w_D⟩, (25)
and for (k, σ) ∈ K_D × S_D we define
Λ̃_{k,σ}(w_1, ..., w_D) = Λ̃_k(w_{σ(1)}, ..., w_{σ(D)}). (26)
These determinant and inner-product building blocks are easy to check numerically; a small sketch follows before we state the spanning result.
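As a quick illustration (our own sketch, not part of the original appendix), the following code checks the SO(3)-invariance of one functional Λ̃_k from Equation 25, for D = 5 and k = 1. All names are ours, and JAX is used purely for convenience:

    import jax
    import jax.numpy as jnp

    def random_rotation(key):
        # QR factorization of a random 3x3 matrix gives an orthogonal q;
        # multiplying by det(q) flips the sign when det(q) = -1, which in
        # odd dimension lands the result in SO(3)
        q, _ = jnp.linalg.qr(jax.random.normal(key, (3, 3)))
        return q * jnp.linalg.det(q)

    def lam_tilde(ws):
        # Equation 25 with D = 5, k = 1: det(w1, w2, w3) * <w4, w5>
        return jnp.linalg.det(ws[:3]) * jnp.dot(ws[3], ws[4])

    ws = jax.random.normal(jax.random.PRNGKey(0), (5, 3))  # rows are w_1, ..., w_5
    R = random_rotation(jax.random.PRNGKey(1))
    print(lam_tilde(ws), lam_tilde(ws @ R.T))  # agree up to floating point error

The two printed values coincide because det(Rw_1, Rw_2, Rw_3) = det(R) det(w_1, w_2, w_3) and ⟨Rw_4, Rw_5⟩ = ⟨w_4, w_5⟩.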
Proposition 1. The space of linear invariant functionals from T_D to R is spanned by the set of linear invariant functionals λ_D = {Λ_{k,σ} | (k, σ) ∈ K_D × S_D} induced by the multi-linear functionals Λ̃_{k,σ} described in Equations 25 and 26.
We note that (i) Equation 24 provides a (cumbersome) way to compute all linear invariant functionals Λ_{k,σ} explicitly, by evaluating the corresponding Λ̃_{k,σ} on the 3^D elements of the standard basis, and (ii) the set λ_D is spanning, but is not linearly independent. For example, since ⟨w_1, w_2⟩ = ⟨w_2, w_1⟩, the space of SO(3)-invariant functionals on T_2 = W^{⊗2} is one-dimensional while |λ_2| = 2.
Proof of Proposition 1. We first show that the bijective correspondence between linear functionals Λ : W^{⊗D} → R and multi-linear functions Λ̃ : W^D → R extends to a bijective correspondence between SO(3)-invariant linear/multi-linear functionals. The action of SO(3) on W^D is defined by
ρ̃(R)(w_1, ..., w_D) = (Rw_1, ..., Rw_D).
The action ρ(R) = R^{⊗D} of SO(3) on W^{⊗D} is such that the map
(w_1, ..., w_D) ↦ w_1 ⊗ w_2 ⊗ ... ⊗ w_D
is SO(3)-equivariant. It follows that if Λ̃ and Λ satisfy Equation 24, then for all R ∈ SO(3) the same equation holds for the pair Λ̃∘ρ̃(R) and Λ∘ρ(R). Thus SO(3)-invariance of Λ̃ is equivalent to SO(3)-invariance of Λ.
Multi-linear functionals on W^D invariant to ρ̃ are a subset of the set of polynomials on W^D invariant to ρ̃. It is known (see Kraft & Procesi (2000), page 114) that all such polynomials are algebraically generated by functions of the form
det(w_{i_1}, w_{i_2}, w_{i_3}) and ⟨w_{j_1}, w_{j_2}⟩, where i_1, i_2, i_3, j_1, j_2 ∈ [D].
Equivalently, SO(3)-invariant polynomials are spanned by linear combinations of polynomials of the form
det(w_{i_1}, w_{i_2}, w_{i_3}) det(w_{i_4}, w_{i_5}, w_{i_6}) ··· ⟨w_{j_1}, w_{j_2}⟩ ⟨w_{j_3}, w_{j_4}⟩ ··· . (27)
When considering the subset of multi-linear invariant polynomials, we see that they must be spanned by polynomials as in Equation 27 in which each of w_1, ..., w_D appears exactly once. These precisely correspond to the functions in λ_D." }, { "heading": "H EXPERIMENTS", "text": "This section provides an experimental evaluation of different design choices of the TFN architecture, inspired by our theoretical analysis. We study the following questions:
1. The importance of non-linear activations. Our proof shows that using non-linear activation functions is not necessary for proving universality. Here, we empirically test the possibility of removing these layers.
2. The importance of high-dimensional irreducible representations. Our theoretical analysis shows that in order to represent or approximate high-degree polynomials, high-order representations should be used. Here, we check whether using high-order representations has practical benefits.
3. The effect of self-interaction layers. Our proof suggests that it is enough to use self-interaction linear layers at the end of the model. We empirically compare this approach with the more common approach of using self-interaction layers after each convolutional layer.
Dataset. We use the QM9 (Ramakrishnan et al., 2014) dataset for our experiments. The dataset contains 134K molecules, with 3D node positions, 5 categorical node features and 4 categorical edge features. The task is a molecule property prediction regression task.
Framework. We used PyTorch (Paszke et al., 2017) as the deep learning framework and the Deep Graph Library (DGL) (Wang et al., 2019a) as the graph learning framework. All experiments ran on NVIDIA GV100 GPUs.
Experimental setup.
We use the TFN implementation from Fuchs et al. (2020). We trained each model variant for 50 epochs on the homo target variable using an ℓ_1 loss function and the ADAM optimizer with learning rate 10^{-3}, and we report results on the test set at the final epoch, averaged over two runs. We used the default parameters and data splits from Fuchs et al. (2020).
Architecture. The architecture consists of 4 TFN convolutional layers, each followed by a linear self-interaction layer. We used 16 copies of each irreducible representation used. We used norm-based non-linearities, as in the original TFN paper (Thomas et al., 2018). These convolutional layers are followed by a max-pooling layer and two fully connected layers with 16d features in the hidden layer, where d is the maximal degree of the irreducible representations used.
Results. Table 1 and Figure 1 present the results. The main conclusions are: (1) The experiments show that, at least for this task, using non-linear activations does not improve performance. This result fits our theoretical analysis, which shows that these layers are not needed for universality. (2) Figure 1 presents a plot of error versus the representation degrees used. The plot clearly shows that using high-dimensional representations (up to order 3) improves performance, which also fits our analysis. Using representation orders higher than 3 is significantly more time consuming and was found to have little effect on the results (as in Fuchs et al. (2020)), though we believe this to be application-dependent. (3) Using self-interaction layers only at the end of the model is shown to have a marginal negative effect on the results." } ]
2021
null
SP:49ef158a8170a8002d1111080db8009d5a6419d1
[ "This paper provides a unique perspective on the implicit regularization effect of gradient descent that has been observed and studied previously. The authors point out that the discrete steps taken by the gradient descent updates means that the path followed through the optimization landscape is not that of steepest descent, but some alternate path. Thinking of GD as trying to solve the continuous time evolution equation implied by GD, they analyze the errors that the actual updates make in solving this equation. Given these errors, they construct an alternate ODE whose solution has a discretization that is precisely the GD updates (up to higher order corrections in the learning rate). Determining the loss implied by this alternative ODE gives an additional term, proportional to the norm squared of the gradient, the learning rate, and the number of parameters. This \"Implicit Regularization'' leads to flatter optimization solutions, implying a positive effect on the generalization properties of models optimized under GD.", "The authors show the discrete steps of gradient descent implicitly regularize models by penalizing trajectories that have large loss-gradients, which is called Implicit Gradient Regularization in the paper. The authors adopt a standard argument from the backward error analysis of Runge-Kutta methods to show this phenomenon. In the paper, the authors also provide some empirical results which indicate gradient descent leads to flat minima where test errors are small and solutions are robust to noisy parameter perturbations." ]
Gradient descent can be surprisingly good at optimizing deep neural networks without overfitting and without explicit regularization. We find that the discrete steps of gradient descent implicitly regularize models by penalizing gradient descent trajectories that have large loss gradients. We call this Implicit Gradient Regularization (IGR) and we use backward error analysis to calculate the size of this regularization. We confirm empirically that implicit gradient regularization biases gradient descent toward flat minima, where test errors are small and solutions are robust to noisy parameter perturbations. Furthermore, we demonstrate that the implicit gradient regularization term can be used as an explicit regularizer, allowing us to control this gradient regularization directly. More broadly, our work indicates that backward error analysis is a useful theoretical approach to the perennial question of how learning rate, model size, and parameter regularization interact to determine the properties of overparameterized models optimized with gradient descent.
[ { "affiliations": [], "name": "David G.T. Barrett" }, { "affiliations": [], "name": "Benoit Dherin" } ]
[ { "authors": [ "Alnur Ali", "J. Zico Kolter", "Ryan J. Tibshirani" ], "title": "A continuous-time view of early stopping for least squares regression", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2019 }, { "authors": [ "Alnur Ali", "Edgar Dobriban", "Ryan Tibshirani" ], "title": "The implicit regularization of stochastic gradient flow for least squares", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Sanjeev Arora", "Nadav Cohen", "Wei Hu", "Yuping Luo" ], "title": "Implicit regularization in deep matrix factorization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Sanjeev Arora", "Simon Du", "Wei Hu", "Zhiyuan Li", "Ruosong Wang" ], "title": "Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Sanjeev Arora", "Simon S Du", "Wei Hu", "Zhiyuan Li", "Russ R Salakhutdinov", "Ruosong Wang" ], "title": "On exact computation with an infinitely wide neural net", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "David Balduzzi", "Sebastien Racaniere", "James Martens", "Jakob Foerster", "Karl Tuyls", "Thore Graepel" ], "title": "The mechanics of n-player differentiable games", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Mikhail Belkin", "Daniel Hsu", "Siyuan Ma", "Soumik Mandal" ], "title": "Reconciling modern machine-learning practice and the classical bias–variance trade-off", "venue": "In Proceedings of the National Academy of Sciences,", "year": 2019 }, { "authors": [ "James Bradbury", "Roy Frostig", "Peter Hawkins", "Matthew James Johnson", "Chris Leary", "Dougal Maclaurin", "Skye Wanderman-Milne" ], "title": "JAX: composable transformations of Python+NumPy programs, 2018", "venue": "URL http://github.com/google/jax", "year": 2018 }, { "authors": [ "Yuan Cao", "Quanquan Gu" ], "title": "Generalization bounds of stochastic gradient descent for wide and deep neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Pratik Chaudhari", "Stefano Soatto" ], "title": "Stochastic gradient descent performs variational inference, converges to limit cycles for deep networks", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Lenaic Chizat", "Francis Bach" ], "title": "A note on lazy training in supervised differentiable programming", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Soham De", "Samuel Smith" ], "title": "Batch normalization biases residual blocks towards the identity function in deep networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Harris Drucker", "Yann Le Cun" ], "title": "Double backpropagation increasing generalization performance", "venue": "In International Joint Conference on Neural Networks,", "year": 1992 }, { "authors": [ "Yuanyuan Feng", "Tingran Gao", "Lei Li", "Jian-guo Liu", "Yulong Lu" ], "title": "Uniform-in-time weak error analysis for stochastic gradient descent algorithms via diffusion approximation", "venue": "Communications in Mathematical Sciences,", "year": 2020 }, { "authors": [ "Guilherme França", "Michael I. 
Jordan", "René Vidal" ], "title": "On dissipative symplectic integration with applications to gradient-based optimization", "venue": null, "year": 2004 }, { "authors": [ "Mario Geiger", "Arthur Jacot", "Stefano Spigler", "Franck Gabriel", "Levent Sagun", "Stéphane d’Ascoli", "Giulio Biroli", "Clément Hongler", "Matthieu Wyart" ], "title": "Scaling description of generalization with number of parameters in deep learning", "venue": "Journal of Statistical Mechanics: Theory and Experiment,", "year": 2020 }, { "authors": [ "Gauthier Gidel", "Francis Bach", "Simon Lacoste-Julien" ], "title": "Implicit regularization of discrete gradient dynamics in linear neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Xavier Glorot", "Y. Bengio" ], "title": "Understanding the difficulty of training deep feedforward neural networks", "venue": "Journal of Machine Learning Research,", "year": 2010 }, { "authors": [ "Suriya Gunasekar", "Blake E Woodworth", "Srinadh Bhojanapalli", "Behnam Neyshabur", "Nati Srebro" ], "title": "Implicit regularization in matrix factorization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Suriya Gunasekar", "Jason Lee", "Daniel Soudry", "Nathan Srebro" ], "title": "Characterizing implicit bias in terms of optimization geometry", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Ernst Hairer", "Christian Lubich" ], "title": "The life-span of backward error analysis for numerical integrators", "venue": "Numerische Mathematik,", "year": 1997 }, { "authors": [ "Ernst Hairer", "Christian Lubich", "Gerhard Wanner" ], "title": "Geometric numerical integration, volume 31", "venue": "Springer-Verlag, Berlin, second edition,", "year": 2006 }, { "authors": [ "Moritz Hardt", "Ben Recht", "Yoram Singer" ], "title": "Train faster, generalize better: Stability of stochastic gradient descent", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Tom Hennigan", "Trevor Cai", "Tamara Norman", "Igor" ], "title": "Babuschkin. 
Haiku: Sonnet for JAX", "venue": "http://github.com/deepmind/dm-haiku", "year": 2020 }, { "authors": [ "Arthur Jacot", "Franck Gabriel", "Clement Hongler" ], "title": "Neural tangent kernel: Convergence and generalization in neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Stanislaw Jastrzebski", "Maciej Szymczak", "Stanislav Fort", "Devansh Arpit", "Jacek Tabor", "Kyunghyun Cho", "Krzysztof Geras" ], "title": "The break-even point on optimization trajectories of deep neural networks", "venue": "In International Conference on Learning Representations,", "year": 2021 }, { "authors": [ "Ziwei Ji", "Matus Telgarsky" ], "title": "The implicit bias of gradient descent on nonseparable data", "venue": "In Proceedings of the Thirty-Second Conference on Learning Theory,", "year": 2019 }, { "authors": [ "Ryo Karakida", "Shotaro Akaho", "Shun-ichi Amari" ], "title": "Universal statistics of fisher information in deep neural networks: Mean field approach", "venue": "In Proceedings of Machine Learning Research,", "year": 2019 }, { "authors": [ "Nitish Shirish Keskar", "Dheevatsa Mudigere", "Jorge Nocedal", "Mikhail Smelyanskiy", "Ping Tak Peter Tang" ], "title": "On large-batch training for deep learning: Generalization gap and sharp minima", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Jaehoon Lee", "Lechao Xiao", "Samuel Schoenholz", "Yasaman Bahri", "Roman Novak", "Jascha Sohl-Dickstein", "Jeffrey Pennington" ], "title": "Wide neural networks of any depth evolve as linear models under gradient descent", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Aitor Lewkowycz", "Yasaman Bahri", "Ethan Dyer", "Jascha Sohl-Dickstein", "Guy Gur-Ari" ], "title": "The large learning rate phase of deep learning: the catapult mechanism", "venue": null, "year": 2020 }, { "authors": [ "Hao Li", "Zheng Xu", "Gavin Taylor", "Christoph Studer", "Tom Goldstein" ], "title": "Visualizing the loss landscape of neural nets", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Qianxiao Li", "Cheng Tai", "Weinan E" ], "title": "Stochastic modified equations and adaptive stochastic gradient algorithms", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Yuanzhi Li", "Yingyu Liang" ], "title": "Learning overparameterized neural networks via stochastic gradient descent on structured data", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Yuanzhi Li", "Colin Wei", "Tengyu Ma" ], "title": "Towards explaining the regularization effect of initial large learning rate in training neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Tengyuan Liang", "Alexander Rakhlin" ], "title": "Just interpolate: Kernel \"ridgeless\" regression can generalize", "venue": "The Annals of Statistics,", "year": 2018 }, { "authors": [ "Henry Lin", "Max Tegmark" ], "title": "Why does deep and cheap learning work so well", "venue": "Journal of Statistical Physics,", "year": 2016 }, { "authors": [ "Chaoyue Liu", "Libin Zhu", "Mikhail Belkin" ], "title": "Toward a theory of optimization for over-parameterized systems of non-linear equations: the lessons of deep learning", "venue": null, "year": 2020 }, { "authors": [ "Chao Ma", "Lei Wu", "Weinan E" ], "title": "The quenching-activation behavior
of the gradient descent dynamics for two-layer neural network models", "venue": null, "year": 2020 }, { "authors": [ "Stephan Mandt", "Matthew D. Hoffman", "David M. Blei" ], "title": "Stochastic gradient descent as approximate bayesian inference", "venue": "Journal of Machine Learning Research,", "year": 2017 }, { "authors": [ "Lars Mescheder", "Sebastian Nowozin", "Andreas Geiger" ], "title": "The numerics of gans", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Ari S. Morcos", "David G.T. Barrett", "Neil C. Rabinowitz", "Matthew Botvinick" ], "title": "On the importance of single directions for generalization", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Mor Shpigel Nacson", "Nathan Srebro", "Daniel Soudry" ], "title": "Stochastic gradient descent on separable data: Exact convergence with a fixed learning rate", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2019 }, { "authors": [ "Vaishnavh Nagarajan", "J Zico Kolter" ], "title": "Gradient descent gan optimization is locally stable", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Vaishnavh Nagarajan", "J. Zico Kolter" ], "title": "Generalization in deep networks: The role of distance from initialization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Behnam Neyshabur", "Ryota Tomioka", "Nathan Srebro" ], "title": "In search of the real inductive bias: On the role of implicit regularization in deep learning", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Samet Oymak", "Mahdi Soltanolkotabi" ], "title": "Overparameterized nonlinear learning: Gradient descent takes the shortest path", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Daniel S. Park", "Jascha Sohl-Dickstein", "Quoc V. Le", "Sam Smith" ], "title": "The effect of network width on stochastic gradient descent and generalization: an empirical study", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Tomaso A. Poggio", "Andrzej Banburski", "Qianli Liao" ], "title": "Theoretical issues in deep networks: Approximation, optimization and generalization", "venue": null, "year": 2019 }, { "authors": [ "Chongli Qin", "Yan Wu", "Jost Tobias Springenberg", "Andrew Brock", "Jeff Donahue", "Timothy P. Lillicrap", "Pushmeet Kohli" ], "title": "Training generative adversarial networks by solving ordinary differential equations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Noam Razin", "Nadav Cohen" ], "title": "Implicit regularization in deep learning may not be explainable by norms", "venue": "In Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Daniel A. Roberts" ], "title": "Sgd implicitly regularizes generalization error", "venue": "NIPS", "year": 2018 }, { "authors": [ "Levent Sagun", "Utku Evci", "V. Ugur Guney", "Yann Dauphin", "Leon Bottou" ], "title": "Empirical analysis of the hessian of over-parametrized neural networks", "venue": null, "year": 2017 }, { "authors": [ "Damien Scieur", "Vincent Roulet", "Francis Bach", "Alexandre d'Aspremont" ], "title": "Integration methods and optimization algorithms", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Leslie N.
Smith" ], "title": "Cyclical learning rates for training neural networks", "venue": "In Winter Conference on Applications of Computer Vision,", "year": 2017 }, { "authors": [ "Sam Smith", "Quoc V. Le" ], "title": "A bayesian perspective on generalization and stochastic gradient descent", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Daniel Soudry", "Elad Hoffer", "Mor Shpigel Nacson", "Suriya Gunasekar", "Nathan Srebro" ], "title": "The implicit bias of gradient descent on separable data", "venue": "Journal of Machine Learning Research,", "year": 2018 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: A simple way to prevent neural networks from overfitting", "venue": "Journal of Machine Learning Research,", "year": 2014 }, { "authors": [ "Arun Suggala", "Adarsh Prasad", "Pradeep K Ravikumar" ], "title": "Connecting optimization and regularization paths", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Enzo Tartaglione", "Skjalg Lepsøy", "Attilio Fiandrotti", "Gianluca Francini" ], "title": "Learning sparse neural networks via sensitivity-driven regularization", "venue": "In International Conference on Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "James H. Wilkinson" ], "title": "Error analysis of floating-point computation", "venue": "Numerische Mathematik,", "year": 1960 }, { "authors": [ "Ashia C Wilson", "Rebecca Roelofs", "Mitchell Stern", "Nati Srebro", "Benjamin Recht" ], "title": "The marginal value of adaptive gradient methods in machine learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Blake Woodworth", "Suriya Gunasekar", "Jason Lee", "Edward Moroshko", "Pedro Savarese", "Itay Golan", "Daniel Soudry", "Nathan Srebro" ], "title": "Kernel and rich regimes in overparametrized models", "venue": "In Proceedings of Thirty Third Conference on Learning Theory,", "year": 2020 }, { "authors": [ "Jingzhao Zhang", "Aryan Mokhtari", "Suvrit Sra", "Ali Jadbabaie" ], "title": "Direct runge-kutta discretization achieves acceleration", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Yaoyu Zhang", "Zhiqin Xu", "Tao Luo", "Zheng Ma" ], "title": "A type of generalization error induced by initialization in deep neural networks", "venue": "Proceedings of Machine Learning Research,", "year": 2019 }, { "authors": [ "Difan Zou", "Yuan Cao", "Dongruo Zhou", "Quanquan Gu" ], "title": "Gradient descent optimizes overparameterized deep relu networks", "venue": "Machine Learning,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "The loss surface of a deep neural network is a mountainous terrain - highly non-convex with a multitude of peaks, plateaus and valleys (Li et al., 2018; Liu et al., 2020). Gradient descent provides a path through this landscape, taking discrete steps in the direction of steepest descent toward a sub-manifold of minima. However, this simple strategy can be just as hazardous as it sounds. For small learning rates, our model is likely to get stuck at the local minima closest to the starting point, which is unlikely to be the most desirable destination. For large learning rates, we run the risk of ricocheting between peaks and diverging. However, for moderate learning rates, gradient descent seems to move away from the closest local minima and move toward flatter regions where test data errors are often smaller (Keskar et al., 2017; Lewkowycz et al., 2020; Li et al., 2019). This phenomenon becomes stronger for larger networks, which also tend to have a smaller test error (Arora et al., 2019a; Belkin et al., 2019; Geiger et al., 2020; Liang & Rakhlin, 2018; Soudry et al., 2018). In addition, models with low test errors are more robust to parameter perturbations (Morcos et al., 2018). Overall, these observations contribute to an emerging view that there is some form of implicit regularization in gradient descent and several sources of implicit regularization have been identified.\nWe have found a surprising form of implicit regularization hidden within the discrete numerical flow of gradient descent. Gradient descent iterates in discrete steps along the gradient of the loss, so after each step it actually steps off the exact continuous path that minimizes the loss at each point. Instead of following a trajectory down the steepest local gradient, gradient descent follows a shallower path. We show that this trajectory is closer to an exact path along a modified loss surface, which can be calculated using backward error analysis from numerical integration theory (Hairer et al., 2006). Our core idea is that the discrepancy between the original loss surface and this modified loss surface is a form of implicit regularization (Theorem 3.1, Section 3).\nWe begin by calculating the discrepancy between the modified loss and the original loss using backward error analysis and find that it is proportional to the second moment of the loss gradients, which we call Implicit Gradient Regularization (IGR). Using differential geometry, we show that IGR is also proportional to the square of the loss surface slope, indicating that it encourages optimization paths with shallower slopes and optima discovery in flatter regions of the loss surface. Next, we\n∗equal contribution\nexplore the properties of this regularization in deep neural networks such as MLP’s trained to classify MNIST digits and ResNets trained to classify CIFAR-10 images and in a tractable two-parameter model. In these cases, we verify that IGR effectively encourages models toward minima in the vicinity of small gradient values, in flatter regions with shallower slopes, and that these minima have low test error, consistent with previous observations. We find that IGR can account for the observation that learning rate size is correlated with test accuracy and model robustness. Finally, we demonstrate that IGR can be used as an explicit regularizer, allowing us to directly strengthen this regularization beyond the maximum possible implicit gradient regularization strength." 
}, { "heading": "2 THE MODIFIED LOSS LANDSCAPE INDUCED BY GRADIENT DESCENT", "text": "The general goal of gradient descent is to find a weight vector θ̂ in parameter space Rm that minimizes a loss E(θ). Gradient descent proceeds by iteratively updating the model weights with learning rate h in the direction of the steepest loss gradient:\nθn+1 = θn − h∇θE(θn) (1) Now, even though gradient descent takes steps in the direction of the steepest loss gradient, it does not stay on the exact continuous path of the steepest loss gradient, because each iteration steps off the exact continuous path. Instead, we show that gradient descent follows a path that is closer to the exact continuous path given by θ̇ = −∇θẼ(θ), along a modified loss Ẽ(θ), which can be calculated analytically using backward error analysis (see Theorem 3.1 and Section 3), yielding:\nẼ(θ) = E(θ) + λRIG(θ), (2) where\nλ ≡ hm 4\n(3)\nand\nRIG(θ) ≡ 1\nm m∑ i=1 (∇θiE(θ)) 2 (4)\nImmediately, we see that this modified loss is composed of the original training loss E(θ) and an additional term, which we interpret as a regularizer RIG(θ) with regularization rate λ. We call RIG(θ) the implicit gradient regularizer because it penalizes regions of the loss landscape that have large gradient values, and because it is implicit in gradient descent, rather than being explicitly added to our loss.\nDefinition. Implicit gradient regularization is the implicit regularisation behaviour originating from the use of discrete update steps in gradient descent, as characterized by Equation 2.\nWe can now make several predictions about IGR which we will explore in experiments: Prediction 2.1. IGR encourages smaller values of RIG(θ) relative to the loss E(θ).\nGiven Equation 2 and Theorem 3.1, we expect gradient descent to follow trajectories that have relatively small values of RIG(θ). It is already well known that gradient descent converges by reducing the loss gradient so it is important to note that this prediction describes the relative size of RIG(θ) along the trajectory of gradient descent. To expose this phenomena in experiments, great care must be taken when comparing different gradient descent trajectories. For instance, in our deep learning experiments, we compare models at the iteration time of maximum test accuracy (and we consider other controls in the appendix), which is an important time point for practical applications and is not trivially determined by the speed of learning (Figures 1, 2). Also, related to this, since the regularization rate λ is proportional to the learning rate h and network size m (Equation 3), we expect that larger models and larger learning rates will encourage smaller values of RIG(θ) (Figure 2). Prediction 2.2. IGR encourages the discovery of flatter optima.\nIn section 3 we will show that RIG(θ) is proportional to the square of the loss surface slope. Given this and Prediction 2.1, we expect that IGR will guide gradient descent along paths with shallower loss surface slopes, thereby encouraging the discovery of flatter, broader optima. Of course, it is possible to construct loss surfaces at odds with this (such as a Mexican-hat loss surface, where all minima are equally flat). However, we will provide experimental support for this using loss surfaces that are of widespread interest in deep learning, such as MLPs trained on MNIST (Figure 1, 2, 3).\nPrediction 2.3. 
Prediction 2.3. IGR encourages higher test accuracy.
Given Prediction 2.2, we predict that IGR encourages higher test accuracy, since flatter minima are known empirically to coincide with higher test accuracy (Figure 2).
Prediction 2.4. IGR encourages the discovery of optima that are more robust to parameter perturbations.
There are several important observations to make about the properties of IGR: 1) It does not originate in any specific model architecture or initialization, although our analysis does provide a formula to explain the influence of these model properties through IGR; 2) Other sources of implicit regularization also have an impact on learning, alongside IGR, and the relative importance of these contributions will likely depend on model architecture and initialization; 3) In defining λ and R_IG we chose to set λ proportional to the number of parameters m. To support this choice, we demonstrate in experiments that the test accuracy is controlled by the IGR rate λ; 4) The modified loss and the original loss share the same global minima, so IGR vanishes when the gradient vanishes. Despite this, the presence of IGR has an impact on learning, since it changes the trajectory of gradient descent, and in over-parameterized models this can cause the final parameters to reach different solutions; 5) Our theoretical results are derived for full-batch gradient descent, which allows us to isolate the source of implicit regularization from the stochasticity of stochastic gradient descent (SGD). Extending our theoretical results to SGD is considerably more complicated and, as such, is beyond the scope of this paper. However, in some of our experiments, we will demonstrate that IGR persists in SGD, which is especially important for deep learning. Next, we will provide a proof for Theorem 3.1, and we will provide experimental support for our predictions." }, { "heading": "3 BACKWARD ERROR ANALYSIS OF GRADIENT DESCENT", "text": "In this section, we show that gradient descent follows the gradient flow of the modified loss Ẽ (Equation 2) more closely than that of the original loss E. The argument is a standard argument from the backward error analysis of Runge-Kutta methods (Hairer et al., 2006). We begin by observing that gradient descent (Equation 1) can be interpreted as a Runge-Kutta method numerically integrating the following ODE:
θ̇ = −∇_θ E(θ) (5)
In the language of numerical analysis, gradient descent is the explicit Euler method numerically integrating the vector field f(θ) = −∇E(θ). The explicit Euler method is of order 1, which means that after one gradient descent step θ_n = θ_{n−1} − h∇E(θ_{n−1}), the deviation from the gradient flow ‖θ_n − θ(h)‖ is of order O(h²), where θ(h) is the solution of Equation 5 starting at θ_{n−1} and evaluated at time h. Backward error analysis was developed to deal with this discrepancy between the discrete steps of a Runge-Kutta method and the continuous exact solutions (or flow) of a differential equation. The main idea is to modify the ODE vector field θ̇ = f(θ) with corrections in powers of the step size,
f̃(θ) = f(θ) + h f_1(θ) + h² f_2(θ) + ···, (6)
so that the numerical steps θ_n approximating the original Equation 5 now lie exactly on the solutions of the modified equation θ̇ = f̃(θ). In other words, backward error analysis finds the corrections f_i in Equation 6 such that θ_n = θ̃(nh) for all n, where θ̃(t) is the solution of the modified equation starting at θ_0. (The first-order truncation f + h f_1 can be computed directly with automatic differentiation; a sketch follows.)
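A minimal sketch (ours) of the first-order modified vector field for a general field f: the correction is f_1 = −(1/2) f′f, which jax.jvp produces directly; for a gradient field f = −∇E this reduces to −∇Ẽ up to O(h²):

    import jax

    def modified_field(f, h):
        # f_tilde(theta) = f(theta) + h * f1(theta), with
        # f1(theta) = -(1/2) f'(theta) f(theta);
        # jax.jvp returns (f(theta), J_f(theta) @ v) for tangent v = f(theta)
        def f_tilde(theta):
            f_theta, jf_f = jax.jvp(f, (theta,), (f(theta),))
            return f_theta - 0.5 * h * jf_f
        return f_tilde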
In theory, we can now precisely study the flow of the modified equation to infer properties of the numerical method, because its steps follow the modified differential equation solutions perfectly in a formal sense. The following result is a direct application of backward error analysis to gradient descent:
Theorem 3.1. Let E be a sufficiently differentiable function on a parameter space θ ∈ R^m. The modified equation for gradient flow (Equation 5) is of the form
θ̇ = −∇Ẽ(θ) + O(h²), (7)
where Ẽ = E + λR_IG is the modified loss introduced in Equation 2. Consider gradient flow with the modified loss, θ̇ = −∇Ẽ(θ), and its solution θ̃(t) starting at θ_{n−1}. Then the local error ‖θ_n − θ̃(h)‖ between θ̃(h) and one step of gradient descent θ_n = θ_{n−1} − h∇E(θ_{n−1}) is of order O(h³), while it is of order O(h²) for gradient flow with the original loss.
Proof. We begin by computing the f_1 for which the first two orders in h of the Taylor series of the modified equation solution θ(t) at t = h coincide with one gradient descent step. Since θ′(t) = f̃(θ), we see that θ′′(t) = f̃′(θ)f̃(θ). Matching one Euler step against the second-order Taylor expansion of the modified flow, we find
θ + hf(θ) = θ + hf(θ) + h² ( f_1(θ) + (1/2) f′(θ)f(θ) ) + O(h³),
yielding f_1(θ) = −f′(θ)f(θ)/2. Now, when f is a gradient vector field with f = −∇E, we find:
f_1(θ) = −(1/2) (D²_θ E) ∇E(θ) = −(1/4) ∇‖∇E(θ)‖²,
where D²_θ E is the Hessian of E(θ). Putting this together, we obtain the first-order modified equation:
θ̇ = f + h f_1 + O(h²) = −∇( E(θ) + (h/4) ‖∇E(θ)‖² ) + O(h²),
which is a gradient system with modified loss
Ẽ(θ) = E(θ) + (h/4) ‖∇E(θ)‖².
As for the local error, if θ(h) is a solution of gradient flow starting at θ_{n−1}, we have in general that θ(h) = θ_n + O(h²). The correction f_1 is constructed so that it cancels out the O(h²) term in the expansion of its solution, yielding θ̃(h) = θ_n + O(h³).
Remark 3.2. A direct application of a standard result in backward error analysis (Hairer & Lubich (1997), Thm. 1) indicates that the learning rate range where the gradient flow of the modified loss provides a good approximation of gradient descent lies below h_0 = CR/M, where ∇E is analytic and bounded by M in a ball of radius R around the initialization point, and where C depends on the Runge-Kutta method only, which can be estimated for gradient descent. We call this the moderate learning rate regime. For each learning rate below h_0, we can provably find an optimal truncation of the modified equation whose gradient flow is exponentially close to the steps of gradient descent, so the higher-order corrections are likely to contribute to the dynamics. Given this, we see that the exact value of the upper bound for the moderate regime will correspond to a setting where the optimal truncation is the first-order correction only. Calculating this in general is difficult and beyond the scope of this paper. Nonetheless, our experiments strongly suggest that this moderate learning rate regime overlaps substantially with the learning rate range typically used in deep learning. The order claims of Theorem 3.1 are also easy to verify numerically, as sketched below.
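A small numerical check of these local error orders on a toy loss (our own sketch; the loss, initial point, and step sizes are arbitrary choices): halving h should shrink the error against the original flow roughly 4x, and against the modified flow roughly 8x.

    import jax
    import jax.numpy as jnp
    from jax.experimental.ode import odeint

    E = lambda th: 0.25 * jnp.sum(th ** 4)     # a toy non-quadratic loss
    grad_E = jax.grad(E)

    def flow(grad_fn, th0, h):
        # accurate solution of d(theta)/dt = -grad_fn(theta) at time h
        return odeint(lambda th, t: -grad_fn(th), th0, jnp.array([0.0, h]))[-1]

    th0 = jnp.array([1.0, -0.5])
    for h in [0.2, 0.1, 0.05]:
        E_mod = lambda th, h=h: E(th) + (h / 4.0) * jnp.sum(grad_E(th) ** 2)
        gd = th0 - h * grad_E(th0)                 # one gradient descent step
        err_orig = jnp.linalg.norm(gd - flow(grad_E, th0, h))          # ~ O(h^2)
        err_mod = jnp.linalg.norm(gd - flow(jax.grad(E_mod), th0, h))  # ~ O(h^3)
        print(h, float(err_orig), float(err_mod))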
Next, we give a purely geometric interpretation of IGR, supporting Prediction 2.2. Consider the loss surface S associated with a loss function E defined over the parameter space θ ∈ R^m. This loss surface is defined as the graph of the loss: S = {(θ, E(θ)) : θ ∈ R^m} ⊂ R^{m+1}. We define α(θ) to be the angle between the tangent space T_θS to S at θ and the parameter plane, i.e., the linear subspace {(θ, 0) : θ ∈ R^m} in R^{m+1}. We can compute this angle using the inner product between the normal vector N(θ) to S at θ and the normal vector ẑ to the parameter plane: α(θ) = arccos⟨N(θ), ẑ⟩. Now we can define the loss surface slope at θ as the tangent of this angle: slope(θ) := tan α(θ). This is a natural extension of the one-dimensional notion of slope. With this definition, we can now reformulate the modified loss function in a purely geometric fashion:
Proposition 3.3. The modified loss Ẽ in Equation 2 can be expressed in terms of the loss surface slope as Ẽ(θ) = E(θ) + (h/4) slope²(θ).
This proposition is an immediate consequence of Theorem 3.1 and Corollary A.6.1 in Appendix A.2. It tells us that gradient descent with higher amounts of implicit regularization (higher learning rate) will implicitly minimize the loss surface slope locally, along with the original training loss. Prediction 2.2 claims that this local effect of implicit slope regularization accumulates into the global effect of directing gradient descent trajectories toward global minima in regions surrounded by shallower slopes - toward flatter (or broader) minima.
Remark 3.4. It is important to note that IGR does not help gradient descent to escape from local minima. In the learning rate regime where the truncated modified equation gives a good approximation for gradient descent, the steps of gradient descent follow the gradient flow of the modified loss closely. As Proposition A.10 shows, the local minima of the original loss are still local minima of the modified loss, so gradient descent within this learning rate regime remains trapped within the basin of attraction of these minima. IGR does not lead to an escape from local minima but instead encourages a shallower path toward flatter solutions close to the submanifold of global interpolating minima, which the modified loss shares with the original loss (Proposition A.10)." }, { "heading": "4 EXPLICIT GRADIENT REGULARIZATION", "text": "For overparameterized models, we predict that the strength of IGR relative to the original loss can be controlled by increasing the learning rate h (Prediction 2.1). However, gradient descent becomes unstable when the learning rate becomes too large. For applications where we wish to increase the strength of IGR beyond this point, we can take inspiration from implicit gradient regularization to motivate Explicit Gradient Regularization (EGR), which we define as:
E_µ(θ) = E(θ) + µ ‖∇E(θ)‖², (8)
where µ is the explicit regularization rate, a hyper-parameter that we are free to choose, unlike the implicit regularization rate λ (Equation 3), which can only be controlled indirectly. Now, we can do gradient descent on E_µ with small learning rates and large µ. (A sketch of an EGR update step is given below.)
Although EGR is not the primary focus of our work, we will demonstrate the effectiveness of EGR for a simple two-parameter model in the next section (Section 5) and for a ResNet trained on CIFAR-10 (Figure 3c). Our EGR experiments act as a control study in this work, to demonstrate that the R_IG term arising implicitly in gradient descent can indeed improve test accuracy independently of confounding effects that may arise when we control IGR implicitly through the learning rate. Namely, if we had not observed a significant boost in model test accuracy by adding the R_IG term explicitly, our prediction that implicit regularization helps to boost test accuracy would have been in doubt.
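A minimal sketch (ours, with placeholder names loss_fn and batch) of one gradient descent step on E_µ; note that differentiating through ‖∇E‖² requires second-order derivatives of E, which JAX provides by composing jax.grad:

    import jax
    import jax.numpy as jnp
    from jax.flatten_util import ravel_pytree

    def egr_update(loss_fn, params, batch, lr, mu):
        def e_mu(p):
            # E_mu = E + mu * ||grad E||^2   (Equation 8)
            g, _ = ravel_pytree(jax.grad(loss_fn)(p, batch))
            return loss_fn(p, batch) + mu * jnp.sum(g ** 2)
        grads = jax.grad(e_mu)(params)
        return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)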
Related work: Explicit regularization using gradient penalties has a long history. In early work, Drucker & Le Cun (1992) used a gradient penalty (using input gradients instead of parameter gradients). Hochreiter & Schmidhuber (1997) introduced a regularization penalty to guide gradient descent toward flat minima. EGR is also reminiscent of other regularizers such as dropout, which similarly encourages robust parameterizations (Morcos et al., 2018; Srivastava et al., 2014; Tartaglione et al., 2018). More recently, loss gradient penalties have been used to stabilize GAN training (Nagarajan & Kolter, 2017; Balduzzi et al., 2018; Mescheder et al., 2017; Qin et al., 2020). The success of these explicit regularizers demonstrates the importance of this type of regularization in deep learning." }, { "heading": "5 IGR AND EGR IN A 2-D LINEAR MODEL", "text": "In our first experiment, we explore implicit and explicit gradient regularization in a simple two-parameter model with a loss given by E(a, b) = (y − f(x; a, b))²/2, where f(x; a, b) = abx is our model, a, b ∈ R are the model parameters, and x, y ∈ R is the training data. We have chosen this model because we can fully visualize the gradient descent trajectories in 2-d space. For a single data point, this model is overparameterized, with global minima located along a curve attractor defined by the hyperbola ab = y/x. For small learning rates, gradient descent follows the gradient flow of the loss from an initial point (a_0, b_0) toward the line attractor. For larger learning rates, gradient descent follows a longer path toward a different destination on the line attractor (Figure 1a).
We can understand these observations using Theorem 3.1, which predicts that gradient descent is closer to the modified flow given by ȧ = −∇_a Ẽ(a, b) and ḃ = −∇_b Ẽ(a, b), where Ẽ(a, b) = E(a, b) + λ R_IG(a, b) is the modified loss from Equation 2, R_IG(a, b) = (|a|² + |b|²) x² E(a, b) is the implicit regularization term from Equation 4, and λ = h/2 is the implicit regularization rate from Equation 3, with learning rate h. Although the modified loss and the original loss have the same global minima, they generate different flows. Solving the modified flow equations numerically, starting from the same initial point (a_0, b_0) as before, we find that the gradient descent trajectory is closer to the modified flow than the exact flow, consistent with Theorem 3.1 (Figure 1a).
Next, we investigate Prediction 2.1, that the strength of the implicit gradient regularization R_IG(a, b) relative to the original loss E(a, b) can be controlled by increasing the regularization rate λ. In this case, this means that larger learning rates should produce gradient descent trajectories that lead to minima with a smaller value of R_IG(a, b)/E(a, b) = x²(|a|² + |b|²). It is interesting to note that this is proportional to the parameter norm, and also to the square of the loss surface slope. In our numerical experiments, we find that larger learning rates lead to minima with smaller L2 norm (Figure 1b), closer to the flatter region in the parameter plane, consistent with Predictions 2.1 and 2.2. (The sketch below reproduces this toy setting.)
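A minimal sketch (ours) of this two-parameter experiment; the data point (x, y) = (1, 1), the initialization, the step counts, and the learning rates are illustrative choices, not the paper's exact settings:

    import jax
    import jax.numpy as jnp

    x, y = 1.0, 1.0
    E = lambda ab: 0.5 * (y - ab[0] * ab[1] * x) ** 2  # loss of the model f = a*b*x
    grad_E = jax.grad(E)

    def gd_endpoint(h, steps=20000, ab0=jnp.array([2.5, 0.1])):
        ab = ab0
        for _ in range(steps):
            ab = ab - h * grad_E(ab)
        return ab                              # lands near the hyperbola ab = y/x

    for h in [1e-3, 2e-1]:
        ab = gd_endpoint(h)
        # for this initialization, the larger learning rate should end closer
        # to the balanced, small-norm region of the hyperbola
        print(h, ab, float(jnp.linalg.norm(ab)))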
The extent to which we can strengthen IGR in this way is restricted by the learning rate: for excessively large learning rates, gradient descent ricochets from peak to peak, until it either diverges or lands in the direct vicinity of a minimum, which is sensitively dependent on initialization (Figure A.1).
To go beyond the limits of implicit gradient regularization, we can explicitly regularize this model using Equation 8 to obtain a regularized loss E_µ(a, b) = E(a, b) + µ (|a|² + |b|²) x² E(a, b). Now, if we numerically integrate ȧ = −∇_a E_µ(a, b) and ḃ = −∇_b E_µ(a, b), starting from the same initial point (a_0, b_0), using a very large explicit regularization rate µ (and using gradient descent with a very small learning rate h for numerical integration, see Appendix A.4), we find that this flow leads to global minima with a small L2 norm (Figure 1a), in the flattest region of the loss surface. This is not possible with IGR, since it would require learning rates so large that gradient descent would diverge." }, { "heading": "6 IGR AND EGR IN DEEP NEURAL NETWORKS", "text": "Next, we empirically investigate implicit gradient regularization and explicit gradient regularization in deep neural networks. We consider a selection of MLPs trained to classify MNIST digits and we also investigate a ResNet-18 trained to classify CIFAR-10 images. All our models are implemented using Haiku (Hennigan et al., 2020).
To begin, we measure the size of implicit regularization in MLPs trained to classify MNIST digits with a variety of different learning rates and network sizes (Figure 2). Specifically, we train 5-layer MLPs with n_l units per layer, where n_l ∈ {50, 100, 200, 400, 800, 1600} and h ∈ {0.5, 0.1, 0.05, 0.01, 0.005, 0.001, 0.0005}, using ReLU activation functions and a cross-entropy loss (see Appendix A.5 for further details, and see Figures A.3, A.4 and A.5 for training and test data curves). We report R_IG and test accuracy at the time of maximum test accuracy for each network that fits the training data exactly. We choose this time point for comparison because it is important for practical applications. We find that R_IG is smaller for larger learning rates and larger networks (Figure 2a), consistent with Theorem 3.1 and Prediction 2.1. Next, we measure the loss surface slope in 5-layer MLPs, with 400 units per layer, trained to classify MNIST digits with a range of different learning rates. We find that neural networks with larger learning rates, and hence with stronger IGR, have smaller slopes at the time of maximum test accuracy (Figure 3a). We also measure the loss surface slope in the vicinity of these optima. To do this, we add multiplicative Gaussian noise to every parameter according to θ_p = θ(1 + η), where θ are the parameters of a fully trained model, θ_p are the parameters after the addition of noise, and η ∼ N(0, σ). We find that neural networks trained with larger learning rates have flatter slopes, and these slopes remain small following larger perturbations (Figure 3a). These numerical results are consistent with our prediction that IGR encourages the discovery of flatter optima (Prediction 2.2).
Next, we observe that improvements in test set accuracy are correlated with increases in regularization rate (Figure 2b), and also with increases in learning rate and network size (Figure A.6). This is consistent with Prediction 2.3. Furthermore, the correlation between test set accuracy and network size m supports our use of network size scaling in Equations 3 and 4.
Next, we explore the robustness of deep neural networks in response to parameter perturbations, using the multiplicative noise procedure sketched below.
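A sketch (ours) of this perturbation procedure; accuracy_fn, trained_params, and test_set are placeholders for a model-evaluation routine, not names from the paper:

    import jax
    import jax.numpy as jnp

    def perturb(params, sigma, key):
        # multiplicative Gaussian noise: theta_p = theta * (1 + eta), eta ~ N(0, sigma)
        leaves, treedef = jax.tree_util.tree_flatten(params)
        keys = jax.random.split(key, len(leaves))
        noisy = [p * (1.0 + sigma * jax.random.normal(k, p.shape))
                 for p, k in zip(leaves, keys)]
        return jax.tree_util.tree_unflatten(treedef, noisy)

    # robustness curve: test accuracy as a function of the noise scale sigma
    key = jax.random.PRNGKey(0)
    for sigma in [0.0, 0.01, 0.03, 0.1, 0.3]:
        acc = accuracy_fn(perturb(trained_params, sigma, key), test_set)  # placeholder
        print(sigma, acc)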
In previous work, it has been reported that deep neural networks are robust to a substantial amount of parameter noise, and that this robustness is stronger in networks with higher test accuracy (Morcos et al., 2018). We measure the degradation in classification accuracy as we increase the amount of multiplicative Gaussian noise, and we find that neural networks with larger learning rates, and hence with stronger IGR, are more robust to parameter perturbations after training (Figure 3c), consistent with Prediction 2.4. This may explain, in part, the origin of deep neural network robustness.

We also explore IGR in several other settings. For ResNet-18 models trained on CIFAR-10, we find that R_IG is smaller and test accuracy is higher for larger learning rates (at the time of maximum test accuracy) (Figures A.7, A.8), consistent again with Theorem 3.1 and Predictions 2.1 and 2.3. We also explore IGR using different stopping time criteria (other than the time of maximum test accuracy), such as fixed iteration time (Figures A.3, A.4) and fixed physical time (Figure A.5), where iteration time is rescaled by the learning rate (see Appendix A.5 for further information). We explore IGR for full-batch gradient descent and for stochastic gradient descent (SGD) with a variety of different batch sizes (Figure A.6), and in all these cases our numerical experiments are consistent with Theorem 3.1. These supplementary experiments are designed to control for the presence and absence of other sources of implicit regularization, such as model architecture choice, SGD stochasticity, and the choice of stopping time criteria.

Finally, we provide an initial demonstration of explicit gradient regularization (EGR). Specifically, we train a ResNet-18 using our explicit gradient regularizer (Equation 8) and observe that EGR produces a boost of more than 12% in test accuracy (see Figure 3c). This initial experiment indicates that EGR may be a useful tool for training neural networks in some situations, especially where IGR cannot be increased with larger learning rates, which happens, for instance, when learning rates are so large that gradient descent diverges. Although EGR is not the primary focus of our work, this experiment provides further evidence that IGR may play an important role as a regularizer in deep learning." }, { "heading": "7 RELATED WORK", "text": "Implicit regularization: Many different sources of implicit regularization have been identified, including early-stopping (Hardt et al., 2016), model initialization (Glorot & Bengio, 2010; Li & Liang, 2018; Nagarajan & Kolter, 2018; Gunasekar et al., 2018; Zhang et al., 2019; Zou et al., 2019), model architecture (Li et al., 2018; Lin & Tegmark, 2016; Ma et al., 2020), stochasticity (Keskar et al., 2017; Soudry et al., 2018; Roberts, 2018; Ali et al., 2020; Chaudhari & Soatto, 2018; De & Smith, 2020; Mandt et al., 2017; Park et al., 2019; Sagun et al., 2017; Smith & Le, 2018; Wilson et al., 2017; Jastrzebski et al., 2021), implicit L2 regularity (Soudry et al., 2018; Neyshabur et al., 2015; Ali et al., 2019; Ji & Telgarsky, 2019; Nacson et al., 2019; Poggio et al., 2019; Suggala et al., 2018), and low-rank biases (Arora et al., 2019a; Gunasekar et al., 2017; Razin & Cohen, 2020), among other possibilities.
A number of studies have investigated implicit regularization in the discrete steps of gradient descent for specific datasets, losses, or architectures (Soudry et al., 2018; Neyshabur et al., 2015; Gidel et al., 2019). IGR might also be useful for understanding implicit regularization in deep matrix factorization with gradient descent (Arora et al., 2019a; Gunasekar et al., 2017; Razin & Cohen, 2020), where gradient descent seems to have a low-rank bias. Our work may also provide a useful perspective on the break-even point in deep learning (Jastrzebski et al., 2021). At a break-even point our backward analysis suggests that gradient descent with large learning rates will move toward flatter regions, consistent with this work. Stochastic effects are also likely to contribute to the trajectory at break-even points.\nLearning rate schedules and regimes: Implicit gradient regularization can be used to understand the role of learning schedules, since learning rate controls the relative strength of implicit regularization and loss optimization. For example, for a cyclical learning rate schedule (Smith, 2017), cyclically varying learning rates between large and small learning rates can be interpreted as a cyclical variation between large and small amounts of IGR (i.e., alternate phases of optimization and regularization). A number of studies have identified various learning rates regimes characterized by different convergence and generalization properties. For instance Li et al. (2019) identifies a small learning rate regime where a network tends to memorize and a large learning rate regime characterized by increased generalization power. This is consistent with IGR which we believe is most useful at the start of training, orienting the search toward flatter regions, and less important in later stages of the training, when a flatter region has been reached, and where convergence to any of the flatter minima is more important. This is also consistent with Jastrzebski et al. (2021) who showed the importance of large learning rates at the beginning of training in encouraging more favourable optimization trajectories. Also Lewkowycz et al. (2020) identifies a lazy phase, a catapult phase, and a divergent phase, which may be related to the range of backward error analysis applicability.\nNeural Tangent Kernel: The Neural Tangent Kernel (NTK) is especially interesting (Arora et al., 2019b;c; Chizat & Bach, 2019; Jacot et al., 2018; Lee et al., 2019; Oymak & Soltanolkotabi, 2019; Woodworth et al., 2020; Cao & Gu, 2019) since, in the case of the least square loss, the IGR termRIG can be related to the NTK (see Appendix A.3). This is particularly interesting because it suggests that the NTK may play a role beyond the kernel regime, into the rich regime. In this context, IGR is also related to the trace of the Fisher Information Matrix (Karakida et al., 2019).\nRunge-Kutta methods: To the best of our knowledge, backward analysis has not been used previously to investigate implicit regularization in gradient based optimizers. However, Runge-Kutta methods have been used to understand old (and devise new) gradient-based optimization methods (Betancourt et al., 2018; Scieur et al., 2017; Zhang et al., 2018; França et al., 2020). 
A stochastic version of the modified equation was used (Li et al., 2017; Feng et al., 2020) to study stochastic gradient descent in the context of stochastic differential equations and diffusion equations with a focus on convergence and adaptive learning, and very recently França et al. (2020) used backward analysis to devise new optimizers to control convergence and stability." }, { "heading": "8 DISCUSSION", "text": "Following our backward error analysis, we now understand gradient descent as an algorithm that effectively optimizes a modified loss with an implicit regularization term arising through the discrete nature of gradient descent. This leads to several predictions that we confirm experimentally: (i) IGR penalizes the second moment of the loss gradients (Prediction 2.1), and consequently, (ii) it penalizes minima in the vicinity of large gradients and encourages flat broad minima in the vicinity of small gradients (Prediction 2.2); (iii) these broad minima are known to have low test errors, and consistent with this, we find that IGR produces minima with low test error (Prediction 2.3); (iv) the strength of regularization is proportional to the learning rate and network size (Equation 3), (v) consequently, networks with small learning rates or fewer parameters or both will have less IGR and worse test error, and (vi) solutions with high IGR are more robust to parameter perturbations (Prediction 2.4).\nIt can be difficult to study implicit regularization experimentally because it is not always possible to control the impact of various alternative sources of implicit regularization. Our analytic approach to the study of implicit regularization in gradient descent allows us to identify the properties of implicit gradient regularization independent of other sources of implicit regularization. In our experimental work, we take great care to choose models and datasets that were sufficiently simple to allow us to clearly expose implicit gradient regularization, yet, sufficiently expressive to provide insight into larger, less tractable settings. For many state-of-the-art deep neural networks trained on large real-world datasets, IGR is likely to be just one component of a more complex recipe of implicit and explicit regularization. However, given that many of the favourable properties of deep neural networks such as low test error capabilities and parameter robustness are consistent with IGR, it is possible that IGR is an important piece of the regularization recipe.\nThere are many worthwhile directions for further work. In particular, it would be interesting to use backward error analysis to calculate the modified loss and implicit regularization for other widely used optimizers such as momentum, Adam and RMSprop. It would also be interesting to explore the properties of higher order modified loss corrections. Although this is outside the scope of our work here, we have provided formulae for several higher order terms in the appendix. More generally, we hope that our work demonstrates the utility of combining ideas and methods from backward analysis, geometric numerical integration theory and machine learning and we hope that our contribution supports future work in this direction." 
}, { "heading": "ACKNOWLEDGMENTS", "text": "We would like to thank Samuel Smith, Soham De, Mihaela Rosca, Yan Wu, Chongli Qin, Mélanie Rey, Yee Whye Teh, Sébastien Racaniere, Razvan Pascanu, Daan Wierstra, Ethan Dyer, Aitor Lewkowycz, Guy Gur-Ari, Michael Munn, David Cohen, Alejandro Cabrera and Shakir Mohamed for helpful discussion and feedback. We would like to thank Alex Goldin, Guy Scully, Elspeth White and Patrick Cole for their support. We would also like to thank our families, especially Wendy; Susie, Colm and Fiona for their support, especially during these coronavirus times." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 BACKWARD ERROR ANALYSIS OF THE EXPLICIT EULER METHOD", "text": "In this section, we provide formulae for higher order backward analysis correction terms for the explicit Euler method, including the first order correction which is required to complete the proof of Theorem 3.1.\nWe start by restating the general problem addressed by backward error analysis. To begin, consider a first order differential equation\nθ̇ = f(θ), (A.1)\nwith vector field f : Rm → Rm. The explicit Euler method\nθn = θn−1 + hf(θn), (A.2)\nwith step size h produces a sequence of discrete steps θ0, θ1, . . . , θn, . . . approximating the solution θ(t) of Equation A.1 with initial condition θ0. (In other word, θn approximates θ(nh) for n ≥ 0.) However, at each discrete step the Euler method steps off the continuous solution θ(t) with a one-step error (or local error) ‖θ1− θ(h)‖ of orderO(h2). Backward error analysis was introduced in numeric integration to study the long term error (or global error) ‖θn − θ(nh)‖ of the numerical method. More generally, backward error analysis is useful for studying the long term behavior of a discrete numeric method using continuous flows, such as its numerical phase portrait near equilibrium points, its asymptotic stable orbits, and conserved quantities among other properties (see Hairer & Lubich (1997); Hairer et al. (2006) for a detailed exposition). It stems from the work of Wilkinson (1960) in numerical linear algebra where the general idea in the study of a numeric solution via backward error analysis is to understand it as an exact solution for the original problem but with modified data. When applied to numerical integration of ODEs this idea translates into finding a modified vector field f̃ with corrections fi’s to the original vector field f in powers of the step-size\nf̃(θ) = f(θ) + hf1(θ) + h 2f2(θ) + · · · (A.3)\nso that the numeric method steps now exactly follow the (formal) solution of the modified equation:\nθ̇ = f̃(θ) (A.4)\nIn other words, if θ(t) is the solution of the modified equation (A.4) and θn is the nth discrete step of the numerical method, now one has:\nθn = θ(nh) (A.5)\nIn general, the sum of the corrections fi’s in (A.3) diverges, and the modified vector field f̃ is only a formal power series. For practical purposes, one needs to truncate the modified vector field. If we truncate the modified vector field up to order n (i.e., we discard the higher corrections fl for l ≥ n+1), the one-step error between the numeric method and the solution of the truncated modified equation is now of order O(hn+1). It is also possible to bound the long term error, but in this section we will only formally derive the higher corrections fi for the explicit Euler method. 
(We refer the reader to Hairer & Lubich (1997), for instance, for precise bounds on the error for the truncated modified equation.)\nTo derive the corrections fi’s for the explicit Euler method, it is enough to consider a single step of gradient descent θ + hf(θ) and identify the corresponding powers of h in\nθ + hf(θ) = Taylor|t=0 θ(h)\nby expanding the solution θ(h) of the modified equation (A.4) starting at θ into its Taylor series at zero:\nθ(h) = θ + ∑ n≥1 hn n! θ(n)(0). (A.6)\nFor a gradient flow f(θ) = −∇E(θ) its Jacobian f ′(θ) is symmetric. In this case, it is natural to look for a modified vector field whose Jacobian is still symmetric (i.e., the higher order corrections can be expressed as gradients). If we assume this, we have the following expression for the higher derivatives of the modified flow solution:\nLemma A.1. In the notation above and with f̃ whose Jacobian is symmetric, we have\nθ̇(0) = f̃(θ) (A.7)\nθ̈(0) = d\ndθ\n‖f̃(θ)‖2\n2 (A.8)\nθ(n)(0) = dn−2\ndtn−2 d dθ\n‖f̃(θ)‖2\n2 , n ≥ 2, (A.9)\nwhere d n−2\ndtn−2 d dθg(θ) is shorthand to denote the operator\ndn−2 dtn−2 ∣∣t=0 ddθ ∣∣θ=θ(t)g(θ) and θ(t) is the\nsolution of the modified equation A.4. We will will use this shorthand notation throughout this section.\nProof. By definition θ(t) is the solution of (A.4) with initial condition θ(0) = θ, hence θ̇(0) = f̃(θ). Differentiating both sides of (A.4) with respect to t, we obtain\nθ̈(t) = f̃ ′(θ(t))θ̇(t) = f̃ ′(θ(t))f̃(θ(t)) = d dθ ∣∣θ=θ(t) ‖f̃(θ)‖22 .\nThe last equation is obtained because we assumed f̃ ′(θ) is symmetric. Now the higher derivatives are obtained by differentiating both sides of the last equation n− 2 times with respect to t and setting t = 0.\nThe previous lemma gives a formula for the derivatives θ(n)(0). However in order to compare the powers of h we need to expand these derivatives into power of h, which is what the next lemma does:\nLemma A.2. In the notation above, we have θ(n)(0) = ∑ k≥0 hkLn,k(θ) n ≥ 2, (A.10)\nwhere we define\nLn,k(θ) = dn−2\ndtn−2 d dθ ∑ i+j=k 〈fi(θ), fj(θ)〉 2 , (A.11)\nwith f0(θ) being the original vector field f(θ), and 〈 , 〉 denoting the inner product of two vectors.\nProof. This follows immediately from (A.9) and by expanding ‖f̃(θ)‖2 in powers of h:\n‖f̃(θ)‖2\n2 =\n1 2 〈f0(θ) + hf1(θ) + · · · , f0(θ) + hf1(θ) + · · · 〉\n= ∑ k≥0 hk ∑ i+j=k 〈fi(θ), fj(θ)〉 2 .\nPutting everything together, we now obtain a Taylor series for the solution of the modified equation as a formal power series in h:\nLemma A.3. In the notation above, we have θ(h) = θ + ∑ l≥0 hl+1 ( fl(θ) +Hl(f0, f1, . . . , fl−1)(θ) ) , (A.12)\nwhere f0 is the original vector field f , H0 = 0 and we define recursively\nHl(f0, f1, . . . , fl−1)(θ) = ∑\nn+k=l+1 n≥2, l≥0\n1 n! Ln,k(θ) (A.13)\nProof. Replacing θ(n)(0) by their expression in (A.10) in the Taylor series (A.6), we obtain\nθ(h) = θ + hf̃(θ) + ∑ n≥2 ∑ k≥0 hn+k n! Ln,k(θ)\n= θ + ∑ l≥0 hl+1 ( fl(θ) + ∑ n+k=l+1 n≥2, l≥0 1 n! Ln,k(θ) ) ,\nwhich finishes the proof.\nNow comparing the Taylor series for the modified equation solution in its last form in (A.12) with one step of the Euler method\nθ + hf(θ) = θ + hf(θ) + ∑ l≥1 hl+1 ( fl(θ) +Hl(θ) ) for each order of h, we obtain the following proposition:\nProposition A.4. The corrections fi’s for the Euler method modified equation in (A.3) are given by the general recursive formula:\nfl(θ) = − ∑\nn+k=l+1 n≥2, l≥0\n1 n! 
Ln,k(θ), (A.14)\nwhere the Ln,k are defined by Equation A.11.\nLet us use (A.14) to compute the first order correction in the modified equation for the Euler explicit method:\nExample A.5. Suppose f = −∇E, then the first order correction is\nf1(θ) = − 1\n2 L2,0(θ) = −\n1\n4\nd\ndθ ‖∇E(θ)‖2\nwhich is the only correction we use in Theorem 3.1. The remarkable thing is that now the first two terms of (A.3) can be understood as the gradient of a modified function, yielding the following form for the modified equation (A.4):\nθ̇ = −∇Ẽ +O(h2),\nwith Ẽ = E + h4 ‖∇E‖ 2, which is the main result of Theorem 3.1." }, { "heading": "A.2 GEOMETRY OF IMPLICIT GRADIENT REGULARIZATION", "text": "In this section, we provide all the details for a proof of Proposition 3.3 and for our claim concerning the relationship between the loss surface slope and the implicit gradient regularizer, which we package in Corollary A.6.1. The geometry underlying implicit gradient regularization makes it apparent that gradient descent has a bias toward flat minima (Keskar et al., 2017; Hochreiter & Schmidhuber, 1997).\nTo begin with, consider a loss E over the parameter space θ ∈ Rm. The loss surface is defined as the graph of the loss: S = {(θ,E(θ)) : θ ∈ Rm} ⊂ Rm+1. It is a submanifold of Rm+1 of co-dimension 1, which means that the space of directions orthogonal to S at a given point (θ,E(θ)) is spanned by a single unit vector, the normal vector N(θ) to S at (θ,E(θ)). There is a natural parameterization for surfaces given by the graphs of functions, the Monge parameterization, where the local chart is the parameter plane: θ → (θ,E(θ)). Using this parameterization it is easy to see that the tangent space to S at (θ,E(θ)) is spanned by the tangent vectors:\nvi(θ) = (0, . . . , 1, . . . , 0,∇θiE(θ)),\nfor i = 1, . . . ,m and where the 1 is at the ith position. Now that we have the tangent vectors, we can verify that the following vector is the normal vector, since its inner product with all the tangent vectors is zero and its norm is one:\nN(θ) = 1√\n1 + ‖∇E(θ)‖2 (−∇θ1E(θ), . . . ,−∇θmE(θ), 1)\nWe can compute the cosine of the angle between the normal vector N(θ) at (θ,E(θ)) and the vector ẑ = (0, . . . , 0, 1) that is perpendicular to the parameter plane by taking the inner product between these two vectors, immediately yielding the following Proposition:\nProposition A.6. Consider a loss E and its loss surface S as above. The cosine of the angle between the normal vector N(θ) to the loss surface at (θ,E(θ)) and the vector ẑ = (0, . . . , 0, 1) perpendicular to the parameter plane can be expressed in terms of the implicit gradient regularizer as follows:\n〈N(θ), ẑ〉 = 1√ 1 +mRIG(θ)\n(A.15)\nNow observe that if 〈N(θ), ẑ〉 is zero this means that the tangent plane to S at (θ,E(θ)) is orthogonal to the parameter space, in which case the loss surface slope is maximal and infinite at this point! On the contrary, when 〈N(θ), ẑ〉 is equal to 1, the tangent plane at (θ,E(θ)) is parallel to the parameter plane, making S look like a plateau in a neighborhood of this point.\nLet us make precise what we mean by loss surface slope. First notice that the angle between ẑ (which is the unit vector normal to the parameter plane) and N(θ) (which is the vector normal to the tangent plane to S) coincides with the angle between this two planes. We denote by α(θ) this angle:\nα(θ) = arccos〈N(θ), ẑ〉. (A.16)\nNow, we define the loss surface slope at (θ,E(θ)) by the usual formula\nslope(θ) = tanα(θ). 
(A.17)\nAs we expect, when the loss surface slope is zero this means that the tangent plane to the loss surface is parallel to the parameter plane (i.e., α(θ) = 0), while when the slope goes to infinity it means the tangent plane is orthogonal to the parameter plane (i.e., α(θ) = π/2).\nThe following corollary makes it clear that implicit gradient regularization in gradient descent orients the parameter search for minima toward flatter regions of the parameter space, or flat minima, which have been found to be more robust and to possess more generalization power (see Keskar et al. (2017); Hochreiter & Schmidhuber (1997)):\nCorollary A.6.1. The slope of the loss surface S at (θ,E(θ)) can be expressed in terms of the implicit gradient regularizer as follows:\nslope(θ) = √ mRIG(θ) (A.18)\nProof. From (A.15) and the fact that cosα(θ) = 〈N(θ), ẑ〉, we have that\n1\ncos2 α(θ) = 1 +mRIG(θ).\nNow basic trigonometry tells us that in general 1/ cos2 α = 1 + tan2 α, which implies here that tan2 α(θ) = mRIG(θ). Taking the square root of this last expression finishes the proof.\nRemark A.7. Corollary A.6.1 gives us a very clear understanding of implicit gradient regularization. Namely, the quantity that is regularized is nothing other than the square of the slope RIG(θ) = 1 m slope 2(θ) and the modified loss becomes Ẽ(θ) = E(θ) + h4 slope 2(θ). For explicit gradient regularization, we can now also understand the explicitly regularized loss in terms of the slope:\nEµ(θ) = E(θ) + µ slope 2(θ),\nThis makes it clear that this explicit regularization drives the model toward flat minima (with zero slope).\nRemark A.8. There is another connection between IGR and the underlying geometry of the loss surface through the metric tensor. It is a well-known fact from Riemannian geometry that the metric tensor g(θ) for surfaces in the Monge parameterization θ → (θ,E(θ)) has the following form:\ngij(θ) = δij +∇θiE(θ)∇θjE(θ), where δij is the Kronecker delta. Now the determinant |g|, which defines the local infinitesimal volume element on the loss surface, can also be expressed in terms of the implicit gradient regularizer: Namely, |g(θ)| = 1 + ‖∇E(θ)‖2 = 1 +mRIG(θ). Solving this equation above for RIG, we obtain a geometric definition for the implicit gradient regularizer:\nRIG(θ) = 1\nm (|g(θ)| − 1), (A.19)\nwhich incidentally is zero when the surface looks like an Euclidean space.\nWe conclude this section by showing that the increase in parameter norm can be bounded by the loss surface slope at each gradient descent step. Proposition A.9. Let θn be the parameter vector after n gradient descent updates. Then the increase in parameter norm is controlled by the loss surface slope as follows:∣∣‖θn+1‖ − ‖θn‖∣∣ ≤ h slope(θn) (A.20) Proof. The triangle inequality applied to one step of gradient descent ‖θn+1‖ = ‖θn − h∇E(θn)‖ yields (‖θn+1‖−‖θn‖) ≤ h‖∇E(θn)‖, which concludes the proof, since the gradient norm coincides with the loss surface slope.\nWe now prove a proposition that relates the geometry of the minima of E and Ẽ. Proposition A.10. Let E be a non-negative loss. Then local minima of E are local minima of the modified loss Ẽ. Moreover, the two losses have the same interpolating solutions (i.e., locus of zeros).\nProof. Since ∇Ẽ = ∇E + h2D 2E∇E, where D2E is the Hessian of E, it is clear that a critical point of E is a critical point of Ẽ. Suppose now that θ∗ is a local minimum of E. This means that there is a neighbourhood of θ∗ where E(θ) ≥ E(θ∗). 
We can add h4 ‖∇E(θ)‖\n2 on the left of this inequality, since it is a positive quantity, and we can also add h4 ‖∇E(θ\n∗)‖2 on the right of the inequality, since it is zero. This shows that in a neighborhood of θ∗, we also have that Ẽ(θ) ≥ Ẽ(θ∗). This means that θ∗ is also a local minimum of Ẽ. Finally, let us see that E and Ẽ share the same locus of zeros. Let θ be a zero of E. Since E is non-negative and E(θ) = 0 then θ is a global minima, which implies that ∇E(θ) = 0 also, and hence Ẽ(θ) = 0. Now for positive E, Ẽ(θ) = 0 trivially implies E(θ) = 0." }, { "heading": "A.3 THE NTK CONNECTION", "text": "In the case of the least square loss, the modified equation as well as the implicit gradient regularizer take a very particular form, involving the Neural Tangent Kernel (NTK) introduced in Jacot et al. (2018). Proposition A.11. Consider a model fθ : Rd → Rc with parameters θ ∈ Rm and with least square loss E(θ) = ∑n i=1 ‖fθ(xi)− yi‖2. The modified loss can then be expressed as\nẼ(θ) = E(θ) + h n∑ i,j=1 Ti (θ)Kθ(xi, xj) j(θ), (A.21)\nwhere Kθ is the Neural Tangent Kernel defined by\nKθ(xi, xj) := ∇ i(θ)T∇ j(θ), (A.22) where k(θ) = fθ(xk)− yk ∈ Rc is the error vector on data point xk. In the particular case when the model output is one-dimensional, we can write the modified loss compactly as follows:\nẼ(θ) = (θ)T (1 +Kθ) (θ), (A.23)\nProof. Let us begin by computing the gradient\n∇E(θ) = n∑ i=1 ∇‖fθ(x)− y‖2\n= 2 n∑ i=1 〈∇θfθ(x), (fθ(x)− y)〉\n= 2 n∑ i=1 ∇ i(θ) i(θ),\nsince ∇ i(θ) = ∇θfθ(xi) is a matrix. Using that result, we can compute the implicit gradient regularizer in this case:\nRIG(θ) = 1\nm 〈∇E(θ),∇E(θ)〉\n= 4\nm n∑ i,j=1 〈∇ i(θ) i(θ),∇ j(θ) j(θ)〉\n= 4\nm n∑ i,j=1 i(θ) T∇ i(θ)T∇ j(θ) j(θ)\n= 4\nm n∑ i,j=1 i(θ) TKθ(xi, xj) j(θ),\nwhich concludes the first part of the proof. Now when the model output is one-dimensional, then the i(θ) are no longer vectors but scalars. We can then collect them into a single vector (θ) = ( 1(θ), . . . , n(θ)). Similarly, the terms Kθ(xi, xj) are no longer matrices but scalars, allowing us to collect them into a single matrix Kθ. In this notation we now see that the original loss can be written as E = T , yielding Ẽ = T (1 +Kθ) for the modified loss, concluding the proof.\nFor the least square loss, we see that the IGR is expressed in terms of the NTK. Therefore the NTK entries will tend to be minimized during gradient descent. In particular, the cross terms Kθ(xi, xj) will be pushed to zero. This means that gradient descent will push the maximum error direction ∇ k(θ) at different data points to be orthogonal to each other (i.e., ∇ k(θ)T∇ l(θ) ' 0). This is good, since a gradient update is nothing other than a weighted sum of these error directions. If they are orthogonal, this means that the gradient update contribution at point xk will not affect the gradient update contribution at point xl, so the individual data point corrections are less likely to nullifying each other as gradient descent progresses." }, { "heading": "A.4 EXPERIMENT DETAILS FOR THE 2-D LINEAR MODEL", "text": "In this section, we provide supplementary results (Figure A.1), hyper-parameter values (Table A.1) and modified loss derivations for the two parameter model described in Section 5. This model has a loss given by: E = (y − abx)2 /2 (A.24) where a, b ∈ R are the model parameters and x, y ∈ R is the training data. The implicit regularization term for this model can be calculated using Equation 4, yielding:\nRIG = (∇aE)2 + (∇bE)2\n2 = (a2 + b2)x2E. 
(A.25)\nThe implicit regularization rate can be calculated using Equation 3, yielding:\nλ = mh/4 = h/2. (A.26)\nThe modified loss can be calculated using Equation 2, yielding: Ẽ = E + λRIG = E ( 1 + λ ( a2 + b2 ) x2 ) . (A.27)\nHere, we can see that the global minima for E(a, b) (i.e. the zeros) are the same as the global minima for Ẽ(a, b) since 1 + λ ( a2 + b2 ) x2 is positive. However, as we will see, the corresponding gradient flows are different.\nThe exact modified gradient flow for the modified loss is given by: ȧ = −∇aẼ = −∇aE ( 1 + λ ( a2 + b2 ) x2 ) − λ ( 2ax2 ) E\nḃ = −∇bẼ = −∇bE ( 1 + λ ( a2 + b2 ) x2 ) − λ ( 2bx2 ) E, (A.28)\nThe exact gradient flow for the original loss is given by\nȧ = −∇aE ḃ = −∇bE, (A.29)\nThe exact numerical flow of gradient descent is given by\nan+1 = an − h∇aE bn+1 = bn − h∇bE, (A.30)\nwhere (an, bn) are the parameters at iteration step n. For this model, we have∇aE = −bx(y − abx) and ∇bE = −ax(y − abx). Remark A.12. In vector notation, we see that the modified gradient flow equation is\nθ̇ = −(1 + λx2‖θ‖2)∇E(θ)− (2λx2E)θ,\nwith θ = (a, b). The last term −(2λx2E)θ is a central vector field re-orienting the original vector field ∇E away from the steepest slopes and toward the origin which coincides in this example with flatter regions where the minimal norm global minima are located. This phenomenon becomes stronger for parameters further away from the origin, where, coincidentally the slopes are the steepest. Specifically, slope(θ) = ‖θ‖|x| √ 2E.\nIn Figure 1a, we plot the trajectories of four different flows starting from the same initial point (a0, b0), to illustrate the impact of IGR. We look at two initial points, chosen to illustrate the behaviour of gradient descent in different settings. The full set of hyper-parameters used for this experiment is given in Table A.1. First, we calculate the numerical flow of gradient descent with a small learning rate hS using Equation A.30. Next, we plot the numerical flow of gradient descent with a moderate learning rate hM using Equation A.30. We then calculate the modified gradient flow by solving Equation A.28 numerically using the Euler method, starting from initial point (a0, b0) and using λ = hM/2. For this numerical calculation, we use a very small Euler method step size hEuler so that the Euler method follows the gradient flow of the modified loss accurately. We observe that this modified flow is close to the numerical flow of gradient descent, consistent with Theorem 3.1.\nWe also plot the trajectory of gradient descent for a large learning rate hL, where backward analysis is no longer applicable (Fig. A.1) and observe that gradient descent ricochets across the loss surface, stepping over line attractors until it lands in a low gradient region of the loss surface where it converges toward a global minima. This large learning rate regime can be unstable. For larger learning rates, or for different initial positions, we observe that gradient descent can diverge in this regime.\nIn Figure 1b, we explore the asymptotic behaviour of gradient descent by measuring R/E after convergence for a range of models, all initialized at a0 = 2.8 and b0 = 3.5, with a range of learning rates.\nWe also explore the impact of explicit gradient regularization, using Equation 8 to define the explicitly regularized modified loss for our two-parameter model:\nEµ = E ( 1 + µ ( a2 + b2 ) x2 ) . 
(A.31)\nWe use this modified loss for gradient descent:\nan+1 = an − hEuler∇aEµ bn+1 = bn − hEuler∇bEµ, (A.32)\nHere, we have used a very small learning rate, hEuler (Table A.1) and a very large value of µ (Table A.1). This allows us to achieve stronger regularization, since µ can be increased to a large value where gradient descent with h = 2µ would diverge. We observe that EGR can decrease the size of RIG after training (Figure 1a) and can increase test accuracy (Figure A.2)." }, { "heading": "A.5 DEEP NEURAL NETWORK EXPERIMENT DETAILS", "text": "In this section, we provide further details for the calculation of RIG(θ) in a deep neural network (Figure 2, 3, A.3, A.4, A.5, A.6, A.7, A.8). For all these experiments, we use JAX (Bradbury et al., 2018) and Haiku (Hennigan et al., 2020) to automatically differentiate and train deep neural networks for classification. Conveniently, the loss gradients that we compute with automatic differentiation are the same loss gradients that we need for the calculation of RIG(θ).\nWe calculate the size of implicit gradient regularization RIG(θ), during model training, using Equation 4. We observe that RIG(θ), the loss E(θ) and the ratio RIG/E(θ) all decrease as training progresses, for all learning rates considered (Figure A.3). We also observe that the parameter magnitudes grow during training, and this growth slows as RIG(θ) becomes small, in agreement with Proposition A.9. After a sufficiently large fixed number of training steps, we see that models with larger learning rates have much smaller values of RIG(θ) relative to E(θ), which appears to be consistent with Prediction 2.1. However, the speed of learning clearly depends on the learning rate h so it may not be reasonable to compare models after a fixed number of training iterations. Instead of stopping after a fixed number of iterations, we could stop training after n = T/h iterations, where T is the fixed physical time that naturally occurs in our backward analysis (Equation A.5). Again, we find that models with larger learning rates have lower values of RIG(θ) and E(θ) after a sufficiently large amount of physical training time T (Figure A.5). However, even for fixed physical time comparisons, we still need to choose an arbitrary physical time point T for making comparisons between models. The choice of stopping time is effectively an unavoidable form of implicit regularization. Instead of fixed iteration time or fixed physical time, we use the time of maximum test accuracy as the stopping time for model comparison in Figure 2, 3 and A.6. We choose this option because it is the most\nuseful time point for most real-world applications. For each model, we calculate E(θ), RIG(θ) and the test accuracy at the time of maximum test accuracy (which will be a different iteration time for each model) (Figure A.4). The observation that (i) fixed iteration stopping time, (ii) fixed physical stopping time, and (iii) maximum test accuracy stopping time all have smaller values ofRIG(θ)/E(θ) for larger values of λ, consistent with Prediction 2.1, indicates that the relationships between these quantities cannot be trivially explained to be a consequence of a particular choice stopping time regularization. In these examples, we use nl = 400 (corresponding to ∼ 9.6× 106 parameters) with batch size 32.\nIn Figure 2 and Figure A.6 we report RIG(θ) and test accuracy at the time of maximum test accuracy for a selection of networks of different size, trained with different learning rates. 
For models with sufficient capacity to solve the training task and simultaneously minimize RIG(θ), we expect RIG(θ)/E and test error to decrease as λ increases (Prediction 2.1 and 2.3). To expose this behaviour, we exclude models that fail to reach 100% MNIST training accuracy, such as models that diverge (in the large learning rate regime), and models with excessively small learning rates, which fail to solve the task, even after long training periods. We observe that test error is strongly correlated with the size of RIG after training (Figure A.6). We also confirm this for a range of batch sizes, including full batch gradient descent (Figure A.6, top right) with nl = 400, and for SGD with batch size 32 across a range of learning rates and network sizes (Figure A.6, bottom right).\nFinally, to explore IGR and EGR for a larger model we trained a ResNet-18 to classify CIFAR-10 images using Haiku (Hennigan et al., 2020). We used stochastic gradient descent for the training with a batch size of 512 for a range of learning rates l ∈ {0.005, 0.01, 0.05, 0.1, 0.2}. We observe the same behaviour as in the MNIST experiments: as the learning rate increases, the values of RIG decrease (Prediction 2.1), the test accuracy increases (Prediction 2.3) and the optimization paths follow shallower slopes leading to broader minima (Prediction 2.2). The experimental results are summarized by the training curves displayed in Figure A.7 and in Figure A.8, where we plot the relation between learning rate, RIG, and test accuracy taken at the time of maximum test accuracy for each training curve." } ]
2021
IMPLICIT GRADIENT REGULARIZATION
SP:50fe6a0cf9b00e462adff4c4273b2604546b4023
[ "This paper develops a MLR based on hyperbolic geometry. The idea is based on well-known concept of horocycle and horospheres which are known to be hyperbolic counterpart of line and plane in Euclidean geometry (see Coxter). Then the authors show the universal approximation which kind of follows similarly from the Euclidean counterpart. In fact we can probably conject that this universal approximation holds for any manifolds with constant sectional curvature.", "This paper proposes new neural models for hyperbolic space, which unlike previous hyperbolic NN works, relies on the notion of horocycle in the Poincare disk. This novel framework has connections to spectral learnig in hyperbolic space. Representation theorems alla Cybenko for layers constructed from these neurons are presented. Finally, various experiments on clustering and classifying datasets using these neurons to generate hyperbolic embeddings are presented. " ]
We use the hyperbolic Poisson kernel to construct the horocycle neuron model on hyperbolic spaces, which is a spectral generalization of the classical neuron model. We prove a universal approximation theorem for horocycle neurons. As a corollary, we obtain a state-of-the-art result on the expressivity of f^1_{a,p}, which is used in the hyperbolic multiple linear regression. Our experiments achieve state-of-the-art results on the Poincaré-embedding subtree classification task and on the classification accuracy of the two-dimensional visualization of images.
[]
[ { "authors": [ "P.-A. Absil", "R. Mahony", "R. Sepulchre" ], "title": "Optimization Algorithms on Matrix Manifolds", "venue": null, "year": 2008 }, { "authors": [ "G. Alanis-Lobato", "G. Mier", "M.A. Andrade-Navarro" ], "title": "Efficient embedding of complex networks to hyperbolic space via their Laplacian", "venue": "Scientific Reports,", "year": 2016 }, { "authors": [ "V.I. Arnold" ], "title": "On functions of three variables", "venue": "In Collected Works. Vladimir I.Arnold-Collected Works. Springer,", "year": 2009 }, { "authors": [ "I. Balazevic", "C. Allen", "T. Hospedales" ], "title": "Multi-relational Poincaré graph embeddings", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "M. Belkin", "N. Partha" ], "title": "Laplacian eigenmaps and spectral techniques for embedding and clustering", "venue": "Advances in Neural Information Processing Systems", "year": 2002 }, { "authors": [ "Y. Bengio", "A. Courville", "P. Vincent" ], "title": "Representation learning: A review and new perspectives", "venue": "IEEE Trans. Pattern Anal. Mach. Intell.,", "year": 2013 }, { "authors": [ "S. Bonnabel" ], "title": "Stochastic gradient descent on Riemannian manifolds", "venue": "IEEE Transactions on Automatic Control,", "year": 2013 }, { "authors": [ "R. Bonola" ], "title": "Non-Euclidean Geometry", "venue": "Dover Books on Mathematics. Dover Publications,", "year": 2012 }, { "authors": [ "M. Bridson", "A. Haefliger" ], "title": "Metric Spaces of Non-Positive Curvature, volume 319", "venue": "ISBN 978-3-642-08399-0", "year": 2009 }, { "authors": [ "M.M. Bronstein", "J. Bruna", "Y. LeCun", "A. Szlam", "P. Vandergheynst" ], "title": "Geometric deep learning: Going beyond Euclidean data", "venue": "IEEE Signal Processing Magazine,", "year": 2017 }, { "authors": [ "J. Bruna", "W. Zaremba", "A. Szlam", "Y. LeCun" ], "title": "Spectral networks and locally connected networks on graphs", "venue": "In International Conference on Learning Representations (ICLR2014),", "year": 2014 }, { "authors": [ "S. Carroll", "B. Dickinson" ], "title": "Construction of neural nets using the Radon transform", "venue": "Joint Conference on Neural Networks,", "year": 1989 }, { "authors": [ "B.P. Chamberlain", "J.R. Clough", "M.P. Deisenroth" ], "title": "Hybed: Hyperbolic neural graph embedding, 2018", "venue": null, "year": 2018 }, { "authors": [ "I. Chami", "Z. Ying", "C. Ré", "J. Leskovec" ], "title": "Hyperbolic graph convolutional neural networks", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "H. Cho", "B. DeMeo", "J. Peng", "B. Berger" ], "title": "Large-margin classification in hyperbolic space", "venue": "Proceedings of Machine Learning Research,", "year": 2019 }, { "authors": [ "D.-A. Clevert", "T. Unterthiner", "S. Hochreiter" ], "title": "Fast and accurate deep network learning by exponential linear units (ELUs)", "venue": "In Proceedings of the International Conference on Learning Representations(ICLR", "year": 2016 }, { "authors": [ "T.S. Cohen", "M. Geiger", "J. Köhler", "M. Welling" ], "title": "Spherical CNNs", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "G. Cybenko" ], "title": "Approximation by superpositions of a sigmoidal function", "venue": "Mathematics of Control, Signals and Systems,", "year": 1989 }, { "authors": [ "C. Dugas", "Y. Bengio", "F. Bélisle", "C. Nadeau", "R. 
Garcia" ], "title": "Incorporating second-order functional knowledge for better option pricing", "venue": "Advances in Neural Information Processing Systems", "year": 2001 }, { "authors": [ "M. Einsiedler", "T. Ward" ], "title": "Functional Analysis, Spectral Theory, and Applications, volume 276 of Graduate Texts in Mathematics", "venue": "ISBN 978-3-31958539-0", "year": 2017 }, { "authors": [ "K.-I. Funahashi" ], "title": "On the approximate realization of continuous mappings by neural networks", "venue": "Neural Networks,", "year": 1989 }, { "authors": [ "O.-E. Ganea", "G. Becigneul", "T. Hofmann" ], "title": "Hyperbolic neural networks", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "O.-E. Ganea", "G. Becigneul", "T. Hofmann" ], "title": "Hyperbolic entailment cones for learning hierarchical embeddings", "venue": "Proceedings of Machine Learning Research,", "year": 2018 }, { "authors": [ "T. Ghosh", "M. Kirby" ], "title": "Supervised dimensionality reduction and visualization using centroid-encoder", "venue": null, "year": 2020 }, { "authors": [ "F. Girosi", "T. Poggio" ], "title": "Representation properties of networks: Kolmogorov’s theorem is irrelevant", "venue": "Neural Computation - NECO, 1:465–469,", "year": 1989 }, { "authors": [ "D. Grattarola", "L. Livi", "C. Alippi" ], "title": "Adversarial autoencoders with constant-curvature latent manifolds", "venue": "Appl. Soft Comput.,", "year": 2019 }, { "authors": [ "C. Gulcehre", "M. Denil", "M. Malinowski", "A. Razavi", "R. Pascanu", "K.M. Hermann", "P. Battaglia", "V. Bapst", "D. Raposo", "A. Santoro", "N. de Freitas" ], "title": "Hyperbolic attention networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "K. He", "X. Zhang", "S. Ren", "J. Sun" ], "title": "Deep residual learning for image recognition", "venue": "In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "M. Hein", "M. Andriushchenko", "J. Bitterwolf" ], "title": "Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "S. Helgason" ], "title": "A duality for symmetric spaces with applications to group representations", "venue": "Adv. Math.,", "year": 1970 }, { "authors": [ "S. Helgason" ], "title": "Groups and Geometric Analysis: Integral Geometry, Invariant Differential Operators, and Spherical Functions. Mathematical surveys and monographs", "venue": "American Mathematical Society,", "year": 2000 }, { "authors": [ "G.E. Hinton", "L. Deng", "D. Yu", "G.E. Dahl", "A. Mohamed", "N. Jaitly", "A. Senior", "V. Vanhoucke", "P. Nguyen", "T.N. Sainath", "B. Kingsbury" ], "title": "Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups", "venue": "IEEE Signal Processing Magazine,", "year": 2012 }, { "authors": [ "P. Hislop" ], "title": "The geometry and spectra of hyperbolic manifolds", "venue": "Proceedings of the Indian Academy of Sciences - Mathematical Sciences,", "year": 1994 }, { "authors": [ "K. Hornik", "M. Stinchcombe", "H. White" ], "title": "Multilayer feedforward networks are universal approximators", "venue": "Neural Networks,", "year": 1989 }, { "authors": [ "S. Ioffe", "C. 
Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "In Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37,", "year": 2015 }, { "authors": [ "V. Khrulkov", "L. Mirvakhabova", "E. Ustinova", "I. Oseledets", "V. Lempitsky" ], "title": "Hyperbolic image embeddings", "venue": "In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "D.P. Kingma", "J. Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "A. Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": "University of Toronto,", "year": 2012 }, { "authors": [ "M. Law", "R. Liao", "J. Snell", "R. Zemel" ], "title": "Lorentzian distance learning for hyperbolic representations", "venue": "Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Y. LeCun", "L. Bottou", "Y. Bengio", "P. Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "M. Leshno", "V. Ya. Lin", "A. Pinkus", "S. Schocken" ], "title": "Multilayer feedforward networks with a nonpolynomial activation function can approximate any function", "venue": "Neural Networks,", "year": 1993 }, { "authors": [ "Q. Liu", "M. Nickel", "D. Kiela" ], "title": "Hyperbolic graph neural networks", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "K. Minemura M. Hashizume", "A. Kowata", "K. Okamoto" ], "title": "An integral representation of an eigenfunction of the Laplacian on the Euclidean space", "venue": "Hiroshima Math. J.,", "year": 1972 }, { "authors": [ "E. Mathieu", "C. Le Lan", "C.J. Maddison", "R. Tomioka", "Y.W. Teh" ], "title": "Continuous hierarchical representations with Poincaré variational auto-encoders", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "L. McInnes", "J. Healy", "J. Melville" ], "title": "UMAP: Uniform manifold approximation and projection for dimension reduction, 2020", "venue": null, "year": 2020 }, { "authors": [ "P. Mettes", "E. van der Pol", "C. Snoek" ], "title": "Hyperspherical prototype networks", "venue": "Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "H. Mhaskar", "T.A. Poggio" ], "title": "Deep vs. shallow networks : An approximation theory perspective", "venue": "ArXiv,", "year": 2016 }, { "authors": [ "F. Monti", "D. Boscaini", "J. Masci", "E. Rodolà", "J. Svoboda", "M.M. Bronstein" ], "title": "Geometric deep learning on graphs and manifolds using mixture model CNNs", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Y. Nagano", "S. Yamaguchi", "Y. Fujita", "M. Koyama" ], "title": "A wrapped normal distribution on hyperbolic space for gradient-based learning", "venue": null, "year": 2019 }, { "authors": [ "V. Nair", "G.E. Hinton" ], "title": "Rectified linear units improve restricted Boltzmann machines", "venue": "In Proceedings of the 27th International Conference on International Conference on Machine Learning,", "year": 2010 }, { "authors": [ "A.Y. Ng", "M.I. Jordan", "Y. 
Weiss" ], "title": "On spectral clustering: Analysis and an algorithm", "venue": "Advances in Neural Information Processing Systems", "year": 2002 }, { "authors": [ "M. Nickel", "D. Kiela" ], "title": "Poincaré embeddings for learning hierarchical representations", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "M. Nickel", "D. Kiela" ], "title": "Learning continuous hierarchies in the Lorentz model of hyperbolic geometry", "venue": "In ICML,", "year": 2018 }, { "authors": [ "R.H. Nielsen" ], "title": "Kolmogorov’s mapping neural network existence theorem", "venue": "In Proceedings of the IEEE First International Conference on Neural Networks (San Diego, CA),", "year": 1987 }, { "authors": [ "J. Ontrup", "H. Ritter" ], "title": "A hierarchically growing hyperbolic self-organizing map for rapid structuring of large data", "venue": null, "year": 2005 }, { "authors": [ "F. Pedregosa", "G. Varoquaux", "A. Gramfort", "V. Michel", "B. Thirion", "O. Grisel", "M. Blondel", "P. Prettenhofer", "R. Weiss", "V. Dubourg", "J. Vanderplas", "A. Passos", "D. Cournapeau", "M. Brucher", "M. Perrot", "E. Duchesnay" ], "title": "Scikit-learn: Machine learning in Python", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "D. Pfau", "S. Petersen", "A. Agarwal", "D.G.T. Barrett", "K.L. Stachenfeld" ], "title": "Spectral inference networks: Unifying deep and spectral learning", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "F. Sala", "C. De Sa", "A. Gu", "C. Re" ], "title": "Representation tradeoffs for hyperbolic embeddings", "venue": "Proceedings of Machine Learning Research,", "year": 2018 }, { "authors": [ "J. Shi", "J. Malik" ], "title": "Normalized cuts and image segmentation", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2000 }, { "authors": [ "J. Snell", "K. Swersky", "R.S. Zemel" ], "title": "Prototypical networks for few-shot learning", "venue": "CoRR, abs/1703.05175,", "year": 2017 }, { "authors": [ "P.-N. Tan", "M. Steinbach", "V. Kumar" ], "title": "Introduction to Data Mining", "venue": null, "year": 2005 }, { "authors": [ "A. Tifrea", "G. Becigneul", "O.-E. Ganea" ], "title": "Poincare glove: Hyperbolic word embeddings", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "A.A. Ungar" ], "title": "A gyrovector space approach to hyperbolic geometry", "venue": "Synthesis Lectures on Mathematics and Statistics,", "year": 2008 }, { "authors": [ "Y. Weiss", "A. Torralba", "R. Fergus" ], "title": "Spectral hashing", "venue": "Advances in Neural Information Processing Systems", "year": 2009 }, { "authors": [ "H. Xiao", "K. Rasul", "R. Vollgraf" ], "title": "Fashion-mnist: A novel image dataset for benchmarking machine learning algorithms, 2017", "venue": null, "year": 2017 }, { "authors": [ "H. Yang", "X.-Y. Zhang", "F. Yin", "C. Liu" ], "title": "Robust classification with convolutional prototype learning", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "T. Yu", "C. 
De Sa" ], "title": "Numerically accurate hyperbolic embeddings using tiling-based models", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Poggio" ], "title": "1989)); 2, desired representation properties of hyperbolic eigenfunctions are unknown, partially because H is non-compact; 3, results in spectral theory favor Hilbert spaces, while universal approximation theorems embrace more than L2", "venue": null, "year": 1989 }, { "authors": [ "LeCun" ], "title": "1998)(C.3), where 40 PCA is used for the quadratic network. Quadratic network has a similar structure to ours, because our neuron are contructed by quotient of quadratic functions followed by log", "venue": null, "year": 1998 } ]
[ { "heading": "1 INTRODUCTION", "text": "Conventional deep network techniques attempt to use architecture based on compositions of simple functions to learn representations of Euclidean data (LeCun et al., 2015). They have achieved remarkable successes in a wide range of applications (Hinton et al., 2012; He et al., 2016). Geometric deep learning, a niche field that has caught the attention of many authors, attempts to generalize conventional learning techniques to non-Euclidean spaces (Bronstein et al., 2017; Monti et al., 2017).\nThere has been growing interest in using hyperbolic spaces in machine learning tasks because they are well-suited for tree-like data representation (Ontrup & Ritter, 2005; Alanis-Lobato et al., 2016; Nickel & Kiela, 2017; Chamberlain et al., 2018; Nickel & Kiela, 2018; Sala et al., 2018; Ganea et al., 2018b; Tifrea et al., 2019; Chami et al., 2019; Liu et al., 2019; Balazevic et al., 2019; Yu & Sa, 2019; Gulcehre et al., 2019; Law et al., 2019). Many authors have introduced hyperbolic analogs of classical learning tools (Ganea et al., 2018a; Cho et al., 2019; Nagano et al., 2019; Grattarola et al., 2019; Mathieu et al., 2019; Ovinnikov, 2020; Khrulkov et al., 2020; Shimizu et al., 2020).\nSpectral methods are successful in machine learning, from nonlinear dimensionality reduction (Belkin & Partha, 2002) to clustering (Shi & Malik, 2000; Ng et al., 2002) to hashing (Weiss et al., 2009) to graph CNNs (Bruna et al., 2014) to spherical CNNs (Cohen et al., 2018) and to inference networks (Pfau et al., 2019). Spectral methods have been applied to learning tasks on spheres (Cohen et al., 2018) and graphs (Bruna et al., 2014), but not yet on hyperbolic spaces. This paper studies a spectral generalization of the FC (affine) layer on hyperbolic spaces.\nBefore presenting the spectral generalization of the affine layer, we introduce some notations. Let (·, ·)E be the inner product, | · | the Euclidean norm, and ρ an activation function. The Poincaré ball model of the hyperbolic space Hn(n≥2) is a manifold {x∈Rn : |x|<1} equipped with a Riemannian metric ds2Hn= ∑n i=1 4(1−|x|\n2)−2dx2i . The boundary of Hn under its canonical embedding in Rn is the unit sphere Sn−1. The classical neuron y=ρ((x,w)E+b) is of input x∈Rn, output y∈R, with trainable parameters w∈Rn, b∈R. An affine layer Rn → Rm is a concatenation of m neurons. An alternative representation of the neuron x 7→ρ((x,w)E+b) is given by 1\nx∈Rn 7→ ρ(λ(x, ω)E+b), ω∈Sn−1, λ, b∈R. (1) This neuron is constant over any hyperplane that is perpendicular to a fixed direction ω. In Hn, a horocycle is a n−1 dimensional sphere (one point deleted) that is tangential to Sn−1. Horocycles are hyperbolic counterparts of hyperplanes (Bonola, 2012). Horocyclic waves 〈x, ω〉H := 12 log 1−|x|2 |x−ω|2 are constant over any horocycle that is tangential to Sn−1 at ω. Therefore,\nx∈Hn 7→ ρ(λ〈x, ω〉H+b), ω∈Sn−1, λ, b∈R (2) 1if w 6= (0, . . . , 0), one can take ω = w/|w|, λ = |w|; else, one can take λ = 0 and any ω ∈ Sn−1.\ngeneralizes the classical neuron model (1), and a concatenation of finitely many (2) generalizes the FC (affine) layer. We call (2) a horocycle neuron. Figure 1 (middle) is an example on H2.\nThe neuron models in (1, 2) are related to spectral theory because (·, ω)E (respectively 〈·, ω〉H ) are building blocks of the Euclidean (respectively hyperbolic) Laplacian eigenspace. Moreover, many L2 spaces have a basis given by Laplacian eigenfunctions (Einsiedler & Ward, 2017). 
This article contributes to hyperbolic learning. We first apply spectral methods, such as the horocycle, to hyperbolic deep learning. We prove results on the expressivity of horocycle neurons (2) and of f^1_{a,p} (3). With horocycle neurons, we obtain state-of-the-art results on the Poincaré-embedding subtree classification task and on the classification accuracy of the 2-D visualization of images in the experiment." }, { "heading": "2 RELATED WORK", "text": "Universal approximation There is a vast literature on universal approximation (Cybenko, 1989; Hornik et al., 1989; Funahashi, 1989; Leshno et al., 1993). Cybenko (1989)'s existential approach uses the Hahn-Banach theorem and the Fourier transform of Radon measures. To prove Theorem 2, we also use the Hahn-Banach theorem, and additionally an integral formula (7) and an injectivity theorem of Helgason (Theorem 1). Generalizing integral formulas and injectivity theorems is easier than generalizing the Fourier transform of Radon measures on most non-Euclidean spaces. Carroll & Dickinson (1989) use the inverse Radon transform to prove universal approximation theorems. This method relates to ours, as injectivity theorems are akin to inverse Radon transforms. However, using the injectivity theorem is an existential approach, while using the inverse Radon transform is a constructive one.

Spectral methods Spectral methods in Bronstein et al. (2017); Bruna et al. (2014); Cohen et al. (2018) use a basis of L^2(X) given by eigenfunctions, where X is a finite graph or the sphere. Because L^2(H^n) has no basis of eigenfunctions, our approach is different from theirs.

Hyperbolic deep learning One part of hyperbolic learning concerns embedding data into hyperbolic space (Nickel & Kiela, 2017; Sala et al., 2018). Another part concerns learning architectures with hyperbolic data as the input (Ganea et al., 2018a; Cho et al., 2019). Ganea et al. (2018a) propose two ways to generalize the affine layer on hyperbolic spaces: one by replacing the linear and bias parts of an affine map with (25, 26) of their paper; another by using a concatenation of f^1_{a,p} in their hyperbolic multiple linear regression (MLR). The latter seems more relevant to ours. A level set of f^1_{a,p} is a hypercycle that keeps the same distance to a chosen geodesic hypersurface, while a level set of a horocycle neuron is a horocycle that keeps the same “spectral” distance to an ideal point at infinity. Based on functions similar to f^1_{a,p}, Mathieu et al. (2019) and Shimizu et al. (2020) build the gyroplane layer and the Poincaré FC layer. Ganea et al. (2018a); Cho et al. (2019) take geodesics as decision hyperplanes, while we (initially) take horocycles. We shall construct the horocycle multiple linear regression (MLR), where the decision hypersurfaces are geodesics. The geodesic decision hyperplanes of Ganea et al. (2018a); Cho et al. (2019) and the geodesic decision hypersurfaces here arise from different methods. Khrulkov et al. (2020) investigate hyperbolic image embedding, where the prototypes (or models) of each class are center-based. We study a different kind of prototype, which we shall call end-based."
A level set of f1a,p is a hypercycle that has the same distance to a chosen geodesic hypersurface, while a level set of a horocycle neuron is a horocycle that has the same “spectral” distance to an ideal point at infinity. Based on functions similar to f1a,p, Mathieu et al. (2019); Shimizu et al. (2020) build the gyroplane layer and Poincaré FC layer. Ganea et al. (2018a); Cho et al. (2019) take geodesics as decision hyperplanes, while we (initially) take horocycles. We shall construct the horocycle multiple linear regression (MLR), where decision hypersurfaces are geodesics. Geodesics decision hyperplanes (Ganea et al., 2018a; Cho et al., 2019) and geodesic decision hypersurfaces here arise from different methods. Khrulkov et al. (2020) investigates hyperbolic image embedding, where prototypes (or models) of each class are center-based. We study a different one, and we shall call our prototypes end-based." }, { "heading": "3 HYPERBOLIC SPACES", "text": "This section reviews facts from hyperbolic geometry that are used in the proof of Theorem 2. For the reader who is not interested in the proof, (4) is enough for the implementation.\nHyperbolic metric We use the Poincaré model. The hyperbolic space Hn(n≥2) is the manifold {x∈Rn : |x|<1} equipped with a Riemannian metric ds2 = ∑n i=1 4(1−|x|2)−2dx2i . Let o be the origin of Hn. The distance function dHn satisfies dHn(o, x)=2 arctanh |x|.\nGeodesics, horocycles and corresponding points Geodesics in Hn are precisely circular arcs that are orthogonal to Sn−1. Horocycles in Hn are precisely (n−1)-dimensional spheres that are tangential to Sn−1 (Helgason, 1970). Horocycles are hyperbolic analogs of hyperplanes. Figure 2 illustrates geodesics and horocycles on H2. Hyperbolic Poisson kernel The Poisson kernel for Hn is P (x, ω)= (\n1−|x|2 |x−ω|2\n)n−1 , where\nx∈Hn, ω∈Sn−1 (Helgason (1970)[p.108]). The function 〈·, ω〉H defined by\n〈x, ω〉H = 1 2(n− 1) logP (·, ω) = 1 2 log\n1− |x|2 |x− ω|2 (4)\nis constant over any horocycle that is tangential to Sn−1 at ω (Figure 1 (middle), (6)).\nRiemannian volume The Riemannian volume induced by the metric ds2 on Hn is dVol = 2n(1− |x|2)−ndx1 . . . dxn. (5)\nHorocycles Let Ξ be the set of horocycles of Hn, and let Ξω be the set of all horocycles that are tangential to Sn−1 at ω. Given λ∈R, we let ξλ,ω be the unique horocycle that connects ω and tanh (λ/2) · ω. We have Ξω = ∪λ∈R{ξλ,ω} and Ξ = ∪ω∈Sn−1Ξω. The length of any geodesic (that ends at ω) line segment cut by ξλ1,ω and ξλ2,ω equals |λ1 − λ2| (A.2). Therefore |λ1 − λ2| is a natural distance function defined on Ξω, and the map λ→ ξλ,ω is an isometry between R and Ξω. This isometry is closely related to 〈·, ω〉H (A.3): for any x ∈ ξλ,ω , 〈x, ω〉H = λ/2. (6) The annoying /2 in (6) is a tradeoff that the metric here is different from that in Helgason (2000).\nIntegral formula For fixed ω ∈ Sn−1, Hn=∪λ∈Rξλ,ω. Let dVolξλ,ω be the measure induced by ds2 on ξλ,ω . Let L be a family of geodesics that end at ω, δ > 0, and U=L ∩ (∪λ≤α≤λ+δξα,ω). For l ∈ L, dH(l ∩ ξλ,ω, l ∩ ξλ+δ,ω)=δ (A.2), hence dVol(U) = δ · dVolξλ,ω (U ∩ ξλ,ω) and therefore∫\nHn f(x)dVol(x) = ∫ R (∫ ξλ,ω f(z)dVolξλ,ω (z) ) dλ. (7)\nThe above proof (for Hn) is essentially the same as that in (Helgason, 2000)[p.37] (for H2). To further convince the reader that (7) holds for all n, we give another simple proof in A.4.\nInjectivity theorem With respect to the canonical measure on Ξ, Helgason (1970)[p.13] proved Theorem 1 (Helgason). 
If f ∈ L1(Hn) and ∫ ξ f(z)dVolξ(z) = 0 for a.e ξ ∈ Ξ, then f = 0 a.e..\nTheorem 1 demonstrates that if the integral of f ∈ L1(Hn) over almost every horocycle is zero then f is also zero. This theorem and the integral formula (7) are essential for the proof of Theorem 2." }, { "heading": "4 LEARNING ARCHITECTURES AND EIGENFUNCTIONS OF THE LAPLACIAN", "text": "In this section, we discuss a heuristic connection between the representation properties of eigenfunctions and classical neurons, and then we define some horocycle-related learning tools." }, { "heading": "4.1 EIGENSPACES AND NEURON MODELS", "text": "On a Riemannian manifold X , the Laplace-Beltrami LX is the divergence of the gradient, and it has a well-known representation property (Einsiedler & Ward, 2017): if X is a compact Riemannian manifold or bounded domain in Rn, then L2(X) has a basis given by eigenfunctions. This statement is false if X is Rn or Hn (Hislop, 1994).\nEigenspaces of on Rn and Hn Our work is motivated by the theory of eigenspaces, in which Euclidean (respectively hyperbolic) eigenfunctions are obtained from (x, ω)E (respectively 〈x, ω〉H ) by some kind of superposition. For example, all smooth eigenfunctions of LRn are precisely the functions (M. Hashizume & Okamoto, 1972)[p.543]\nf(x) = ∫ Sn−1 eλ(x,ω)EdT (ω), (8)\nand eigenfunctions of LHn are precisely the functions (Helgason, 1970)[Theorem 1.7, p.139]\nf(x) = ∫ Sn−1 eλ〈x,ω〉HdT (ω), (9)\nwhere T in (8) and (9) are some technical linear forms of suitable functional spaces on Sn−1.\nNeuron models By (8) and (1), Euclidean eigenfunctions (respectively classical neurons) are superpositions of (·, ω)E and exp (respectively ρ), with homogeneity and additivity. By (9) and (2), hyperbolic eigenfunctions (respectively horocycle neurons) are superpositions of 〈·, ω〉H and exp (respectively ρ). The representation property of eigenfunctions on compact manifolds and bounded domains suggests that the universal approximation property is likely to hold for networks constructed by (·, ω)E or 〈·, ω〉H . However, this heuristic is not proof (A.5)." }, { "heading": "4.2 HOROCYCLE BASED LEARNING ARCHITECTURES", "text": "Horocycle neuron In the implementation of the horocycle neuron (2), we take 1 2\nlog (\n1−|x|2 |x−ω|2+ + ) for 〈x, ω〉H , where is a small constant to ensure numerical stability. For updating ω, we use the sphere optimization algorithm (Absil et al., 2008; Bonnabel, 2013) (A.6).\nHorocycle feature and horocycle decision hypersurface Given a non-origin point x ∈ Hn, for y ∈ Hn we define hx(y) = 〈y, x/|x|〉H and call it the horocycle feature attached to x. This feature is useful in the Poincaré embedding subtree classification task (see the experiment and Figure 3[left]). The horocycle is the hyperbolic analog of the Euclidean hyperplane, and therefore it could be a possible choice of decision hypersurface, which may arise from a level set of a horocycle feature.\nEnd-based clustering and end prototype Natural clustering is a topic in representation learning (Bengio et al., 2013), and the common prototype-based clusters are center-based (Tan et al., 2005). We propose a type of clustering that embeds high-dimensional data in Hn and places prototypes in Sn−1. Figure 3[right] is an example for n = 2. For ω ∈ Sn−1 and any b ∈ R, the function x ∈ Hn 7→ − log ( 1−|x|2 |x−ω|2 ) + b measures the relative distance of Hn from ω in Gromov’s bordification theory (Bridson & Haefliger (2009)[II.8], A.18). 
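As a companion to the implementation note above, here is a sketch of the ε-stabilized horocycle computation and of the horocycle feature hx; the value of ε is left unspecified at this point of the paper, so 1e-6 is an assumption.

```python
import numpy as np

EPS = 1e-6  # stabilizing constant; its value is an assumption

def stable_horocycle(x, omega, eps=EPS):
    # stabilized <x, omega>_H: 0.5 * log((1 - |x|^2) / (|x - omega|^2 + eps) + eps)
    return 0.5 * np.log((1.0 - np.dot(x, x)) / (np.dot(x - omega, x - omega) + eps) + eps)

def horocycle_feature(y, x, eps=EPS):
    # h_x(y) = <y, x/|x|>_H, the horocycle feature attached to a non-origin x in H^n
    return stable_horocycle(y, x / np.linalg.norm(x), eps)
```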
Moreover, we define Dist : Hn ×Sn−1 ×R→ R by\nDist(x, ω, b) = − log ( 1− |x|2\n|x− ω|2\n) + b = −2〈x, ω〉H + b. (10)\nIt is a relative distance function, and this is why Dist may assume negative values and why there is a bias term b in (10). Consider classes Cls = {C1, C2, . . . , CM} and labeled training examples {(X1, Y 1), . . . , (XN , Y N )}, where Xi ∈ RD are D-dimensional input features and Y i ∈ {1, 2, . . . ,M}. Each example Xi belongs to the class CY i . In light of (10), our goal is to find a neural network NNθ : RD → Hn that is parameterized by θ, prototypes ω1, . . . , ωM ∈ Sn−1, and real numbers b1, . . . , bM ∈ R such that\n# { 1≤i≤N : Y i = arg min\n1≤j≤M\n( Dist(NNθ(X i), ωj , bj) )}\nN (11)\nis maximized. We call {NNθ(Xj) : 1 ≤ j ≤ N} the end-based clustering and ωi end prototypes (in hyperbolic geometry, the end is an equivalence class of parallel lines in Figure 2[left]). In experiments, we take NNθ = Exp ◦ NN′θ, where NN ′ θ : R\nD → Rn is a standard neural network parameterized by θ and Exp : Rn → Hn is the exponential map of the hyperbolic space.\nHorocycle layer, horocycle multiple linear regression (MLR) and geodesic decision hypersurfaces We call a concatenation of (2) a horocycle layer, and we shall carefully describe a prototypical learning framework for end-based clusterings. Using the same notions as in the previous paragraph, the classification task has M classes, and NNθ = Exp ◦NN′θ : RD → Hn is a deep network. For prototypes ω1, . . . , ωM ∈ Sn−1, real numbers b1, . . . , bM ∈ R, and any exampleX , our feedforward for prediction will be\nx = NNθ(X), (Feature descriptor) SCj(X) = −Dist(x, ωj , bj), (Scores; Similarity)\nX ∈ Cargmax 1≤j≤M (SCj(X)). (Classifier)\nThe goal is to maximize the accuracy (11), and then we need a loss function for the backpropagation. Following the convention of prototypical networks (Snell et al., 2017; Yang et al., 2018), we choose an increasing function ρ (in our experiments, ρ(x) = x or ρ = tanh. 2) and let the distribution over classes for an input X (with label Y ) be\npθ(Y = Cj |X) ∝ e−ρ(Dist(NNθ(X),mj ,bj)) = e−ρ(−SCj(X)). 2One often takes ρ(x) = x2 in metric learning, which is improper here because Dist(x) could be negative.\nTherefore, given a batch of training examples, the loss function is L = − ∑\n(Xj ,Y j)∈Batch log pθ(Y = CY j |Xj) #Batch . (12)\nThe training proceeds by minimizing L, and we call this framework a horocycle MLR. The set of parameters of the framework is {θ} ∪ {ω1, . . . , ωM} ∪ {b1, . . . , bM}. It is worth mentioning that decision boundaries of the horocycle MLR are geodesics, which follows from\nSCi(X)=SCj(X)⇐⇒ log ( 1−|x|2\n|x−ωi|2\n) −bi = log ( 1−|x|2\n|x−ωj |2\n) −bj ⇐⇒\n|x−ωi| |x−ωj | = e bj−bi 2\nand the theorem of Apollonian circles (A.7).\nPoisson neuron and Poisson multiple linear regression (MLR) Although 〈x, ω〉H (4) is wellmotivated by the theory of eigenspaces (9) and fits naturally into metric learning (see 10 or also Corollary 1), it is only defined on Hn. Some readers might not be convinced that the neuron has to be defined on hyperbolic spaces. Therefore, we try to remove the log in (4) and define the Poisson neuron model by Pρw,λ,b(x) = ρ ( λ |w| 2−|x|2 |x−w|2 + b ) for w ∈ Rn, λ, b ∈ R, which is well-defined on Rn\\{w}. Notice that if |x| < |w| then |w| 2−|x|2 |x−w|2 = e 2〈x/|w|,w/|w|〉H . In A.8, Figure 7 illustrates an example of a Poisson neuron on R2. In the implementation, we take |w| 2−|x|2\n|x−w|2+ for |w|2−|x|2 |x−w|2 , where\nis a small constant for numerical stability. 
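Before writing down the loss, a minimal sketch of the relative distance (10) and of the arg-min classifier of (11) may be useful; the shapes and inputs are illustrative.

```python
import numpy as np

def dist_rel(x, omega, b):
    # Eq. (10): Dist(x, omega, b) = -log((1 - |x|^2) / |x - omega|^2) + b = -2 <x, omega>_H + b
    return -np.log((1.0 - np.dot(x, x)) / np.dot(x - omega, x - omega)) + b

def predict(x, prototypes, biases):
    # the classifier of (11): assign x to the end prototype with smallest relative distance
    d = [dist_rel(x, w, b) for w, b in zip(prototypes, biases)]
    return int(np.argmin(d))
```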
We call a concatenation of Poisson neurons a Poisson layer, and we use it with a deep neural network NNθ : RD → Rn to construct the Poisson MLR, which is similar to the horocycle MLR. Let w1, . . . , wM ∈ Rn and b1, . . . , bM ∈ R, the feedforward for prediction of our framework is\nx = NNθ(X), SCj(X) = BatchNorm(P ρ wj ,−1,bj (x)), X ∈ Cargmax 1≤j≤M (SCj(X)). (13)\nWe let the pθ(Y = Cj |X) ∝ eSCj(X) and take (12) as the loss. This framework is called a Poisson MLR. We use the usual optimization algorithms to update parameters in the Poisson neuron. The BatchNorm(Ioffe & Szegedy, 2015) seems crucial for (13) in the experiment. Figure 4 illustrates that high-confidence prediction regions (deep red areas) of the Poisson MLR are compact sets, in contrast to classical classifiers Hein et al. (2019)[Theorem 3.1]. We shall use this figure to explain an experiment in Section 6.4." }, { "heading": "5 REPRESENTATIONAL POWER", "text": "In this section, ρ is a continuous sigmoidal function (Cybenko, 1989), ReLU(Nair & Hinton, 2010), ELU(Clevert et al., 2016), or Softplus(Dugas et al., 2001). We remind the reader that ρ is sigmoidal if lim t→∞ ρ(t) = 1 and lim t→−∞ ρ(t) = 0. The following theorem justifies the representational power of horocycle neurons. Theorem 2. Let K be a compact set in Hn, and 1≤p<∞. Then finite sums of the form\nF (x) = N∑ i=1 αiρ(λi〈x, ωi〉H+bi), ωi∈Sn−1, αi, λi, bi∈R (14)\nare dense in Lp(K,µ), where µ is either dVol (5) or the induced Euclidean volume.\nWe provide a sketch of the proof here and go through the details in A.9. It suffices to prove the theorem for a sigmoidal function ρ and µ = dVol , as other cases follow from this one. Assume that these finite sums are not dense in Lp(K, dVol). By the Hahn-Banach theorem, there exists some nonzero h∈Lq(K, dVol), where q=p/(p− 1) if p>1 and q=∞ if p=1, such that ∫ K F (x)h(x)dVol(x) = 0 for all finite sums of the form (14). Extend h to be a function H that is defined on Hn by assigning H(x)=h(x) if x∈K and H(x)=0 if x∈Hn\\K. Using the property of sigmoidal functions, the bounded convergence theorem, and the integral formula (7), we prove that the integration of H on almost every horocycle is zero. By the injectivity Theorem 1, H is almost everywhere zero, which contradicts our assumption and completes the proof.\nIn A.10, we shall prove the same result for Poisson neurons. In A.11, we prove the following lemma, which demonstrates a close relationship between horocycle neurons and the widely used f1a,p (3).\nLemma 1. Let K be a compact set in Hn, ω ∈ Sn−1, and > 0. There are c, d ∈ R, p ∈ Hn, and a ∈ Tp(Hn) such that the function D(x) = cf1a,p(x) + d− 〈x, ω〉H satisfies ||D||Lp(K,dVol) < .\nThis lemma suggests that 〈·, ω〉H is a boundary point of some “compactification” of the space of f1a,p. The above lemma together with Theorem 2 implies\nCorollary 1. Let K be a compact set in Hn and 1≤p<∞. Finite sums of the form\nF (x) = N∑ i=1 αiρ(cif 1 ai,pi(x) + di), pi ∈ H n, ai ∈ Tpi(H n), αi, ci, di ∈ R,\nare dense in Lp(K,µ), where µ = dVol or µ is the induced Euclidean volume.\nThis result provides novel insights into the hyperbolic neural network (Ganea et al., 2018a), gyroplane layer (Mathieu et al., 2019), and Poincaré FC layer (Shimizu et al., 2020). Although level sets of f1a,p are hypercycles, our proof of Lemma 1 relies on the theory of horocycles. It would be interesting to have more natural approaches to treat the expressivity of f1a,p." 
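Closing this section, and for parity with the horocycle sketches above, here is a minimal sketch of the Poisson neuron of Section 4.2, whose expressivity is treated in A.10; the value of ε and the identity ρ used for the raw scores are assumptions, and the BatchNorm of (13) is omitted.

```python
import numpy as np

def poisson_neuron(x, w, lam, b, rho=np.tanh, eps=1e-6):
    # P^rho_{w,lam,b}(x) = rho(lam * (|w|^2 - |x|^2) / (|x - w|^2 + eps) + b)
    ratio = (np.dot(w, w) - np.dot(x, x)) / (np.dot(x - w, x - w) + eps)
    return rho(lam * ratio + b)

def poisson_scores(x, ws, bs):
    # raw scores of (13), i.e. lam = -1; the BatchNorm on top is omitted in this sketch
    return np.array([poisson_neuron(x, w, -1.0, b, rho=lambda t: t) for w, b in zip(ws, bs)])
```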
}, { "heading": "6 EXPERIMENTS", "text": "In this section, we first play with the MNIST toy. Next, we apply a horocycle feature to the Poincaré embedding subtree classification task. After that, we construct 2-D clusterings of image datasets by using the horocycle MLR. Finally, we provide evidence for further possible applications of the Poisson MLR. We use the framework or some functions of Tensorflow, Keras, and scikit-learn (Abadi et al., 2015; Chollet et al., 2015; Pedregosa et al., 2011)." }, { "heading": "6.1 MNIST", "text": "The MNIST (LeCun et al., 1998) task is popular for testing hyperbolic learning tools (Ontrup & Ritter, 2005; Nagano et al., 2019; Mathieu et al., 2019; Grattarola et al., 2019; Ovinnikov, 2020; Khrulkov et al., 2020). We train two different classifiers. A.12, A.14, and code contain details. The first one is a single horocycle layer followed by the softmax classifier. The average test error rate after 600 epochs is 1.96%, and Theorem 2 provides the rationale for this experiment (A.13). The second one is a Poisson MLR. It is the best hyperbolic geometry related MNIST classifier (Table 1). In this table, Ontrup & Ritter (2005) uses the hyperbolic SOM, Grattarola et al. (2019) uses the adversarial autoencoder, and Khrulkov et al. (2020) uses the hyperbolic MLR. Our experiment performs well on MNIST suggests that horocycle and Poisson neurons are computationally efficient and easily coordinate with classical learning tools (such as the convolutional layer and the softmax)." }, { "heading": "6.2 POINCARÉ EMBEDDING SUBTREE CLASSIFICATION", "text": "Given a Poincaré embedding (Nickel & Kiela, 2017) PE : {WordNet noun} → HD of 82114 nouns and given a node x ∈ {WordNet noun}, the task is to classify all other nodes as being part of the subtree rooted at x (Ganea et al., 2018a). Our model is logistic regression, where the horocycle feature p ∈ {WordNet noun} 7→ hPE(x)(PE(p)/s) (s is a hyperparameter lying in [1, 1.5]) is the only predictor, and the dependent variable is whether p is in the subtree rooted at x. The decision hypersurface of this model is a horocycle, as illustrated in Figure 3 (left). In the experiment, we pre-train three different Poincaré embeddings3 in each of H2,H3,H5,H10. For each x ∈ {animal, group, location, mammal, worker} and D ∈ {2, 3, 5, 10}, we randomly select one of three pre-trained Poincaré embedding PE : {WordNet noun} → HD and then test the model. Table 2 reports the F1 classification scores and two standard deviations of 100 trials for each {x,D}. Different Poincaré embeddings account for the most variance of the performance. Our model is different from the existing ones. Firstly, we take the horocycle as the decision hypersurface, while others take the geodesic. Secondly, we train a logistic regression on top of the horocycle feature attached to PE(x), which is efficiently calculated, while others train the hyperbolic MLR with different parametrizations. On the number of parameters, we have three (independent of D), Ganea et al. (2018a) has 2D, and Shimizu et al. (2020) has D + 1. The number of parameters explains why our model is prominent in low dimensions." }, { "heading": "6.3 END-BASED CLUSTERING FOR 2D DIMENSION REDUCTION", "text": "In this experiment, we use the horocycle MLR (Section 4.2) to construct end-based clusterings NNθ : R\nD → H2 for MNIST, Fashion-MNIST(Xiao et al., 2017), and CIFAR-10(Krizhevsky, 2012). 
We take NNθ = Exp ◦ NN′θ, where Exp is the exponential map of H2 and NN ′ θ : R\nD → R2 is a network with four convolutional blocks for MNIST/Fashion-MNIST or a ResNet-32 structure for CIFAR-10. A.16 and code contain details.\n3https://github.com/dalab/hyperbolic_cones\nFigure 5 illustrates end-based clusterings for MNIST, Fashion-MNIST, and CIFAR-10, with performance reported in the caption. Our accuracy for Fashion-MNIST is 8% higher than all numbers presented in McInnes et al. (2020). Moreover, Table 3 compares the numbers of Yang et al. (2018); Ghosh & Kirby (2020), and ours for MNIST, and our methods are similar. We all use convolutional networks as the (Feature descriptor) and prototype-based functions as the loss. However, Yang et al. (2018); Ghosh & Kirby (2020) use the center-based prototype loss, while we use the end-based (12). Yang et al. (2018)[Figure 1] points out that the traditional CNN is good at linearly separating feature representations, but the learned features are of large intra-class variations. The horocycle MLR leads to the inter-class separability in the same way (angle accounts for label difference) a traditional CNN does. At the same time, it also obtains intra-class compactness (Figure 5)." }, { "heading": "6.4 POISSON MLR", "text": "Using a Poisson MLR whose feature descriptor is a ResNet-32 structure, we obtain a classifier with a test error rate of 6.46% on CIFAR-10. It is on par with other methods with similar network structures (Yang et al., 2018). Moreover, we apply Poisson MLR to the classification task of flowers (Tensorflow), which is a typical example of overfitting. Replacing the MLR part of the Keras model (Tensorflow) with a Poisson MLR, the new Poisson model shows better generalization performance (Figure 6). A.17 and code contain the details. This subsection provides evidence for further applications of horocycles." }, { "heading": "7 CONCLUSION", "text": "Based on the spectral theory of hyperbolic spaces, we introduce several horocycle-related learning tools. They find applications in the hyperbolic neural networks, the Poincaré embedding subtree classification task, and the visualization and classification of image datasets. We give an existential proof of a universal approximation theorem for shallow networks constructed by horocycle neurons or f1a,p. Hopefully, it will trigger further research on the expressivity problems, such as constructive approaches, quantitative results, and benefit of depth (Mhaskar & Poggio, 2016), on horocycle neurons, f1a,p, and similar functions on more general manifolds." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 NOTATIONS AND SYMBOLS", "text": "" }, { "heading": "Default Notations", "text": "Notation Description Related formula\nR The set of real numbers\nRn n dimensional Euclidean space x ∈ Rn, x = (x1, . . . , xn) (·, ·)E Euclidean inner product x ∈ Rn, y ∈ Rn, (x, y)E = ∑n i=1 xiyi 〈·, ·〉H Hyperbolic analogue of (·, ·)E x ∈ Hn, y ∈ Sn−1, 〈x, ω〉H = 12 log 1−|x|2 |x−ω|2\n| · | Euclidean norm x ∈ Rn, |x| = √ (x, x)E\nHn n dimensional hyperbolic space as a set, Hn = {x ∈ Rn : |x| < 1} Tp(X) Tangent space of X at p\nT (X) Tangent space of X T (X) = ∪p∈XTp(X) ds2Hn The canonical metric on Hn with curva-\nture -1 ds2Hn=\n∑n i=1 4(1−|x| 2)−2dx2i\ndVol Riemannian volume on Hn dVol = 2n(1− |x|2)−ndx1 . . . 
dxn Lp(K, dVol) Lp space Lp(K, dVol) = { f | ∫ K |f |pdVol <∞ } || · ||Lp(K,dVol) Lp norm f measurable on K, ||f ||Lp(K,dVol) = (∫ K |f |pdVol ) 1 p Sn−1 n− 1 dimensional sphere as a set, Sn−1 = {x ∈ Rn : |x| = 1}\nP (·, ·) Hyperbolic Poisson kernel x ∈ Hn, ω ∈ Sn−1, P (x, ω) = (\n1−|x|2 |x−ω|2 )n−1 f1a,p Model in the hyperbolic MLR f1a,p(x) = 2|a| 1−|p|2 sinh −1 ( 2(−p⊕x,a)E (1−|−p⊕x|2)|a|\n) dHn The hyperbolic distance function\nΞ The space of horocycles\nΞω The set of horocycles that are tangential to Sn−1 at ω\nLX Laplace-Beltrami operator on X\nhx The horocycle feature function hx(y) = 〈y, x/|x|〉H ξλ,ω The unique horocycle connecting ω and tanhλ/2 · ω. MLR Multiple linear regression\ndim dimension\nIK the indicator function of K\nDist Relative distance function Dist(x, ω, b) = −2〈x, ω〉H + b Cls Set of classes Cls = {C1, C2, . . . , CM} NNθ A network parameterized by θ\nNN′θ A network parameterized by θ\nExp Exponential map of the hyperbolic space\n(X1, Y 1) Labeled sample\nSCj Score function\npθ(Y = Cj |X) Prediction probability L Loss function\nPρw,λ,b Poisson neuron P ρ w,λ,b(x) = ρ\n( λ |w|\n2−|x|2 |x−w|2 + b ) PE Poincaré embedding" }, { "heading": "Conventional symbols", "text": "Symbol In most cases it refers\nn,m, i integers\nx, y, w points in Rn or Hn, or real numbers\no the origin of Rn or Hn\nb, c, d, α, δ real numbers\nλ real or complex number\nt real number, represent the timestamp in optimization\nω point in Sn−1\nρ an activation function\nf, g functions\nK a compact set\nX a manifold\np a point in Hn or on a manifold\na an element in Tp(Hn)\nξ a horocycle\nµ a measure\nL a family of geodesics lines\nl a geodesics line\nU a set in Hn\nF, h,H functions\nM number of classes\nD dimension" }, { "heading": "A.2 PROOF OF THE ISOMETRY", "text": "Given ω∈Sn−1 and λ∈R, we let ξλ,ω the unique horocycle that connects ω and tanh (λ/2) · ω. The length of any geodesic (that ends at ω) line segment cut by ξλ1,ω and ξλ2,ω equals |λ1 − λ2|. This fact is obvious in the half-space model.\nThere is a Riemannian isometry F : {z ∈ Rn : |z| < 1} → {(x1, · · · , xn) : x1 > 0} (the latter is with the metric ds2 = dx 2 1+···+dx 2 n\nx21 ) such that F (ω) = ∞ and F (o) = (1, 0, . . . , 0). Using\ndHn(o, tanh(λi/2)ω) = |λi|, d{(x1,··· ,xn):x1>0}((1, 0, . . . , 0), (e±λi , 0, . . . , 0)) = |λi|, F (ω) =∞ and F (o) = (1, 0, . . . , 0), we have F (tanh(λi/2)ω) = (eλi , 0, . . . , 0). Therefore, F maps ξλi,ω to {(x1, x2, . . . , xn) : x1 = eλi}. Any geodesic (that ends at ω) line segment cut by ξλ1,ω and ξλ2,ω is mapped by F to {(t, α2, . . . , αn) : (t− eλ1)(t− eλ2) < 0} for some fixed αj . It is easy to check the length of this segment with respect to dx 2 1+···+dx 2 n\nx21 (as αi are constants, the metric reduces to dx21/x 2 1\non this segment) is |λ1 − λ2|." }, { "heading": "A.3 PROOF OF (6)", "text": "Because x is on ξλ which is a sphere with center 1+tanhλ/2 2 ω and radius 1−tanhλ/2 2 , we have∣∣∣x− 1+tanhλ/22 ω∣∣∣2 = ∣∣∣ 1−tanhλ/22 ∣∣∣2, which leads to |x|2−(1+tanhλ/2)(x, ω)E+tanhλ/2|ω|2 =\n0, and then 1+tanhλ/22 |x− ω| 2 = 1−tanhλ/22 (|ω 2| − |x|2), and finally 〈x, ω〉H = 12 log |ω|2−|x|2 |x−ω|2 = 1 2 log 1+tanhλ/2 1−tanhλ/2 = λ/2." }, { "heading": "A.4 ANOTHER PROOF OF THE INTEGRAL FORMULA (7)", "text": "We use Hn for the upper half space model {(x1, · · · , xn) : x1 > 0} with the Riemannian volume dx1···dxnxn1 . Let ω = (∞, 0, . . . , 0) and o be (1, 0, . . . , 0) as in (A.2), then ξλ,ω = {(x1, x2, . . . , xn) : x1 = eλ}. 
The induced Riemannian metric on ξλ,ω (respectively volume dVolξλ,ω ) is dx22+···+dx 2 n e2λ (respectively dx2···dxn e(n−1)λ ). For any integral function f on Hn, using change of variable x1 = eλ∫ Hn f(x1, . . . , xn) dx1 · · · dxn xn1 = ∫ λ ∫ (x2,...,xn)∈Rn−1 f(eλ, x2, . . . , xn) dx2 · · · dxn enλ eλdλ\n= ∫ λ ∫ (x2,...,xn)∈Rn−1 f(eλ, x2, . . . , xn) dx2 · · · dxn e(n−1)λ dλ\n= ∫ λ ∫ ξλ,ω f(z)dVolξλ,ω (z)dλ.\nThe above identity is equivalent to the integral formula ∫ Hn\nf(x)dVol(x) =∫ R (∫ ξλ,ω f(z)dVolξλ,ω (z) ) dλ. presented in (7), according to the Riemannian isometry in (A.2)." }, { "heading": "A.5 THE HEURISTIC IS NOT A PROOF", "text": "The spectral theory does not directly lead to universal approximation theorems because of the following: 1, superpositions in (1, 2) and (8, 9) are different (similarly, although another kind of superposition in Hilbert’s 13th problem (Hilbert, 1935; Arnold, 2009) was a driving force for universal approximation theorems (Nielsen, 1987), the former is hardly relevant for networks (Girosi & Poggio, 1989)); 2, desired representation properties of hyperbolic eigenfunctions are unknown, partially because Hn is non-compact; 3, results in spectral theory favor Hilbert spaces, while universal approximation theorems embrace more than L2 space." }, { "heading": "A.6 OPTIMIZATION", "text": "The parameters update for the horocycle unit (2) involves the optimization problem on the sphere (for ω) and the hyperbolic space (for x). We use a standard algorithm of sphere optimization (Absil et al., 2008) to update ω, and in the supplement we present an optimization approach based on the geodesic polar-coordinates to update x.\nIn the implementation of a horocycle layer, the forward propagation is trivial, while the backpropagation involves optimization on the sphere and hyperbolic space. In the following, η is the learning rate, αt is the value of α (α may be η, s, z, ω, . . .) at the t-th step, TpX is the tangent fiber at p, ∇ is the gradient, and∇H is the hyperbolic gradient. It suffices to consider the layer s=〈z, ω〉.\nOptimization on the sphere The parameter update of ω in s=〈z, ω〉 involves the optimization on the sphere. The projection of ∂Lθ∂s ∇s(ωt) = ∂Lθ ∂s zt−ωt |zt−ωt|2 ∈ TωtR n onto TωtS n−1 is given by Absil et al. (2008)[p.48]\nvt = ∂Lθ ∂s zt − ωt |zt − ωt|2 − ∂Lθ ∂s ( zt − ωt |zt − ωt|2 , ωt ) ωt = ∂Lθ ∂s zt − (zt, ωt)ωt |zt − ωt|2 .\nTwo well-known update algorithms of wt Absil et al. (2008)[p.76] are:\nωt+1 = cos (ηt|vt|)ωt − sin (ηt|vt|)|vt|−1vt; ωt+1 = (ωt − ηtvt)/|ωt − ηtvt|." }, { "heading": "A.7 A PROOF OF APOLLONIUS THEOREM", "text": "Theorem 3 (Apollonius). Given distinct ω1, ω2 ∈ Sn−1 and a positive number λ, the locus {x : |x− ω1| = λ|x− ω2|} is a sphere orthogonal to Sn−1.\nProof. If λ is one then it is trivial. We assume now λ is not one. By |x− ω1| = λ|x− ω2|, we can have ∣∣∣∣x− ω1 − λω21− λ\n∣∣∣∣2 = |ω1 − λω2|2|1− λ|2 − 1. The locus is a sphere with center O = ω1−λω21−λ and radius R = √ |ω1−λω2|2 |1−λ|2 − 1. The theorem of Apollonius (in all dimension) claims that this sphere is orthogonal to Sn−1. To prove this, it suffices to prove |oO|2 = 1 +R2 (recall o is the origin of Hn), which follows from∣∣∣∣ω1 − λω21− λ ∣∣∣∣2 = √ |ω1 − λω2|2 |1− λ|2 − 1 2 + 1.\nA.8 INVERSION\nOn Rn ∪ {∞}, given the sphere {x : |x− w0| = r}, the corresponding inversion is given by\nIv(x) = w0 + r2(x− w0) |x− w0|2 .\nFor x ∈ Rn ∪ {∞}, Iv(x) is called the inverse of x with respect to {x : |x− w0| = r}." 
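Referring back to A.6, here is a sketch of the two sphere update rules stated there; the gradient is taken as a generic ambient-space gradient, which matches vt up to the chain-rule factor ∂Lθ/∂s.

```python
import numpy as np

def tangent_project(omega, g):
    # project an ambient gradient g onto the tangent space T_omega S^{n-1}
    return g - np.dot(g, omega) * omega

def sphere_step_normalize(omega, g, lr):
    # second rule of A.6: omega_{t+1} = (omega_t - eta v_t) / |omega_t - eta v_t|
    v = tangent_project(omega, g)
    new = omega - lr * v
    return new / np.linalg.norm(new)

def sphere_step_geodesic(omega, g, lr):
    # first rule of A.6: omega_{t+1} = cos(eta |v|) omega - sin(eta |v|) v / |v|
    v = tangent_project(omega, g)
    nv = np.linalg.norm(v)
    return omega if nv == 0.0 else np.cos(lr * nv) * omega - np.sin(lr * nv) * v / nv
```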
}, { "heading": "A.9 PROOF OF THEOREM 2", "text": "Theorem 2 Let K be a compact set in Hn, and 1≤p<∞. Then finite sums of the form\nF (x) = N∑ i=1 αiρ(λi〈x, ωi〉H+bi), ωi∈Sn−1, αi, λi, bi∈R\nare dense in Lp(K,µ), where µ is either dVol (5) or the induced Euclidean volume.\nProof. We first treat the case ρ is sigmoidal and µ = dVol . Assume that these finite sums are not dense in Lp(K, dVol). By the Hahn-Banach theorem, there exists some nonzero h∈Lq(K, dVol), where q=p/(p − 1) if p>1 and q=∞ if p=1, such that ∫ K F (x)h(x)dVol(x) = 0 for all fi-\nnite sums of the form (14). As K is a compact set, by Hölder’s inequality, ∫ K |h(x)| dVol ≤\n( ∫ K dVol)1/p||h||Lq(K,dVol), which leads to h∈L1(K, dVol). Extend h to be a function H\nthat is defined on Hn by assigning H(x)=h(x) if x∈K and H(x)=0 if x∈Hn\\K. Then H∈L1(Hn, dVol)∩Lq(Hn, dVol) and∫\nHn F (x)H(x)dVol(x) = 0 (15)\nfor all finite sums of the form (14). For any ω∈Sn−1 and λ, b∈R, we set Fω,λ,b(x) = ρ(λ(〈x, ω〉H−b)). These functions are uniformly bounded, as |Fω,λ,b(x)|≤1. Moreover,\nlim λ→∞ Fω,λ,b(x) = { 1 if 〈x, ω〉H>b, 0 if 〈x, ω〉H<b.\n(16)\nAccording to (15), for all ω, λ, b, we have ∫ Hn\nFω,λ,b(x)H(x)dVol(x) = 0. Functions {Fω,λ,b}λ∈R converge pointwise as λ→∞, and they are uniformly bounded by |H|∈L1(Hn, dVol). By the bounded convergence theorem, for all ω∈Sn−1, b∈R, we have∫\n{x:〈x,ω〉H>b} H(x)dVol(x) = 0. (17)\nBy the integral formula (7) (with notations defined there), (6) and (17), for all b∈R,∫ ∞ 2b (∫ ξt,ω H(z)dVolξt,ω (z) ) dt = 0. (18)\nTaking the derivative of ∫∞ 2b (∫ ξt,ω H(z)dVolξt,ω (z) ) dt with respect to b, we deduce from (18) that∫\nξ2b,ω H(z)dVolξ2b,ω (z) = 0 for a.e. b∈R. In other words, the integration of H on a.e. ξ ∈ Ξω is zero. This fact is valid for all ω∈Sn−1. Therefore, the integration of H on a.e. ξ ∈ Ξ is zero. By the injectivity Theorem 1, H = 0 a.e., which contradicts our assumption. Therefore, finite sums of the form (14) are dense in Lp(K, dVol). The case ρ is ReLU, ELU or Softplus and µ = dVol follows from the above case and the fact that x 7→ ρ(x+ 1)− ρ(x) is sigmoidal. The case µ is the Euclidean volume follows from previous cases and the fact that the Euclidean volume on compact K is bounded from above by λdVol for some constant λ." }, { "heading": "A.10 UNIVERSAL APPROXIMATION THEOREM FOR POISSON NEURONS.", "text": "In this section, ρ is a continuous sigmoidal function (Cybenko, 1989), ReLU(Nair & Hinton, 2010), ELU(Clevert et al., 2016), or Softplus(Dugas et al., 2001). We also recall the Poisson neuron:\nPρw,λ,b(x) = ρ\n( λ |w|2 − |x|2\n|x− w|2 + b\n) , w ∈ Rn, λ, b ∈ R.\nTheorem 4. Let K be a compact set in Hn, and 1≤p<∞. Then finite sums of the form\nF (x) = N∑ i=1 αiP ρ ωi,λi,bi (x), ωi∈Sn−1, αi, λi, bi∈R (19)\nare dense in Lp(K,µ), where µ is either dVol (5) or the induced Euclidean volume.\nProof. We first treat the case ρ is sigmoidal and µ = dVol . Assume that these finite sums are not dense in Lp(K, dVol). By the Hahn-Banach theorem, there exists some nonzero h∈Lq(K, dVol), where q=p/(p − 1) if p>1 and q=∞ if p=1, such that ∫ K F (x)h(x)dVol(x) = 0 for all fi-\nnite sums of the form (19). As K is a compact set, by Hölder’s inequality, ∫ K |h(x)| dVol ≤\n( ∫ K dVol)1/p||h||Lq(K,dVol), which leads to h∈L1(K, dVol). Extend h to be a function H that is defined on Hn by assigning H(x)=h(x) if x∈K and H(x)=0 if x∈Hn\\K. Then H∈L1(Hn, dVol)∩Lq(Hn, dVol) and∫\nHn F (x)H(x)dVol(x) = 0 (20)\nfor all finite sums of the form (19). 
For any ω∈Sn−1, λ ∈ R, and b > 0, we set\nFω,λ,b(x) = P ρ ω,λ,−λb(x) = ρ\n( λ ( 1− |x|2 |x− ω|2 − b )) .\nThese functions are uniformly bounded, as |Fω,λ,b(x)|≤1. Moreover,\nlim λ→∞ Fω,λ,b(x) = 1 if 1−|x|2 |x−ω|2>b,\n0 if 1−|x| 2 |x−ω|2<b. (21)\nAccording to (20), for all ω, λ, b, we have ∫ Hn\nFω,λ,b(x)H(x)dVol(x) = 0. Functions {Fω,λ,b}λ∈R converge pointwise as λ→∞, and they are uniformly bounded by |H|∈L1(Hn, dVol). By the bounded convergence theorem, for all ω∈Sn−1, b∈R, we have∫\n{x:〈x,ω〉H>(log b)/2} H(x)dVol(x) = ∫ { x:\n1−|x|2 |x−ω|2\n>b }H(x)dVol(x) = 0. (22)\nBy the integral formula (7) (with notations defined there), (6) and (22), for all b∈R,∫ ∞ log b (∫ ξt,ω H(z)dVolξt,ω (z) ) dt = 0. (23) Taking the derivative of ∫∞ log b (∫ ξt,ω H(z)dVolξt,ω (z) ) dt with respect to b, we deduce from (23) that∫\nξlog b,ω H(z)dVolξlog b,ω (z) = 0 for a.e. b>0. In other words, the integration of H on a.e. ξ ∈ Ξω is zero. This fact is valid for all ω∈Sn−1. Therefore, the integration of H on a.e. ξ ∈ Ξ is zero. By the injectivity Theorem 1, H = 0 a.e., which contradicts our assumption. Therefore, finite sums of the form (19) are dense in Lp(K, dVol). The case ρ is ReLU, ELU or Softplus and µ = dVol follows from the above case and the fact that x 7→ ρ(x+ 1)− ρ(x) is sigmoidal. The case µ is the Euclidean volume follows from previous cases and the fact that the Euclidean volume on compact K is bounded from above by λdVol for some constant λ.\nWe refere the reader to the difference of (16) and (21), (17) and (22), and (18) and (23). However, basically the proofs are the same. The points are the integral formula (7), the injectivity Theorem 1 and the fact that level sets of horocycle/Poisson neurons are horocycles. Moreover, as a corollary of Theorem 4, we have\nCorollary 2. Let K be a compact set in Rn, and 1≤p<∞. Then finite sums of the form\nF (x) = N∑ i=1 αiP ρ wi,λi,bi (x), wi∈Rn, αi, λi, bi∈R\nare dense in Lp(K,µ), where µ is the Euclidean volume.\nProof. Because K is compact, there exists a positive number R such that K ⊂ {x ∈ Rn : |x| < R}. By the above theorem, finite sums of the form\nF (x) = N∑ i=1 αiP ρ wi,λi,bi (x), wi∈Sn−1, αi, λi, bi∈R\nare dense in Lp(K/R, µ). Then the corollary follows from\nPρw,λ,b(x) = P ρ w/R,λ,b(x/R)." }, { "heading": "A.11 PROOF OF THE LEMMA 1", "text": "Recall\nf1a,p(x) = 2|a|\n1− |p|2 sinh−1\n( 2(−p⊕ x, a)E\n(1− | − p⊕ x|2)|a|\n) . (24)\nThe proof of Lemma 1 follows from the following direct computation.\nProof. Let t ∈ (0, 1). Take pt = tω and at = −ω, then we have\n−pt ⊕ x = −t(1− 2t(ω, x)E + |x|2)ω + (1− t2)x\n1− 2t(ω, x)E + t2|x|2 .\nLet Ft(x) = 2(−pt⊕x,at)E\n(1−|−pt⊕x|2)|at| , then\nFt(x) = 2(−pt ⊕ x, at)E\n(1− | − pt ⊕ x|2)|at| =\n2 t(1−2t(ω,x)E+|x| 2)−(1−t2)(x,ω)E\n1−2t(ω,x)E+t2|x|2\n1− |−t(1−2t(ω,x)E+|x| 2)ω+(1−t2)x|2\n(1−2t(ω,x)E+t2|x|2)2\n= 2t(1− 2t(ω, x)E + t2|x|2)(1− 2t(ω, x)E + |x|2)− 2(1− t2)(1− 2t(ω, x)E + t2|x|2)(x, ω)E (1− 2t(ω, x)E + t2|x|2)2 − | − t(1− 2t(ω, x)E + |x|2)ω + (1− t2)x|2 = At(x)/Bt(x),\nwhere At, Bt are defined as the corresponding numerator and denominator. 
We have\nAt(x)|t=1 = 2|x− ω|4\nBt(x)|t=1 = 0 ∂Bt(x)/∂t|t=1 = 2|x− ω|2(|x|2 − 1).\nLet Gt(x) = sinh−1(Ft(x)) + log 1−t1+t , then\nGt(x) = log\n( At(x)\nBt(x) +\n√ 1 + A2t (x)\nB2t (x)\n) + log\n1− t 1 + t = log ( (1− t)At (1 + t)Bt + √ (1− t)2 (1 + t)2 + (1− t)2A2t (x) (1 + t)2B2t (x) ) .\nBy L’Hôpital’s rule,\nlim t<1,t→1 (1− t)At(x) (1 + t)Bt(x) = −At(x) + (1− t)A′t(x) Bt(x) + (1 + t)B′t(x) ∣∣∣ t=1 = |x− ω|2 2− 2|x|2 .\nTherefore,\nlim t<1,t→1 Gt(x) = log\n( |x− ω|2\n1− |x|2\n) .\nFor t < 1, we take pt = tω, at = −ω, ct = t 2−1 4 , dt = 1 2 log 1+t 1−t , then for all x ∈ K,\nlim t<1,t→1 ctf 1 at,pt(x) + dt = limt<1,t→1 −1 2 Gt(x) = 1 2 log\n( 1− |x|2\n|x− ω|2\n) = 〈x, ω〉H .\nIf there exists c1, c2 such that |ctf1at, pt(x) + dt|(= |Gt(x)|/2) ≤ c2 for all t ∈ (c1, 1), x ∈ K, then by the dominated convergence theorem, there exists t such that ||ctf1at,pt(x) + dt − 〈x, ω〉H ||Lp(K,m) < , which proves the lemma. Note that\n(1− t)At(x) (1 + t)Bt(x) = 2|x− ω|4(1− t) +\n∑4 j=1 Uj(x, ω)(1− t)j+1\n−2|x− ω|2(|x|2 − 1)(1− t)(1 + t) + ∑4 l=2 Ll(x, ω)(1− t)l(1 + t)\n= 2|x− ω|4 +\n∑4 j=1 Uj(x, ω)(1− t)j\n2|x− ω|2(1− |x|2)(1 + t) + ∑4 l=2 Ll(x, ω)(1− t)l−1(1 + t) ,\nwhere Uj and Ll are continuous functions defined on K × {ω}. There exist positive numbere c3, c4 and c1 ∈ (0, 1) such that for all x ∈ K and t ∈ (c1, 1),\nc3 ≤ 2|x− ω|4 ≤ c4, c3 ≤ 2|x− ω|2(1− |x|2)(1 + t) ≤ c4,\nc3 2 ≥ | 4∑ j=1 Uj(x, ω)(1− t)j |,\nc3 2 ≥ | 4∑ l=2 Ll(x, ω)(1− t)l−1(1 + t)|.\nTherefore, for x ∈ K and t ∈ (c1, 1), we have\nc3 2c4 + c3 ≤ (1− t)At(x) (1 + t)Bt(x) ≤ 2c4 + c3 c3 .\nThis implies that for t ∈ (c1, 1), Gt|K and therefore |ctf1at,pt + dt||K are uniformly bounded, which finishes the proof of the lemma." }, { "heading": "A.12 THE FIRST MNIST CLASSIFIER IN 6.1", "text": "At the preprocessing stage, we compute the projection of the 28× 28 input pattern on the 40 principal components and then scale them so that the scaled 40-dimensional PCA features are within the unit ball. In our network,\n1. Input layer: scaled 40-dimensional PCA features; 2. First layer: 40 inputs/1000 outputs horocycle layer (tanh activation); 3. Last layer: 1000 inputs/10 outputs affine layer; 4. Loss: cross entroy loss.\nTake learning rate = 1, learning rate decay = 0.999, and batch size = 128, and run it three times. The average test error rates after 600 epochs is 1.96%.\nPCA follows LeCun et al. (1998)(C.3), where 40 PCA is used for the quadratic network. Quadratic network has a similar structure to ours, because our neuron are contructed by quotient of quadratic functions followed by log." }, { "heading": "A.13 HOROCYCLE LAYER FOLLOWED BY MLR CAN APPROXIMATE THE CLASSFICATION FUNCTION", "text": "Suppose the MNIST classification function M is defined on ∪9j=0Kj ⊂ H40, where Ki are relatively compact and M|Kj = j. By Theorem 2, for 0≤j≤9, there exist Fj(x) = ∑Nj i=1 αj,iρ(λj,i〈x, ωj,i〉H+bj,i) such that Fj approximates IKj , where I is the indicator function. Therefore, a network with the first (horocycle) layer given by ρ(λj,i〈x, ωj,i〉H+bj,i)(0≤j≤9, 1≤i≤Nj) followed by a classical MLR with parameters given by αj,i(0≤j≤9, 1≤i≤Nj) (with arg max for prediction) approximatesM." }, { "heading": "A.14 THE SECOND MNIST CLASSIFIER IN 6.1", "text": "At the preprocessing stage, we do data augmentation by letting each image 1 step toward each of its 4 corners, so that our traning set has 300000 examples. In our network,\n1. Input layer: (28,28, 1); 2. First block: 32-filters 3× 3 convolution, ReLU, 2× 2 max-pooling, BatchNorm; 3. 
Second block: 64-filters 3× 3 convolution, ReLU, BatchNorm; 4. Thrid block: 64-filters 3× 3 convolution,ReLU,2× 2 max-pooling, BatchNorm;\n5. Fourth block: 128-filters 3× 3 convolution, ReLU, 2× 2 max-pooling, BatchNorm; 6. Fifth block: FC 1000, ReLU, BatchNorm;\n7. Last block: 1000 input/10 output Poisson layer, sigmoid, BatchNorm;\n8. Loss: cross entroy loss.\nIn optimization, we take Adam(Kingma & Ba, 2015). The batch size is 128 in the first 5 epochs, and 1024 in the next 15 epochs. After 5 epochs, we set ωi in the Poisson layer to be non-trainable. We train our network five times, the average test error rate after 20 epochs is 0.35%.\nThe in |w| 2−|x|2\n|x−w|2+ is an important hyperparameter for the numerical stability. We train this MNIST model with ∈ {10−1, 10−2, 10−4, 10−6, 10−8, 10−10, 10−20}. They all show robust performance." }, { "heading": "A.15 EXPERIMENT OF POINCARE TREE CLASSIFICATION TASK", "text": "Given a Poincaré embedding (Nickel & Kiela, 2017) PE : {WordNet noun} → HD of the 82114 WordNet noun nodes and given a node x, the task is to classify all other nodes as being part of the subtree rooted at x (Ganea et al., 2018a). Our model is a logistic regression, where the horocycle feature p ∈ {WordNet noun} 7→ hPE(x)(PE(p)/s) (s is a hyperparameter lying in [1, 1.5]) is the only predictor, and the dependent variable is whether p is in the subtree rooted at x. Let P be the set of all nodes in the Poincare embedding, and let p range from P .\n1. Input: hPE(x)(PE(p)/s) (s is a hyperparameter.) 2. Only layer: 1 input/1 output affine layer. (two parameters: one for input, one for bias.)\n3. Loss: Logistic. (with respect to 1 if p in the tree rooted at x; 0 else.)\nIn each training, x is one of {animal, group, location, mammal, worker}, dim is one of {2,3,5,10}, and Poincaré embeddings are from the animation_train.py of Ganea et al. (2018b) 4 (with tree=wordnet_full, model=poincare, dim=dim, seed randomly ∈ {7, 8, 9}). All nodes in the subtree rooted at x are divided into training nodes (80%) and test nodes (20%). The same splitting procedure applies for the rest nodes. We choose s that has the best training F1, and then record the corresponding test F1. For each x and dim, we do the training 100 times. The average test F1 classification scores are recorded in Table 2.\nThe horocycle feature performs well here because it is compatible with the Poincaré embedding algorithm. Let x be a node that is not at the origin. It seems that the Poincaré embedding algorithm tends to pull all nodes that are from the subtree rooted at x towards the direction of x|x| , therefore\ny → 〈 y, x|x| 〉 H is a suitable feature for this task." }, { "heading": "A.16 END-BASED CLUSTERING IN H2", "text": "For MNIST, at the preprocessing stage, we do data augmentation by letting each image 1 step toward each of its 4 corners, so that our traning set has 300000 examples. Our network for H2 embedding of MNIST dataset is\n1. Input layer: (28,28, 1);\n2. First block: 32-filters 3× 3 convolution, ReLU, 2× 2 max-pooling, BatchNorm; 3. Second block: 64-filters 3× 3 convolution, ReLU, BatchNorm; 4. Thrid block: 64-filters 3× 3 convolution,ReLU,2× 2 max-pooling, BatchNorm; 5. Fourth block: 128-filters 3× 3 convolution, ReLU, 2× 2 max-pooling, BatchNorm; 6. Fifth block: FC 1000, ReLU, BatchNorm;\n7. Sixth block: FC 2, ReLU, BatchNorm, Exp;\n8. Last block: 2 input/10 output horocycle layer, sigmoid;\n4https://github.com/dalab/hyperbolic_cones\n9. 
Loss: cross entroy loss,\nwhere Exp is the exponential map ToH2(= R2)→ H2. We apply the data augmentation as in A.14. In optimization, learning rate is 0.1, learning rate decay is 0.99, batch size is 128, epochs is 50.\nOur network, data augmentation and optimization for H2 embedding of Fashion-MNIST dataset is completely the same as that for MNIST.\nFor MNIST and Fashion-MNIST we use sphere optimization. We would like to remark that there are interesting new features in sphere optimization. Because the S1 is compact, for any continuous function f , there exists x = argmaxS1f . The derivative of f at x vanish, so the usual optimization algorithm to find the minimum will fail in the general case. In our experiments, we solve this problem by adding the following tricks:\n1. Observation: if the class Cα are all close to ω ∈ S1, and the end prototype ωα for the class Cα is around −ω, then ωα is a maximum point of the loss function and therefore can not be improved through normal SGD. We solve this problem by adopting an idea(supervised variation) of k-means clustering. In each early epochs, optimization consists of two parts. In the first part, the normal SGD applies. In the second part, we move end prototypes (ωi) to the average direction of the class (using training data).\n2. Observation: if the class Cα and class Cβ are all close to ω ∈ S1, and the end prototype ωα, ωβ are also both around ω, then all points in class Cα and class Cβ , end prototypes ωα, ωβ will all be pulling to ω by the SGD, and finally the network can not distinguish class Cα and class Cβ . We solve this problem by adding a loss if two prototypes are close.\nWith these small tricks, our 2D end-based clustering algorithm is very stable for MNIST and FashionMNIST. We run it on MNIST 10 times, and they all get a test acc around 99% within 20 epochs.\nSuppose the classification task has M classes and the prototype of the i-th class is ωi. We write down the additional loss function for the second observation as follows\ni = RandomChoice({1, . . . ,M}) j = RandomChoice({1, . . . ,M} \\ {i}) d = (ωi, ωj)E\nLObservation2 = arctanh(10× ReLU(d− 0.9− )),\nwhere is a small constant for numerical stability.\nFor CIFAR-10, our network for H2 embedding of CIFAR-10 dataset is\n1. Input layer: (32,32, 3); 2. First block: ResNet-32/128 output; 3. Second block: FC 2, ReLU, BatchNorm, Exp; 4. Last block: 2 input/10 output horocycle layer; 5. Loss: cross entroy loss.\nIn the data augmentation, we apply horizontal/vertical shifts and horizontal flip. We use Adam. The batch size is 32 in the first 100 epochs, or 1024 in the next 50 epochs. The weights of the horocycle layer are fixed at the beginning of the training and are non-trainable, which follows an idea of Mettes et al. (2019)." }, { "heading": "A.17 POISSON MLR", "text": "For CIFAR-10, we use a ResNet-32 structure as the feature descriptor, and we apply horizontal/vertical shifts and horizontal flip. In our network,\n1. Input layer: (32,32, 3); 2. First block: ResNet-32/128 output; 3. Second block: FC 128, ReLU, BatchNorm;\n4. Last block: 128 input/10 output Poisson layer, BatchNorm; 5. Loss: cross entroy loss.\nWe use Adam. The batch size is 32 in the first 80 epochs, or 1024 in the next 20 epochs. Test acc greater than 93.5%.\nFor the classification task of flowers (Tensorflow), The dataset of 3670 photos of flowers contains 5 classes: daisy, dandelion, roses, sunflowers and tulips. The keras model is\n1. Input layer: (180,180, 3); 2. 
First block: 16-filters 3× 3 convolution, ReLU, 2× 2 max-pooling; 3. Second block: 32-filters 3× 3 convolution, ReLU, 2× 2 max-pooling; 4. Thrid block: 64-filters 3× 3 convolution,ReLU,2× 2 max-pooling; 5. Fourth block: FC 128, ReLU; 6. Last block: 128 input/10 output FC layer; 7. Loss: cross entroy loss.\nOur Poisson model is\n1. Input layer: (180,180, 3); 2. First block: 16-filters 3× 3 convolution, ReLU, 2× 2 max-pooling; 3. Second block: 32-filters 3× 3 convolution, ReLU, 2× 2 max-pooling; 4. Thrid block: 64-filters 3× 3 convolution,ReLU,2× 2 max-pooling; 5. Fourth block: FC 128, ReLU; 6. Last block: BatchNorm, 128 input/10 output Poisson layer, sigmoid, BatchNorm; 7. Loss: cross entroy loss.\nWe use 2936 photos for training and the rest 734 for testing. We train two models 5 times, and Figure 6 records the average of training and test accuracies in 20 epochs. Although the test accuracy of the Poisson model is bad in the beginning, it is fairly higher (around 4%) than the test accuracy of the keras model in the end.\nA.18 Dist IS A RELATIVE DISTANCE FUNCTION\nAs Hn is a complete metric space, the absolute distance between any pair pf points (x, ω) ∈ Hn × Sn−1 is always +∞. This absolute distance is not useful, hence we look for a relative one. Moreover, if a function D reasonably measures the relative distance of Hn from ω ∈ Sn−1, then so does D + c for any constant c. This section explains why Dist (10) is a reasonable relative distance function. For (x, ω, b) ∈ Hn × Sn−1 ×R, we recall\nDist(x, ω, b) = − log ( 1− |x|2\n|x− ω|2\n) + b = −2〈x, ω〉H + b." }, { "heading": "A.18.1 A NAIVE VIEWPOINT", "text": "For fixed ω ∈ Sn−1 and b ∈ R, Dist(·, ω, b) is defined on Hn, and its level sets are the set of horocycles that are tangential to Sn−1 at ω: Ξω = ∪λ∈R{ξλ,ω}. For λ ∈ R, using (6), we have for all x ∈ ξλ,ω ,\nDist(x, ω, b) = −λ+ b.\nThe above identity implies the following: if x1, x2 are on the same horocycle ξλ,ω then Dist(x1, ω, b) = Dist(x2, ω, b) = −λ + b. Moreover, as λ moves to ∞ (equivalently speaking, the horocycle ξλ,ω moves to ω), then for x ∈ ξλ,ω the function value Dist(x, ω, b) goes to −∞; as λ moves to −∞ (equivalently speaking, the ξλ,ω moves away from ω), then for x ∈ ξλ,ω the function value Dist(x, ω, b) goes to∞. The above observation heuristically explains that Dist(·, ω, b) measures the relative distance of Hn from ω." }, { "heading": "A.18.2 THE BUSEMANN FUNCTION VIEWPOINT.", "text": "Fix ω ∈ Sn−1, and then let c : [0,∞) → Hn be the unique geodesic ray (with unit speed) that satisfies\nc[0] = (0, . . . , 0), c(∞) = ω.\nFor the purpose of this section, we do not need the definition of Busemann functions (we refer the interested reader to Bridson & Haefliger (2009)[II.8]). Instead, it suffices to know the following result of the theory: let dHn be the hyperbolic distance function, and for any x ∈ Hn,\nlim t→∞\n(dHn(x, c(t))− dHn(c(0), c(t))) = −2〈x, ω〉H . (25)\nWe read the above (25) in the following way: for fixed t > 0\nx ∈ Hn 7→ dHn(x, c(t))− dHn(c(0), c(t))\nis a function that measures the relative distance of {x, c(0)} from c(t), and therefore the left hand side of (25)\nx ∈ Hn 7→ lim t→∞ (dHn(x, c(t))− dHn(c(0), c(t)))\nis a function that measures the relative distance of {x, c(0)} from limt→∞ c(t) = ω, and finally the right hand side of (25) x ∈ Hn 7→ −2〈x, ω〉H is a function that measures the relative distance of {x, c(0)} from ω. Moreover, if the geodesic ray c starts from a different y ∈ Hn, and then there will be a corresponding bias term added to (25). 
Therefore, for any b ∈ R and ω ∈ Sn−1,\nx ∈ Hn 7→ Dist(x, ω, b) = −2〈x, ω〉H + b\nis a function that measures the relative distance of Hn from ω." } ]
2020
null
SP:ea4d4d3798119498a6df81a19dcab2ae4978996c
[ "A method for computing sample learning weights based on variance is proposed. The method is model independent and a simple k-NN based estimator for the weights is derived. The authors justify their work by appealing to a novel generalisation bound. Overall the idea is interesting but the exposition needs to be significantly improved as proofs are difficult to follow as it currently stands.", "The authors introduce an algorithm called VBSW to re-weight a training data set in order to improve generalization. In summary, VBSW sets the weight of each example to be the sample variance of the labels of its k nearest neighbors. The nearest neighbors are chosen in the embedding space from the second-to-last layer of a pre-trained neural network. The last layer of the pre-trained model is then trained with these new weights." ]
In the context of supervised learning of a function by a Neural Network (NN), we claim and empirically justify that a NN yields better results when the distribution of the data set focuses on regions where the function to learn is steeper. We first translate this assumption into a mathematically workable form using Taylor expansion. Then, theoretical derivations allow us to construct a methodology that we call Variance Based Samples Weighting (VBSW). VBSW uses the local variance of the labels to weight the training points. This methodology is general, scalable, cost-effective, and significantly improves the performance of a large class of NNs for various classification and regression tasks on image, text and multivariate data. We highlight its benefits with experiments involving NNs ranging from shallow linear NNs to ResNet (He et al., 2015) or Bert (Devlin et al., 2019).
[]
[ { "authors": [ "cent Vanhoucke", "Vijay Vasudevan", "Fernanda Viégas", "Oriol Vinyals", "Pete Warden", "Martin Wattenberg", "Martin Wicke", "Yuan Yu", "Xiaoqiang Zheng" ], "title": "TensorFlow: Large-scale machine learning on heterogeneous systems", "venue": null, "year": 2015 }, { "authors": [ "Sanjeev Arora", "Rong Ge", "Behnam Neyshabur", "Yi Zhang" ], "title": "Stronger generalization bounds for deep nets via a compression approach", "venue": "Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Peter L. Bartlett", "Vitaly Maiorov", "Ron Meir" ], "title": "Almost linear vc dimension bounds for piecewise polynomial networks", "venue": "In Proceedings of the 11th International Conference on Neural Information Processing Systems,", "year": 1998 }, { "authors": [ "Peter L Bartlett", "Dylan J Foster", "Matus J Telgarsky" ], "title": "Spectrally-normalized margin bounds for neural networks", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Peter L. Bartlett", "Nick Harvey", "Christopher Liaw", "Abbas Mehrabian" ], "title": "Nearly-tight vc-dimension and pseudodimension bounds for piecewise linear neural networks", "venue": "Journal of Machine Learning Research,", "year": 2019 }, { "authors": [ "Yoshua Bengio", "Jérôme Louradour", "Ronan Collobert", "Jason Weston" ], "title": "Curriculum learning", "venue": "In Proceedings of the 26th Annual International Conference on Machine Learning,", "year": 2009 }, { "authors": [ "Jon Louis Bentley" ], "title": "Multidimensional binary search trees used for associative searching", "venue": "Commun. ACM,", "year": 1975 }, { "authors": [ "Haw-Shiuan Chang", "Erik Learned-Miller", "Andrew McCallum" ], "title": "Active bias: Training more accurate neural networks by emphasizing high variance samples", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Yin Cui", "Menglin Jia", "Tsung-Yi Lin", "Yang Song", "Serge Belongie" ], "title": "Class-balanced loss based on effective number of samples", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "venue": null, "year": 2019 }, { "authors": [ "Dheeru Dua", "Casey Graff" ], "title": "UCI machine learning repository, 2017", "venue": "URL http://archive. ics.uci.edu/ml", "year": 2017 }, { "authors": [ "J. Feng", "Q. Teng", "X. He", "X. 
Wu" ], "title": "Accelerating multi-point statistics reconstruction method for porous media via deep learning", "venue": "Acta Materialia,", "year": 2018 }, { "authors": [ "Yarin Gal", "Riashat Islam", "Zoubin Ghahramani" ], "title": "Deep bayesian active learning with image data", "venue": "In Proceedings of the 34th International Conference on Machine Learning - Volume 70,", "year": 2017 }, { "authors": [ "Henry Gouk", "Eibe Frank", "Bernhard Pfahringer", "Michael Cree" ], "title": "Regularisation of neural networks by enforcing lipschitz continuity", "venue": null, "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Kurt Hornik", "Maxwell Stinchcombe", "Halbert White" ], "title": "Multilayer feedforward networks are universal approximators", "venue": "Neural Networks,", "year": 1989 }, { "authors": [ "Daniel Jakubovitz", "Raja Giryes", "Miguel R.D. Rodrigues" ], "title": "Generalization error in deep learning", "venue": null, "year": 2018 }, { "authors": [ "Lu Jiang", "Deyu Meng", "Qian Zhao", "Shiguang Shan", "Alexander G. Hauptmann" ], "title": "Self-paced curriculum learning", "venue": "In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence,", "year": 2015 }, { "authors": [ "Tang Jie", "Pieter Abbeel" ], "title": "On a connection between importance sampling and the likelihood ratio policy gradient", "venue": "Advances in Neural Information Processing Systems", "year": 2010 }, { "authors": [ "Angelos Katharopoulos", "François Fleuret" ], "title": "Not all samples are created equal: Deep learning with importance sampling", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Diederik Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Ksenia Konyushkova", "Raphael Sznitman", "Pascal Fua" ], "title": "Learning active learning from data", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "M.P. Kumar", "Benjamin Packer", "Daphne Koller" ], "title": "Self-paced learning for latent variable models", "venue": "Advances in Neural Information Processing Systems", "year": 2010 }, { "authors": [ "Tongliang Liu", "Dacheng Tao" ], "title": "Classification with noisy labels by importance reweighting", "venue": "IEEE Trans. Pattern Anal. Mach. Intell.,", "year": 2016 }, { "authors": [ "Tambet Matiisen", "Avital Oliver", "Taco Cohen", "John Schulman" ], "title": "Teacher-student curriculum learning, 2017", "venue": null, "year": 2017 }, { "authors": [ "Behnam Neyshabur", "Srinadh Bhojanapalli", "David Mcallester", "Nati Srebro" ], "title": "Exploring generalization in deep learning", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Behnam Neyshabur", "Srinadh Bhojanapalli", "Nathan Srebro" ], "title": "A PAC-bayesian approach to spectrally-normalized margin bounds for neural networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "F. Pedregosa", "G. Varoquaux", "A. Gramfort", "V. Michel", "B. Thirion", "O. Grisel", "M. Blondel", "P. Prettenhofer", "R. Weiss", "V. Dubourg", "J. Vanderplas", "A. Passos", "D. Cournapeau", "M. Brucher", "M. Perrot", "E. 
Duchesnay" ], "title": "Scikit-learn: Machine learning in Python", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Haifeng Qian", "Mark N. Wegman" ], "title": "L2-nonexpansive neural networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Mengye Ren", "Wenyuan Zeng", "Bin Yang", "Raquel Urtasun" ], "title": "Learning to reweight examples for robust deep learning", "venue": null, "year": 2018 }, { "authors": [ "Burr Settles" ], "title": "Active Learning. Synthesis Lectures on Artificial Intelligence and Machine Learning", "venue": null, "year": 2012 }, { "authors": [ "Abhinav Shrivastava", "Abhinav Gupta", "Ross B. Girshick" ], "title": "Training region-based object detectors with online hard example mining", "venue": null, "year": 2016 }, { "authors": [ "Iulia Turc", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "Well-read students learn better: On the importance of pre-training compact models", "venue": "arXiv preprint arXiv:1908.08962v2,", "year": 2019 }, { "authors": [ "Alex Wang", "Amanpreet Singh", "Julian Michael", "Felix Hill", "Omer Levy", "Samuel R. Bowman" ], "title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Nick Winovich", "Karthik Ramani", "Guang Lin" ], "title": "Convpde-uq: Convolutional neural networks with quantified uncertainty for heterogeneous elliptic partial differential equations on varied domains", "venue": "Journal of Computational Physics,", "year": 2019 }, { "authors": [ "Huan Xu", "Shie Mannor" ], "title": "Robustness and generalization", "venue": "Machine Learning,", "year": 2012 }, { "authors": [ "Yinhao Zhu", "Nicholas Zabaras", "Phaedon-Stelios Koutsourelakis", "Paris Perdikaris" ], "title": "Physicsconstrained deep learning for high-dimensional surrogate modeling and uncertainty quantification without labeled data", "venue": "Journal of Computational Physics,", "year": 2019 }, { "authors": [ "Gouk" ], "title": "PROBLEM 1: UNAVAILABILITY OF DERIVATIVES (SECTION 4.1) In this paragraph, we consider ni > 1 but no = 1. The following derivations can be extended to", "venue": null, "year": 2017 } ]
[ { "heading": null, "text": "In the context of supervised learning of a function by a Neural Network (NN), we claim and empirically justify that a NN yields better results when the distribution of the data set focuses on regions where the function to learn is steeper. We first traduce this assumption in a mathematically workable way using Taylor expansion. Then, theoretical derivations allow to construct a methodology that we call Variance Based Samples Weighting (VBSW). VBSW uses local variance of the labels to weight the training points. This methodology is general, scalable, cost effective, and significantly increases the performances of a large class of NNs for various classification and regression tasks on image, text and multivariate data. We highlight its benefits with experiments involving NNs from shallow linear NN to ResNet (He et al., 2015) or Bert (Devlin et al., 2019)." }, { "heading": "1 INTRODUCTION", "text": "When a Machine Learning (ML) model is used to learn from data, the distribution of the training data set can have a strong impact on its performances. More specifically, in the context of Deep Learning (DL), several works have hinted at the importance of the training set. In Bengio et al. (2009); Matiisen et al. (2017), the authors exploit the observation that a human will benefit more from easy examples than from harder ones at the beginning of a learning task. They construct a curriculum, inducing a change in the distribution of the training data set that makes a Neural Network (NN) achieve better results in an ML problem. With a different approach, Active Learning (Settles, 2012) modifies dynamically the distribution of the training data, by selecting the data points that will make the training more efficient. Finally, in Reinforcement Learning, the distribution of experiments is crucial for the agent to learn efficiently. Nonetheless, the challenge of finding a good distribution is not specific to ML. Indeed, in the context of Monte Carlo estimation of a quantity of interest based on a random variable X ∼ dPX , Importance Sampling owes its efficiency to the construction of a second random variable, X̄ ∼ dPX̄ that will be used instead of X to improve the estimation of this quantity. Jie & Abbeel (2010) even make a connection between the success of likelihood ratio policy gradients and importance sampling, which shows that ML and Monte Carlo estimation, both distribution based methods, are closely linked.\nIn this paper, we leverage the importance of the training set distribution to improve performances of NNs in supervised DL. This task can be formalized as approximating a function f with a model fθ parametrized by θ. We build a new distribution from the training points and their labels, based on the observation that fθ needs more data points to approximate f on the regions where it is steep. We use Taylor expansion of a function f , which links the local behaviour of f to its derivatives, to build this distribution. We show that up to a certain order and locally, variance is an estimator of Taylor expansion. It allows constructing a methodology called Variance Based Sample Weighting (VBSW) that weights each training data points using the local variance of their neighbor labels to simulate the new distribution. Sample weighting has already been explored in many works and for various goals. Kumar et al. (2010); Jiang et al. (2015) use it to prioritize easier samples for the training, Shrivastava et al. (2016) for hard example mining, Cui et al. 
(2019) to avoid class imbalance, or (Liu & Tao, 2016) to solve the noisy label problem. In this work, the weights' construction relies on a more general claim that can be applied to any data set and whose goal is to improve the performance of the model.
VBSW is general, because it can be applied to any supervised ML problem based on a loss function. In this work we specifically investigate the application of VBSW to DL. In that case, VBSW is applied within the feature space of a pre-trained NN. We validate VBSW for DL by obtaining performance improvements on various tasks such as classification and regression of text, from the GLUE benchmark (Wang et al., 2019), images, from MNIST (LeCun & Cortes, 2010) and Cifar10 (Krizhevsky et al.), and multivariate data, from the UCI ML repository (Dua & Graff, 2017), for several models ranging from linear regression to Bert (Devlin et al., 2019) or ResNet20 (He et al., 2015). As a highlight, we obtain up to 1.65% classification improvement on Cifar10 with a ResNet. Finally, we conduct analyses on the complementarity of VBSW with other weighting techniques and on its robustness.
Contributions: (i) We present and investigate a new approach to the learning problem, based on the variations of the function $f$ to learn. (ii) We construct a new simple, scalable, versatile and cost effective methodology, VBSW, that exploits these findings in order to boost the performance of a NN. (iii) We validate VBSW on various ML tasks." }, { "heading": "2 RELATED WORKS", "text": "Active Learning - Our methodology is based on the consideration that not every sample brings the same amount of information. Active learning (AL) exploits the same idea, in the sense that it adapts the training strategy to the problem by introducing a data point selection rule. In (Gal et al., 2017), the authors introduce a methodology based on Bayesian Neural Networks (BNN) to adapt the selection of points used for the training. Using the variational properties of BNNs, they design a rule to focus the training on points that will reduce the prediction uncertainty of the NN. In (Konyushkova et al., 2017), the construction of the selection rule is taken as an ML problem itself. See (Settles, 2012) for a review of more classical AL methods. While AL selects the data points, and thus modifies the distribution of the initial training data set, VBSW is applied independently of the training, so the distribution of the weights cannot change throughout the training.
Examples Weighting - VBSW can be categorized as an example weighting algorithm. The idea of weighting the data set has already been explored in different ways and for different purposes. While curriculum learning (Bengio et al., 2009; Matiisen et al., 2017) starts the training with easier examples, self-paced learning (Kumar et al., 2010; Jiang et al., 2015) downscales harder examples. However, some works have shown that focusing on harder examples at the beginning of the learning could accelerate it. In (Shrivastava et al., 2016), hard example mining is performed to give more importance to harder examples by selecting them primarily. Example weighting is used in (Cui et al., 2019) to tackle the class imbalance problem by weighting rarer, so harder, examples. On the contrary, in (Liu & Tao, 2016) it is used to solve the noisy label problem by focusing on cleaner, so easier, examples. All these ideas show that, depending on the application, example weighting can be performed in opposite manners.
Some works aim at going beyond this opposition by proposing more general methodologies. In (Chang et al., 2017), the authors use the variance of the prediction of each point throughout the training to decide whether it should be weighted or not. A meta learning approach is proposed in (Ren et al., 2018), where the authors choose the weights after an optimization loop included in the training. VBSW stands out from the previously mentioned example weighting methods because it is built on the more general assumption that a model simply needs more points to learn more complicated functions. Its effect is to improve the performance of a NN, without solving data set specific problems like class imbalance or noisy labels.
Importance Sampling - Some of the previously mentioned methods use importance sampling to design the weights of the data set or to correct the bias induced by the sample selection (Katharopoulos & Fleuret, 2018). Here, we construct a new distribution that could be interpreted as an importance distribution. However, we weight the data points to simulate this distribution, not to correct a bias induced by this distribution.
Generalization Bound - Generalization bounds for the learning theory of NNs have motivated many works, most of which are reviewed in (Jakubovitz et al., 2018). In Bartlett et al. (1998) and Bartlett et al. (2019), the authors focus on the VC-dimension, a measure which depends on the number of parameters of NNs. Arora et al. (2018) introduces a compression approach that aims at reducing the number of model parameters to investigate its generalization capacities. PAC-Bayes analysis constructs generalization bounds using a priori and a posteriori distributions over the possible models. It is investigated for example in Neyshabur et al. (2018) and Bartlett et al. (2017), and Neyshabur et al. (2017); Xu & Mannor (2012) link PAC-Bayes theory to the notion of sharpness of a NN, i.e. its robustness to small perturbations. While the sharpness of the model is often mentioned in the previous works, our bound includes the derivatives of $f$, which can be seen as an indicator of the sharpness of the function to learn. Even if it uses elements of previous works, like the Lipschitz constant of $f_\theta$, our work does not pretend to tighten and improve the already existing generalization bounds, but only emphasizes the intuition that a NN needs more points to capture sharper functions. In a sense, it investigates the robustness to perturbations in the input space, not in the parameter space." }, { "heading": "3 A NEW TRAINING DISTRIBUTION BASED ON TAYLOR EXPANSION", "text": "In this section, we first illustrate why a NN may need more points where $f$ is steep by deriving a generalization bound that involves the derivatives of $f$. Then, using Taylor expansion, we build a new training distribution that improves the performance of a NN on simple functions." }, { "heading": "3.1 PROBLEM FORMULATION", "text": "We formalize the supervised ML task as approximating a function $f : S \subset \mathbb{R}^{n_i} \to \mathbb{R}^{n_o}$ with an ML model $f_\theta$ parametrized by $\theta$, where $S$ is a measured sub-space of $\mathbb{R}^{n_i}$ depending on the application. To this end, we are given a training data set of $N$ points, $\{X_1, ..., X_N\} \in S$, drawn from $X \sim dP_X$, and their point-wise values, or labels, $\{f(X_1), ..., f(X_N)\}$. Parameters $\theta$ have to be found in order to minimize an integrated loss function $J_X(\theta) = \mathbb{E}_X[L(f_\theta(X), f(X))]$, with $L$ the loss function, $L : \mathbb{R}^{n_o} \times \mathbb{R}^{n_o} \to \mathbb{R}$. The data allow estimating $J_X(\theta)$ by $\hat{J}_X(\theta) = \frac{1}{N}\sum_{i=1}^{N} L(f_\theta(X_i), f(X_i))$.
Then, an optimization algorithm is used to find a minimum of $\hat{J}_X(\theta)$ w.r.t. $\theta$." }, { "heading": "3.2 INTUITION BEHIND TAYLOR EXPANSION", "text": "In the following, we illustrate the intuition with a Generalization Bound (GB) that includes the derivatives of $f$, provided that these derivatives exist. The goal of the approximation problem is to be able to generalize to points not seen during the training. The generalization error $\bar{J}_X(\theta) = J_X(\theta) - \hat{J}_X(\theta)$ thus needs to be as small as possible. Let $S_i$, $i \in \{1, ..., N\}$, be some sub-spaces of $S$ such that $S = \bigcup_{i=1}^{N} S_i$, $\bigcap_{i=1}^{N} S_i = \emptyset$, and $X_i \in S_i$. Suppose that $L$ is the squared $L_2$ error, $n_i = 1$, $f$ is differentiable and $f_\theta$ is $K_\theta$-Lipschitz. Provided that $|S_i| < 1$, we show that
$$\bar{J}_X(\theta) \leq \sum_{i=1}^{N} \big(|f'(X_i)| + K_\theta\big)^2 \frac{|S_i|^3}{3} + O(|S_i|^4), \quad (1)$$
where $|S_i|$ is the volume of $S_i$. The proof can be found in Appendix B. We see that on the regions where $f'(X_i)$ is higher, quantity $|S_i|$ has a stronger impact on the GB. This idea is illustrated in Figure 1. Since $|S_i|$ can be seen as a metric for the local density of the data set (the smaller $|S_i|$ is, the denser the data set is), the GB can be reduced more efficiently by adding more points around $X_i$ in these regions. This bound also involves $K_\theta$, the Lipschitz constant of the NN, which has the same impact as $f'(X_i)$. It also illustrates the link between the Lipschitz constant and the generalization error, which has been pointed out by several works like (Gouk et al., 2018), (Bartlett et al., 2017) and (Qian & Wegman, 2019). Note that equation 1 only gives indications for $n = 1$. Indeed, this GB only has illustration purposes. Its goal is to motivate the metric described in the next section, which is based on Taylor expansion and therefore involves derivatives of order $n > 1$." }, { "heading": "3.3 A TAYLOR EXPANSION BASED METRIC", "text": "In this paragraph, we build a metric involving the derivatives of $f$. Using the Taylor expansion of $f$ at order $n$ and supposing that $f$ is $n$ times differentiable (multi-index notation):
$$f(x + \varepsilon) \underset{\|\varepsilon\| \to 0}{=} \sum_{0 \leq |k| \leq n} \varepsilon^k \frac{\partial^k f(x)}{k!} + O(\varepsilon^n), \qquad D^f_{n,\varepsilon}(x) = \sum_{1 \leq |k| \leq n} \frac{\|\varepsilon\|^k \cdot \|\mathrm{Vect}(\partial^k f(x))\|}{k!}. \quad (2)$$
Quantity $f(x + \varepsilon) - f(x)$ gives an indication on how much $f$ changes around $x$. By neglecting the orders above $n$, it is then possible to find the regions of interest by focusing on $D^f_{n,\varepsilon}$, given by equation 2, where $\mathrm{Vect}(X)$ denotes the vectorization of a tensor $X$ and $\|.\|$ the squared $L_2$ norm. Note that $D^f_{n,\varepsilon}$ is evaluated using $\|\partial^k f(x)\|$ instead of $\partial^k f(x)$ so that derivatives do not cancel each other. $f$ will be steeper and more irregular in the regions where $x \to D^f_{n,\varepsilon}(x)$ is higher. To focus the training set on these regions, one can use $\{D^f_{n,\varepsilon}(X_1), ..., D^f_{n,\varepsilon}(X_N)\}$ to construct a probability density function (pdf) and sample new data points from it. For conciseness, this sampling is evaluated and validated in Appendix A. Based on these experiments, we choose $n = 2$, i.e. we use $\{D^f_{2,\varepsilon}(X_1), ..., D^f_{2,\varepsilon}(X_N)\}$. The good results obtained, presented in Appendix A, confirm our observation and motivate its application to complex DL problems." }, { "heading": "4 VARIANCE BASED SAMPLES WEIGHTING (VBSW)", "text": "" }, { "heading": "4.1 PRELIMINARIES", "text": "The new distribution cannot always be applied as is, because we do not have access to $f$. Problem 1: $\{D^f_{2,\varepsilon}(X_1), ..., D^f_{2,\varepsilon}(X_N)\}$ cannot be evaluated since it requires computing the derivatives of $f$. Moreover, it assumes that $f$ is differentiable, which is often not true.
Problem 2: even if $\{D^f_{2,\varepsilon}(X_1), ..., D^f_{2,\varepsilon}(X_N)\}$ could be computed and new points sampled, we could not obtain their labels to complete the training data set.
Problem 1: Unavailability of derivatives. To overcome problem 1, we construct a new metric based on statistical estimation. In this paragraph, $n_i > 1$ but $n_o = 1$. The following derivations can be extended to $n_o > 1$ by applying them to $f$ element-wise and then taking the sum across the $n_o$ dimensions. Let $\varepsilon \sim \mathcal{N}(0, \epsilon I_{n_i})$, with $\epsilon \in \mathbb{R}^+$ and $I_{n_i}$ the identity matrix of dimension $n_i$. We claim that $Var(f(x + \varepsilon)) = D^f_{2,\epsilon}(x) + O(\|\varepsilon\|_2^3)$. The demonstration can be found in Appendix B. Using the unbiased estimator of the variance, we thus define new indices $\hat{D}^f_{2,\epsilon}(x)$ by
$$\hat{D}^f_{2,\epsilon}(x) = \frac{1}{k-1} \sum_{i=1}^{k} \Big( f(x + \varepsilon_i) - \frac{1}{k}\sum_{l=1}^{k} f(x + \varepsilon_l) \Big)^2, \quad (3)$$
with $\{\varepsilon_1, ..., \varepsilon_k\}$ $k$ samples of $\varepsilon$. The metric $\hat{D}^f_{2,\epsilon}(x) \to_{k \to \infty} Var(f(x + \varepsilon))$ and $Var(f(x + \varepsilon)) = D^f_{2,\epsilon}(x) + O(\|\varepsilon\|_2^3)$, so $\hat{D}^f_{2,\epsilon}(x)$ is a biased estimator of $D^f_{2,\epsilon}(x)$, with bias $O(\|\varepsilon\|_2^3)$. Hence, when $\epsilon \to 0$, $\hat{D}^f_{2,\epsilon}(x)$ becomes an unbiased estimator of $D^f_{2,\epsilon}(x)$. It is possible to compute $\hat{D}^f_{2,\epsilon}(x)$ from any set of points centered around $x$. Therefore, we compute $\hat{D}^f_{2,\epsilon}(X_i)$ for each $i \in \{1, ..., N\}$ using the set $S_k(X_i)$ of $k$-nearest neighbors of $X_i$. We obtain $\hat{D}^f_{2,\epsilon}(X_i)$ using
$$\hat{D}^f_{2,\epsilon}(X_i) = \frac{1}{k-1} \sum_{X_j \in S_k(X_i)} \Big( f(X_j) - \frac{1}{k}\sum_{X_l \in S_k(X_i)} f(X_l) \Big)^2. \quad (4)$$
The advantages of this formulation are twofold. First, $\hat{D}^f_{2,\epsilon}$ can even be applied to non-differentiable functions. Second, all we need is $\{f(X_1), ..., f(X_N)\}$. In other words, the points used by $\hat{D}^f_{2,\epsilon}(X_i)$ are those used for the training of the NN. Finally, while the definition of $D^f_{2,\epsilon}(x)$ is local, the definition of $\hat{D}^f_{2,\epsilon}(x)$ holds for any $\epsilon$. Note that equation 4 can even be applied when the data points are too sparse for the nearest neighbors of $X_i$ to be considered as close to $X_i$. It can thus be seen as a generalization of $D^f_{2,\epsilon}(x)$, towards which it tends locally.
Problem 2: Unavailability of new labels. To tackle problem 2, recall that the goal of the training is to find $\theta^* = \mathrm{argmin}_\theta \hat{J}_X(\theta)$, with $\hat{J}_X(\theta) = \frac{1}{N}\sum_i L(f(X_i), f_\theta(X_i))$. With the new distribution based on the previous derivations, the procedure is different. Since the training points are sampled using $\hat{D}^f_{2,\epsilon}$, we no longer minimize $\hat{J}_X(\theta)$, but $\hat{J}_{\bar{X}}(\theta) = \frac{1}{N}\sum_i L(f(\bar{X}_i), f_\theta(\bar{X}_i))$, with $\bar{X} \sim dP_{\bar{X}}$ the new distribution. However, $\hat{J}_{\bar{X}}(\theta)$ estimates
$$J_{\bar{X}}(\theta) = \int_S L(f(x), f_\theta(x))\, dP_{\bar{X}}.$$
Let $p_X(x)dx = dP_X$ and $p_{\bar{X}}(x)dx = dP_{\bar{X}}$ be the pdfs of $X$ and $\bar{X}$ (note that $D^f_{2,\epsilon} \propto p_{\bar{X}}$). Then,
$$J_{\bar{X}}(\theta) = \int_S L(f(x), f_\theta(x)) \frac{p_{\bar{X}}(x)}{p_X(x)}\, dP_X.$$
The straightforward Monte Carlo estimator for this expression of $J_{\bar{X}}(\theta)$ is
$$\hat{J}_{\bar{X},2}(\theta) = \frac{1}{N}\sum_i L(f(X_i), f_\theta(X_i)) \frac{p_{\bar{X}}(X_i)}{p_X(X_i)} \propto \frac{1}{N}\sum_i L(f(X_i), f_\theta(X_i)) \frac{\hat{D}^f_{2,\epsilon}(X_i)}{p_X(X_i)}. \quad (5)$$
Thus, $J_{\bar{X}}(\theta)$ can be estimated with the same points as $J_X(\theta)$ by weighting them with $w_i = \hat{D}^f_{2,\epsilon}(X_i) / p_X(X_i)$." }, { "heading": "4.2 HYPERPARAMETERS OF VBSW", "text": "The expression of $w_i$ involves $\hat{D}^f_{2,\epsilon}(X_i)$, whose estimation has been the goal of the previous sections. However, it also involves $p_X$, the distribution of the data. Just like for $f$, we do not have access to $p_X$. The estimation of $p_X$ is a challenging task by itself, and standard density estimation techniques such as $k$-nearest neighbors or Gaussian mixture density estimation led to extreme estimated values of $p_X(X_i)$ in our experiments. Therefore, we decided to only apply $w_i = \hat{D}^f_{2,\epsilon}(X_i)$ as a first order approximation. In practice, we re-scale the weights to lie between 1 and $m$, a hyperparameter. As a result, VBSW has two hyperparameters: $m$ and $k$. Their effects and interactions are studied and discussed in Sections 5.1 and 5.4."
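To make equation 4 and the re-scaling of Section 4.2 concrete, here is a minimal sketch of the weight computation in NumPy and scikit-learn. The function name vbsw_weights and the default values of k and m are our own illustrative choices, not prescriptions from the paper.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def vbsw_weights(X, y, k=20, m=10.0):
    # Equation (4): unbiased empirical variance of the labels of the k nearest
    # neighbors of each training point, followed by the re-scaling of the
    # resulting weights so that they lie in [1, m] (Section 4.2).
    y = np.atleast_2d(np.asarray(y, dtype=float).T).T   # shape (N, n_o)
    # Neighbor search; scikit-learn picks a KD-tree for low-dimensional data.
    _, idx = NearestNeighbors(n_neighbors=k).fit(X).kneighbors(X)
    local_var = y[idx].var(axis=1, ddof=1).sum(axis=-1)  # summed over outputs
    lo, hi = local_var.min(), local_var.max()
    if hi == lo:                                         # degenerate case
        return np.ones(len(X))
    return 1.0 + (m - 1.0) * (local_var - lo) / (hi - lo)
```

With one-hot classification labels, local_var is low inside homogeneous class regions and high near decision boundaries, which matches the "local label agreement" interpretation used in Section 5.1.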
}, { "heading": "4.3 VBSW FOR DEEP LEARNING", "text": "We specified that local variance could be computed using already existing points. This statement implies to find the nearest neighbors of each point. In extremely high dimension spaces like image spaces the curse of dimensionality makes nearest neighbors spurious. In addition, the structure of the data may be highly irregular, and the concept of nearest neighbor misleading. Thus, it may be irrelevant to evaluate D̂2f directly on this data.\nOne of the strength of DL is to construct good representations of the data, embedded in lower dimensional latent spaces. For instance, in Computer Vision, Convolutional Neural Networks (CNN)’s deeper layers represent more abstract features. We could leverage this representational power of NNs, and simply apply our methodology within this latent feature space.\nAlgorithm 1 Variance Based Samples Weighting (VBSW) for Deep learning 1: Inputs: k, m,M 2: TrainM on the training set {( 1N , X1), ..., ( 1N , XN )}, {( 1N , f(X1)), ..., ( 1N , f(XN ))} 3: ConstructM∗ by removing its last layer 4: Compute {D̂f2(M∗(X1)), ..., D̂f2(M∗(XN ))} using equation 4. 5: Construct a new training data set {(w1,M∗(X1)), ..., (wN ,M∗(XN ))} 6: Train fθ on {(w1, f(X1)), ..., (wN , f(XN ))} and add it to M∗. The final model is Mf = fθ ◦M∗\nVariance Based Samples Weighting (VBSW) for DL is recapitulated in Algorithm 1. Here,M is the initial NN whose feature space will be used to project the training data set and apply VBSW. Line 1: m and k are hyperparameters that can be chosen jointly with all other hyperparameters, e.g. using a random search. Line 2: The initial NN,M, is trained as usual. Notations {( 1N , X1), ..., ( 1N , XN )} is equivalent to {X1, ..., XN}, because all the weights are the same ( 1N ). Line 3: The last fully connected layer is discarded, resulting in a new modelM∗, and the training data set is projected in the feature space. Line 4-5: equation 4 is applied to compute the weights wi that are used to weight the projected data set. To perform nearest neighbors search, we use KD-Tree (Bentley, 1975). Line 6: The last layer is re-trained (which is often equivalent to fitting a linear model) using the weighted data set and added toM∗ to obtain the final modelMf . As a result,Mf is a composition of the already trained modelM∗ and fθ trained using the weighted data set." }, { "heading": "5 EXPERIMENTS", "text": "We first test this methodology on toy datasets with linear models and small NNs. Then, to illustrate how general VBSW can be, we consider various tasks in image classification, text regression and classification. Finally, we study the robustness of VBSW and its complementarity with other sample weighting techniques." }, { "heading": "5.1 TOY EXPERIMENTS", "text": "VBSW is studied on a Double Moon (DM) classification, in the Boston Housing (BH) regression and Breast Cancer (BC) classification data sets.\nFor DM, Figure 2 (c) shows that the points with highest wi (in red) are close to the boundary between the two classes. Indeed, in classification, VBSW can be interpreted as local label agreement. We train a Multi Layer Perceptron of 1 layer of 4 units, using Stochastic Gradient Descent (SGD) and binary cross-entropy loss function, on a 300 points training data set for\n50 random seeds. In this experiment, VBSW, i.e. weighting the data set with wi is compared to baseline where no weights are applied. Figure 2 (b) and (d) displays the decision boundary of best\nfit for each method. 
VBSW provides a cleaner decision boundary than the baseline. These pictures, as well as the results of Table 1, show the improvement obtained with VBSW.
For the BH data set, a linear model is trained, and for the BC data set, an MLP of 1 layer and 30 units, with a train-validation split of 80%-20%. Both models are trained with ADAM (Kingma & Ba, 2014). Since these data sets are small and the models are light, we study the effects of the choice of $m$ and $k$ on the performance. Moreover, BH is a regression task and BC a classification task, so it allows studying the effect of the hyperparameters more extensively. We train the models for a grid of 20 × 20 different values of $m$ and $k$. These hyperparameters seem to have a different impact on performance for classification and regression. In both cases, low values of $m$ yield better results, but in classification, low values of $k$ are better, unlike in regression. Details and a visualization of this experiment can be found in Appendix C. The best results obtained with this study are compared to the best results of the same models trained without VBSW in Table 1." }, { "heading": "5.2 MNIST AND CIFAR10", "text": "For MNIST, we train 40 LeNet 5 networks, i.e. with 40 different random seeds, and then apply VBSW for 10 different random seeds, with the ADAM optimizer and the categorical cross-entropy loss. Note that in the following, ADAM is used with the default parameters of its keras implementation. We record the best value obtained from the 10 VBSW trainings. The same procedure is followed for Cifar10, except that we train a ResNet20 for 50 random seeds, with data augmentation and learning rate decay. The networks have been trained on 4 Nvidia K80 GPUs. The values of the hyperparameters used can be found in Appendix C. We compare the test accuracy between LeNet 5 + VBSW, ResNet20 + VBSW and the initial test accuracies of LeNet 5 and ResNet20 (baseline) for each of the initial networks.
[Table 2 note: the gain is defined as $\max_{1 \leq i \leq 10}(acc(\mathcal{M}_f^i) - acc(\mathcal{M}))$, with $acc$ the accuracy and $\mathcal{M}_f^i$ the VBSW model trained at the $i$-th random seed.]
The results statistics are gathered in Table 2, which also displays statistics about the gain due to VBSW for each model. The results on MNIST, for all statistics and for the gain, are significantly better for VBSW than for the baseline. For Cifar10, we get a 0.3% accuracy improvement for the best model and up to 1.65% accuracy gain, meaning that among the 50 ResNet20s, there is one whose accuracy has been improved by 1.65% by VBSW. Note that applying VBSW took less than 15 minutes on a laptop with an i7-7700HQ CPU. A visualization of the samples that were weighted with the highest $w_i$ is given in Figure 3." }, { "heading": "5.3 RTE, STS-B AND MRPC", "text": "For this application, unlike in the previous experiments, we do not train the initial Bert NN ourselves, since it has been originally built for transfer learning purposes. Its purpose is therefore to be used as is and then fine-tuned on any NLP data set (see Devlin et al., 2019). However, because of the small size of the data sets and the high number of model parameters, we chose not to fine-tune the Bert model, and only to use the representations of the data sets in its feature space to apply VBSW. More specifically, we use tiny-bert (Turc et al., 2019), which is a lighter version of the initial Bert NN. We train the linear model with tensorflow, to be able to add the trained model on top of the Bert model and obtain a unified model.
STS-B is a regression task, so the model is trained with the Mean Squared Error. All the models are trained with the ADAM optimizer. For each task, we compare the training of the linear model with VBSW and without VBSW (baseline). The results obtained with VBSW are better overall, except for the Pearson correlation in STS-B, which is slightly worse than the baseline (Table 3)." }, { "heading": "5.4 ROBUSTNESS OF VBSW", "text": "VBSW relies on statistical estimation: the weights are based on local empirical variance, evaluated using $k$ points. In addition, they are rescaled using the hyperparameter $m$. Section 5.1 and Appendix C show that many different combinations of $m$ and $k$, and therefore many different values of the weights, improve the error. This behavior suggests that VBSW is quite robust to weight approximation errors.
We also assess the robustness of VBSW to label noise. To that end, we train a ResNet20 on Cifar10 with four different noise levels. We randomly change the label of p% of the training points, for four different values of p (10, 20, 30 and 40). We then apply VBSW 30 times and evaluate the performance of the obtained NNs on a clean test set. The results are gathered in Table 4.
The results show that VBSW is still effective despite label noise, which could be explained by two observations. First, the weights of VBSW rely on statistical estimation, so perturbations in the labels might have a limited impact on the weights' values. Second, as mentioned previously, VBSW is robust to weight approximation errors, so perturbations of the weights due to label noise may not critically hurt the method. Although VBSW is robust to label noise, note that the goal of VBSW is not to address the noisy label problem, as discussed in Section 2. It may be more effective to use a sampling technique specifically tailored for this situation - possibly jointly with VBSW, as in Section 5.5." }, { "heading": "5.5 COMPLEMENTARITY OF VBSW", "text": "Existing sample weighting techniques can be used jointly with VBSW by training the initial NN $\mathcal{M}$ with the first sample weighting algorithm, and then applying VBSW on its feature space. To illustrate this feature, we compare VBSW with the recently introduced Active Bias (AB) (Chang et al., 2017). AB dynamically weights the samples based on the variance of the prediction probability of each point throughout the training. Here, we study the combined effects of AB and VBSW for the training of a ResNet20 on Cifar10. Table 5 gathers the results of experiments for different baselines: vanilla, for a regular training with the Adam optimizer; AB, for a training with AB; VBSW, for the application of VBSW on top of a regular training; and AB + VBSW, for a training with AB followed by the application of VBSW. Unlike in Section 5.2, we do not use data augmentation or learning rate decay, to simplify the experiments (which explains the accuracy loss compared to the previous experiments).
These results demonstrate the competitiveness of VBSW compared with AB, and their complementarity. Indeed, the accuracy obtained with VBSW is quite similar to that of AB, and the best mean and max accuracy are obtained for a NN trained with AB + VBSW. Note that in this experiment, the gain per model (defined as $\max_{1 \leq i \leq 10}(acc(\mathcal{M}_f^i) - acc(\mathcal{M}))$, with $acc$ the accuracy and $\mathcal{M}_f^i$ the VBSW model trained at the $i$-th random seed) is lower for AB + VBSW than for VBSW alone. An explanation might be that AB already improves the NN performance compared to vanilla, so there is less room for accuracy improvement by VBSW in that case."
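All the DL experiments above instantiate Algorithm 1. As a rough illustration, here is a hypothetical Keras sketch for a classification model, reusing the vbsw_weights helper sketched in Section 4: it drops the last layer to obtain $\mathcal{M}^*$, projects the data, and retrains a fresh linear head on the weighted projected data set. It assumes one-hot labels and a model whose penultimate layer is the feature layer; it is a sketch, not the authors' exact training script.

```python
import tensorflow as tf

def vbsw_finetune(model, X, y, k=20, m=10.0, epochs=50):
    # Lines 3-6 of Algorithm 1: build M* by removing the last layer, project
    # the training set into the feature space, compute the weights w_i with
    # vbsw_weights (eq. 4), then fit a new linear head f_theta on the
    # weighted, projected data set.
    feat = tf.keras.Model(model.input, model.layers[-2].output)   # M*
    Z = feat.predict(X, verbose=0)                                # projected data
    w = vbsw_weights(Z, y, k=k, m=m)                              # weights in [1, m]
    head = tf.keras.Sequential(
        [tf.keras.layers.Dense(y.shape[-1], activation="softmax")])
    head.compile(optimizer="adam", loss="categorical_crossentropy")
    head.fit(Z, y, sample_weight=w, epochs=epochs, verbose=0)     # line 6
    return tf.keras.Model(model.input, head(feat.output))         # M_f = f_theta o M*
```

Because the head is trained on the pre-computed projections Z rather than through the frozen backbone, the whole adaptation step stays cheap, consistent with the "less than 15 minutes on a laptop CPU" observation of Section 5.2.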
}, { "heading": "6 DISCUSSION & FUTURE WORK", "text": "Previous experiments demonstrate the performances improvement that VBSW can bring in practice. In addition to these results, several advantages can be pointed out.\n• VBSW is validated on several different tasks, which makes it quite versatile. Moreover, the problem of high dimensionality and irregularity of f , which often arises in DL problems, is alleviated by focusing on the latent space of NNs. This makes VBSW scalable. As a result, VBSW can be applied to complex NNs such as ResNet, a complex CNN or Bert, for various ML tasks. Its application to more diverse ML models is a perspective for future works.\n• The validation presented in this paper supports an original view of the learning problem, that involves the local variations of f . The studies of Appendix A, that use the derivatives of the function to learn to sample a more efficient training data set, support this approach as well.\n• VBSW allows to extend this original view to problems where the derivatives of f are not accessible, and sometimes not defined. Indeed, VBSW comes from Taylor expansion, which is specific to derivable functions, but in the end can be applied regardless of the properties of f .\n• Finally, this method is cost effective. In most cases, it allows to quickly improve the performances of a NN using a regular CPU. In terms of energy consumption, it is better than carrying on a whole new training with a wider and deeper NN.\nWe first approximated pX to be uniform, because we could not approximate it correctly. This approximation still led to a an efficient methodology, but VBSW may benefit from a finer approximation of pX . Improving the approximation of pX is among our perspectives. Finally, the KD-tree and even Approximate Nearest Neighbors algorithms struggle when the data set is too big. One possibility to overcome this problem would be to parallelize their execution.\nWe only considered the cases where we have no access to f . However, there are ML applications where we do. For instance, in numerical simulations, for physical sciences, computational economics or climatology, ML can be used for various reasons, e.g. sensitivity analysis, inverse problems or to speed up computer codes (Zhu et al., 2019; Winovich et al., 2019; Feng et al., 2018). In this context data comes from numerical models, so the derivatives of f are accessible and could be directly used. Appendix A contains examples of such possible applications." }, { "heading": "7 CONCLUSION", "text": "Our work is based on the observation that, in supervised learning, a function f is more difficult to approximate by a NN in the regions where it is steeper. We mathematically traduced this intuition and derived a generalization bound to illustrate it. Then, we constructed an original method, Variance Based Samples Weighting (VBSW), that uses the variance of the training samples to weight the training data set and boosts the model’s performances. VBSW is simple to use and to implement, because it only requires to compute statistics on the input space. In Deep Learning, applying VBSW on the data set projected in the feature space of an already trained NN allows to reduce its error by simply training its last layer. Although specifically investigated in Deep Learning, this method is applicable to any loss function based supervised learning problem, scalable, cost effective, robust and versatile. 
It is validated on several applications, such as the GLUE benchmark with Bert for text classification and regression, and Cifar10 with a ResNet20 for image classification." }, { "heading": "APPENDIX A TAYLOR BASED SAMPLING", "text": "In this part, we empirically verify that using Taylor expansion to construct a new training distribution has a beneficial impact on the performance of a NN. To this end, we construct a methodology, which we call Taylor Based Sampling (TBS), that generates a new training data set based on the metric introduced in Section 3.3. First, we recall the formula for $\{D^f_{n,\varepsilon}(X_1), ..., D^f_{n,\varepsilon}(X_N)\}$:
$$D^f_{n,\varepsilon}(x) = \sum_{1 \leq |k| \leq n} \frac{\|\varepsilon\|^k \cdot \|\mathrm{Vect}(\partial^k f(x))\|}{k!}. \quad (6)$$
To focus the training set on the regions of interest, i.e. regions of high $\{D^f_{n,\varepsilon}(X_1), ..., D^f_{n,\varepsilon}(X_N)\}$, we use this metric to construct a probability density function (pdf). This is possible since $D^f_{n,\varepsilon}(x) \geq 0$ for all $x \in S$. It remains to normalize it, but in practice it is enough to consider a distribution $d \propto D^f_{n,\varepsilon}$. Here, to approximate $d$, we use a Gaussian Mixture Model (GMM) with pdf $d_{GMM}$ that we fit to $\{D^f_{n,\varepsilon}(X_1), ..., D^f_{n,\varepsilon}(X_N)\}$ using the Expectation-Maximization (EM) algorithm. $N'$ new data points $\{\bar{X}_1, ..., \bar{X}_{N'}\}$ can then be sampled, with $\bar{X} \sim d_{GMM}$. Finally, we obtain $\{f(\bar{X}_1), ..., f(\bar{X}_{N'})\}$, add it to $\{f(X_1), ..., f(X_N)\}$ and train our NN on the whole data set.
TAYLOR BASED SAMPLING (TBS)
TBS is described in Algorithm 2. Line 1: The choice criterion for $\varepsilon$, the number of Gaussian distributions $n_{GMM}$ and $N'$ is to avoid sparsity of $\{\bar{X}_1, ..., \bar{X}_{N'}\}$ over $S$. Line 2: Without a priori information on $f$, we sample the first points uniformly in a subspace $S$. Lines 3-7: We construct $\{D^f_{n,\varepsilon}(X_1), ..., D^f_{n,\varepsilon}(X_N)\}$, and then $d$, to be able to sample points accordingly. Line 8: Because the support of a GMM is not bounded, some points can be sampled outside $S$. We discard these points and sample until all points are inside $S$. This rejection method is equivalent to sampling points from a truncated GMM. Lines 9-10: We construct the labels and add the new points to the initial data set.
Algorithm 2 Taylor Based Sampling (TBS)
1: Inputs: $\varepsilon$, $N$, $N'$, $n_{GMM}$, $n$
2: Sample $\{X_1, ..., X_N\}$, with $X \sim \mathcal{U}(S)$
3: for $0 \leq k \leq n$ do
4:   Compute $\{\partial^k f(X_1), ..., \partial^k f(X_N)\}$
5: end for
6: Compute $\{D^f_{n,\varepsilon}(X_1), ..., D^f_{n,\varepsilon}(X_N)\}$ using equation 2
7: Approximate $d \propto D^f_{n,\varepsilon}$ with a GMM using the EM algorithm to obtain a density $d_{GMM}$
8: Sample $\{\bar{X}_1, ..., \bar{X}_{N'}\}$ using the rejection method to sample inside $S$
9: Compute $\{f(\bar{X}_1), ..., f(\bar{X}_{N'})\}$
10: Add $\{f(\bar{X}_1), ..., f(\bar{X}_{N'})\}$ to $\{f(X_1), ..., f(X_N)\}$
APPLICATION TO SIMPLE FUNCTIONS
Table 6: Comparison between BS and TBS. The metrics used are the $L_2$ and $L_\infty$ errors, displayed with a 95% confidence interval.
f: Runge (×10^-2): BS: $L_2$ 1.45 ± 0.62, $L_\infty$ 5.31 ± 0.86; TBS: $L_2$ 1.13 ± 0.73, $L_\infty$ 3.87 ± 0.48.
f: tanh (×10^-1): BS: $L_2$ 1.39 ± 0.67, $L_\infty$ 2.75 ± 0.78; TBS: $L_2$ 0.95 ± 0.50, $L_\infty$ 2.25 ± 0.61.
In order to illustrate the benefits of TBS compared to a uniform, basic sampling (BS), we apply it to two simple functions: the hyperbolic tangent and the Runge function. We chose these functions because they are differentiable and have a clear distinction between flat and steep regions. These functions are displayed in Figure 4, as well as the map $x \to D^f_{2,\varepsilon}(x)$. All NNs have been implemented in Python, with Tensorflow (Abadi et al., 2015). We use the Python package scikit-learn (Pedregosa et al., 2011), and more specifically the GaussianMixture class, to construct $d_{GMM}$.
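For concreteness, the sketch below instantiates the initial sampling of line 2 and lines 6-8 of Algorithm 2 on the Runge function, whose first two derivatives are known in closed form. Since scikit-learn's EM implementation does not accept sample weights, we approximate "fit a GMM to $d \propto D$" by resampling the points proportionally to the metric before fitting; this trick, the function names and all numerical values are our illustrative choices, not the paper's exact setup.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Taylor metric for Runge's function (n = 2, n_i = 1), from its analytic
# derivatives: D(x) = eps * |f'(x)| + 0.5 * eps**2 * |f''(x)|.
f   = lambda x: 1.0 / (1.0 + 25.0 * x ** 2)
df  = lambda x: -50.0 * x / (1.0 + 25.0 * x ** 2) ** 2
d2f = lambda x: (3750.0 * x ** 2 - 50.0) / (1.0 + 25.0 * x ** 2) ** 3
metric = lambda x, eps=1e-3: eps * np.abs(df(x)) + 0.5 * eps ** 2 * np.abs(d2f(x))

def tbs_sample(X, n_new, n_gmm=3, low=-1.0, high=1.0, seed=0):
    rng = np.random.default_rng(seed)
    D = metric(X).ravel()
    # Resample X proportionally to D so that EM fits a density ~ D (line 7).
    boot = X[rng.choice(len(X), size=10 * len(X), p=D / D.sum())]
    gmm = GaussianMixture(n_components=n_gmm, random_state=seed).fit(boot)
    new = np.empty((0, X.shape[1]))
    while len(new) < n_new:                       # line 8: rejection inside S
        cand, _ = gmm.sample(n_new)
        cand = cand[((cand >= low) & (cand <= high)).all(axis=1)]
        new = np.vstack([new, cand])
    return new[:n_new]

X_init = np.linspace(-1.0, 1.0, 8).reshape(-1, 1)   # initial uniform grid on S
X_new = tbs_sample(X_init, n_new=8)                  # concentrated where f is steep
```

The points returned by tbs_sample concentrate near x = 0, where the Runge function is steepest, which is exactly the behavior that Table 6 credits for the improved $L_\infty$ error.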
The network chosen for this experiment is a Multi Layer Perceptron (MLP) with one layer of 8 neurons and the ReLU activation function, which we train alternatively with BS and TBS, using the Adam optimizer (Kingma & Ba, 2014) with the default hyperparameters of its tensorflow implementation and the Mean Squared Error loss function. We first sample $\{X_1, ..., X_N\}$ according to a regular grid. To compare the two methods, we add $N'$ additional points sampled using BS to create the BS data set, and then $N'$ other points sampled with TBS to construct the TBS data set. As a result, each data set has the same number of points ($N + N'$). We repeated the method for several values of $n$, $n_{GMM}$ and $\varepsilon$, and finally selected $n = 2$, $n_{GMM} = 3$ and $\varepsilon = 10^{-3}$.
Table 6 summarizes the $L_2$ and the $L_\infty$ norms of the error of $f_\theta$, obtained at the end of the training phase for $N + N' = 16$, with $N = 8$. Those norms are estimated using the same test data set of 1000 points. The values are the means of 40 independent experiments, displayed with a 95% confidence interval. These results illustrate the benefits of TBS over BS. Table 6 shows that TBS slightly degrades the $L_2$ error of the NN, but improves its $L_\infty$ error. This may explain the good results of VBSW for classification. Indeed, for a classification task, the accuracy will not be very sensitive to small output variations, since the output is rounded to 0 or 1. However, a high error can induce a misclassification, and the reduction in $L_\infty$ norm limits this risk.
APPLICATION TO AN ODE SYSTEM
We apply TBS to a more realistic case: the approximation of the solution of the Bateman equations, which is an ODE system:
$$\partial_t u(t) = v\boldsymbol{\sigma}_a \cdot \boldsymbol{\eta}(t)u(t), \qquad \partial_t \boldsymbol{\eta}(t) = v\Sigma_r \cdot \boldsymbol{\eta}(t)u(t),$$
with initial conditions $u(0) = u_0$, $\boldsymbol{\eta}(0) = \boldsymbol{\eta}_0$, and with $u \in \mathbb{R}^+$, $\boldsymbol{\eta} \in (\mathbb{R}^+)^M$, $\boldsymbol{\sigma}_a^T \in \mathbb{R}^M$, $\Sigma_r \in \mathbb{R}^{M \times M}$. Here, $f : (u_0, \boldsymbol{\eta}_0, t) \to (u(t), \boldsymbol{\eta}(t))$. For physical applications, $M$ ranges from tens to thousands. We consider the particular case $M = 1$, so that $f : \mathbb{R}^3 \to \mathbb{R}^2$, with $f(u_0, \eta_0, t) = (u(t), \eta(t))$. The advantage of $M = 1$ is that we have access to an analytic, cheap to compute solution for $f$. Of course, this particular case can also be solved using a classical ODE solver, which allows us to test it end to end, so the approach can be generalized to higher dimensions ($M > 1$).
All NN trainings have been performed in Python, with Tensorflow (Abadi et al., 2015). We used a fully connected NN with hyperparameters chosen using a simple grid search. The final values are: 2 hidden layers, the ReLU activation function, and 32 units for each layer, trained with the Mean Squared Error (MSE) loss function using the Adam optimization algorithm, with a batch size of 50000, for 40000 epochs, on $N + N' = 50000$ points, with $N = N'$. We first trained the model for $(u(t), \eta(t)) \in \mathbb{R}^2$, with a uniform sampling (BS) ($N' = 0$), and then with TBS for several values of $n$, $n_{GMM}$ and $\varepsilon = \epsilon(1, 1, 1)$, to be able to find good values. We finally selected $\epsilon = 5 \times 10^{-4}$, $n = 2$ and $n_{GMM} = 10$. The data points used in this case have been sampled with an explicit Euler scheme. This experiment has been repeated 50 times to ensure the statistical significance of the results.
Table 7 summarizes the MSE, i.e. the $L_2$ norm of the error of $f_\theta$, and the $L_\infty$ norm, with $L_\infty(\theta) = \max_{X \in S}(|f(X) - f_\theta(X)|)$, obtained at the end of the training phase. This last metric is important because the goal in computational physics is not only to be accurate on average, which is measured with the MSE, but to be accurate over the whole input space $S$. Those norms are estimated using the same test data set of $N_{test} = 50000$ points.
The values are the means of the 50 independent experiments, displayed with a 95% confidence interval. These results reflect an error reduction of 6.6% for $L_2$ and of 45.3% for $L_\infty$, which means that TBS mostly improves the $L_\infty$ error of $f_\theta$. Moreover, the $L_\infty$ error confidence intervals do not intersect, so the gain is statistically significant for this norm.
[Figure 2 caption: 2a: $u_0, \eta_0 \to \max_{0 \leq t \leq 10} D^f_{n,\varepsilon}(u_0, \eta_0, t)$ w.r.t. $(u_0, \eta_0)$; 2b: $u_0, \eta_0 \to g_{\theta_{BS}}(u_0, \eta_0) - g_{\theta_{TBS}}(u_0, \eta_0)$.]
Figure 1a shows how the NN can perform for an average prediction. Figure 1b illustrates the benefits of TBS relative to BS on the $L_\infty$ error (Figure 2b). These two figures confirm the previous observation about the gain in $L_\infty$ error. Finally, Figure 2a displays $u_0, \eta_0 \to \max_{0 \leq t \leq 10} D^f_{n,\varepsilon}(u_0, \eta_0, t)$ w.r.t. $(u_0, \eta_0)$ and shows that $D^f_{n,\varepsilon}$ increases when $u_0 \to 0$. TBS hence focuses on this region. Note that for the readability of these plots, the values are capped at 0.10; otherwise, only a few points with high $D^f_{n,\varepsilon}$ are visible. Figure 2b displays $u_0, \eta_0 \to g_{\theta_{BS}}(u_0, \eta_0) - g_{\theta_{TBS}}(u_0, \eta_0)$, with $g_\theta : u_0, \eta_0 \to \max_{0 \leq t \leq 10} \|f(u_0, \eta_0, t) - f_\theta(u_0, \eta_0, t)\|_2^2$, where $\theta_{BS}$ and $\theta_{TBS}$ denote the parameters obtained after a training with BS and TBS, respectively. It can be interpreted as the error reduction achieved with TBS.
The highest error reduction occurs in the expected region. Indeed, more points are sampled where $D^f_{n,\varepsilon}$ is higher. The error is slightly increased in the rest of $S$, which could be explained by a sparser sampling on this region. However, as summarized in Table 1, the average error loss (AEL) of TBS is around six times lower than the average error gain (AEG), with $AEG = \mathbb{E}_{u_0,\eta_0}(Z\mathbb{1}_{Z>0})$ and $AEL = \mathbb{E}_{u_0,\eta_0}(Z\mathbb{1}_{Z<0})$, where $Z(u_0, \eta_0) = g_{\theta_{BS}}(u_0, \eta_0) - g_{\theta_{TBS}}(u_0, \eta_0)$. In practice, AEG and AEL are estimated using uniform grid integration, and averaged over the 50 experiments." }, { "heading": "APPENDIX B: DEMONSTRATIONS", "text": "INTUITION BEHIND TAYLOR EXPANSION (SECTION 3.2)
We want to approximate $f : x \to f(x)$, $x \in \mathbb{R}^{n_i}$, $f(x) \in \mathbb{R}^{n_o}$, with a NN $f_\theta$. The goal of the approximation problem can be seen as being able to generalize to points not seen during the training. We thus want the generalization error $\bar{J}_X(\theta)$ to be as small as possible. Given an initial data set $\{X_1, ..., X_N\}$ drawn from $X \sim dP_X$ and $\{f(X_1), ..., f(X_N)\}$, and the loss function $L$ being the squared $L_2$ error, recall that the integrated error $J_X(\theta)$, its estimation $\hat{J}_X(\theta)$ and the generalization error $\bar{J}_X(\theta)$ can be written:
$$J_X(\theta) = \int_S \|f(x) - f_\theta(x)\|\, dP_X, \qquad \hat{J}_X(\theta) = \frac{1}{N}\sum_{i=1}^{N} \|f_\theta(X_i) - f(X_i)\|, \qquad \bar{J}_X(\theta) = J_X(\theta) - \hat{J}_X(\theta), \quad (7)$$
where $\|.\|$ denotes the squared $L_2$ norm. In the following, we find an upper bound for $\bar{J}_X(\theta)$. We start by finding an upper bound for $J_X(\theta)$ and then for $\bar{J}_X(\theta)$ using equation 7. Let $S_i$, $i \in \{1, ..., N\}$, be some sub-spaces of a bounded space $S$ such that $S = \bigcup_{i=1}^{N} S_i$, $\bigcap_{i=1}^{N} S_i = \emptyset$, and $X_i \in S_i$. Then,
$$J_X(\theta) = \sum_{i=1}^{N} \int_{S_i} \|f(x) - f_\theta(x)\|\, dP_X = \sum_{i=1}^{N} \int_{S_i} \|f(X_i + x - X_i) - f_\theta(x)\|\, dP_X.$$
Suppose that $n_i = n_o = 1$ and $f$ twice differentiable. Let $|S| = \int_S dP_X$. The volume $|S| = 1$ since $dP_X$ is a probability measure, and therefore $|S_i| < 1$ for all $i \in \{1, ..., N\}$. Using Taylor expansion at order 2, and since $|S_i| < 1$ for all $i \in \{1, ..., N\}$,
$$J_X(\theta) = \sum_{i=1}^{N} \int_{S_i} \|f(X_i) + f'(X_i)(x - X_i) + \tfrac{1}{2}f''(X_i)(x - X_i)^2 - f_\theta(x) + O((x - X_i)^3)\|\, dP_X.$$
To find an upper bound for $J_X(\theta)$, we can first find an upper bound for $|A_i(x)|$, with $A_i(x) = f(X_i) + f'(X_i)(x - X_i) + \tfrac{1}{2}f''(X_i)(x - X_i)^2 - f_\theta(x) + O((x - X_i)^3)$. The NN $f_\theta$ is $K_\theta$-Lipschitz, so since $S$ is bounded (so are the $S_i$), for all $x \in S_i$, $|f_\theta(x) - f_\theta(X_i)| \leq K_\theta|x - X_i|$.
Hence,
$$f_\theta(X_i) - K_\theta|x - X_i| \leq f_\theta(x) \leq f_\theta(X_i) + K_\theta|x - X_i|,$$
$$-f_\theta(X_i) - K_\theta|x - X_i| \leq -f_\theta(x) \leq -f_\theta(X_i) + K_\theta|x - X_i|,$$
so that
$$f(X_i) + f'(X_i)(x - X_i) + \tfrac{1}{2}f''(X_i)(x - X_i)^2 - f_\theta(X_i) - K_\theta|x - X_i| + O((x - X_i)^3) \leq A_i(x) \leq f(X_i) + f'(X_i)(x - X_i) + \tfrac{1}{2}f''(X_i)(x - X_i)^2 - f_\theta(X_i) + K_\theta|x - X_i| + O((x - X_i)^3),$$
hence
$$A_i(x) \leq f(X_i) - f_\theta(X_i) + f'(X_i)(x - X_i) + \tfrac{1}{2}f''(X_i)(x - X_i)^2 + K_\theta|x - X_i| + O((x - X_i)^3).$$
And finally, using the triangular inequality,
$$A_i(x) \leq |f(X_i) - f_\theta(X_i)| + |f'(X_i)||x - X_i| + \tfrac{1}{2}|f''(X_i)||x - X_i|^2 + K_\theta|x - X_i| + O(|x - X_i|^3).$$
Now, $\|.\|$ being the squared $L_2$ norm:
$$J_X(\theta) = \sum_{i=1}^{N}\int_{S_i} \|A_i(x)\|\, dP_X \leq \sum_{i=1}^{N}\int_{S_i}\Big[|f(X_i) - f_\theta(X_i)| + \big(|f'(X_i)||x - X_i| + \tfrac{1}{2}|f''(X_i)||x - X_i|^2 + K_\theta|x - X_i|\big) + O(|x - X_i|^3)\Big]^2 dP_X.$$
Expanding the square, and noticing that the terms involving $\tfrac{1}{2}|f''(X_i)||x - X_i|^2$ inside the squared bracket only contribute at order $O(|x - X_i|^3)$ or higher,
$$J_X(\theta) \leq \sum_{i=1}^{N}\int_{S_i}\Big[|f(X_i) - f_\theta(X_i)|^2 + 2|f(X_i) - f_\theta(X_i)|\big(|f'(X_i)||x - X_i| + \tfrac{1}{2}|f''(X_i)||x - X_i|^2 + K_\theta|x - X_i|\big) + \big(|f'(X_i)|^2 + 2K_\theta|f'(X_i)| + K_\theta^2\big)|x - X_i|^2 + O(|x - X_i|^3)\Big]\, dP_X,$$
that is,
$$J_X(\theta) \leq \sum_{i=1}^{N}\int_{S_i}\Big[|f(X_i) - f_\theta(X_i)|^2 + 2|f(X_i) - f_\theta(X_i)|\big(|f'(X_i)||x - X_i| + \tfrac{1}{2}|f''(X_i)||x - X_i|^2 + K_\theta|x - X_i|\big) + \big(|f'(X_i)| + K_\theta\big)^2|x - X_i|^2 + O(|x - X_i|^3)\Big]\, dP_X.$$
Hornik's theorem (Hornik et al., 1989) states that given a norm $\|.\|_{p,\mu}$ such that $\|f\|_{p,\mu}^p = \int_S |f(x)|^p d\mu(x)$, with $d\mu$ a probability measure, for any $\epsilon$ there exists $\theta$ such that, for a Multi Layer Perceptron $f_\theta$, $\|f(x) - f_\theta(x)\|_{p,\mu}^p < \epsilon$.
This theorem grants that for any $\epsilon$, with $d\mu = \sum_{i=1}^{N}\frac{1}{N}\delta(x - X_i)$, there exists $\theta$ such that
$$\|f(x) - f_\theta(x)\|_{1,\mu} = \sum_{i=1}^{N}\frac{1}{N}|f(X_i) - f_\theta(X_i)| \leq \epsilon, \qquad \|f(x) - f_\theta(x)\|_{2,\mu}^2 = \sum_{i=1}^{N}\frac{1}{N}\big(f(X_i) - f_\theta(X_i)\big)^2 \leq \epsilon. \quad (8)$$
Let us introduce $i^*$ such that $i^* = \mathrm{argmin}_i |S_i|$. Note that for any $i \in \{1, ..., N\}$, $O(|S_{i^*}|^4)$ is $O(|S_i|^4)$. Now, let us choose $\epsilon$ such that $\epsilon = O(|S_{i^*}|^4)$. Then, equation 8 implies that $|f(X_i) - f_\theta(X_i)| = O(|S_i|^4)$, $\big(f(X_i) - f_\theta(X_i)\big)^2 = O(|S_i|^4)$ and $\hat{J}_X(\theta) = \|f(x) - f_\theta(x)\|_{2,\mu}^2 = O(|S_i|^4)$. Thus, we have $\bar{J}_X(\theta) = J_X(\theta) - \hat{J}_X(\theta) = J_X(\theta) + O(|S_i|^4)$ and therefore,
$$\bar{J}_X(\theta) \leq \sum_{i=1}^{N}\int_{S_i} \big(|f'(X_i)| + K_\theta\big)^2 |x - X_i|^2\, dP_X + O(|S_i|^4).$$
Finally,
$$\bar{J}_X(\theta) \leq \sum_{i=1}^{N} \big(|f'(X_i)| + K_\theta\big)^2 \frac{|S_i|^3}{3} + O(|S_i|^4). \quad (9)$$
We see that on the regions where $|f'(X_i)| + K_\theta$ is higher, quantity $|S_i|$ (the volume of $S_i$) has a stronger impact on the GB. Then, since $|S_i|$ can be seen as a metric for the local density of the data set (the smaller $|S_i|$ is, the denser the data set is), the Generalization Bound (GB) can be reduced more efficiently by adding more points around $X_i$ in these regions. This bound also involves $K_\theta$, the Lipschitz constant of the NN, which has the same impact as $f'(X_i)$. It also illustrates the link between the Lipschitz constant and the generalization error, which has been pointed out by several works like, for instance, Gouk et al. (2018), Bartlett et al. (2017) and Qian & Wegman (2019).
PROBLEM 1: UNAVAILABILITY OF DERIVATIVES (SECTION 4.1)
In this paragraph, we consider $n_i > 1$ but $n_o = 1$. The following derivations can be extended to $n_o > 1$ by applying them to $f$ element-wise. Let $\varepsilon \sim \mathcal{N}(0, \epsilon I_{n_i})$ with $\epsilon \in \mathbb{R}^+$ and $\varepsilon = (\varepsilon_1, ..., \varepsilon_{n_i})$, i.e. $\varepsilon_i \sim \mathcal{N}(0, \epsilon)$. Using Taylor expansion on $f$ at order 2 gives
$$f(x + \varepsilon) = f(x) + \nabla_x f(x) \cdot \varepsilon + \tfrac{1}{2}\varepsilon^T \cdot H_x f(x) \cdot \varepsilon + O(\|\varepsilon\|_2^3),$$
with $\nabla_x f$ and $H_x f(x)$ the gradient and the Hessian of $f$ w.r.t. $x$.
We now compute $Var(f(x + \varepsilon))$ and make $D^f_{2,\epsilon}(x) = \epsilon\|\nabla_x f(x)\|_F^2 + \tfrac{1}{2}\epsilon^2\|H_x f(x)\|_F^2$ appear in its expression, to establish a link between these two quantities:
$$Var(f(x + \varepsilon)) = Var\Big(f(x) + \nabla_x f(x) \cdot \varepsilon + \tfrac{1}{2}\varepsilon^T \cdot H_x f(x) \cdot \varepsilon + O(\|\varepsilon\|_2^3)\Big) = Var\Big(\nabla_x f(x) \cdot \varepsilon + \tfrac{1}{2}\varepsilon^T \cdot H_x f(x) \cdot \varepsilon\Big) + O(\|\varepsilon\|_2^3).$$
Since $\varepsilon_i \sim \mathcal{N}(0, \epsilon)$, $x = (x_1, ..., x_{n_i})$, and with $\frac{\partial^2 f}{\partial x_j \partial x_k}(x)$ the cross derivatives of $f$ w.r.t. $x_j$ and $x_k$,
$$\nabla_x f(x) \cdot \varepsilon + \tfrac{1}{2}\varepsilon^T \cdot H_x f(x) \cdot \varepsilon = \sum_{i=1}^{n_i} \varepsilon_i \frac{\partial f}{\partial x_i}(x) + \tfrac{1}{2}\sum_{j=1}^{n_i}\sum_{k=1}^{n_i} \varepsilon_j \varepsilon_k \frac{\partial^2 f}{\partial x_j \partial x_k}(x),$$
so that
$$Var\Big(\nabla_x f(x) \cdot \varepsilon + \tfrac{1}{2}\varepsilon^T \cdot H_x f(x) \cdot \varepsilon\Big) = \sum_{i_1=1}^{n_i}\sum_{i_2=1}^{n_i} \frac{\partial f}{\partial x_{i_1}}(x)\frac{\partial f}{\partial x_{i_2}}(x)\, Cov(\varepsilon_{i_1}, \varepsilon_{i_2}) + \frac{1}{4}\sum_{j_1,k_1,j_2,k_2=1}^{n_i} \frac{\partial^2 f}{\partial x_{j_1} \partial x_{k_1}}(x)\frac{\partial^2 f}{\partial x_{j_2} \partial x_{k_2}}(x)\, Cov(\varepsilon_{j_1}\varepsilon_{k_1}, \varepsilon_{j_2}\varepsilon_{k_2}) + \sum_{i,j,k=1}^{n_i} \frac{\partial f}{\partial x_i}(x)\frac{\partial^2 f}{\partial x_j \partial x_k}(x)\, Cov(\varepsilon_i, \varepsilon_j \varepsilon_k).$$
In this expression, we have to assess three quantities: $Cov(\varepsilon_{i_1}, \varepsilon_{i_2})$, $Cov(\varepsilon_i, \varepsilon_j \varepsilon_k)$ and $Cov(\varepsilon_{j_1}\varepsilon_{k_1}, \varepsilon_{j_2}\varepsilon_{k_2})$.
First, since $(\varepsilon_1, ..., \varepsilon_{n_i})$ are i.i.d., $Cov(\varepsilon_{i_1}, \varepsilon_{i_2}) = Var(\varepsilon_i) = \epsilon$ if $i_1 = i_2 = i$, and $0$ otherwise.
To assess $Cov(\varepsilon_i, \varepsilon_j \varepsilon_k)$, three cases have to be considered.
• If $i = j = k$, because $\mathbb{E}[\varepsilon_i^3] = 0$: $Cov(\varepsilon_i, \varepsilon_i^2) = \mathbb{E}[\varepsilon_i^3] - \mathbb{E}[\varepsilon_i]\mathbb{E}[\varepsilon_i^2] = 0$.
• If $i = j$ or $i = k$ (we consider $i = k$, and the result holds for $i = j$ by commutativity): $Cov(\varepsilon_i, \varepsilon_i \varepsilon_j) = \mathbb{E}[\varepsilon_i^2 \varepsilon_j] - \mathbb{E}[\varepsilon_i]\mathbb{E}[\varepsilon_i \varepsilon_j] = \mathbb{E}[\varepsilon_i^2]\mathbb{E}[\varepsilon_j] = 0$.
• If $i \neq j$ and $i \neq k$, $\varepsilon_i$ and $\varepsilon_j \varepsilon_k$ are independent, so $Cov(\varepsilon_i, \varepsilon_j \varepsilon_k) = 0$.
Finally, to assess $Cov(\varepsilon_{j_1}\varepsilon_{k_1}, \varepsilon_{j_2}\varepsilon_{k_2})$, four cases have to be considered:
• If $j_1 = j_2 = k_1 = k_2 = i$: $Cov(\varepsilon_{j_1}\varepsilon_{k_1}, \varepsilon_{j_2}\varepsilon_{k_2}) = Var(\varepsilon_i^2) = 2\epsilon^2$.
• If $j_1 = k_1 = i$ and $j_2 = k_2 = j$ with $i \neq j$: $Cov(\varepsilon_i^2, \varepsilon_j^2) = 0$, since $\varepsilon_i^2$ and $\varepsilon_j^2$ are independent.
• If $j_1 = j_2 = j$ and $k_1 = k_2 = k$ with $j \neq k$: $Cov(\varepsilon_{j_1}\varepsilon_{k_1}, \varepsilon_{j_2}\varepsilon_{k_2}) = Var(\varepsilon_j \varepsilon_k) = Var(\varepsilon_j)Var(\varepsilon_k) = \epsilon^2$.
• If $j_1 \neq k_1, j_2$ and $k_2$: $Cov(\varepsilon_{j_1}\varepsilon_{k_1}, \varepsilon_{j_2}\varepsilon_{k_2}) = \mathbb{E}[\varepsilon_{j_1}]\mathbb{E}[\varepsilon_{k_1}\varepsilon_{j_2}\varepsilon_{k_2}] - \mathbb{E}[\varepsilon_{j_1}]\mathbb{E}[\varepsilon_{k_1}]\mathbb{E}[\varepsilon_{j_2}\varepsilon_{k_2}] = 0$.
All other possible cases can be assessed using the previous results, the commutativity and the symmetry of the $Cov$ operator. Hence,
$$Var\Big(\nabla_x f(x) \cdot \varepsilon + \tfrac{1}{2}\varepsilon^T \cdot H_x f(x) \cdot \varepsilon\Big) = \epsilon \sum_{i=1}^{n_i} \Big(\frac{\partial f}{\partial x_i}(x)\Big)^2 + \frac{1}{2}\epsilon^2 \sum_{j=1}^{n_i}\sum_{k=1}^{n_i} \Big(\frac{\partial^2 f}{\partial x_j \partial x_k}(x)\Big)^2 = \epsilon\|\nabla_x f(x)\|_F^2 + \tfrac{1}{2}\epsilon^2\|H_x f(x)\|_F^2 = D^f_{2,\epsilon}(x).$$
And finally,
$$Var(f(x + \varepsilon)) = D^f_{2,\epsilon}(x) + O(\|\varepsilon\|_2^3). \quad (10)$$
If we consider $\hat{D}^f_{2,\epsilon}(x)$ as defined in equations 3 and 4 of Section 4.1, $\hat{D}^f_{2,\epsilon}(x) \to_{k \to \infty} Var(f(x + \varepsilon))$. Since $Var(f(x + \varepsilon)) = D^f_{2,\epsilon}(x) + O(\|\varepsilon\|_2^3)$, $\hat{D}^f_{2,\epsilon}(x)$ is a biased estimator of $D^f_{2,\epsilon}(x)$, with bias $O(\|\varepsilon\|_2^3)$. Hence, when $\epsilon \to 0$, $\hat{D}^f_{2,\epsilon}(x)$ becomes an unbiased estimator of $D^f_{2,\epsilon}(x)$." }, { "heading": "APPENDIX C: HYPERPARAMETERS", "text": "This appendix is split into two parts. First, we describe the results of the experiments on the hyperparameter search for the Boston Housing (BH) and Breast Cancer (BC) data sets (Section 5.1). The second part is a list of the final hyperparameter values chosen for the experiments of the main paper.
EXPERIMENTS ON BOSTON HOUSING AND BREAST CANCER DATA SETS
For the BH and BC experiments, we conduct a grid search for VBSW on the values of $m$ and $k$.
As a reminder, $m$ is the ratio between the highest and the lowest weights, and $k$ is the number of neighbor points used to compute the local variance. We train a linear model for BH and an MLP with 30 units for BC, with VBSW, on a grid of 20 values of $m$ equally distributed between 2 and 100 and 20 values of $k$ equally distributed between 10 and 50. As a result, we train the models on 400 pairs of $(m, k)$ values, with 10 different random seeds for each pair.
These experiments, illustrated in Figure 6, show that the influence of $m$ and $k$ on the performance of the model can be different. For the BH data set, low values of $k$ clearly lead to poorer performance. Hyperparameter $m$ seems to have less impact, although it should be chosen not too far from its lowest value, 2. For the BC data set, on the contrary, the best performances are obtained for low values of $k$, while $m$ can be chosen among higher values. These experiments highlight that the impact of $m$ and $k$ can be different between classification and regression, but it could also be different depending on the data set. Hence, we recommend considering these hyperparameters like many others involved in DL, and selecting their values using classical hyperparameter optimization techniques.
This also shows that many different $(m, k)$ pairs lead to error improvement. This suggests that the weight approximation does not have to be exact in order for VBSW to be effective, as stated in Section 5.4.
PAPER HYPERPARAMETERS VALUES
The values chosen for the hyperparameters of the paper experiments are gathered in Table 8. For the ADAM optimizer hyperparameters, we kept the default values of the Keras implementation. We chose these hyperparameters after simple grid searches." } ]
2020
null
SP:2dc6337218afc973db75d973d08c0cdd7e55698b
[ "The authors propose RMIX to deal with the randomness of rewards and the uncertainty in environments. RMIX learns the individual value distributions of each agent and uses a predictor to calculate the dynamic risk level. Given the individual value distribution and the risk level, a CVaR operator outputs the C value for execution. For training, the $C$ values are mixed as $C^{tot}$ and updated by TD error end-to-end. RMIX outperforms a series of value decomposition baselines on many challenging StarCraft II tasks. The paper is very clear and well-structured. Expanding value decomposition methods to the risk-sensitive field is a novel idea, and it shows competitive performance in empirical studies. ", "This paper proposes a new value-based method using risk measures in cooperative multi-agent reinforcement learning. The authors propose a new network structure that calculates global CVaR through individual distribution and learns risk-sensitized multi-agent policies. The authors also propose a new dynamic risk level prediction method that can dynamically adjust the risk level according to the agent’s observation and action. Applying risk-sensitive reinforcement learning in multi-agent reinforcement learning is interesting, but several points can be improved." ]
Centralized training with decentralized execution (CTDE) has become an important paradigm in multi-agent reinforcement learning (MARL). Current CTDE-based methods rely on restrictive decompositions of the centralized value function across agents, which decompose the global Q-value into individual Q-values to guide individuals' behaviours. However, such expected, i.e., risk-neutral, Q-value decomposition is not sufficient even with CTDE due to the randomness of rewards and the uncertainty in environments, which causes the failure of these methods to train coordinating agents in complex environments. To address these issues, we propose RMIX, a novel cooperative MARL method with the Conditional Value at Risk (CVaR) measure over the learned distributions of individuals' Q-values. Our main contributions are threefold: (i) We first learn the return distributions of individuals to analytically calculate CVaR for decentralized execution; (ii) We then propose a dynamic risk level predictor for CVaR calculation to handle the temporal nature of the stochastic outcomes during executions; (iii) We finally propose a risk-sensitive Bellman equation along with Individual-Global-Max (IGM) for MARL training. Empirically, we show that our method significantly outperforms state-of-the-art methods on many challenging StarCraft II tasks, demonstrating significantly enhanced coordination and high sample efficiency. Demonstrative videos and results are available at this anonymous link: https://sites.google.com/view/rmix.
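As a hedged illustration of the "analytically calculate CVaR" step: for a return distribution represented by equally weighted quantile atoms (the usual representation in quantile-based distributional RL), CVaR at level alpha reduces to the mean of the worst alpha-fraction of atoms. The helper below is our sketch of that computation only; the dynamic risk level predictor and the CTDE mixing described in the abstract are omitted, and the function name is ours.

```python
import numpy as np

def cvar_from_quantiles(quantiles, alpha):
    # CVaR_alpha = E[Z | Z <= VaR_alpha]: the mean of the lowest
    # ceil(alpha * N) of N equally weighted quantile estimates.
    q = np.sort(np.asarray(quantiles, dtype=float))
    n = max(1, int(np.ceil(alpha * q.size)))
    return q[:n].mean()

returns = np.array([-2.0, 0.5, 1.0, 1.2, 1.5, 2.0, 2.5, 3.0])
print(cvar_from_quantiles(returns, alpha=0.25))  # -0.75: mean of the two worst atoms
```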
[]
[ { "authors": [ "Carlo Acerbi", "Dirk Tasche" ], "title": "On the coherence of expected shortfall", "venue": "Journal of Banking & Finance,", "year": 2002 }, { "authors": [ "Marc G Bellemare", "Will Dabney", "Rémi Munos" ], "title": "A distributional perspective on reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Yinlam Chow", "Mohammad Ghavamzadeh" ], "title": "Algorithms for cvar optimization in MDPs", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Yinlam Chow", "Aviv Tamar", "Shie Mannor", "Marco Pavone" ], "title": "Risk-sensitive and robust decisionmaking: a cvar optimization approach", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Junyoung Chung", "Caglar Gulcehre", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Empirical evaluation of gated recurrent neural networks on sequence modeling", "venue": "In Advances in Neural Information Processing Systems 2014 Workshop on Deep Learning,", "year": 2014 }, { "authors": [ "Will Dabney", "Georg Ostrovski", "David Silver", "Remi Munos" ], "title": "Implicit quantile networks for distributional reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Will Dabney", "Mark Rowland", "Marc G Bellemare", "Rémi Munos" ], "title": "Distributional reinforcement learning with quantile regression", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Jakob Foerster", "Gregory Farquhar", "Triantafyllos Afouras", "Nantas Nardelli", "Shimon Whiteson" ], "title": "Counterfactual multi-agent policy gradients", "venue": "arXiv preprint arXiv:1705.08926,", "year": 2017 }, { "authors": [ "Jakob Foerster", "Nantas Nardelli", "Gregory Farquhar", "Triantafyllos Afouras", "Philip HS Torr", "Pushmeet Kohli", "Shimon Whiteson" ], "title": "Stabilising experience replay for deep multi-agent reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Javier García" ], "title": "A comprehensive survey on safe reinforcement learning", "venue": "Journal of Machine Learning Research,", "year": 2015 }, { "authors": [ "He He", "Jordan Boyd-Graber", "Kevin Kwok", "Hal Daumé III" ], "title": "Opponent modeling in deep reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Takuya Hiraoka", "Takahisa Imagawa", "Tatsuya Mori", "Takashi Onishi", "Yoshimasa Tsuruoka" ], "title": "Learning robust options by conditional value at risk optimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Jian Hu", "Seth Austin Harding", "Haibin Wu", "Shih-wei Liao" ], "title": "Qr-mix: Distributional value function factorisation for cooperative multi-agent reinforcement learning", "venue": "arXiv preprint arXiv:2009.04197,", "year": 2020 }, { "authors": [ "Maximilian Hüttenrauch", "Adrian Šošić", "Gerhard Neumann" ], "title": "Guided deep reinforcement learning for swarm systems", "venue": "AAMAS", "year": 2017 }, { "authors": [ "Dan A Iancu", "Marek Petrik", "Dharmashankar Subramanian" ], "title": "Tight approximations of dynamic risk measures", "venue": "Mathematics of Operations Research,", "year": 2015 }, { "authors": [ "Shariq Iqbal", "Fei Sha" ], "title": "Actor-attention-critic for multi-agent reinforcement learning", "venue": "In 
International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Jiechuan Jiang", "Zongqing Lu" ], "title": "Learning attentional communication for multi-agent cooperation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Ramtin Keramati", "Christoph Dann", "Alex Tamkin", "Emma Brunskill" ], "title": "Being optimistic to be conservative: Quickly learning a cvar policy", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Ravi Kumar Kolla", "LA Prashanth", "Sanjay P Bhat", "Krishna Jagannathan" ], "title": "Concentration bounds for empirical conditional value-at-risk: The unbounded case", "venue": "Operations Research Letters,", "year": 2019 }, { "authors": [ "Landon Kraemer", "Bikramjit Banerjee" ], "title": "Multi-agent reinforcement learning as a rehearsal for decentralized planning", "venue": null, "year": 2016 }, { "authors": [ "Timothy P Lillicrap", "Jonathan J Hunt", "Alexander Pritzel", "Nicolas Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Ryan Lowe", "Yi Wu", "Aviv Tamar", "Jean Harb", "OpenAI Pieter Abbeel", "Igor Mordatch" ], "title": "Multi-agent actor-critic for mixed cooperative-competitive environments", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Xueguang Lyu", "Christopher Amato" ], "title": "Likelihood quantile networks for coordinating multi-agent reinforcement learning", "venue": "In Proceedings of the 19th International Conference on Autonomous Agents and MultiAgent Systems,", "year": 2020 }, { "authors": [ "Xiaoteng Ma", "Qiyuan Zhang", "Li Xia", "Zhengyuan Zhou", "Jun Yang", "Qianchuan Zhao" ], "title": "Distributional soft actor critic for risk sensitive learning", "venue": "arXiv preprint arXiv:2004.14547,", "year": 2020 }, { "authors": [ "Anuj Mahajan", "Tabish Rashid", "Mikayel Samvelyan", "Shimon Whiteson" ], "title": "MAVEN: Multi-agent variational exploration", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Anirudha Majumdar", "Marco Pavone" ], "title": "How should a robot assess risk? 
Towards an axiomatic theory of risk in robotics", "venue": "In Robotics Research,", "year": 2020 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": "Nature, 518(7540):529–533,", "year": 2015 }, { "authors": [ "Frans A Oliehoek", "Matthijs TJ Spaan", "Nikos Vlassis" ], "title": "Optimal and approximate q-value functions for decentralized POMDPs", "venue": "Journal of Artificial Intelligence Research,", "year": 2008 }, { "authors": [ "Frans A Oliehoek", "Christopher Amato" ], "title": "A Concise Introduction to Decentralized POMDPs, volume", "venue": null, "year": 2016 }, { "authors": [ "Nicolas Privault" ], "title": "Notes on Financial Risk and Analytics", "venue": "Course notes,", "year": 2020 }, { "authors": [ "Tabish Rashid", "Mikayel Samvelyan", "Christian Schroeder De Witt", "Gregory Farquhar", "Jakob Foerster", "Shimon Whiteson" ], "title": "QMIX: Monotonic value function factorisation for deep multi-agent reinforcement learning", "venue": "arXiv preprint arXiv:1803.11485,", "year": 2018 }, { "authors": [ "D Sai Koti Reddy", "Amrita Saha", "Srikanth G Tamilselvam", "Priyanka Agrawal", "Pankaj Dayama" ], "title": "Risk averse reinforcement learning for mixed multi-agent environments", "venue": "In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems,", "year": 2019 }, { "authors": [ "R Tyrrell Rockafellar", "Stanislav Uryasev" ], "title": "Conditional value-at-risk for general loss distributions", "venue": "Journal of Banking & Finance,", "year": 2002 }, { "authors": [ "R Tyrrell Rockafellar", "Stanislav Uryasev" ], "title": "Optimization of conditional value-at-risk", "venue": "Journal of Risk,", "year": 2000 }, { "authors": [ "Andrzej Ruszczyński" ], "title": "Risk-averse dynamic programming for markov decision processes", "venue": "Mathematical Programming,", "year": 2010 }, { "authors": [ "Mikayel Samvelyan", "Tabish Rashid", "Christian Schroeder de Witt", "Gregory Farquhar", "Nantas Nardelli", "Tim G.J. Rudner", "Chia-Man Hung", "Philiph H.S. 
Torr", "Jakob Foerster", "Shimon Whiteson" ], "title": "The StarCraft Multi-Agent Challenge", "venue": null, "year": 1902 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "David Silver", "Julian Schrittwieser", "Karen Simonyan", "Ioannis Antonoglou", "Aja Huang", "Arthur Guez", "Thomas Hubert", "Lucas Baker", "Matthew Lai", "Adrian Bolton" ], "title": "Mastering the game of Go without human knowledge", "venue": null, "year": 2017 }, { "authors": [ "Arambam James Singh", "Akshat Kumar", "Hoong Chuin Lau" ], "title": "Hierarchical multiagent reinforcement learning for maritime traffic management", "venue": "In Proceedings of the 19th International Conference on Autonomous Agents and MultiAgent Systems,", "year": 2020 }, { "authors": [ "Kyunghwan Son", "Daewoo Kim", "Wan Ju Kang", "David Earl Hostallero", "Yung Yi" ], "title": "Qtran: Learning to factorize with transformation for cooperative multi-agent reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Peter Sunehag", "Guy Lever", "Audrunas Gruslys", "Wojciech Marian Czarnecki", "Vinicius Zambaldi", "Max Jaderberg", "Marc Lanctot", "Nicolas Sonnerat", "Joel Z Leibo", "Karl Tuyls" ], "title": "Value-decomposition networks for cooperative multi-agent learning", "venue": "arXiv preprint arXiv:1706.05296,", "year": 2017 }, { "authors": [ "Richard S Sutton", "Andrew G Barto" ], "title": "Reinforcement Learning: An Introduction", "venue": "MIT press,", "year": 2018 }, { "authors": [ "Aviv Tamar", "Yinlam Chow", "Mohammad Ghavamzadeh", "Shie Mannor" ], "title": "Policy gradient for coherent risk measures", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Ardi Tampuu", "Tambet Matiisen", "Dorian Kodelja", "Ilya Kuzovkin", "Kristjan Korjus", "Juhan Aru", "Jaan Aru", "Raul Vicente" ], "title": "Multiagent cooperation and competition with deep reinforcement learning", "venue": "PLoS ONE,", "year": 2017 }, { "authors": [ "Yichuan Charlie Tang", "Jian Zhang", "Ruslan Salakhutdinov" ], "title": "Worst cases policy gradients", "venue": "CoRL,", "year": 2019 }, { "authors": [ "Oriol Vinyals", "Timo Ewalds", "Sergey Bartunov", "Petko Georgiev", "Alexander Sasha Vezhnevets", "Michelle Yeo", "Alireza Makhzani", "Heinrich Küttler", "John Agapiou", "Julian Schrittwieser" ], "title": "Starcraft ii: A new challenge for reinforcement learning", "venue": "arXiv preprint arXiv:1708.04782,", "year": 2017 }, { "authors": [ "Oriol Vinyals", "Igor Babuschkin", "Wojciech M Czarnecki", "Michaël Mathieu", "Andrew Dudzik", "Junyoung Chung", "David H Choi", "Richard Powell", "Timo Ewalds", "Petko Georgiev" ], "title": "Grandmaster level in StarCraft II using multi-agent reinforcement learning", "venue": null, "year": 2019 }, { "authors": [ "John Von Neumann", "Oskar Morgenstern" ], "title": "Theory of Games and Economic Behavior, 2nd rev", "venue": "Princeton university press,", "year": 1947 }, { "authors": [ "Jianhao Wang", "Zhizhou Ren", "Terry Liu", "Yang Yu", "Chongjie Zhang" ], "title": "Qplex: Duplex dueling multi-agent q-learning", "venue": "arXiv preprint arXiv:2008.01062,", "year": 2020 }, { "authors": [ "Rundong Wang", "Xu He", "Runsheng Yu", "Wei Qiu", "Bo An", "Zinovi Rabinovich" ], "title": "Learning efficient multi-agent communication: An information bottleneck 
approach", "venue": "International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Yaodong Yang", "Jianye Hao", "Ben Liao", "Kun Shao", "Guangyong Chen", "Wulong Liu", "Hongyao Tang" ], "title": "Qatten: A general framework for cooperative multiagent reinforcement learning", "venue": null, "year": 2002 }, { "authors": [ "Junyu Zhang", "Amrit Singh Bedi", "Mengdi Wang", "Alec Koppel" ], "title": "Cautious reinforcement learning via distributional risk in the dual domain", "venue": "arXiv preprint arXiv:2002.12475,", "year": 2020 }, { "authors": [ "Reddy" ], "title": "2019) or have no general framework and convincing results in complex domains (Lyu", "venue": "Dabney et al.,", "year": 2020 }, { "authors": [ "Chow", "Ghavamzadeh" ], "title": "mean-CVaR optimization problem in MDPs and proposed policy gradient with CVaR, and García et al. (2015) presented a survey on safe RL, which have ignited the research on borrowing risk measures in RL", "venue": "(García et al.,", "year": 2015 }, { "authors": [ "∈ S" ], "title": "With proposition 1, we can leverage the TD learning (Sutton & Barto, 2018) to compute the maximal CVaR value of each agent, thus leading to the maximal global CVaR value. In some scenarios, where risk is not the primal concern for policy optimization, for example corridor scenario, where agents", "venue": null, "year": 2018 }, { "authors": [ "D′ = (st", "ut", "rt" ], "title": "st+1) and D is the replay buffer. θ̄ indicates the parameters of the target network which is periodically copied from θ for stabilizing training", "venue": "(Mnih et al.,", "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "Reinforcement learning (RL) has made remarkable advances in many domains, including arcade video games (Mnih et al., 2015), complex continuous robot control (Lillicrap et al., 2016) and the game of Go (Silver et al., 2017). Recently, many researchers put their efforts to extend the RL methods into multi-agent systems (MASs), such as urban systems (Singh et al., 2020), coordination of robot swarms (Hüttenrauch et al., 2017) and real-time strategy (RTS) video games (Vinyals et al., 2019). Centralized training with decentralized execution (CTDE) (Oliehoek et al., 2008; Kraemer & Banerjee, 2016) has drawn enormous attention via training policies of each agent with access to global trajectories in a centralized way and executing actions given only the local observations of each agent in a decentralized way. Empowered by CTDE, several MARL methods, including valuebased and policy gradient-based, are proposed (Foerster et al., 2017a; Sunehag et al., 2017; Rashid et al., 2018; Son et al., 2019). These MARL methods propose decomposition techniques to factorize the global Q value either by structural constraints or by estimating state-values or inter-agent weights to conduct the global Q value estimation. Among these methods, VDN (Sunehag et al., 2017) and QMIX (Rashid et al., 2018) are representative methods that use additivity and monotonicity structure constraints, respecitively. With relaxed structural constraints, QTRAN (Son et al., 2019) guarantees a more general factorization than VDN and QMIX. Some other methods include incorporating an estimation of advantage values (Wang et al., 2020a) and proposing a multi-head attention method to represent the global values (Yang et al., 2020).\nDespite the merits, most of these works focus on decomposing the global Q value into individual Q values with different constraints and network architectures, but ignore the fact that the expected, i.e., risk-neutral, value decomposition is not sufficient even with CTDE due to the randomness of rewards and the uncertainty in environments, which causes the failure of these methods to train coordinating agents in complex environments. Specifically, these methods only learn the expected values over\nreturns (Rashid et al., 2018) and do not handle the high variance caused by events with extremely high/low rewards to agents but small probabilities, which cause the inaccurate/insufficient estimations of the future returns. Therefore, instead of expected values, learning distributions of future returns, i.e., Q values, is more useful for agents to make decisions. Recently, QR-MIX (Hu et al., 2020) decomposes the estimated joint return distribution (Bellemare et al., 2017; Dabney et al., 2018a) into individual Q values. However, the policies in QR-MIX are still based expected individual Q values. Even further, given that the environment is nonstationary from the perspective of each agent, each agent needs a more dynamic way to choose actions based on the return distributions, rather than simply taking the expected values. However, current MARL methods do not extensively investigate these aspects.\nMotivated by the previous reasons, we intend to extend the risk-sensitive1 RL (Chow & Ghavamzadeh, 2014; Keramati et al., 2020; Zhang et al., 2020) to MARL settings, where risksensitive RL optimizes policies with a risk measure, such as variance, power formula measure value at risk (VaR) and conditional value at risk (CVaR). 
Among these risk measures, CVaR has been gaining popularity due to both theoretical and computational advantages (Rockafellar & Uryasev, 2002; Ruszczyński, 2010). However, there are two main obstacles: i) most of the previous works focus on risk-neutral settings or a static risk level in single-agent settings, ignoring the randomness of rewards and the temporal structure of agents' trajectories (Dabney et al., 2018a; Tang et al., 2019; Ma et al., 2020; Keramati et al., 2020); ii) many methods apply risk measures over Q values for policy execution without using the risk measure values for policy optimization in temporal difference (TD) learning, which causes the global value factorization over expected individual values to yield sub-optimal behaviours in MARL. We provide a detailed review of related works in Appendix A due to the limited space.
In this paper, we propose RMIX, a novel cooperative MARL method with CVaR over the learned distributions of individuals' Q values. Specifically, our contributions are threefold: (i) We first learn the return distributions of individuals by using Dirac Delta functions in order to analytically calculate CVaR for decentralized execution. The resulting CVaR values at each time step are used as policies for each agent via the arg max operation; (ii) We then propose a dynamic risk level predictor for CVaR calculation to handle the temporal nature of stochastic outcomes during executions. The dynamic risk level predictor measures the discrepancy between the embedding of current individual return distributions and the embedding of historical return distributions. The dynamic risk levels are agent-specific and observation-wise; (iii) We finally propose a risk-sensitive Bellman equation along with IGM for centralized training. The risk-sensitive Bellman equation enables CVaR value updates in a recursive form and can be trained with TD learning via a neural network. These also allow our method to achieve temporally extended exploration and enhanced temporal coordination, which are key to solving complex multi-agent tasks. Empirically, we show that RMIX significantly outperforms state-of-the-art methods on many challenging StarCraft II™2 tasks, demonstrating enhanced coordination in many symmetric & asymmetric and homogeneous & heterogeneous scenarios and revealing high sample efficiency. To the best of our knowledge, our work is the first attempt to investigate cooperative MARL with risk-sensitive policies under the Dec-POMDP framework." }, { "heading": "2 PRELIMINARIES", "text": "In this section, we provide the notation and the basic notions we will use in the following. We consider the probability space (Ω, F, Pr), where Ω is the set of outcomes (sample space), F is a σ-algebra over Ω representing the set of events, and Pr is the set of probability distributions. Given a set X, we denote with P(X) the set of all probability measures over X. DEC-POMDP A fully cooperative MARL problem can be described as a decentralised partially observable Markov decision process (Dec-POMDP) (Oliehoek et al., 2016), which can be formulated as a tuple M = ⟨S, U, P, R, Υ, O, N, γ⟩, where s ∈ S denotes the true state of the environment. Each agent i ∈ N := {1, ..., N} chooses an action u_i ∈ U at each time step, giving rise to a joint action vector u := [u_i]_{i=1}^N ∈ U^N. P(s′|s, u) : S × U^N × S ↦ P(S) is a Markovian transition function and governs all state transition dynamics. Every agent shares the same joint reward function R(s, u) : S × U^N ↦ R, and γ ∈ [0, 1) is the discount factor. 
Due to partial observability, each agent has an individual partial observation υ ∈ Υ, according to some observation function O(s, i) : S × N ↦ Υ.
1"Risk" refers to the uncertainty of future outcomes (Dabney et al., 2018a). 2StarCraft II is a trademark of Blizzard Entertainment, Inc.
Each agent also has an action-observation history τ_i ∈ T := (Υ × U)*, on which it conditions its stochastic policy π_i(u_i|τ_i) : T × U ↦ [0, 1]. CVaR CVaR is a coherent risk measure and enjoys computational properties (Rockafellar & Uryasev, 2002) that were derived for loss distributions in discrete decision-making in finance. It has gained popularity in various engineering and finance applications. CVaR (as illustrated in Figure 1) is the expectation of the values that are less than or equal to the α-percentile value of the distribution over returns.
Formally, let X ∈ X be a bounded random variable with cumulative distribution function F(x) = P[X ≤ x], whose inverse CDF is F^{-1}(u) = inf{x : F(x) ≥ u}. The conditional value at risk (CVaR) at level α ∈ (0, 1] of a random variable X is then defined as CVaR_α(X) := sup_ν { ν − (1/α) E[(ν − X)^+] } (Rockafellar et al., 2000) when X is a discrete random variable. Correspondingly, CVaR_α(X) = E_{X∼F}[ X | X ≤ F^{-1}(α) ] (Acerbi & Tasche, 2002) when X has a continuous distribution. The α-percentile value is the value at risk (VaR). For ease of notation, we write CVaR as a function of the CDF F, CVaR_α(F).
Risk-sensitive RL Risk-sensitive RL uses risk criteria over the policy/value and is a sub-field of safe RL (García et al., 2015). Von Neumann & Morgenstern (1947) proposed the expected utility theory, where a decision policy behaves as though it is maximizing the expected value of some utility function. The theory holds when the decision policy is consistent with a particular set of four axioms. This is the most pervasive notion of risk-sensitivity. A policy maximizing a linear utility function is called risk-neutral, whereas concave or convex utility functions give rise to risk-averse or risk-seeking policies, respectively. Many risk measures are used in RL, such as CVaR (Chow et al., 2015; Dabney et al., 2018a) and the power formula (Dabney et al., 2018a). However, little of this work has been done in MARL, and it cannot be easily extended there. Our work fills this gap.
CTDE CTDE has recently attracted attention in deep MARL as a way to deal with nonstationarity while learning decentralized policies. One of the promising ways to exploit the CTDE paradigm is value function decomposition (Sunehag et al., 2017; Rashid et al., 2018; Son et al., 2019), which learns a decentralized utility function for each agent and uses a mixing network to combine these local Q values into a global action-value. It follows the IGM principle, whereby the optimal joint actions across agents are equivalent to the collection of individual optimal actions of each agent (Son et al., 2019). To achieve learning scalability, existing CTDE methods typically learn a shared local value or policy network for agents." }, { "heading": "3 METHODOLOGY", "text": "In this section, we present our framework RMIX, as displayed in Figure 2, where the agent network learns the return distribution of each agent, a risk operator network determines the risk level of each agent, and the mixing network mixes the outputs of the risk operators of agents to produce the global value. 
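To ground the CVaR definition above before walking through these components, a small numerical sketch may help (NumPy; the function name and data below are purely illustrative, not part of RMIX):

    import numpy as np

    def empirical_cvar(returns, alpha):
        """Mean of the worst alpha-fraction of sampled returns (empirical CVaR)."""
        sorted_returns = np.sort(returns)                  # ascending: worst outcomes first
        k = max(1, int(np.floor(alpha * len(sorted_returns))))
        return sorted_returns[:k].mean()

    samples = np.array([2.0, -1.0, 5.0, 0.5, -3.0, 4.0])
    print(empirical_cvar(samples, alpha=0.5))              # mean of the 3 lowest returns
    print(empirical_cvar(samples, alpha=1.0))              # alpha = 1 recovers the plain mean

Here α = 1 is risk-neutral (the plain expectation), while smaller α averages only the low-return tail, matching the risk-averse reading used throughout the paper.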
In the rest of this section, we first introduce the CVaR operator to analytically calculate the CVaR value with the modeled individual distribution of each agent in Section 3.1 and propose the dynamic risk level predictor to alleviate the time-consistency issue in Section 3.2. Then, we introduce the risk-sensitive Bellman equation for both centralized training and decentralized execution in Section 3.3. Finally, we provide the details of the centralized training of RMIX in Section 3.4. All proofs are provided in Appendix B." }, { "heading": "3.1 CVAR OF RETURN DISTRIBUTION", "text": "In this section, we describe how we estimate the CVaR value. The value of CVaR can be either estimated through sampling or computed from the parametrized return distribution (Rockafellar & Uryasev, 2002). However, the sampling method is usually computationally expensive (Tang et al., 2019). Therefore, we let each agent learn a return distribution parameterized by a mixture of Dirac Delta (δ) functions3, which has been demonstrated to be highly expressive and computationally efficient (Bellemare et al., 2017). For convenience, we provide the definition of the Generalized Return Probability Density Function (PDF). Definition 1. (Generalized Return PDF). For a discrete random variable R ∈ [−R_max, R_max] and probability mass function (PMF) P(R = r_k), where r_k ∈ [−R_max, R_max], we define the generalized return PDF as: f_R(r) = Σ_{r_k∈R} P_R(r_k) δ(r − r_k). Note that for any r_k ∈ R, the probability of R = r_k is given by the coefficient of the corresponding δ function, δ(r − r_k).
We define the return distribution of each agent i at time step t as:
Z_i^t(τ_i, u_i^{t−1}) = Σ_{j=1}^m P_j(τ_i, u_i^{t−1}) δ_j(τ_i, u_i^{t−1}), (1)
where m is the number of Dirac Delta functions. δ_j(τ_i, u_i^{t−1}) is the j-th Dirac Delta function and indicates the estimated value, which can be parameterized by neural networks in practice. P_j(τ_i, u_i^{t−1}) is the corresponding probability of the estimated value given local observations and actions. τ_i and u_i^{t−1} are the trajectories (up to that timestep) and actions of agent i, respectively. With the individual return distribution Z_i^t(τ_i, u_i^{t−1}) ∈ Z and cumulative distribution function (CDF) F_{Z_i(τ_i, u_i^{t−1})}, we define the CVaR operator Π_{α_i}, at a risk level α_i (α_i ∈ (0, 1] and i ∈ A), over returns as4
C_i^t(τ_i, u_i^{t−1}, α_i) = Π_{α_i^t} Z_i^t(τ_i, u_i^{t−1}) = CVaR_{α_i^t}(F_{Z_i^t(τ_i, u_i^{t−1})}), (2)
where C ∈ C. As we use CVaR on return distributions, α_i = 1 corresponds to risk-neutrality (the expectation) and α_i → 0 indicates an increasing degree of risk-aversion. CVaR_{α_i} can be estimated in a nonparametric way given an ordering of the Dirac Delta functions {δ_j}_{j=1}^m (Kolla et al., 2019) by leveraging the individual distribution:
CVaR_{α_i} = Σ_{j=1}^m P_j δ_j 1{δ_j ≤ v̂_{m,α_i}}, (3)
where 1{·} is the indicator function and v̂_{m,α_i} is the estimated value at risk, v̂_{m,α_i} = δ_{⌊m(1−α_i)⌋}, with ⌊·⌋ being the floor function. This is a closed-form formulation and can be easily implemented in practice. The optimal action of agent i can be calculated via arg max_{u_i} C_i(τ_i, u_i^{t−1}, α_i). We will introduce the decentralized execution in detail in Section 3.3. 3The Dirac Delta is a generalized function in the theory of distributions and not a function; given its properties, we use the name Dirac Delta function by convention. 4We will omit t in the rest of the paper for notation brevity." }, { "heading": "3.2 DYNAMIC RISK LEVEL PREDICTION", "text": "The values of the risk levels, i.e., α_i, i ∈ A, are important for the agents to make decisions. 
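Before describing how the risk levels are chosen, here is a minimal sketch of the closed-form estimator of Eq. 3 over a Dirac mixture (PyTorch; the names probs and atoms, the ascending sort, and the normalization by the tail mass are illustrative assumptions, not the exact RMIX implementation):

    import torch

    def cvar_from_atoms(probs, atoms, alpha):
        # CVaR of a Dirac mixture for one action: probability-weighted mean of the
        # atoms at or below the estimated VaR (cf. Eq. 3), normalized by tail mass.
        atoms_sorted, idx = torch.sort(atoms)
        probs_sorted = probs.gather(-1, idx)
        cdf = torch.cumsum(probs_sorted, dim=-1)
        mask = (cdf <= alpha).float()
        mask[..., 0] = 1.0                       # always keep at least the worst atom
        tail_mass = (probs_sorted * mask).sum(-1)
        return (probs_sorted * atoms_sorted * mask).sum(-1) / tail_mass

    # Greedy decentralized action: per-action distributions -> CVaR -> arg max.
    m, n_actions = 8, 5
    probs = torch.softmax(torch.randn(n_actions, m), dim=-1)
    atoms = torch.randn(n_actions, m)
    c_values = torch.stack([cvar_from_atoms(probs[a], atoms[a], alpha=0.3)
                            for a in range(n_actions)])
    action = c_values.argmax()

Per-action CVaR values computed this way directly support the greedy action selection described in Section 3.1.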
Most of the previous works take a fixed value of the risk level and do not take into account any temporal structure of agents' trajectories, which can impede centralized training in evolving multi-agent environments. Therefore, we propose dynamic risk level prediction, which determines the risk levels of agents by explicitly taking into account the temporal nature of the stochastic outcomes, to alleviate the time-consistency issue (Ruszczyński, 2010; Iancu et al., 2015) and stabilize the centralized training. Specifically, we represent the risk operator Π_α by a deep neural network, which calculates the CVaR value with the predicted dynamic risk level α over the return distribution.
Figure 3: Agent architecture.
Figure 4: Risk level predictor ψ_i.
We show the architecture of agent i in Figure 3 and illustrate how ψ_i works with agent i for CVaR calculation in practice in Figure 4. As depicted in Figure 4, at time step t, the agent's return distribution is Z_i and its historical return distribution is Z̃_i. We then compute an inner product to measure the discrepancy between the embedding of the individual return distribution, f_emb(Z_i), and the embedding of the past trajectory, φ_i(τ_i^{0:t−1}, u_i^{t−1}), modeled by a GRU (Chung et al., 2014). We discretize the risk level range into K even ranges for the purpose of computing. The k-th dynamic risk level α_i^k is output from ψ_i and the probability of α_i^k is defined as:
P(α_i^k) = exp(⟨f_emb(Z_i)^k, φ_i^k⟩) / Σ_{k′=0}^{K−1} exp(⟨f_emb(Z_i)^{k′}, φ_i^{k′}⟩). (4)
We then take the k ∈ [1, . . . , K] with the maximal probability via arg max and normalize it into (0, 1], so α_i = k/K. The predicted risk level α_i is a scalar value, and it is converted into a K-dimensional mask vector whose first ⌊α_i × K⌋ entries are one and whose remaining entries are zero. This mask vector is used to calculate the CVaR value (Eqns. 2 and 3) of each action-return distribution, which contains K Dirac functions. Finally, we obtain C_i and the policy π_i as illustrated in Figure 3. During training, f_emb updates its weights and the gradients of f_emb are blocked (the dotted arrow in Figure 3) in order to prevent changing the weights of the network of agent i.
We note that the predictor differs from the attention network used in previous works (Iqbal & Sha, 2019; Yang et al., 2020) because the agent's current return distribution and its return distribution of the previous time step are separate inputs to their embeddings, and there are no key, query and value weight matrices. The dynamic risk level predictors allow agents to determine the risk level dynamically based on historical return distributions." }, { "heading": "3.3 RISK-SENSITIVE BELLMAN EQUATION", "text": "Motivated by the success of optimizing the CVaR value in single-agent RL (Chow & Ghavamzadeh, 2014), RMIX aims to maximize the CVaR value of the joint return distribution, rather than the expectation (Rashid et al., 2018; Son et al., 2019). As proved in Theorem 1, the maximizing operation over CVaR values satisfies the IGM principle, which implies that maximizing the CVaR value of the joint return distribution is equivalent to maximizing the CVaR value of each agent. Theorem 1. 
In decentralized execution, given α = {α_i}_{i=0}^{n−1}, we define the global arg max performed on the global CVaR C^tot(τ, u, α) as:
arg max_u C^tot(τ, u, α) = ( arg max_{u_1} C_1(τ_1, u_1, α_1), · · · , arg max_{u_n} C_n(τ_n, u_n, α_n) ), (5)
where τ and u are the trajectories (up to that timestep) and actions of all agents, respectively. The individuals' maximization operation over return distributions defined above satisfies IGM and allows each agent to participate in decentralised execution solely by choosing greedy actions with respect to its C_i(τ_i, u_i, α_i).
To maximize the CVaR value of each agent, we define the risk-sensitive Bellman operator T:
T C^tot(s, u, α) := E[ R(s, u) + γ max_{u′} C^tot(s′, u′, α′) ], (6)
where α and α′ are the agents' static risk levels or the dynamic risk levels output from the dynamic risk level predictor ψ at each time step. The risk-sensitive Bellman operator T operates on the CVaR value of the agent and the reward, and it can be proved to be a contracting operation, as shown in Proposition 1. Therefore, we can leverage TD learning (Sutton & Barto, 2018) to compute the maximal CVaR value of each agent, thus leading to the maximal global CVaR value. Proposition 1. T : C ↦ C is a γ-contraction." }, { "heading": "3.4 CENTRALIZED TRAINING", "text": "We introduce the centralized training of RMIX. We utilize the monotonic mixing network of QMIX, which is a value decomposition network built with hypernetworks (Ha et al., 2017) that maintains monotonicity and has shown success in cooperative MARL. Based on the IGM (Son et al., 2019) principle, we define monotonic IGM between C^tot and C_i for RMIX:
∂C^tot/∂C_i ≥ 0, ∀i ∈ {1, 2, . . . , N}, (7)
where C^tot is the total CVaR value and C_i(τ_i, u_i) is the individual CVaR value of agent i; C^tot can be considered a latent combination of the agents' implicit CVaR values into the global CVaR value. Following the CTDE paradigm, we define the TD loss of RMIX as:
L_Π(θ) := E_{D′∼D}[ (y_t^tot − C^tot(s_t, u_t, α_t))^2 ], (8)
where y_t^tot = r_t + γ max_{u′} C^tot_θ̄(s_{t+1}, u′, α′), and (y_t^tot − C^tot_θ(s_t, u_t, α_t)) is our CVaR TD error for updating the CVaR values. θ denotes the parameters of C^tot, which can be modeled by a deep neural network, and θ̄ denotes the parameters of the target network, which is periodically copied from θ to stabilize training (Mnih et al., 2015). While training, gradients from Z_i are blocked to avoid the dynamic risk level predictor changing the weights of the agents' network. We train RMIX in an end-to-end manner. ψ_i is trained together with the agent network via the loss defined in Eq. 8. During training, f_emb updates its weights while the gradients of f_emb are blocked in order to prevent changing the weights of the return distribution in agent i. The pseudo code of RMIX is shown in Algorithm 1 in Appendix D. We present our framework in Figure 2. Our framework is flexible and can be easily used in many cooperative MARL methods." }, { "heading": "4 EXPERIMENTS", "text": "We empirically evaluate our methods on various StarCraft II scenarios. In particular, we are interested in robust cooperation in complex asymmetric and homogeneous/heterogeneous scenarios. Additional introductions of baselines, scenarios and results are in Appendices C, E, F and G." }, { "heading": "4.1 EXPERIMENT SETUP", "text": "StarCraft II We consider the SMAC (Samvelyan et al., 2019) benchmark5 (screenshots of some scenarios are in Figure 5), a challenging set of cooperative StarCraft II maps for micromanagement, as our evaluation environments. 
We evaluate our methods every 10,000 training steps by running 32 episodes in which agents trained with our methods battle built-in game bots. We report the mean test won rate (test_battle_won_mean, the percentage of episodes won by the MARL agents) along with one standard deviation of the won rate (shaded in figures). Due to limited page space, we present the results of our methods and baselines on 8 scenarios (we train our methods and baselines on 17 SMAC scenarios): corridor, 3s5z_vs_3s6z, 6h_vs_8z, 5m_vs_6m, 8m_vs_9m, 10m_vs_11m, 27m_vs_30m and MMM2. Table 1 shows detailed information on these scenarios.
5https://github.com/oxwhirl/smac
Baselines and training The baselines are IQL (Tampuu et al., 2017), VDN (Sunehag et al., 2017), COMA (Foerster et al., 2017a), QMIX (Rashid et al., 2018), QTRAN (Son et al., 2019), MAVEN (Mahajan et al., 2019) and Qatten (Yang et al., 2020). We implement our methods on PyMARL6 and use 5 random seeds for training each method on 17 SMAC scenarios. We carry out experiments on NVIDIA Tesla V100 GPUs (16 GB).
6https://github.com/oxwhirl/pymarl
4.2 EXPERIMENT RESULTS
RMIX demonstrates substantial superiority over baselines in asymmetric and homogeneous scenarios, as depicted in Figure 7. RMIX outperforms baselines in the asymmetric homogeneous scenarios 5m_vs_6m, 8m_vs_9m, 10m_vs_11m and 27m_vs_30m (hard games). In 3s5z_vs_3s6z (asymmetric heterogeneous, very hard game) and MMM2 (symmetric heterogeneous, hard game), RMIX also shows leading performance over baselines. RMIX learns the micro-tricks (wall off, focus fire) faster and better in the very hard corridor and 6h_vs_8z. RMIX improves coordination in a sample-efficient way via risk-sensitive policies. We summarize the performance of RMIX and QMIX on 17 SMAC scenarios in Figure 6. Readers can refer to Figures 19 and 20 for more results. We present results of RMIX on 3s5z_vs_3s6z and 6h_vs_8z over 8 million training steps in Figures 13 and 14. Although training 27m_vs_30m is memory-consuming, we also present results for 2.5 million training steps, as depicted in Figure 15.
Interestingly, as illustrated in Figure 8, RMIX also demonstrates leading exploration performance over baselines on the very hard corridor scenario (in Figure 5(d)), where there is a narrow corridor connecting two separate rooms, and agents should learn to cooperatively combat the opponents to avoid being beaten by opponents with the divide-and-conquer strategy. RMIX outperforms MAVEN (Mahajan et al., 2019), which was originally proposed for tackling multi-agent exploration problems, both in sample efficiency and final results. After 4 million training steps, RMIX starts to converge while MAVEN starts to converge after over 7 million training steps.
In addition, we compare RMIX with QR-MIX (Hu et al., 2020). We implement QR-MIX with PyMARL and train it on 5m_vs_6m (hard), corridor (very hard), 6h_vs_8z (very hard) and 3s5z_vs_3s6z (very hard) with 3 random seeds for each scenario. The hyper-parameters used during training are taken from the QR-MIX paper. As shown in Figure 9, RMIX shows leading performance and superior sample efficiency over QR-MIX on 5m_vs_6m, 27m_vs_30m and 6h_vs_8z. With distributional RL, QR-MIX presents slightly better performance on 3s5z_vs_3s6z. We present more results of RMIX vs QR-MIX in Appendix G.3.
We conduct an ablation study by fixing the risk level to 1, which yields a risk-neutral method that we name RMIX (α = 1). 
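The α = 1 ablation is easy to see from the mask formulation in Section 3.2: keeping all K atoms collapses CVaR to the plain expectation. A short check (NumPy; the values are illustrative):

    import numpy as np

    probs = np.array([0.2, 0.5, 0.3])
    atoms = np.array([-1.0, 0.5, 2.0])                      # sorted ascending
    mask = np.arange(len(atoms)) < int(1.0 * len(atoms))    # alpha = 1 keeps every atom
    cvar_full = (probs * atoms * mask).sum()                # equals the mean of the mixture
    assert np.isclose(cvar_full, probs @ atoms)

So RMIX (α = 1) optimizes exactly the usual expected values.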
We present results of RMIX, RMIX (α = 1) and QMIX on 4 scenarios in Figure 10. RMIX outperforms RMIX (α = 1) in many heterogeneous and asymmetric scenarios, demonstrating the benefits of learning risk-sensitive MARL policies in complex scenarios where the potential for loss should be taken into consideration in coordination. Intuitively, in asymmetric scenarios, agents can easily be defeated by the opponents. As a consequence, coordination between agents must be cautious in order to win the game, and the cooperative strategies in these scenarios should avoid massive casualties in the starting stage of the game. Clearly, our risk-sensitive policy representation works better than vanilla expected Q values in evaluation. In heterogeneous scenarios, the action space and observation space differ among different types of agents, and methods with vanilla expected action values are inferior to RMIX.
To show that our proposed method is flexible with other mixing networks, in a further ablation study we use additivity of the individual CVaR values to represent the global CVaR value as C^tot(τ, u, α) = C_1(τ_1, u_1, α_1) + · · · + C_n(τ_n, u_n, α_n). Following the training of RMIX, we name this method Risk Decomposition Network (RDN). We use the experiment setup of VDN and train RDN on 5 SMAC scenarios. With CVaR values, RDN outperforms VDN and QMIX on 1c3s5z, 5m_vs_6m, 8m_vs_9m and MMM2 with convincing improvements, as depicted in Figure 11. In some scenarios, for example 1c3s5z and 8m_vs_9m, the converged performance is even equal to that of RMIX, which demonstrates that RMIX is flexible with additive mixing networks. Overall, with the new policy representation and the additive decomposition network, we gain convincing improvements of RDN over VDN.
We present how the risk level α of each agent changes during an episode and the emergent cooperative strategies between agents in the results analysis of RMIX in Appendix G.2, due to limited space." }, { "heading": "5 CONCLUSION", "text": "In this paper, we propose RMIX, a novel risk-sensitive MARL method with CVaR over the learned distributions of individuals' Q values. Our main contributions are threefold: (i) We first learn the return distributions of individuals to analytically calculate CVaR for decentralized execution; (ii) We then propose a dynamic risk level predictor for CVaR calculation to handle the temporal nature of the stochastic outcomes during executions; (iii) We finally propose a risk-sensitive Bellman equation along with Individual-Global-MAX (IGM) for MARL training. Empirically, we show that RMIX significantly outperforms state-of-the-art methods on many challenging StarCraft II tasks, demonstrating enhanced coordination in many complex scenarios and revealing high sample efficiency. To the best of our knowledge, our work is the first attempt to investigate cooperative MARL with risk-sensitive policies under the Dec-POMDP framework." }, { "heading": "A RELATED WORKS", "text": "As deep reinforcement learning (DRL) becomes prevalent (Mnih et al., 2015; Schulman et al., 2017), recent years have witnessed a renaissance in cooperative MARL with deep learning. 
However, there are several inevitable issues, including the nonstationarity of the environment from the view of individual agents (Foerster et al., 2017b), credit assignment in cooperative scenarios with shared global rewards (Sunehag et al., 2017; Rashid et al., 2018), the lack of coordination and communication in cooperative scenarios (Jiang & Lu, 2018; Wang et al., 2020b) and the failure to consider opponents' strategies when learning agent policies (He et al., 2016). Aiming to address these issues, centralized training with decentralized execution (CTDE) (Oliehoek et al., 2008; Kraemer & Banerjee, 2016) has drawn enormous attention via training policies of each agent with access to global trajectories in a centralized way and executing actions given only the local observations of each agent in a decentralized way. Several MARL methods have been proposed (Lowe et al., 2017; Foerster et al., 2017a; Sunehag et al., 2017; Rashid et al., 2018; Son et al., 2019; Yang et al., 2020; Wang et al., 2020a), including value-based and policy-gradient methods. Among these methods, VDN (Sunehag et al., 2017) and QMIX (Rashid et al., 2018) are representative methods that use value decomposition of the joint action-value function by adopting additivity and monotonicity structural constraints. Free from such structural constraints, QTRAN (Son et al., 2019) guarantees a more general factorization than VDN and QMIX; however, its linear affine transformation fails to scale up in complex multi-agent scenarios, for example, StarCraft II environments.
However, most of these works focus on decomposing the global Q value into individual Q values with different formulas and network architectures, either with structural constraints, for example additivity and monotonicity (Sunehag et al., 2017; Rashid et al., 2018), or by estimating values that form a new global Q value estimation, for example representing Q values via the summation of state-value and advantage, or via multi-head attention networks (Wang et al., 2020a; Yang et al., 2020). Current MARL methods neglect the limited representation of agent values, which fails to consider the agent-level impact of individuals on the whole system when transforming individual utilities into global values, leading to brittle training in complex scenarios. In particular, the problem of random cost underlying the nonstationarity of the environment, a.k.a. risk-sensitive learning, which is very important for many real-world applications, has rarely been investigated. Current works are either confined to simple settings (Reddy et al., 2019) or lack a general framework and convincing results in complex domains (Lyu & Amato, 2020). Further research should be done on risk-sensitive cooperative MARL. We propose RMIX to fill this gap.
Recent advances in distributional RL (Bellemare et al., 2017; Dabney et al., 2018a;b) focus on learning the distribution over returns. However, even with return distributions, these works still focus on either risk-neutral settings or a static risk level in the single-agent setting, which neglects the ubiquitous and significant risk-sensitive problems in many multi-agent real-world applications, including the cooperation of pipeline robots in factories, the coordination of warehouse robots, etc. 
This is very common in real-world applications, especially in highly dynamic tasks such as military action, resource allocation, financial portfolio management and the Internet of Things (IoT).
Chow & Ghavamzadeh (2014) considered the mean-CVaR optimization problem in MDPs and proposed a policy gradient with CVaR, and García et al. (2015) presented a survey on safe RL; these works ignited the research on borrowing risk measures in RL (García et al., 2015; Tamar et al., 2015; Tang et al., 2019; Hiraoka et al., 2019; Majumdar & Pavone, 2020; Keramati et al., 2020; Ma et al., 2020). However, these works focus on single-agent settings, where there is one agent interacting with the environment, in contrast to dynamic and non-stationary multi-agent environments. The merit of CVaR in the optimization of MARL has yet to be explored. We explore the CVaR risk measure in our methods and demonstrate the leading performance of our MARL methods." }, { "heading": "B PROOFS", "text": "We present proofs of the propositions introduced in the previous sections. The proposition and equation numbers are reused in the restated propositions.
Assumption 1. The mean rewards are bounded in a known interval, i.e., r ∈ [−R_max, R_max].
This assumption means we can bound the absolute value of the Q-values as |Q_sa| ≤ Q_max = H R_max, where H is the maximum time horizon length in episodic tasks. Proposition 1. T : C ↦ C is a γ-contraction.
Proof. We consider the sup-norm contraction,
|T C^(1)(s, u, α^(1)) − T C^(2)(s, u, α^(2))| ≤ γ ‖C^(1)(s, u, α^(1)) − C^(2)(s, u, α^(2))‖_∞, ∀s ∈ S, u ∈ U, α^(i), i ∈ {1, 2} ∈ A. (9)
The sup-norm is defined as ‖C‖_∞ = sup_{s∈S, u∈U, α∈A} |C(s, u)| and C ∈ R.
In {C_i}_{i=0}^{n−1}, the risk level is fixed and can be considered an implicit input. Given two risk level sets α^(1) and α^(2), and two different return distributions Z^(1) and Z^(2), we prove:
|T C^(1) − T C^(2)|
≤ max_{s,u} |[T C^(1)](s, u, α^(1)) − [T C^(2)](s, u, α^(1))|
= max_{s,u} |γ Σ_{s′} P(s′|s, u) ( max_{u′} C^(1)(s′, u′, α′) − max_{u′} C^(2)(s′, u′, α′) )|
≤ γ max_{s′} |max_{u′} C^(1)(s′, u′, α′) − max_{u′} C^(2)(s′, u′, α′)|
≤ γ max_{s′,u′} |C^(1)(s′, u′, α′) − C^(2)(s′, u′, α′)| = γ ‖C^(1) − C^(2)‖_∞. (10)
This further implies that
|T C^(1) − T C^(2)| ≤ γ ‖C^(1) − C^(2)‖_∞, ∀s ∈ S, u ∈ U, α^(i), i ∈ {1, 2} ∈ A. (11)
With Proposition 1, we can leverage TD learning (Sutton & Barto, 2018) to compute the maximal CVaR value of each agent, thus leading to the maximal global CVaR value. In some scenarios risk is not the primary concern for policy optimization, for example the corridor scenario, where agents should learn to explore to win the game. There, RMIX is less sample efficient at learning the optimal policy than its risk-neutral variant, RMIX (α = 1). As we can see in Figure 16(b) in Section G.1.1, RMIX learns more slowly than RMIX (α = 1) because it relies on the dynamic risk predictor; since the predictor is trained together with the agent, it takes more samples in these environments. Interestingly, RMIX still shows very good performance over the other baselines. Theorem 1. In decentralized execution, given α = {α_i}_{i=0}^{n−1}, we define the global arg max performed on the global CVaR C^tot(τ, u, α) as:
arg max_u C^tot(τ, u, α) = ( arg max_{u_1} C_1(τ_1, u_1, α_1), · · · , arg max_{u_n} C_n(τ_n, u_n, α_n) ), (5)
where τ and u are the trajectories (up to that timestep) and actions of all agents, respectively. 
The individuals' maximization operation over return distributions defined above satisfies IGM and allows each agent to participate in decentralised execution solely by choosing greedy actions with respect to its C_i(τ_i, u_i, α_i).
Proof. With the monotonicity network f_m, in RMIX we have
C^tot(τ, u, α) = f_m(C_1(τ_1, u_1, α_1), . . . , C_n(τ_n, u_n, α_n)). (12)
Consequently, we have
C^tot(τ, {arg max_{u′} C_i(τ_i, u′, α_i)}_{i=0}^{n−1}) = f_m({max_{u′} C_i(τ_i, u′, α_i)}_{i=0}^{n−1}). (13)
By the monotonicity property of f_m, we can easily derive that if, for j ∈ {0, 1, . . . , n−1}, u_j^* = arg max_{u′} C_j(τ_j, u′, α_j), α_j^* ∈ (0, 1] is the optimal risk level given the current return distributions and historical return distributions, and the actions of the other agents are not the best actions, then we have
f_m({C_i(τ_i, u_i, α_i)}_{i=0}^{n−1}) ≤ f_m({C_i(τ_i, u_i, α_i)}_{i=0, i≠j}^{n−1}, C_j(τ_j, u_j^*, α_j)). (14)
So, for all agents, ∀j ∈ {0, 1, . . . , n−1}, u_j^* = arg max_{u′} C_j(τ_j, u′, α_j), we have
f_m({C_i(τ_i, u_i, α_i)}_{i=0}^{n−1}) ≤ f_m({C_i(τ_i, u_i, α_i)}_{i=0, i≠j}^{n−1}, C_j(τ_j, u_j^*)) ≤ f_m({C_i(τ_i, u_i^*, α_i)}_{i=0}^{n−1}) = max_{{u_i}_{i=0}^{n−1}} f_m({C_i(τ_i, u_i, α_i)}_{i=0}^{n−1}). (15)
Therefore, we get
max_{{u_i, α_i}_{i=0}^{n−1}} f_m({C_i(τ_i, u_i, α_i)}_{i=0}^{n−1}) = max_{u,α} C^tot(τ, u, α), (16)
which implies
max_u C^tot(τ, u, α) = C^tot(τ, {arg max_{u′,α′} C_i(τ_i, u′, α_i)}_{i=0}^{n−1}). (17)
Proposition 2. For any agent i, i ∈ {0, . . . , n−1}, ∃λ(τ_i, u_i) ∈ (0, 1] such that C_i(τ_i, u_i) = λ(τ_i, u_i) E[Z_i(τ_i, u_i)].
Proof. We first note that, given a return distribution Z, a return random variable Z and a risk level α ∈ A, for all z, Π_α Z can be rewritten as E[Z | Z < z] < E[Z]. This can be easily proved by following Privault (2020)'s proof. Thus we get Π_α Z < E[Z], and there exists λ(τ_i, u_i) ∈ (0, 1], which is a function of the agent's trajectories, such that Π_α Z_i(τ_i, u_i) = λ(τ_i, u_i) E[Z_i(τ_i, u_i)].
Proposition 2 implies that we can view the CVaR value as a truncation of the Q values to the lower region of the return distribution Z_i(τ_i, u_i). CVaR can be decomposed into two factors: λ(τ_i, u_i) and E[Z_i(τ_i, u_i)]." }, { "heading": "C ADDITIONAL BACKGROUND", "text": "We introduce additional background on cooperative MARL algorithms, including QMIX, MAVEN and Qatten, for the convenience of readers who want to know more about these algorithms.
Q-based cooperative MARL is concerned with estimating an accurate action-value function to select the actions with maximum expected returns. The optimal Q-function is defined as (Rashid et al., 2018):
Q^tot_θ(s, u) := E[ Σ_{t=0}^∞ γ^t r(s_t, u_t, θ) | s_{t+1} ∼ P(· | s_t, u_t, θ), u_{t+1} = arg max Q^tot_θ(s_{t+1}, ·) ] = r(s, u, θ) + γ E[ max Q^tot_θ(s′, ·) | s′ ∼ P(· | s, u, θ) ],
where θ denotes the parameters of Q^tot, which can be modeled by deep neural networks working as the Q-function approximator. This Q-network can be trained by minimizing the loss function in a supervised-learning fashion, as defined below:
L(θ) := E_{D′∼D}[ (y_t^tot − Q^tot_θ(s_t, u_t))^2 ], with the target y_t^tot = r_t + γ Q^tot_θ̄(s_{t+1}, arg max_u Q^tot_θ(s_{t+1}, ·)),
where D′ = (s_t, u_t, r_t, s_{t+1}) and D is the replay buffer. θ̄ indicates the parameters of the target network, which is periodically copied from θ to stabilize training (Mnih et al., 2015). 
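As a rough sketch of this TD update (a minimal PyTorch loop; q_net, target_net and the batch fields are illustrative placeholders, not the PyMARL implementation):

    import torch
    import torch.nn.functional as F

    def td_update(q_net, target_net, optimizer, batch, gamma=0.99):
        """One TD step on the joint Q network; batch holds s, u, r, s_next tensors,
        with u an int64 tensor of shape [batch, 1] indexing the taken joint action."""
        q_taken = q_net(batch["s"]).gather(-1, batch["u"])      # Q_tot(s, u)
        with torch.no_grad():                                   # target network is frozen
            y = batch["r"] + gamma * target_net(batch["s_next"]).max(-1, keepdim=True).values
        loss = F.mse_loss(q_taken, y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

Periodically copying q_net's parameters into target_net completes the stabilization scheme described above.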
The network is trained in a centralized way with all partial observations accessible to all agents.
QMIX QMIX (Rashid et al., 2018) is a well-known multi-agent Q-learning algorithm in the centralised training and decentralised execution paradigm, which restricts the joint action Q-values it can represent to be a monotonic mixing of each agent's utilities in order to enable decentralisation and value decomposition:
Q_mix := { Q^tot | Q^tot(τ, u) = f_m(Q_1(τ_1, u_1), . . . , Q_n(τ_n, u_n)), ∂f_m/∂Q_a ≥ 0, Q_a(τ, u) ∈ R },
and the arg max operator is used to obtain Q^tot for centralized training via a TD loss similar to DQN (Mnih et al., 2015):
arg max_u Q^tot(τ, u) = ( arg max_{u_1} Q_1(τ_1, u_1), . . . , arg max_{u_n} Q_n(τ_n, u_n) ).
The architecture is shown in Figure 12. The monotonic mixing network f_m is parametrised as a feedforward network whose non-negative weights are generated by hypernetworks (Ha et al., 2017) with the state as input.
MAVEN MAVEN (Mahajan et al., 2019) (multi-agent variational exploration) overcomes the detrimental effects of QMIX's monotonicity constraint on exploration by learning a diverse ensemble of monotonic approximations with the help of a latent space. It consists of value-based agents that condition their behaviour on a shared latent variable z controlled by a hierarchical policy that replaces ε-greedy exploration with committed exploration. Thus, fixing z, each joint action-value function is a monotonic approximation to the optimal action-value function that is learnt with Q-learning.
Qatten Qatten (Yang et al., 2020) explicitly considers the agent-level impact of individuals on the whole system when transforming the individual Q_i's into Q^tot. It theoretically derives a general formula of Q^tot in terms of Q_i, based on a multi-head attention formulation to approximate Q^tot, resulting not only in a refined representation of Q^tot with an agent-level attention mechanism, but also in a tractable maximization algorithm for decentralized policies." }, { "heading": "D PSEUDO CODE OF RMIX", "text": "Algorithm 1: RMIX input: K, γ;
1 Initialize the parameters θ of the agent network, risk operator and monotonic mixing network;
2 Initialize the parameters θ̄ of the target agent network, risk operator and monotonic mixing network;
3 Initialize the replay buffer D;
4 for e ← 0 to MAX_EPISODE do
5 Start a new episode;
6 while EPISODE_IS_NOT_TERMINATED do
7 Get the global state s_t;
8 for agent i ← 0 to n−1 do
9 Get observation o_i^t from the environment;
10 Get the action of the last step, u_i^{t−1}, from the environment;
11 Estimate the local return distribution Z_i^t(o_i^t, u_i^{t−1});
12 Predict the risk level α_i = k*/K, where k* = arg max_k exp(⟨f_emb(Z_i)^k, φ_i^k⟩) / Σ_{k′=0}^{K−1} exp(⟨f_emb(Z_i)^{k′}, φ_i^{k′}⟩), k ∈ {1, . . . , K};
13 Calculate the CVaR values C_i^t(o_i^t, u_i^{t−1}, α_i) = Π_{α_i^t} Z_i^t(o_i^t, u_i^{t−1});
14 Get the action u_i^t = arg max_{u^t} C_i^t(o_i^t, u_i^{t−1}, α_i);
15 Concatenate u_i^t, i ∈ [0, . . . , n−1], into u^t;
16 Execute u^t in the environment;
17 Receive the global reward r_t and observe a new state s′;
18 Store (s_t, {o_i^t}_{i∈[0,...,n−1]}, u^t, r_t, s′) in the replay buffer D;
19 if UPDATE then
20 Sample a mini-batch D′ from the replay buffer D;
21 For each sample in D′, calculate the CVaR values C_i by following the steps in lines 9-13;
22 Concatenate the CVaR values {[C_0^0, . . . , C_{n−1}^0]_0, . . . , [C_0^{|D′|−1}, . . . , C_{n−1}^{|D′|−1}]_{|D′|−1}};
23 For each [C_0^j, . . . 
, C_{n−1}^j], j ∈ [0, . . . , |D′|−1], calculate C_j^tot with the mixing network;
24 Calculate the target value y^tot = r_t + γ max_{u′} C^tot_θ̄;
25 Calculate the TD loss L_Π(θ) := E_{D′∼D}[ (y^tot − C^tot)^2 ];
26 Update θ by minimizing the TD loss;
27 Update θ̄: θ̄ ← θ;
28 return θ;" }, { "heading": "E ADDITIONAL ENVIRONMENT INTRODUCTION", "text": "The SMAC benchmark is a challenging set of cooperative StarCraft II maps for micromanagement developed by Samvelyan et al. (2019), built on DeepMind's PySC2 (Vinyals et al., 2017). We introduce the states and observations, action space and rewards of SMAC, and the environmental settings of RMIX below.
States and Observations At each time step, agents receive local observations within their field of view. This encompasses information about the map within a circular area around each unit with a radius equal to the sight range. The sight range makes the environment partially observable for each agent. An agent can only observe other agents if they are both alive and located within its sight range. Hence, there is no way for agents to distinguish whether their teammates are far away or dead. The feature vector observed by each agent contains the following attributes for both allied and enemy units within the sight range: distance, relative x, relative y, health, shield, and unit type. All Protoss units have shields, which serve as a source of protection to offset damage and can regenerate if no new damage is received. The global state is composed of the joint observations without the sight range restriction, which can be obtained during training in the simulations. All features, both in the global state and in the individual observations of agents, are normalized by their maximum values.
Action Space The discrete set of actions which agents are allowed to take consists of move[direction], attack[enemy id], stop and no-op. Dead agents can only take the no-op action while live agents cannot. Agents can only move with a fixed movement amount of 2 in four directions: north, south, east, or west. To ensure decentralization of the task, agents are restricted to using the attack[enemy id] action only towards enemies in their shooting range. This additionally constrains the ability of the units to use the built-in attack-move macro-actions on enemies that are far away. The shooting range is set to 6 for all agents. Having a larger sight range than shooting range allows agents to make use of the move commands before starting to fire. The unit behavior of automatically responding to enemy fire without being explicitly ordered is also disabled.
Rewards At each time step, the agents receive a joint reward equal to the total damage dealt on the enemy units. In addition, agents receive a bonus of 10 points after killing each opponent, and 200 points after killing all opponents for winning the battle. The rewards are scaled so that the maximum cumulative reward achievable in each scenario is around 20.
Environmental Settings of RMIX The difficulty level of the built-in game AI we use in our experiments is level 7 (very difficult) by default, as in many previous works (Rashid et al., 2018; Mahajan et al., 2019; Yang et al., 2020). The scenarios used in Section 4 are shown in Table 1. We present the table of all scenarios in SMAC in Table 1 and the corresponding memory usage for training each scenario in Table 2. The Ally Units are agents trained by MARL methods and the Enemy Units are built-in game bots. 
For example, 5m_vs_6m indicates that the number of MARL agents is 5 while the number of opponents is 6. The agent (unit) type is marine7. This asymmetric setting is hard for MARL methods.
7A type of unit (agent) in StarCraft II. Readers can refer to https://liquipedia.net/starcraft2/Marine_(Legacy_of_the_Void) for more information." }, { "heading": "F ADDITIONAL TRAINING DETAILS", "text": "The baselines are listed in Table 3. To make a fair comparison, we use the episode runner (a single-process environment for training, as opposed to the parallel runner) defined in PyMARL to run all methods. The evaluation interval is 10,000 for all methods. We use uniform probabilities to estimate Z_i(·, ·) for each agent. We use the other hyper-parameters used for training in the original papers of all baselines. The metrics are calculated with a moving window size of 15. Experiments are carried out on NVIDIA Tesla V100 GPUs (16 GB). We also provide the memory usage of the baselines (given the current size of the replay buffer) for training each scenario of the StarCraft II domain in SMAC.
We use the same neural network architecture for the agent as used by QMIX (Rashid et al., 2018). The trajectory embedding network φ_i is similar to the network of the agent." }, { "heading": "G ADDITIONAL EXPERIMENTS ON SMAC", "text": "We use the same hyper-parameters in the ablation study unless otherwise specified.
G.1.1 STATIC RISK LEVEL
We present more results of RMIX and RMIX (α = 1) in Figures 13, 14 and 15. In the very hard 3s5z_vs_3s6z and 6h_vs_8z games, RMIX and RMIX (α = 1) outperform the baselines. Surprisingly, RMIX is even slightly better in 6h_vs_8z, where the micro-trick (focus fire) is learned. In conclusion, RMIX is also capable of learning hard micro-trick tasks in StarCraft II. In the asymmetric scenario 27m_vs_30m, RMIX shows leading performance as well.
We also conduct an ablation study with static risk levels of α = 0.1 and α = 0.3, called RMIX-static. As shown in Figure 16, with a static risk level, RMIX-static shows steady progress over time, but its performance is lower than that of RMIX on 6h_vs_8z and 5m_vs_6m. On the asymmetric scenario 5m_vs_6m, the converged performance of RMIX-static is 0.6, which is lower than RMIX's (0.8).
In some scenarios risk is not the primary concern for policy optimization, for example the corridor scenario, where agents should learn to explore to win the game. There, RMIX is less sample efficient at learning the optimal policy than its risk-neutral variant, RMIX (α = 1), as shown in Figure 16(b). RMIX learns more slowly than RMIX (α = 1) because it relies on the dynamic risk predictor; since the predictor is trained together with the agent, it takes more samples in these environments. Interestingly, RMIX still shows very good performance over the other baselines.
G.2 ADDITIONAL RESULTS ANALYSIS
We provide an additional results analysis of RMIX on corridor in Figure 17. There are 6 RMIX agents in corridor. For brevity of visualization, we use the data of 3 agents (agents 0, 1 and 3) to analyse the results and to demonstrate that RMIX agents have learned to address the time-consistency issue.
Figure 17: RMIX results analysis on corridor. We use the trained model of RMIX and run the model to collect one episode of data, including the game replay, states, actions, rewards and α values (risk levels). We show the rewards of one episode and the corresponding α value each agent predicts per time step in rows one and two. We provide descriptions and analyses of how agents learn time-consistent α values in the remaining rows. Pictures are screenshots from the game replay. Readers can watch the game replay via this anonymous link: https://youtu.be/J-PG0loCDGk. Interestingly, it also shows emergent cooperation strategies between agents at different time steps during the episode, which demonstrates the superiority of RMIX.
G.3 ADDITIONAL RESULTS
We conduct experiments with RMIX, QMIX, MAVEN, Qatten, VDN, COMA and IQL on 17 SMAC scenarios. We show the results of test_battle_won_mean and test_return_mean of the aforementioned methods in Figures 19 and 20, respectively. RMIX shows leading performance on most scenarios, ranging from symmetric homogeneous scenarios to asymmetric heterogeneous scenarios. Surprisingly, RMIX also shows superior performance on scenarios where micro-tricks must be learned to win the game.
In addition, we compare RMIX with QR-MIX (Hu et al., 2020). Unlike QMIX, QR-MIX decomposes the estimated joint return distribution into individual Q values. We implement QR-MIX with PyMARL using the hyper-parameters from the QR-MIX paper and train it on 3m (easy), 1c3s5z (easy), 5m_vs_6m (hard), 8m_vs_9m (hard), 10m_vs_11m (hard), 27m_vs_30m (very hard), MMM2 (very hard), 3s5z_vs_3s6z (very hard), corridor (very hard) and 6h_vs_8z (very hard) with 3 random seeds for each scenario. Results are shown in Figure 18." } ]
2020
null
SP:21f870f084d0b9b91f258cf893c66fd207570236
[ "The paper presents a knowledge-driven prototypical learning strategy for few-shot classification tasks. The main idea of this work is to introduce a set of concepts defined in the subspaces of inputs and represent each class as a group of concept prototypes for few-shot learning. Following the prototypical networks, the method first computes the concept embeddings of an input, and then takes the summation of the distances between those embeddings and their corresponding concept prototypes in each class to estimate the class probability. The experiments validates the proposed methods on 4 benchmarks in three different domains, including vision, language and biology. For the biology task, the authors also develop a new benchmark on cross-organ cell type classification. ", "This paper introduces potential use of intermediate structured representation of input space called “concepts” which are most likely human-interpretable. This intermediate space is then used for few-shot learning instead of using only the input space. This leads to better classification performance on the task, and it shows that injecting human-interpretable structured representation into task correlates with better performance (as one would hope). The paper uses datasets from different domains and shows improvement over approaches that don’t use the above defined “concepts”." ]
Developing algorithms that are able to generalize to a novel task given only a few labeled examples represents a fundamental challenge in closing the gap between machine- and human-level performance. The core of human cognition lies in the structured, reusable concepts that help us to rapidly adapt to new tasks and provide reasoning behind our decisions. However, existing meta-learning methods learn complex representations across prior labeled tasks without imposing any structure on the learned representations. Here we propose COMET, a meta-learning method that improves generalization ability by learning to learn along human-interpretable concept dimensions. Instead of learning a joint unstructured metric space, COMET learns mappings of high-level concepts into semi-structured metric spaces, and effectively combines the outputs of independent concept learners. We evaluate our model on few-shot tasks from diverse domains, including fine-grained image classification, document categorization and cell type annotation on a novel dataset from a biological domain developed in our work. COMET significantly outperforms strong meta-learning baselines, achieving 6–15% relative improvement on the most challenging 1-shot learning tasks, while, unlike existing methods, providing interpretations behind the model's predictions.
[ { "affiliations": [], "name": "FEW-SHOT LEARNING" }, { "affiliations": [], "name": "Kaidi Cao" }, { "affiliations": [], "name": "Maria Brbić" }, { "affiliations": [], "name": "Jure Leskovec" } ]
[ { "authors": [ "Kelsey Allen", "Evan Shelhamer", "Hanul Shin", "Joshua Tenenbaum" ], "title": "Infinite mixture prototypes for few-shot learning", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Antreas Antoniou", "Harrison Edwards", "Amos Storkey" ], "title": "How to train your MAML", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Michael Ashburner", "Catherine A Ball", "Judith A Blake", "David Botstein", "Heather Butler", "J Michael Cherry", "Allan P Davis", "Kara Dolinski", "Selina S Dwight", "Janan T Eppig" ], "title": "Gene Ontology: tool for the unification of biology", "venue": "Nature Genetics,", "year": 2000 }, { "authors": [ "Samy Bengio", "Yoshua Bengio", "Jocelyn Cloutier", "Jan Gecsei" ], "title": "On the optimization of a synaptic learning rule", "venue": "In Preprints of the Conference Optimality in Artificial and Biological Neural Networks,", "year": 1992 }, { "authors": [ "David M Blei", "Andrew Y Ng", "Michael I Jordan" ], "title": "Latent Dirichlet Allocation", "venue": "Journal of Machine Learning Research,", "year": 2003 }, { "authors": [ "Maria Brbić", "Marinka Zitnik", "Sheng Wang", "Angela O Pisco", "Russ B Altman", "Spyros Darmanis", "Jure Leskovec" ], "title": "Mars: discovering novel cell types across heterogeneous single-cell experiments", "venue": "Nature Methods,", "year": 2020 }, { "authors": [ "Chaofan Chen", "Oscar Li", "Daniel Tao", "Alina Barnett", "Cynthia Rudin", "Jonathan K Su" ], "title": "This looks like that: Deep learning for interpretable image recognition", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Wei-Yu Chen", "Yen-Cheng Liu", "Zsolt Kira", "Yu-Chiang Frank Wang", "Jia-Bin Huang" ], "title": "A closer look at few-shot classification", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Tabula Muris Consortium" ], "title": "Single-cell transcriptomics of 20 mouse organs creates a Tabula Muris", "venue": "Nature,", "year": 2018 }, { "authors": [ "Tabula Muris Consortium" ], "title": "A single cell transcriptomic atlas characterizes aging tissues in the mouse", "venue": "Nature, 583:590–595,", "year": 2020 }, { "authors": [ "Nikita Dvornik", "Cordelia Schmid", "Julien Mairal" ], "title": "Diversity with cooperation: Ensemble methods for few-shot classification", "venue": "In IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Li Fei-Fei", "Rob Fergus", "Pietro Perona" ], "title": "One-shot learning of object categories", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2006 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Amirata Ghorbani", "James Wexler", "James Y Zou", "Been Kim" ], "title": "Towards automatic concept-based explanations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Spyros Gidaris", "Nikos Komodakis" ], "title": "Dynamic few-shot visual learning without forgetting", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Erin Grant", "Chelsea Finn", "Sergey Levine", "Trevor Darrell", "Thomas Griffiths" ], "title": "Recasting gradientbased meta-learning as hierarchical 
Bayes", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Lars Kai Hansen", "Peter Salamon" ], "title": "Neural network ensembles", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 1990 }, { "authors": [ "Ruibing Hou", "Hong Chang", "MA Bingpeng", "Shiguang Shan", "Xilin Chen" ], "title": "Cross attention network for few-shot classification", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Tomas Jakab", "Ankush Gupta", "Hakan Bilen", "Andrea Vedaldi" ], "title": "Unsupervised learning of object landmarks through conditional image generation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Been Kim", "Martin Wattenberg", "Justin Gilmer", "Carrie Cai", "James Wexler", "Fernanda Viegas", "Rory Sayres" ], "title": "Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (TCAV)", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Gregory Koch", "Richard Zemel", "Ruslan Salakhutdinov" ], "title": "Siamese neural networks for one-shot image recognition", "venue": "In ICML Deep Learning Workshop,", "year": 2015 }, { "authors": [ "Brenden Lake", "Ruslan Salakhutdinov", "Jason Gross", "Joshua Tenenbaum" ], "title": "One shot learning of simple visual concepts", "venue": "In Proceedings of the Annual Meeting of the Cognitive Science Society,", "year": 2011 }, { "authors": [ "Brenden M Lake", "Ruslan Salakhutdinov", "Joshua B Tenenbaum" ], "title": "Human-level concept learning through probabilistic program induction", "venue": null, "year": 2015 }, { "authors": [ "Kwonjoon Lee", "Subhransu Maji", "Avinash Ravichandran", "Stefano Soatto" ], "title": "Meta-learning with differentiable convex optimization", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "David D Lewis", "Yiming Yang", "Tony G Rose", "Fan Li" ], "title": "Rcv1: A new benchmark collection for text categorization research", "venue": "Journal of Machine Learning Research,", "year": 2004 }, { "authors": [ "Oscar Li", "Hao Liu", "Chaofan Chen", "Cynthia Rudin" ], "title": "Deep learning for case-based reasoning through prototypes: A neural network that explains its predictions", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Lu Liu", "Tianyi Zhou", "Guodong Long", "Jing Jiang", "Chengqi Zhang" ], "title": "Learning to propagate for graph meta-learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Scott M Lundberg", "Su-In Lee" ], "title": "A unified approach to interpreting model predictions", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "David Alvarez Melis", "Tommi Jaakkola" ], "title": "Towards robust interpretability with self-explaining neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { 
"authors": [ "Erik G Miller", "Nicholas E Matsakis", "Paul A Viola" ], "title": "Learning from one example through shared densities on transforms", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2000 }, { "authors": [ "Kaichun Mo", "Shilin Zhu", "Angel X Chang", "Li Yi", "Subarna Tripathi", "Leonidas J Guibas", "Hao Su" ], "title": "PartNet: A large-scale benchmark for fine-grained and hierarchical part-level 3D object understanding", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Alexey G Murzin", "Steven E Brenner", "Tim Hubbard", "Cyrus Chothia" ], "title": "SCOP: a structural classification of proteins database for the investigation of sequences and structures", "venue": "Journal of Molecular Biology,", "year": 1995 }, { "authors": [ "Alex Nichol", "John Schulman" ], "title": "Reptile: A scalable metalearning algorithm", "venue": "arXiv preprint arXiv:1803.02999,", "year": 2018 }, { "authors": [ "Maria-Elena Nilsback", "Andrew Zisserman" ], "title": "Automated flower classification over a large number of classes", "venue": "Sixth Indian Conference on Computer Vision, Graphics & Image Processing,", "year": 2008 }, { "authors": [ "Boris Oreshkin", "Pau Rodrı́guez López", "Alexandre Lacoste" ], "title": "TADAM: Task dependent adaptive metric for improved few-shot learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Aniruddh Raghu", "Maithra Raghu", "Samy Bengio", "Oriol Vinyals" ], "title": "Rapid learning or feature reuse? Towards understanding the effectiveness of MAML", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Sachin Ravi", "Hugo Larochelle" ], "title": "Optimization as a model for few-shot learning", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Mengye Ren", "Eleni Triantafillou", "Sachin Ravi", "Jake Snell", "Kevin Swersky", "Joshua B Tenenbaum", "Hugo Larochelle", "Richard S Zemel" ], "title": "Meta-learning for semi-supervised few-shot classification", "venue": "arXiv preprint arXiv:1803.00676,", "year": 2018 }, { "authors": [ "Marco Tulio Ribeiro", "Sameer Singh", "Carlos Guestrin" ], "title": "Why should I trust you?” Explaining the predictions of any classifier", "venue": "In ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2016 }, { "authors": [ "Andrei A Rusu", "Dushyant Rao", "Jakub Sygnowski", "Oriol Vinyals", "Razvan Pascanu", "Simon Osindero", "Raia Hadsell" ], "title": "Meta-learning with latent embedding optimization", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Jürgen Schmidhuber" ], "title": "Evolutionary principles in self-referential learning, or on learning how to learn: the meta-meta-.", "venue": "hook. 
PhD thesis, Technische Universität München,", "year": 1987 }, { "authors": [ "Ramprasaath R Selvaraju", "Abhishek Das", "Ramakrishna Vedantam", "Michael Cogswell", "Devi Parikh", "Dhruv Batra" ], "title": "Grad-CAM: Why did you say that", "venue": "arXiv preprint arXiv:1611.07450,", "year": 2016 }, { "authors": [ "Daniel Smilkov", "Nikhil Thorat", "Been Kim", "Fernanda Viégas", "Martin Wattenberg" ], "title": "SmoothGrad: removing noise by adding noise", "venue": "arXiv preprint arXiv:1706.03825,", "year": 2017 }, { "authors": [ "Jake Snell", "Kevin Swersky", "Richard Zemel" ], "title": "Prototypical networks for few-shot learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Mukund Sundararajan", "Ankur Taly", "Qiqi Yan" ], "title": "Axiomatic attribution for deep networks", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Flood Sung", "Yongxin Yang", "Li Zhang", "Tao Xiang", "Philip HS Torr", "Timothy M Hospedales" ], "title": "Learning to compare: Relation network for few-shot learning", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Pavel Tokmakov", "Yu-Xiong Wang", "Martial Hebert" ], "title": "Learning compositional representations for few-shot recognition", "venue": "In IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Eleni Triantafillou", "Tyler Zhu", "Vincent Dumoulin", "Pascal Lamblin", "Utku Evci", "Kelvin Xu", "Ross Goroshin", "Carles Gelada", "Kevin Swersky", "Pierre-Antoine Manzagol" ], "title": "Meta-dataset: A dataset of datasets for learning to learn from few examples", "venue": "arXiv preprint arXiv:1903.03096,", "year": 2019 }, { "authors": [ "Oriol Vinyals", "Charles Blundell", "Timothy Lillicrap", "Koray Kavukcuoglu", "Daan Wierstra" ], "title": "Matching networks for one shot learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Catherine Wah", "Steve Branson", "Peter Welinder", "Pietro Perona", "Serge Belongie" ], "title": "The CaltechUCSD Birds-200-2011", "venue": null, "year": 2011 }, { "authors": [ "Alex Wong", "Alan L Yuille" ], "title": "One shot learning via compositions of meaningful patches", "venue": "In IEEE International Conference on Computer Vision, pp", "year": 2015 }, { "authors": [ "Chen Xing", "Negar Rostamzadeh", "Boris Oreshkin", "Pedro O Pinheiro" ], "title": "Adaptive cross-modal few-shot learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Chi Zhang", "Yujun Cai", "Guosheng Lin", "Chunhua Shen" ], "title": "Deepemd: Few-shot image classification with differentiable earth mover’s distance and structured classifiers", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Yuting Zhang", "Yijie Guo", "Yixin Jin", "Yijun Luo", "Zhiyuan He", "Honglak Lee" ], "title": "Unsupervised discovery of object landmarks as structural representations", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "E UNSUPERVISED" ], "title": "CONCEPT ANNOTATION: ADDITIONAL RESULTS We evaluate COMET and baseline methods on the Flowers dataset for fine-grained image classification. We automatically extract concepts using unsupervised landmarks discovery approach (Zhang et al., 2018). 
Results in Table 6 show that COMET outperforms all baselines by a large margin", "venue": "Results on 1-shot and 5-shot classification on the Flowers dataset", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep learning has reached human-level performance on domains with the abundance of large-scale labeled training data. However, learning on tasks with a small number of annotated examples is still an open challenge. Due to the lack of training data, models often overfit or are too simplistic to provide good generalization. On the contrary, humans can learn new tasks very quickly by drawing upon prior knowledge and experience. This ability to rapidly learn and adapt to new environments is a hallmark of human intelligence.\nFew-shot learning (Miller et al., 2000; Fei-Fei et al., 2006; Koch et al., 2015) aims at addressing this fundamental challenge by designing algorithms that are able to generalize to new tasks given only a few labeled training examples. Meta-learning (Schmidhuber, 1987; Bengio et al., 1992) has recently made major advances in the field by explicitly optimizing the model’s ability to generalize, or learning how to learn, from many related tasks (Snell et al., 2017; Vinyals et al., 2016; Ravi & Larochelle, 2017; Finn et al., 2017). Motivated by the way humans effectively use prior knowledge, meta-learning algorithms acquire prior knowledge over previous tasks so that new tasks can be efficiently learned from a small amount of data. However, recent works (Chen et al., 2019b; Raghu et al., 2020) show that simple baseline methods perform comparably to existing meta-learning methods, opening the question about which components are crucial for rapid adaptation and generalization.\nHere, we argue that there is an important missing piece in this puzzle. Human knowledge is structured in the form of reusable concepts. For instance, when we learn to recognize new bird species we are already equipped with the critical concepts, such as wing, beak, and feather. We then focus on these specific concepts and combine them to identify a new species. While learning to recognize new species is challenging in the complex bird space, it becomes remarkably simpler once the reasoning is structured into familiar concepts. Moreover, such a structured way of cognition gives us the ability to provide reasoning behind our decisions, such as “ravens have thicker beaks than crows, with more\n∗The two first authors made equal contributions.\nof a curve to the end”. We argue that this lack of structure is limiting the generalization ability of the current meta-learners. The importance of compositionality for few-shot learning was emphasized in (Lake et al., 2011; 2015) where hand-designed features of strokes were combined using Bayesian program learning.\nMotivated by the structured form of human cognition, we propose COMET, a meta-learning method that discovers generalizable representations along human-interpretable concept dimensions. COMET learns a unique metric space for each concept dimension using concept-specific embedding functions, named concept learners, that are parameterized by deep neural networks. Along each high-level dimension, COMET defines concept prototypes that reflect class-level differences in the metric space of the underlying concept. To obtain final predictions, COMET effectively aggregates information from diverse concept learners and concept prototypes. Three key aspects lead to a strong generalization ability of our approach: (i) semi-structured representation learning, (ii) concept-specific metric spaces described with concept prototypes, and (iii) ensembling of many models. 
The latter assures that the combination of diverse and accurate concept learners improves the generalization ability of the base learner (Hansen & Salamon, 1990; Dvornik et al., 2019). Remarkably, the high-level universe of concepts that are used to guide our algorithm can be discovered in a fully unsupervised way, or we can use external knowledge bases to define concepts. In particular, we can get a large universe of noisy, incomplete and redundant concepts, and COMET learns which subsets of those are important by assigning local and global concept importance scores. Unlike existing methods (Snell et al., 2017; Vinyals et al., 2016; Sung et al., 2018; Gidaris & Komodakis, 2018), COMET's predictions are interpretable, an advantage especially important in the few-shot learning setting, where predictions are based only on a handful of labeled examples, making it hard to trust the model. As such, COMET is the first domain-agnostic interpretable meta-learning approach.\n\nWe demonstrate the effectiveness of our approach on tasks from extremely diverse domains, including fine-grained image classification in computer vision, document classification in natural language processing, and cell type annotation in biology. In the biological domain, we conduct the first systematic comparison of meta-learning algorithms. We develop a new meta-learning dataset and define a novel benchmark task to characterize the single-cell transcriptomes of all mouse organs (Consortium, 2018; 2020). Additionally, we consider the scenario in which concepts are not given in advance, and test COMET's performance with automatically extracted visual concepts. Our experimental results show that on all domains COMET significantly improves generalization ability, achieving 6–15% relative improvement over state-of-the-art methods in the most challenging 1-shot task. Furthermore, we demonstrate the ability of COMET to provide interpretations behind the model's predictions, and support our claim with quantitative and qualitative evaluations of the generated explanations." }, { "heading": "2 PROPOSED METHOD", "text": "Problem formulation. In few-shot classification, we assume that we are given a labeled training set Dtr, an unlabeled query set Dqr, and a support set S consisting of a few labeled data points that share the label space with the query set. The label spaces of the training and query sets are disjoint, i.e., {Ytr} ∩ {Yqr} = ∅, where {Ytr} denotes the label space of the training set and {Yqr} the label space of the query set. Each labeled data point (x, y) consists of a D-dimensional feature vector x ∈ R^D and a class label y ∈ {1, ..., K}. Given a training set of previously labeled tasks Dtr and the support set S of a few labeled data points on a novel task, the goal is to train a model that can generalize to the novel task and label the query set Dqr." }, { "heading": "2.1 PRELIMINARIES", "text": "Episodic training. To achieve successful generalization to a new task, training of meta-learning methods is usually performed using sampled mini-batches called episodes (Vinyals et al., 2016). Each episode is formed by first sampling classes from the training set, and then sampling data points labeled with these classes. The sampled data points are divided into disjoint sets of: (i) a support set consisting of a few labeled data points, and (ii) a query set consisting of data points whose labels are used to calculate a prediction error. Given the sampled support set, the model minimizes the loss on the sampled query set in each episode. The key idea behind this meta-learning training scheme is to improve generalization of the model by trying to mimic the low-data regime encountered during testing. Episodes with balanced training sets are usually referred to as “N-way, k-shot” episodes, where N indicates the number of classes per episode (“way”) and k indicates the number of support points (labeled training examples) per class (“shot”).
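As an illustration of this episodic sampling (a sketch, not the authors' code), assuming a dataset of (x, y) pairs in which every sampled class has at least k_shot + n_query examples:

    import random
    from collections import defaultdict

    def sample_episode(dataset, n_way=5, k_shot=1, n_query=16):
        # Sample one N-way, k-shot episode: (support, query) index lists.
        by_class = defaultdict(list)
        for idx, (_, y) in enumerate(dataset):
            by_class[y].append(idx)
        classes = random.sample(list(by_class), n_way)   # sample N classes
        support, query = [], []
        for c in classes:
            idxs = random.sample(by_class[c], k_shot + n_query)
            support += idxs[:k_shot]                     # k labeled examples per class
            query += idxs[k_shot:]                       # examples used for the loss
        return support, query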
Prototypical networks. Our work is inspired by prototypical networks (Snell et al., 2017), a simple but highly effective metric-based meta-learning method. Prototypical networks learn a non-linear embedding function f_θ : R^D → R^M parameterized by a convolutional neural network. The main idea is to learn a function f_θ such that in the M-dimensional embedding space data points cluster around a single prototype representation p_k ∈ R^M for each class k. The class prototype p_k is computed as the mean vector of the support set labeled with class k:\n\n$p_k = \frac{1}{|S_k|} \sum_{(x_i, y_i) \in S_k} f_\theta(x_i)$, (1)\n\nwhere S_k denotes the subset of the support set S belonging to class k. Given a query data point x_q, prototypical networks output a distribution over classes using the softmax function:\n\n$p_\theta(y = k \mid x_q) = \frac{\exp(-d(f_\theta(x_q), p_k))}{\sum_{k'} \exp(-d(f_\theta(x_q), p_{k'}))}$, (2)\n\nwhere d : R^M × R^M → R denotes the distance function. The query data point x_q is assigned to the class with the minimal distance between the class prototype and the embedded query point.
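To make equations (1) and (2) concrete, here is a minimal PyTorch-style sketch (illustrative, not the reference implementation); it assumes episode labels are re-indexed to 0, ..., N−1 and that f_theta is any embedding module:

    import torch

    def protonet_logits(f_theta, support_x, support_y, query_x, n_way):
        # Equations (1)-(2): class prototypes and distance-based logits.
        z_support = f_theta(support_x)                   # [n_support, M]
        z_query = f_theta(query_x)                       # [n_query, M]
        prototypes = torch.stack(                        # eq. (1): mean per class
            [z_support[support_y == k].mean(dim=0) for k in range(n_way)])
        # squared Euclidean distances; squaring does not change the argmax
        # and is common in ProtoNet implementations
        dists = torch.cdist(z_query, prototypes) ** 2    # [n_query, n_way]
        return -dists                                    # softmax(-d) gives eq. (2)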
" }, { "heading": "2.2 META-LEARNING VIA CONCEPT LEARNERS", "text": "Our main assumption is that input dimensions can be separated into subsets of related dimensions corresponding to high-level, human-interpretable concepts that guide the training. Such sets of potentially overlapping, noisy and incomplete human-interpretable dimensions exist in many real-world scenarios. For instance, in computer vision concepts can be assigned to image segments; in natural language processing to semantically related words; whereas in biology we can use external knowledge bases and ontologies. In many problems, concepts are already available as prior domain knowledge (Ashburner et al., 2000; Murzin et al., 1995; Wah et al., 2011; Mo et al., 2019; Miller et al., 2000), or can be automatically generated using existing techniques (Blei et al., 2003; Zhang et al., 2018; Jakab et al., 2018). Intuitively, concepts can be seen as part-based representations of the input and reflect the way humans reason about the world. Importantly, we do not assume these concepts are clean or complete. On the contrary, we show that even if there are thousands of concepts, which are noisy, incomplete, overlapping, or redundant, they still provide useful guidance to the meta-learning algorithm.\n\nFormally, let C = {c^{(j)}}_{j=1}^{N} denote a set of N concepts given/extracted as prior knowledge, where each concept c^{(j)} ∈ {0, 1}^D is a binary vector such that c^{(j)}_i = 1 if the i-th dimension should be used to describe the j-th concept, and D denotes the dimensionality of the input. We do not impose any constraints on C, meaning that the concepts can be disjoint or overlap. Instead of learning a single mapping function f_θ : R^D → R^M across all dimensions, COMET separates the original space into subspaces of predefined concepts and learns individual embedding functions f^{(j)}_θ : R^D → R^M for each concept j (Figure 1). The concept embedding functions f^{(j)}_θ, named concept learners, are non-linear functions parameterized by a deep neural network. Each concept learner j produces its own concept prototypes p^{(j)}_k for class k, computed as the average of the concept embeddings of data points in the support set:\n\n$p^{(j)}_k = \frac{1}{|S_k|} \sum_{(x_i, y_i) \in S_k} f^{(j)}_\theta(x_i \circ c^{(j)})$, (3)\n\nwhere ◦ denotes the Hadamard product. As a result, each class k is represented with a set of N concept prototypes {p^{(j)}_k}_{j=1}^{N}. Given a query data point x_q, we compute its concept embeddings and estimate their distances to the concept prototypes of each class. We then aggregate the information across all concepts by summing the distances between concept embeddings and concept prototypes. Specifically, for each concept embedding f^{(j)}_θ(x_q ◦ c^{(j)}) we compute its distance to the concept prototype p^{(j)}_k of a given class k, and sum the distances across all concepts to obtain a distribution over support classes. The probability of assigning the query point x_q to the k-th class is then given by:\n\n$p_\theta(y = k \mid x_q) = \frac{\exp(-\sum_j d(f^{(j)}_\theta(x_q \circ c^{(j)}), p^{(j)}_k))}{\sum_{k'} \exp(-\sum_j d(f^{(j)}_\theta(x_q \circ c^{(j)}), p^{(j)}_{k'}))}$. (4)\n\nThe loss is computed as the negative log-likelihood L_θ = −log p_θ(y = k | x_q) of the true class, and COMET is trained by minimizing the loss on the query samples of the training set in an episodic fashion (Snell et al., 2017; Vinyals et al., 2016). In equation (4), we use Euclidean distance as the distance function. Experimentally, we find that it outperforms cosine distance (Appendix B), which agrees with the theory and experimental findings in (Snell et al., 2017). We note that in order for distances to be comparable, it is crucial to normalize neural network layers using batch normalization (Ioffe & Szegedy, 2015).
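The concept-wise extension in equations (3) and (4) then only changes where the masking and the distance aggregation happen; a hedged sketch along the same lines, where concept_nets and concept_masks are assumed names and the masks are broadcastable to the input:

    import torch

    def comet_logits(concept_nets, concept_masks, support_x, support_y, query_x, n_way):
        # Equations (3)-(4): per-concept prototypes, summed distances as logits.
        logits = 0.0
        for f_j, c_j in zip(concept_nets, concept_masks):
            z_s = f_j(support_x * c_j)                    # Hadamard-masked embeddings
            z_q = f_j(query_x * c_j)
            protos = torch.stack(                         # eq. (3): mean per class
                [z_s[support_y == k].mean(dim=0) for k in range(n_way)])
            logits = logits - torch.cdist(z_q, protos) ** 2  # accumulate -sum_j d(.,.)
        return logits                                     # softmax gives eq. (4)

The concept learners may share weights (see Appendix A), in which case the same module is passed N times.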
" }, { "heading": "2.3 INTERPRETABILITY", "text": "Local and global concept importance scores. In COMET, each class is represented with N concept prototypes. Given a query data point x_q, COMET assigns local concept importance scores by comparing the concept embeddings of the query to the concept prototypes. Specifically, for a concept j in a class k, the local importance score is obtained by the inverted distance d(f^{(j)}_θ(x_q ◦ c^{(j)}), p^{(j)}_k). A higher importance score indicates a higher contribution to classifying the query point into class k. Therefore, explanations for the query point x_q are given by the local concept importance scores, which directly provide reasoning behind each prediction. To provide global explanations that can reveal important concepts for a set of query points of interest or an entire class, COMET computes the average distance between the concept prototype and the concept embeddings of all query points of interest. The inverted average distance reflects the global concept importance score and can be used to rank concepts, providing insights on important concepts across a set of examples.\n\nDiscovering locally similar examples. Given a fixed concept j, COMET can be used to rank data points based on the distance of their concept embeddings to the concept prototype p^{(j)}_k of class k. By ranking data points according to their similarity to the concept of interest, COMET can find examples that locally share similar patterns within the same class, or even across different classes. For instance, COMET can reveal examples that reflect a concept prototype well, or examples that are very distant from the concept prototype." }, { "heading": "3 EXPERIMENTS", "text": "" }, { "heading": "3.1 EXPERIMENTAL SETUP", "text": "Datasets. We apply COMET to four datasets from three diverse domains: computer vision, natural language processing (NLP) and biology. In the computer vision domain, we consider fine-grained image classification tasks. We use the bird classification CUB-200-2011 (Wah et al., 2011) and flower classification Flowers-102 (Nilsback & Zisserman, 2008) datasets, referred to as CUB and Flowers hereafter. To define concepts, CUB provides part-based annotations, such as the beak, wing, and tail of a bird. Parts were annotated by pixel location and visibility in each image. In total, 15 parts/concepts are available; however, the annotations are incomplete and only a subset of them is present in an image. In case a concept is not present, we rely on the prototypical concept to substitute for the missing concept. Based on the part coordinates, we create a surrounding bounding box with a fixed length to serve as the concept mask c^{(j)}. On both the CUB and Flowers datasets, we test automatic concept extraction. In the NLP domain, we apply COMET to the benchmark document classification dataset Reuters (Lewis et al., 2004), consisting of news articles. To define concepts, we use all hypernyms of a given word based on the WordNet hierarchy (Lewis et al., 2004). On all datasets, we include a concept that captures the whole input, corresponding to a binary mask of all ones.\n\nIn the biology domain, we introduce a new cross-organ cell type classification task (Brbić et al., 2020) together with a new dataset. We develop a novel single-cell transcriptomic dataset based on the Tabula Muris dataset (Consortium, 2018; 2020) that comprises 105,960 cells of 124 cell types collected across 23 organs of the mouse model organism. The features correspond to the gene expression profiles of cells. Out of the 23,341 genes, we select 2,866 genes with high standardized log dispersion given their mean. We define concepts using Gene Ontology (Ashburner et al., 2000; Consortium, 2019), a resource which characterizes gene functional roles in a hierarchically structured vocabulary. We select Gene Ontology terms at level 3 that have at least 64 assigned genes, resulting in a total of 190 terms that define our concepts. We propose an evaluation protocol in which different organs are used for the training, validation, and test splits. Therefore, a meta-learner needs to learn to generalize to unseen cell types across organs. This novel dataset, along with the cross-organ evaluation splits, is publicly available at https://snap.stanford.edu/comet. To our knowledge, this is the first meta-learning dataset from the biology domain.
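As an illustration of how such ontology-derived concepts can be turned into binary masks c^(j) (a sketch under the filtering rule stated above; term_to_genes is an assumed mapping loaded from a Gene Ontology release, not shown here):

    import numpy as np

    def build_concept_masks(gene_names, term_to_genes, min_genes=64):
        # Binary concept masks over the gene dimensions, one per retained term.
        gene_index = {g: i for i, g in enumerate(gene_names)}
        masks = {}
        for term, genes in term_to_genes.items():
            hits = [gene_index[g] for g in genes if g in gene_index]
            if len(hits) < min_genes:
                continue                      # keep only well-populated terms
            c = np.zeros(len(gene_names), dtype=np.float32)
            c[hits] = 1.0                     # 1 marks dimensions used by the concept
            masks[term] = c
        return masks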
Baselines. We compare COMET's performance to seven baselines, including FineTune/Baseline++ (Chen et al., 2019b), Matching Networks (MatchingNet) (Vinyals et al., 2016), Model Agnostic Meta-Learning (MAML) (Finn et al., 2017), Relation Networks (Sung et al., 2018), MetaOptNet (Lee et al., 2019), DeepEMD (Zhang et al., 2020) and Prototypical Networks (ProtoNet) (Snell et al., 2017). DeepEMD is only applicable to image datasets.\n\nWe provide more details on evaluation and implementation in Appendix A. Code is publicly available at https://github.com/snap-stanford/comet." }, { "heading": "3.2 RESULTS", "text": "Performance comparison. We report results on the CUB, Tabula Muris and Reuters datasets with concepts given as prior domain knowledge in Table 1. COMET outperforms all baselines by a remarkably large margin on all datasets. Specifically, COMET achieves 9.5% and 9.3% average improvements over the best performing baseline in the 1-shot and 5-shot tasks across datasets. Notably, COMET improves the result of the ProtoNet baseline by 19–23% in the 1-shot tasks across datasets. COMET's substantial improvements are retained with the deeper Conv-6 backbone (Appendix C). To confirm that the improvements indeed come from concept learners and not from additional weights, we compare COMET to an ensemble of prototypical networks, and further evaluate the performance of COMET with shared weights across all concepts. The results shown in Table 2 demonstrate that COMET achieves significantly better performance than the ensemble of ProtoNets even when the weights across concepts are shared. Of note, COMET's performance is only slightly affected by sharing weights across concepts. More experimental details are provided in Appendix D.\n\nEffect of the number of concepts. We systematically evaluate the effect of the number of concepts on COMET's performance on the CUB and Tabula Muris datasets (Figure 2). In particular, we start from ProtoNet's result, which can be seen as using a single concept in COMET that covers all dimensions of the input. We then gradually increase the number of concepts and train and evaluate COMET with the selected number of concepts. For the CUB dataset, we add concepts based on their visibility frequency, whereas on Tabula Muris we are not limited in the coverage of concepts, so we randomly select them. The results demonstrate that on both domains COMET consistently improves performance when increasing the number of concepts. Strikingly, by adding just the single most frequent concept, corresponding to a bird's beak, on top of the whole-image concept, we improve ProtoNet's performance on CUB by 10% and 5% in the 1-shot and 5-shot tasks, respectively. On Tabula Muris, with just 8 concepts COMET significantly outperforms all baselines and achieves 7% and 17% improvement over ProtoNet in the 1-shot and 5-shot tasks, respectively. To demonstrate the robustness of our method to a huge set of overlapping concepts, we extend the number of concepts to 1500 by capturing all levels of the Gene Ontology hierarchy, therefore allowing many redundant relationships. Even in this scenario, COMET slightly improves the results compared to the 190 concepts obtained from a single level. These results demonstrate that COMET outperforms other methods even when the number of concepts is small and annotations are incomplete, as well as with many overlapping and redundant concepts.\n\nUnsupervised concept annotation. While COMET achieves remarkable results with human-validated concepts given as external knowledge, we next investigate COMET's performance on automatically inferred concepts. In addition to the CUB dataset, we consider the Flowers dataset for fine-grained image classification. To automatically extract visual concepts, we train the autoencoding framework for landmark discovery proposed in (Zhang et al., 2018). The encoding module outputs landmark coordinates that we use as part coordinates. We generate a concept mask by creating a bounding box with a fixed length around the landmark coordinates. Although the extracted coordinates are often noisy and capture background (Appendix F), we find that COMET outperforms all baselines on both the CUB and Flowers fine-grained classification datasets (Table 3). This analysis shows that the benefits of our method are expected even with noisy concepts extracted in a fully automated and unsupervised way.\n\nTo test unsupervised concept annotation on the Tabula Muris and Reuters datasets, we randomly select subsets of features for concept definition. 
Since COMET is interpretable and can be used to find important concepts, we use the validation set to select the concepts with the highest importance scores. Even in this case, COMET significantly outperforms all baselines, achieving only 2% lower accuracy on the Tabula Muris dataset and 1% on the Reuters dataset on both 1-shot and 5-shot tasks compared to human-defined concepts. This additionally confirms COMET's effectiveness with automatically extracted concepts. We provide more results in Appendix E." }, { "heading": "3.3 INTERPRETABILITY", "text": "We analyze the reasoning part of COMET by designing case studies aiming to answer the following questions: (i) Which concepts are the most important for a given query point (i.e., local explanation)? (ii) Which concepts are the most important for a given class (i.e., global explanation)? (iii) Which examples share locally similar patterns? (iv) Which examples reflect a concept prototype well? We perform all analyses exclusively on classes from the novel task that are not seen during training.\n\nConcept importance. Given a query point, COMET ranks concepts based on their importance scores, thereby identifying concepts highly relevant for the prediction of a single query point. We demonstrate examples of local explanations in Appendix G. To quantitatively evaluate global explanations that assign concept importance scores to the entire class, we derive ground-truth explanations on the Tabula Muris dataset. Specifically, using the ground-truth labels on the test set, we obtain a set of genes that are differentially expressed for each class (i.e., cell type). We then find Gene Ontology terms that are significantly enriched (false discovery rate corrected p-value < 0.1) in the set of differentially expressed genes of a given class, and use those terms as ground-truth concepts. We consider only cell types that have at least two assigned terms. To obtain COMET's explanations, we rank the global concept importance scores for each class and report the number of relevant terms that are successfully retrieved among the top 20 concepts with the highest scores in the 5-shot setting (Figure 3 left). We find that COMET's importance scores agree extremely well with the ground-truth annotations, achieving 0.71 average recall@20 across all cell types. We further investigate global explanations on the CUB dataset by computing the frequency of the most relevant concepts across the species (Figure 3 right). Beak, belly and forehead turn out to be the most relevant features, supporting common-sense intuition. For instance, ‘beak’ is selected as the most relevant concept for ‘parakeet auklet’, known for its nearly circular beak; ‘belly’ for ‘cape may warbler’, known for the tiger stripes on its belly; while ‘belted kingfisher’ indeed has a characteristic ‘forehead’ with its shaggy crest on the top of the head. This confirms that COMET correctly identifies important class-level concepts.
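A small sketch of how the importance scores described above can be computed from the concept-prototype distances (our illustration; negation stands in for the paper's "inverted distance", and any monotonically decreasing transform such as 1/d yields the same ranking):

    import torch

    def concept_importance(dists):
        # dists: [n_query, n_concepts] distances d(f^(j)(x_q ∘ c^(j)), p_k^(j))
        # for one class k; smaller distance -> larger importance.
        local = -dists                                     # per-query (local) scores
        global_ = -dists.mean(dim=0)                       # class-level (global) scores
        ranking = torch.argsort(global_, descending=True)  # most important concept first
        return local, global_, ranking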
Locally similar patterns. Given a fixed concept of interest, we apply COMET to sort images with respect to the distance of their concept embedding to the concept prototype (Figure 4). COMET finds images that locally resemble the prototypical image and express the concept prototype well, correctly reflecting the underlying concept of interest. On the contrary, images sorted using the whole image as a concept often reflect background similarity and cannot provide intuitive explanations. Furthermore, by finding the most distant examples, COMET can aid in identifying misannotated or non-visible concepts (Appendix H), which can be particularly useful when the concepts are automatically extracted. These analyses suggest that COMET can be used to discover, sort and visualize locally similar patterns, revealing insights on concept-based similarity across examples." }, { "heading": "4 RELATED WORK", "text": "Our work draws motivation from a rich line of research on meta-learning, compositional representations, and concept-based interpretability.\n\nMeta-learning. Recent meta-learning methods fall broadly into two categories. Optimization-based methods (Finn et al., 2017; Rusu et al., 2019; Nichol & Schulman, 2018; Grant et al., 2018; Antoniou et al., 2019) aim to learn a good initialization such that the network can be fine-tuned to a target task within a few gradient steps. On the other hand, metric-based methods (Snell et al., 2017; Vinyals et al., 2016; Sung et al., 2018; Gidaris & Komodakis, 2018) learn a metric space shared across tasks such that in the new space the target task can be solved using a nearest-neighbour or simple linear classifier. DeepEMD (Zhang et al., 2020) learns an optimal distance between local image representations. Prototypical networks (Snell et al., 2017) learn a metric space such that data points cluster around a prototypical representation computed for each category as the mean of embedded labeled examples. It has remained one of the most competitive few-shot learning methods (Triantafillou et al., 2019), resulting in many follow-up works (Sung et al., 2018; Oreshkin et al., 2018; Ren et al., 2018; Liu et al., 2019; Xing et al., 2019). Two recent works (Hou et al., 2019; Zhu et al.) proposed to learn local discriminative features with attention mechanisms in image classification tasks. Our work builds upon prototypical networks and extends the approach by introducing concept-based prototypes. Prototypical networks were extended by learning mixture prototypes in (Allen et al., 2019); however, the prototypes in this work share the same metric space. In contrast, COMET defines human-interpretable concept-specific metric spaces where each prototype reflects class-level differences in the metric space of the corresponding concept.\n\nCompositionality. The idea behind learning from a few examples using compositional representations originates from work on Bayesian probabilistic programs in which individual strokes were combined for the handwritten character recognition task (Lake et al., 2011; 2015). This approach was extended in (Wong & Yuille, 2015) by replacing hand-designed features with symmetry axes as object descriptors. Although these early works effectively demonstrated that compositionality is a key ingredient for adaptation in a low-data regime, it is unclear how to extend them to generalize beyond simple visual concepts. Recent work (Tokmakov et al., 2019) revived the idea and showed that deep compositional representations generalize better in few-shot image classification. However, this approach requires category-level attribute annotations that are impossible to get in domains not intuitive to humans, such as biology. Moreover, even in domains in which annotations can be collected, they require tedious manual effort. On the contrary, our approach is domain-agnostic and generates human-understandable interpretations in any domain.\n\nInterpretability. 
There has been much progress on designing interpretable methods that estimate the importance of individual features (Selvaraju et al., 2016; Sundararajan et al., 2017; Smilkov et al., 2017; Ribeiro et al., 2016; Lundberg & Lee, 2017; Melis & Jaakkola, 2018). However, individual features are often not intuitive, or can even be misleading when interpreted by humans (Kim et al., 2018). To overcome this limitation, recent advances have focused on designing methods that explain predictions using high-level, human-understandable concepts (Kim et al., 2018; Ghorbani et al., 2019). TCAV (Kim et al., 2018) defines concepts based on a user-annotated set of examples in which the concept of interest appears. On the contrary, the high-level concepts in our work are defined with a set of related dimensions. As such, they are already available in many domains, or can be obtained in an unsupervised manner. Once defined, they are transferable across problems that share the feature space. As opposed to methods that base their predictions on post-hoc analysis (Ribeiro et al., 2016; Lundberg & Lee, 2017; Melis & Jaakkola, 2018; Kim et al., 2018), COMET is designed as an inherently interpretable model and explains predictions by gaining insights from the reasoning process of the network. The closest to our work are prototype-based explanation models (Li et al., 2018; Chen et al., 2019a). However, they require a specialized convolutional architecture for feature extraction and are not applicable beyond image classification, or to a few-shot setting." }, { "heading": "5 CONCLUSION", "text": "We introduced COMET, a novel metric-based meta-learning algorithm that learns to generalize along human-interpretable concept dimensions. We showed that COMET learns generalizable representations with incomplete, noisy, redundant, very few or a huge set of concept dimensions, selecting only the important concepts for classification and providing reasoning behind its decisions. Our experimental results showed that COMET does not make a trade-off between interpretability and accuracy and significantly outperforms existing methods on tasks from diverse domains, including a novel benchmark dataset from the biology domain developed in our work." }, { "heading": "ACKNOWLEDGEMENTS", "text": "The authors thank Yusuf Roohani, Michihiro Yasunaga and Marinka Zitnik for their helpful comments. We gratefully acknowledge the support of DARPA under Nos. N660011924033 (MCS); ARO under Nos. W911NF-16-1-0342 (MURI), W911NF-16-1-0171 (DURIP); NSF under Nos. OAC-1835598 (CINES), OAC-1934578 (HDR), CCF-1918940 (Expeditions), IIS-2030477 (RAPID); Stanford Data Science Initiative, Wu Tsai Neurosciences Institute, Chan Zuckerberg Biohub, Amazon, JPMorgan Chase, Docomo, Hitachi, JD.com, KDDI, NVIDIA, Dell, Toshiba, and UnitedHealth Group. J. L. is a Chan Zuckerberg Biohub investigator." }, { "heading": "A EXPERIMENTAL SETUP", "text": "Evaluation. We test all methods on the most broadly used 5-way classification setting. In each episode, we randomly sample 5 classes, where each class contains k examples as the support set in the k-shot classification task. We construct the query set to have 16 examples, where each unlabeled sample in the query set belongs to one of the classes in the support set. We choose the best model according to the validation accuracy, and then evaluate it on the test set with novel classes. We report the mean accuracy over 600 randomly sampled episodes in the fine-tuning or meta-testing stage.
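A minimal sketch of this evaluation loop (illustrative; run_episode is an assumed callable that samples one test episode and returns its query accuracy as a float in [0, 1]):

    import numpy as np

    def evaluate(run_episode, n_episodes=600):
        # Mean episode accuracy with a normal-approximation 95% confidence interval.
        accs = np.array([run_episode() for _ in range(n_episodes)])
        mean = accs.mean()
        ci95 = 1.96 * accs.std() / np.sqrt(n_episodes)
        return mean, ci95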
On the CUB dataset, we followed the evaluation protocol in (Chen et al., 2019b) and split the dataset into 100 base, 50 validation, and 50 test classes, using exactly the same split. On Tabula Muris, we use 15 organs for training, 4 organs for validation, and 4 organs for test, resulting in 59 base, 47 validation, and 37 test classes corresponding to cell types. The 102 classes of the Flowers dataset are split into 52, 25 and 25 classes as the training, validation and test sets, respectively. As for the Reuters dataset, we leave out 5 classes for validation and 5 for test.\n\nImplementation details. On the CUB dataset, we use the widely adopted four-layer convolutional backbone Conv-4 with an input size of 84 × 84 (Snell et al., 2017). We perform standard data augmentation, including random crop, rotation, horizontal flipping and color jittering. We use the Adam optimizer (Kingma & Ba, 2014) with an initial learning rate of 10^-3 and weight decay 0. We train the 5-shot tasks for 40,000 episodes and the 1-shot tasks for 60,000 episodes (Chen et al., 2019b). To speed up the training of COMET, we share the network parameters between concept learners. In particular, we first forward the entire image x_i through the convolutional network to get a spatial feature embedding f_θ(x_i), and then obtain the j-th concept embedding as f_θ(x_i) ◦ c^{(j)}. Since convolutional filters only operate on pixels locally, in practice we get similar performance whether we apply the mask at the beginning or at the end, while significantly speeding up training time. In case a part is not annotated (i.e., not visible), we use the prototypical concept corresponding to the whole image to replace the missing concept. For the Tabula Muris dataset, we use a simple backbone network structure containing two fully-connected layers with batch normalization, ReLU activation and dropout. We use the Adam optimizer (Kingma & Ba, 2014) with an initial learning rate of 10^-3 and weight decay 0. We train the network for 1,000 episodes. For MAML, RelationNet, MatchingNet, FineTune and ProtoNet, we use the implementations from (Chen et al., 2019b). For MetaOptNet and DeepEMD, we use the implementations from the respective papers.
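A sketch of this weight-sharing shortcut (our illustration; resizing the masks to the feature-map resolution before masking is an assumption, as the text does not spell out this detail):

    import torch
    import torch.nn.functional as F

    def concept_embeddings_shared(backbone, x, masks):
        # One backbone pass for the whole image, masks applied to the features.
        # masks: [N, 1, H, W] binary concept masks (float) in input resolution.
        feats = backbone(x)                                    # [B, C, h, w]
        small = F.interpolate(masks, size=feats.shape[-2:], mode="nearest")
        # [N, B, C, h, w]: the j-th concept embedding is feats ∘ c^(j)
        return feats.unsqueeze(0) * small.unsqueeze(1)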
" }, { "heading": "B ABLATION STUDY ON DISTANCE FUNCTION", "text": "We compare the effect of the distance metric on COMET's performance. We find that Euclidean distance consistently outperforms cosine distance in fine-grained image classification and cell type annotation tasks." }, { "heading": "C ABLATION STUDY ON BACKBONE NETWORK", "text": "We compare the performance of COMET to the baseline methods using the deeper Conv-6 backbone instead of the Conv-4 backbone on the CUB dataset. We use part-based annotations to define concepts. The results are reported in Table 5. COMET outperforms all baselines even with the deeper backbone. Additionally, by adding just the single most frequent concept, corresponding to a bird's beak, on top of the whole-image concept, COMET improves ProtoNet's performance by 3.8% on the 1-shot task and 2.2% on the 5-shot task." }, { "heading": "D ABLATION STUDY ON ENSEMBLE METHODS", "text": "We compare COMET to an ensemble of prototypical networks. We train the ProtoNets in parallel and combine their outputs by majority voting, as typically done in ensemble models. In particular, given a query point x_q and prototypes {p^{(j)}_k}_k, the prototypical ensemble outputs a probability distribution for each ProtoNet model j:\n\n$p^{(j)}_\theta(y = k \mid x_q) = \frac{\exp(-d(f^{(j)}_\theta(x_q), p^{(j)}_k))}{\sum_{k'} \exp(-d(f^{(j)}_\theta(x_q), p^{(j)}_{k'}))}$. (5)\n\nOn the CUB dataset, we use 5 ProtoNets. We use a smaller number than the number of concepts because training an ensemble of a larger number of ProtoNets on CUB results in memory issues due to the unshared weights. On the Tabula Muris and Reuters datasets, we use the same number of ProtoNets as the number of concepts, that is, 190 on Tabula Muris and 126 on Reuters." }, { "heading": "E UNSUPERVISED CONCEPT ANNOTATION: ADDITIONAL RESULTS", "text": "We evaluate COMET and the baseline methods on the Flowers dataset for fine-grained image classification. We automatically extract concepts using the unsupervised landmark discovery approach (Zhang et al., 2018). Results in Table 6 show that COMET outperforms all baselines by a large margin.\n\nOn the Tabula Muris and Reuters datasets, we test COMET without any prior knowledge by defining concepts using selected random masks. In particular, we randomly select subsets of features as concepts and then use the validation set to select the concepts with the highest importance scores as defined by COMET. We use the same number of concepts as in the experiments with prior knowledge. Results are reported in Table 7." }, { "heading": "F UNSUPERVISED CONCEPT ANNOTATION: LANDMARKS EXAMPLES", "text": "To assess the performance of COMET using automatically extracted visual concepts on the CUB dataset, we applied the autoencoding framework for landmark discovery proposed in (Zhang et al., 2018). We use the default parameters and implementation provided by the authors, and set the number of landmarks to 30. The encoding module provides coordinates of the estimated landmarks. To create a concept mask, we create a bounding box around the discovered landmarks. We train the autoencoder using the same parameters as Zhang et al. (2018), and set the number of concepts to 30.\n\nTable 7: Results on 1-shot and 5-shot classification on the Tabula Muris and Reuters datasets with selected random masks as concepts and with human-defined concepts. We report average accuracy and standard deviation over 600 randomly sampled episodes.\nMethod | Tabula Muris 1-shot | Tabula Muris 5-shot | Reuters 1-shot | Reuters 5-shot\nwith selected random masks | 77.2 ± 1.0 | 89.8 ± 0.5 | 70.1 ± 0.9 | 89.0 ± 0.4\nwith prior knowledge | 79.4 ± 0.9 | 91.7 ± 0.5 | 71.5 ± 0.7 | 89.8 ± 0.3\n\nExamples of extracted landmarks for 20 images from the CUB dataset are visualized in Figure 5. 
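A sketch of the mask construction just described (illustrative; the box size of 16 pixels is an assumed value, as the text only states a fixed length):

    import numpy as np

    def landmark_masks(landmarks, img_size=84, box=16):
        # Binary concept masks: a fixed-size box around each detected landmark.
        # landmarks: array [N, 2] of (row, col) coordinates from the encoder.
        masks = np.zeros((len(landmarks), img_size, img_size), dtype=np.float32)
        half = box // 2
        for j, (r, c) in enumerate(np.rint(landmarks).astype(int)):
            r0, r1 = max(r - half, 0), min(r + half, img_size)   # clip to image
            c0, c1 = max(c - half, 0), min(c + half, img_size)
            masks[j, r0:r1, c0:c1] = 1.0
        return masks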
Figure 5: Examples of automatically extracted landmarks using (Zhang et al., 2018) on the CUB dataset.\nG INTERPRETABILITY: LOCAL EXPLANATIONS\nHere, we demonstrate COMET’s local explanations on the CUB dataset.
Given a single query data point, COMET assigns local concept importance scores to each concept based on the distance between the query’s concept embedding and the corresponding concept prototype. We then rank concepts according to their local concept importance scores. Figure 6 shows examples of ranked concepts. The importance scores assigned by COMET visually reflect the most relevant bird features well.\nH INTERPRETABILITY: LOCAL SIMILARITY\nGiven a fixed concept of interest, we apply COMET to sort images with respect to the distance of their concept embedding to the concept prototype. Figure 7 shows an example of chipping sparrow images with the belly concept embedding most similar to the prototypical belly, and images with the belly concept embedding most distant from the prototypical belly. The most similar images indeed have a clearly visible belly and reflect the prototypical belly well. In contrast, the most distant images have only a small part of the belly visible, indicating that COMET can be used to detect misannotated or non-visible concepts." } ]
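The local-explanation ranking described in Section G can be sketched as follows (our own illustration; the use of Euclidean distance and the smaller-is-more-important convention are assumptions about how the scores are turned into a ranking):

```python
import torch

def rank_concepts_locally(concept_embs, concept_protos, concept_names):
    """Rank concepts for a single query image by the distance of each of
    its concept embeddings to the corresponding concept prototype
    (smaller distance = more prototypical, hence more important locally).

    concept_embs, concept_protos: (C, d) tensors for C concepts.
    concept_names: list of C strings.
    """
    dists = ((concept_embs - concept_protos) ** 2).sum(dim=-1).sqrt()  # (C,)
    order = torch.argsort(dists)  # closest-to-prototype concepts first
    return [(concept_names[i], dists[i].item()) for i in order.tolist()]
```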
2021
null
SP:9395fc883c2947587ff26fd36ce0fc797d062f3e
[ "This paper presents Conditional Masked Language Modeling (CMLM), which integrates sentence representation learning into MLM training by conditioning on the encoded vectors of adjacent sentences. It is shown that the English CMLM model achieves strong performance on SentEval, and outperforms models learned using (semi-)supervised signals. It is also found that a multilingual CMLM model co-trained with bitext retrieval (BR) and natural language inference (NLI) tasks outperforms the previous state-of-the-art multilingual models by a large margin. The paper further proposes a principle component based approach to remove the language identifying information from the representation while still retaining sentence semantics.", "The authors present conditional masked language modeling (CMLM), a new method for unsupervised pretraining, in which the skip-thought notion of conditioning on neighboring sentences is adopted for masked language modeling. The upshot of the proposed approach is that it generates single sentence embeddings that perform competitively on SentEval. In the multilingual setting, the authors combine their CMLM method with a bitext retrieval objective (selecting a sentence’s translation from the other sentences of the language in the batch) that increases performance on a version of the SentEval tasks translated into 14 other languages. In their analysis, the authors make further claims about multilingual embeddings capturing language ID information in their first principle components, a conclusion somewhat substantiated by their results. The authors provide a small amount of ablation experiments for experimental/model design choices." ]
This paper presents a novel training method, Conditional Masked Language Modeling (CMLM), to effectively learn sentence representations on large scale unlabeled corpora. CMLM integrates sentence representation learning into MLM training by conditioning on the encoded vectors of adjacent sentences. Our English CMLM model achieves state-of-the-art performance on SentEval (Conneau & Kiela, 2018), even outperforming models learned using (semi-)supervised signals. As a fully unsupervised learning method, CMLM can be conveniently extended to a broad range of languages and domains. We find that a multilingual CMLM model co-trained with bitext retrieval (BR) and natural language inference (NLI) tasks outperforms the previous state-of-the-art multilingual models by a large margin. We explore the self language bias of the learned representations, and propose a principal component based approach to remove the language-identifying information from the representation while still retaining sentence semantics.
[]
[ { "authors": [ "Mikel Artetxe", "Holger Schwenk" ], "title": "Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond", "venue": "Transactions of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Samuel Bowman", "Gabor Angeli", "Christopher Potts", "Christopher D Manning" ], "title": "A large annotated corpus for learning natural language inference", "venue": "In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing,", "year": 2015 }, { "authors": [ "Daniel Cer", "Yinfei Yang", "Sheng-yi Kong", "Nan Hua", "Nicole Limtiaco", "Rhomni St John", "Noah Constant", "Mario Guajardo-Cespedes", "Steve Yuan", "Chris Tar" ], "title": "Universal sentence encoder", "venue": "arXiv preprint arXiv:1803.11175,", "year": 2018 }, { "authors": [ "Muthu Chidambaram", "Yinfei Yang", "Daniel Cer", "Steve Yuan", "Yunhsuan Sung", "Brian Strope", "Ray Kurzweil" ], "title": "Learning cross-lingual sentence representations via a multi-task dual-encoder model", "venue": "In Proceedings of the 4th Workshop on Representation Learning for NLP", "year": 2019 }, { "authors": [ "Alexis Conneau", "Douwe Kiela" ], "title": "Senteval: An evaluation toolkit for universal sentence representations", "venue": "In Proceedings of the Eleventh International Conference on Language Resources and Evaluation", "year": 2018 }, { "authors": [ "Alexis Conneau", "Douwe Kiela", "Holger Schwenk", "Loı̈c Barrault", "Antoine Bordes" ], "title": "Supervised learning of universal sentence representations from natural language inference data", "venue": "In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing,", "year": 2017 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "venue": "In Proceedings of the 2019 Conference", "year": 2019 }, { "authors": [ "Akiko Eriguchi", "Melvin Johnson", "Orhan Firat", "Hideto Kazawa", "Wolfgang Macherey" ], "title": "Zeroshot cross-lingual classification using multilingual neural machine translation", "venue": "arXiv preprint arXiv:1809.04686,", "year": 2018 }, { "authors": [ "Fangxiaoyu Feng", "Yinfei Yang", "Daniel Cer", "Naveen Arivazhagan", "Wei Wang" ], "title": "Languageagnostic bert sentence embedding", "venue": "arXiv preprint arXiv:2007.01852,", "year": 2020 }, { "authors": [ "John M Giorgi", "Osvald Nitski", "Gary D Bader", "Bo Wang" ], "title": "Declutr: Deep contrastive learning for unsupervised textual representations", "venue": "arXiv preprint arXiv:2006.03659,", "year": 2020 }, { "authors": [ "Junjie Hu", "Sebastian Ruder", "Aditya Siddhant", "Graham Neubig", "Orhan Firat", "Melvin Johnson" ], "title": "Xtreme: A massively multilingual multi-task benchmark for evaluating cross-lingual generalization", "venue": "arXiv preprint arXiv:2003.11080,", "year": 2020 }, { "authors": [ "Minqing Hu", "Bing Liu" ], "title": "Mining and summarizing customer reviews", "venue": "In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining,", "year": 2004 }, { "authors": [ "Ryan Kiros", "Yukun Zhu", "Russ R Salakhutdinov", "Richard Zemel", "Raquel Urtasun", "Antonio Torralba", "Sanja Fidler" ], "title": "Skip-thought vectors", "venue": "Advances in Neural Information Processing Systems", "year": 2015 }, { "authors": [ "Zhenzhong Lan", "Mingda Chen", "Sebastian Goodman", "Kevin Gimpel", "Piyush Sharma", "Radu 
Soricut" ], "title": "Albert: A lite bert for self-supervised learning of language representations", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Yinhan Liu", "Myle Ott", "Naman Goyal", "Jingfei Du", "Mandar Joshi", "Danqi Chen", "Omer Levy", "Mike Lewis", "Luke Zettlemoyer", "Veselin Stoyanov" ], "title": "Roberta: A robustly optimized bert pretraining approach", "venue": null, "year": 1907 }, { "authors": [ "Lajanugen Logeswaran", "Honglak Lee" ], "title": "An efficient framework for learning sentence representations", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Lajanugen Logeswaran", "Honglak Lee" ], "title": "An efficient framework for learning sentence representations", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Marco Marelli", "Stefano Menini", "Marco Baroni", "Luisa Bentivogli", "Raffaella Bernardi", "Roberto Zamparelli" ], "title": "A sick cure for the evaluation of compositional distributional semantic models", "venue": "In LREC,", "year": 2014 }, { "authors": [ "Bo Pang", "Lillian Lee" ], "title": "A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts", "venue": "In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics", "year": 2004 }, { "authors": [ "Bo Pang", "Lillian Lee" ], "title": "Seeing stars: exploiting class relationships for sentiment categorization with respect to rating scales", "venue": "In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics,", "year": 2005 }, { "authors": [ "Peter Prettenhofer", "Benno Stein" ], "title": "Cross-language text classification using structural correspondence learning", "venue": "In Proceedings of the 48th annual meeting of the association for computational linguistics,", "year": 2010 }, { "authors": [ "Colin Raffel", "Noam Shazeer", "Adam Roberts", "Katherine Lee", "Sharan Narang", "Michael Matena", "Yanqi Zhou", "Wei Li", "Peter J. 
Liu" ], "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "venue": null, "year": 2019 }, { "authors": [ "Nils Reimers", "Iryna Gurevych" ], "title": "Sentence-bert: Sentence embeddings using siamese bertnetworks", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language", "year": 2019 }, { "authors": [ "Uma Roy", "Noah Constant", "Rami Al-Rfou", "Aditya Barua", "Aaron Phillips", "Yinfei Yang" ], "title": "Lareqa: Language-agnostic answer retrieval from a multilingual pool", "venue": "arXiv preprint arXiv:2004.05484,", "year": 2020 }, { "authors": [ "Sebastian Ruder", "Anders Søgaard", "Ivan Vulić" ], "title": "Unsupervised cross-lingual representation learning", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts,", "year": 2019 }, { "authors": [ "Richard Socher", "Alex Perelygin", "Jean Wu", "Jason Chuang", "Christopher D Manning", "Andrew Y Ng", "Christopher Potts" ], "title": "Recursive deep models for semantic compositionality over a sentiment treebank", "venue": "In Proceedings of the 2013 conference on empirical methods in natural language processing,", "year": 2013 }, { "authors": [ "Sandeep Subramanian", "Adam Trischler", "Yoshua Bengio", "Christopher J Pal" ], "title": "Learning general purpose distributed sentence representations via large scale multi-task learning", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Adina Williams", "Nikita Nangia", "Samuel Bowman" ], "title": "A broad-coverage challenge corpus for sentence understanding through inference", "venue": "In Proceedings of the 2018 Conference of the North Amer-", "year": 2018 }, { "authors": [ "Yinfei Yang", "Daniel Cer", "Amin Ahmad", "Mandy Guo", "Jax Law", "Noah Constant", "Gustavo Hernandez Abrego", "Steve Yuan", "Chris Tar", "Yun-Hsuan Sung" ], "title": "Multilingual universal sentence encoder for semantic retrieval", "venue": "arXiv preprint arXiv:1907.04307,", "year": 2019 }, { "authors": [ "Yinfei Yang", "Gustavo Hernandez Abrego", "Steve Yuan", "Mandy Guo", "Qinlan Shen", "Daniel Cer", "Yunhsuan Sung", "Brian Strope", "Ray Kurzweil" ], "title": "Improving multilingual sentence embedding using bi-directional dual encoder with additive margin softmax", "venue": "In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Yinfei Yang", "Yuan Zhang", "Chris Tar", "Jason Baldridge" ], "title": "Paws-x: A cross-lingual adversarial dataset for paraphrase identification", "venue": "In Proceedings of the 2019 Conference on Empirical Meth-", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Sentence embeddings map sentences into a vector space. The vectors capture rich semantic information that can be used to measure semantic textual similarity (STS) between sentences or train classifiers for a broad range of downstream tasks (Conneau et al., 2017; Subramanian et al., 2018; Logeswaran & Lee, 2018b; Cer et al., 2018; Reimers & Gurevych, 2019; Yang et al., 2019a;d; Giorgi et al., 2020). State-of-the-art models are usually trained on supervised tasks such as natural language inference (Conneau et al., 2017), or with semi-structured data like question-answer pairs (Cer et al., 2018) and translation pairs (Subramanian et al., 2018; Yang et al., 2019a). However, labeled and semi-structured data are difficult and expensive to obtain, making it hard to cover many domains and languages. Conversely, recent efforts to improve language models include the development of masked language model (MLM) pre-training from large scale unlabeled corpora (Devlin et al., 2019; Lan et al., 2020; Liu et al., 2019). While internal MLM model representations are helpful when fine-tuning on downstream tasks, they do not directly produce good sentence representations, without further supervised (Reimers & Gurevych, 2019) or semi-structured (Feng et al., 2020) finetuning.\nIn this paper, we explore an unsupervised approach, called Conditional Masked Language Modeling (CMLM), to effectively learn sentence representations from large scale unlabeled corpora. CMLM integrates sentence representation learning into MLM training by conditioning on sentence level representations produced by adjacent sentences. The model therefore needs to learn effective sentence representations in order to perform good MLM. Since CMLM is fully unsupervised, it can be easily extended to new languages. We explore CMLM for both English and multilingual sentence embeddings for 100+ languages. Our English CMLM model achieves state-of-the-art performance on SentEval (Conneau & Kiela, 2018), even outperforming models learned using (semi-)supervised signals. Moreover, models training on the English Amazon review data using our multilingual vectors exhibit strong multilingual transfer performance on translations of the Amazon review evaluation data to French, German and Japanese, outperforming existing multilingual sentence embedding models by > 5% for non-English languages and by > 2% on the original English data.\nWe further extend the multilingual CMLM to co-training with parallel text (bitext) retrieval task, and finetuning with cross-lingual natural language inference (NLI) data, inspired by the success of prior work on multitask sentence representation learning (Subramanian et al., 2018; Yang et al., 2019a)\nand NLI learning (Conneau et al., 2017; Reimers & Gurevych, 2019). We achieve performance 1.4% better than the previous state-of-the-art multilingual sentence representation model (see details in section 4.2). Language agnostic representations require semantically similar cross-lingual pairs to be closer in representation space than unrelated same-language pairs (Roy et al., 2020). While we find our original sentence embeddings do have a bias for same language sentences, we discover that removing the first few principal components of the embeddings eliminates the self language bias.\nThe rest of the paper is organized as follows. Section 2 describes the architecture for CMLM unsupervised learning. In Section 3 we present CMLM trained on English data and evaluation results on SentEval. 
In Section 4 we apply CMLM to learn multilingual sentence representations. Multitask training strategies for effectively combining CMLM, bitext retrieval and cross-lingual NLI finetuning are explored. In Section 5, we investigate self language bias in multilingual representations and how to eliminate it.\nThe contributions of this paper can be summarized as follows: (1) A novel pre-training technique, CMLM, for unsupervised sentence representation learning on unlabeled corpora (either monolingual or multilingual). (2) An effective multitask training framework, which combines the unsupervised CMLM task with supervised bitext retrieval and cross-lingual NLI finetuning. (3) An evaluation benchmark for multilingual sentence representations. (4) A simple and effective algebraic method to remove self language bias in multilingual representations." }, { "heading": "2 CONDITIONAL MASKED LANGUAGE MODELING", "text": "We introduce Conditional Masked Language Modeling (CMLM) as a novel architecture for combining next sentence prediction with MLM training. By “conditional,” we mean the MLM task for one sentence depends on the encoded sentence level representation of the adjacent sentence. This builds on prior work on next sentence prediction that has been widely used for learning sentence level representations (Kiros et al., 2015; Logeswaran & Lee, 2018b; Cer et al., 2018; Yang et al., 2019a), but has thus far produced poor quality sentence embeddings within BERT based models (Reimers & Gurevych, 2019).\nWhile existing MLMs like BERT include next sentence prediction tasks, they do so without any inductive bias to try to encode the meaning of a sentence within a single embedding vector. We introduce a strong inductive bias for learning sentence embeddings by structuring the task as follows. Given a pair of ordered sentences, the first sentence is fed to an encoder that produces a sentence level embedding. The embedding is then provided to an encoder that conditions on the sentence embedding in order to better perform MLM prediction over the second sentence. This is notably similar to Skip-Thought (Kiros et al., 2015), but replaces the generation of the complete second sentence with the MLM denoising objective. It is also similar to T5’s MLM inspired unsupervised encoder-decoder objective (Raffel et al., 2019), with the second encoder acting as a sort of decoder given the representation produced for the first sentence. Our method critically differs from T5’s in that a sentence embedding bottleneck is used to pass information between two model components, and in that the task involves denoising a second sentence when conditioning on the first rather than denoising a single text stream.\nFig. 1 illustrates the architecture of our model. The first sentence s1 is tokenized and input to a transformer encoder, and a sentence vector v ∈ R^d is computed from the sequence outputs by average pooling.1 The sentence vector v is then projected into N spaces, with one of the projections being the identity mapping, i.e. v_p = P(v) ∈ R^{d×N}. Here we use a three-layer MLP as the projection P(·). The second sentence s2 is then masked following the procedure described in the original BERT paper, including random replacement and the use of unchanged tokens. The second encoder shares the same weights with the encoder used to embed s1.2 Tokens in the masked s2 are converted into word vectors and concatenated with v_p (a minimal sketch of this conditioning step follows).
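The sketch below is our own reading of Fig. 1, not the authors' implementation: all sizes are illustrative, positional embeddings are omitted, the identity mapping among the N projections is not enforced, and prepending v_p along the sequence axis is an assumption about how the concatenation is realized.

```python
import torch
import torch.nn as nn

class CMLM(nn.Module):
    def __init__(self, vocab_size=30522, d_model=768, n_proj=15):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=12, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=12)  # shared
        self.project = nn.Sequential(                # three-layer MLP P(.)
            nn.Linear(d_model, d_model), nn.ReLU(),
            nn.Linear(d_model, d_model), nn.ReLU(),
            nn.Linear(d_model, d_model * n_proj))
        self.mlm_head = nn.Linear(d_model, vocab_size)
        self.n_proj = n_proj

    def forward(self, s1_ids, s2_masked_ids):
        # Sentence vector v in R^d: average pooling of s1's sequence outputs.
        v = self.encoder(self.embed(s1_ids)).mean(dim=1)          # (B, d)
        # Project v into N vectors, v_p = P(v) in R^{d x N}.
        vp = self.project(v).view(v.size(0), self.n_proj, -1)     # (B, N, d)
        # Condition the MLM on s1: prepend v_p to s2's token embeddings.
        x = torch.cat([vp, self.embed(s2_masked_ids)], dim=1)
        h = self.encoder(x)
        # Predict the masked tokens of s2 (drop the N conditioning slots).
        return self.mlm_head(h[:, self.n_proj:, :])
```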
The concatenated representations are provided to the transformer encoder to predict the masked tokens in s2. At inference time, we keep the first encoding module and discard the subsequent MLM prediction. In Section 5.2, we explore various different configurations of CMLM, including the number of projection spaces, and how the projected vectors are connected to the embeddings of the second sentence." }, { "heading": "3 LEARNING ENGLISH SENTENCE REPRESENTATIONS WITH CMLM", "text": "For training English sentence encoders with CMLM, we use three Common Crawl dumps. The data are filtered by a classifier which detects whether a sentence belongs to the main content of the web page or not. We use WordPiece tokenization, and the vocabulary is the same as the public English uncased BERT. In order to enable the model to learn bidirectional information, for two consecutive sequences s1 and s2, we swap their order 50% of the time. This order-swapping process echoes the preceding- and succeeding-sentence prediction in Skip-Thought (Kiros et al., 2015). The lengths of s1 and s2 are set to 256 tokens (the maximum length). The number of masked tokens in s2 is 80 (31.3%), moderately higher than in classical BERT. This change in the ratio of masked tokens is to make the task more challenging, due to the fact that in CMLM, language modeling has access to extra information from adjacent sentences. We train with a batch size of 2048 for 1 million steps. The optimizer is LAMB with a learning rate of 10^-3, β1 = 0.9, β2 = 0.999, warm-up in the first 10,000 steps, and linear decay afterwards. We explore two transformer configurations, base and large, the same as in the original BERT paper. The number of projections N is set to 15 after experimenting with multiple choices." }, { "heading": "3.1 EVALUATION", "text": "We evaluate the sentence representations on the following tasks: (1) classification: MR (movie reviews, Pang & Lee (2005)), binary SST (sentiment analysis, Socher et al. (2013)), TREC (question-type, Voorhees & Tice (2000)), CR (product reviews, Hu & Liu (2004)), SUBJ (subjectivity/objectivity, Pang & Lee (2004)). (2) Entailment: SNLI (Bowman et al., 2015) and the SICK dataset for entailment (SICK-E, Marelli et al. (2014)). The evaluation is done using SentEval (Conneau & Kiela, 2018), which is a prevailing evaluation toolkit for sentence embeddings. The classifier for the downstream tasks is logistic regression. For each task, the encoder and embeddings are fixed and only downstream neural structures are trained.\nThe baseline sentence embedding models include SkipThought (Kiros et al., 2015), InferSent (Conneau et al., 2017), USE (Cer et al., 2018), QuickThought (Logeswaran & Lee, 2018a), English BERT using standard pre-trained models from the TensorFlow Hub website (Devlin et al., 2019), and SBERT (Reimers & Gurevych, 2019). To evaluate the possible improvements coming from training data and processes, we train standard BERT models (English BERT base/large (CC)) on the same Common Crawl corpora that CMLM is trained on. Similarly, we also train QuickThought, a competitive unsupervised sentence representation learning model, on the same Common Crawl corpora (denoted as “QuickThought (CC)”). To further address the possible advantage from using a Transformer encoder, we use a Transformer encoder as the sentence encoder in QuickThought (CC).
The representations for BERT are computed by average pooling of the sequence outputs (we also explore options including the [CLS] vector and max pooling, and the results are available in the appendix).\n1 One can equivalently choose other pooling methods, such as max pooling, or use the vector output corresponding to a special token position such as the [CLS] token.\n2 The dual-encoder sharing encoder weights for different inputs can also be referred to as a “siamese encoder”." }, { "heading": "3.2 RESULTS", "text": "Evaluation results are presented in Table 1. CMLM outperforms existing models overall, besting MLM (both English BERT and English BERT (CC)) using both base and large configurations. The closest competing model is SBERT, which uses supervised NLI data rather than a purely unsupervised approach. Interestingly, CMLM outperforms SBERT on the SICK-E NLI task." }, { "heading": "4 LEARNING MULTILINGUAL SENTENCE REPRESENTATIONS WITH CMLM", "text": "As a fully unsupervised method, CMLM can be conveniently extended to multilingual modeling, even for less well-resourced languages. Learning good multilingual sentence representations is more challenging than learning monolingual ones, especially when attempting to capture the semantic alignment between different languages. As CMLM does not explicitly address cross-lingual alignment, we explore several modeling approaches besides CMLM: (1) Co-training CMLM with a bitext retrieval task; (2) Fine-tuning with cross-lingual NLI data." }, { "heading": "4.1 MULTILINGUAL CMLM", "text": "We follow the same configuration used to learn English sentence representations with CMLM, but extend the training data to include more languages. Results below will show that CMLM again exhibits competitive performance as a general technique to learn from large scale unlabeled corpora." }, { "heading": "4.2 MULTITASK TRAINING WITH CMLM AND BITEXT RETRIEVAL", "text": "Besides the monolingual pretraining data, we collect a dataset of bilingual translation pairs {(s_i, t_i)} using a bitext mining system (Feng et al., 2020). The source sentences {s_i} are in English and the target sentences {t_i} cover over 100 languages. We build a retrieval task with the parallel translation data, identifying the corresponding translation of the input sentence from candidates in the same batch. Concretely, incorporating Additive Margin Softmax (Yang et al., 2019b), we compute the bitext retrieval loss L^s_{br} for the source sentences as:\nL^s_{br} = -\frac{1}{B} \sum_{i=1}^{B} \log \frac{e^{\phi(s_i, t_i) - m}}{e^{\phi(s_i, t_i) - m} + \sum_{j=1, j \neq i}^{B} e^{\phi(s_i, t_j)}} \quad (1)\nAbove, \phi(s_i, t_j) denotes the inner product of the sentence vectors of s_i and t_j (embedded by the transformer encoder); m and B denote the additive margin and the batch size, respectively. Note the way to generate sentence embeddings is the same as in CMLM. We can compute the bitext retrieval loss for the target sentences L^t_{br} by normalizing over source sentences, rather than target sentences, in the denominator.3 The final bitext retrieval loss L_{br} is given as L_{br} = L^s_{br} + L^t_{br}. There are several ways to incorporate the monolingual CMLM task and bitext retrieval (BR). We explore the following multistage and multitask pretraining strategies:\nS1. CMLM+BR: Train with both CMLM and BR from the start; S2. CMLM → BR: Train with CMLM in the first stage and then train with BR; S3. CMLM →
CMLM+BR: Train with only CMLM in the first stage and then with both tasks.\nWhen training with both CMLM and BR, the optimization loss is a weighted sum of the language modeling loss and the retrieval loss L_{br}, i.e. L = L_{CMLM} + αL_{br}. We empirically find α = 0.2 works well. As shown in Table 3, S3 is found to be the most effective. Unless otherwise denoted, our models trained with CMLM and BR follow S3. We also discover that given a pre-trained transformer encoder, e.g. mBERT, we can improve the quality of sentence representations by finetuning the transformer encoder with CMLM and BR. As shown in Table 2 and Table 3, the improvements between “mBERT” and “f-mBERT” (finetuned mBERT) are significant." }, { "heading": "4.3 FINETUNING WITH CROSS-LINGUAL NATURAL LANGUAGE INFERENCE", "text": "Finetuning with NLI data has proved to be an effective method to improve the quality of embeddings for English models. We extend this to the multilingual domain. Given a premise sentence u and a hypothesis sentence v, we train a 3-way classifier on the concatenation of [u, v, |u − v|, u ∗ v]. Weights of the transformer encoders are also updated in the finetuning process. Different from previous work also using multilingual NLI data (Yang et al., 2019a), the premise u and hypothesis v here are in different languages. The cross-lingual NLI data are generated by translating the Multi-Genre NLI Corpus (Williams et al., 2018) into 14 languages using the Google Translate API." }, { "heading": "4.4 CONFIGURATIONS", "text": "Monolingual training data for CMLM are generated from 3 versions of Common Crawl data in 113 languages. The data cleaning and filtering are the same as for the English-only data. A new cased vocabulary is built from all data sources using the WordPiece vocabulary generation library from TensorFlow Text. The language smoothing exponent of the vocab generation tool is set to 0.3, as the distribution of data size for each language is imbalanced. The final vocabulary size is 501,153. The number of projections N is set to 15, the batch size B is 2048, and the additive margin m is 0.3. For CMLM-only pretraining, the number of steps is 2 million. In multitask learning, for S1 and S3, the first stage is 1.5 million steps and the second stage is 1 million steps; for S2, the number of training steps is 2 million. The transformer encoder uses the BERT base configuration. The initial learning rate and optimizer chosen are the same as for the English models." }, { "heading": "4.5 EVALUATIONS", "text": "" }, { "heading": "4.5.1 XEVAL: MULTILINGUAL BENCHMARKS FOR SENTENCE REPRESENTATIONS EVALUATION", "text": "Evaluations in the previous multilingual literature focused on the cross-lingual transfer learning ability from English to other languages. However, this evaluation protocol, which treats English as the “anchor”, does not assess the quality of non-English sentence representations on an equal footing with English ones. In order to address the issue, we prepare a new benchmark for multilingual sentence vectors, XEVAL, by translating SentEval (English) into 14 other languages with an industrial translation API.\nResults of models trained with monolingual data are shown in Table 2. Baseline models include mBERT (Devlin et al., 2019), XLM-R (Ruder et al., 2019), and a transformer encoder trained with MLM on the same Common Crawl data (MLM(CC); again, this is to control for the effects of training data).
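As an illustration of Eq. (1) and the combined objective L = L_{CMLM} + αL_{br}, here is a minimal sketch (our own, with illustrative names; it follows the text above in using raw inner products as φ and assumes the sentence embeddings have already been computed by the shared encoder):

```python
import torch
import torch.nn.functional as F

def bitext_retrieval_loss(src_emb, tgt_emb, margin=0.3):
    """In-batch additive-margin softmax retrieval loss (cf. Eq. (1)).

    src_emb, tgt_emb: (B, d) embeddings of translation pairs, where row i
    of src_emb and row i of tgt_emb are translations of each other.
    """
    sim = src_emb @ tgt_emb.t()                 # phi(s_i, t_j): inner products
    # Subtract the additive margin m from the matching (diagonal) pairs only.
    sim = sim - margin * torch.eye(sim.size(0), device=sim.device)
    labels = torch.arange(sim.size(0), device=sim.device)
    # Normalize over targets (L^s_br) and over sources (L^t_br), then sum.
    return F.cross_entropy(sim, labels) + F.cross_entropy(sim.t(), labels)

# Combined multitask objective (alpha = 0.2 in the paper):
# loss = loss_cmlm + 0.2 * bitext_retrieval_loss(src_emb, tgt_emb)
```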
The method to produce sentence representations for mBERT and XLM-R is chosen to be average pooling, after exploring options including [CLS] representations and max pooling. The multilingual CMLM model trained on monolingual data outperforms all baselines in 12 out of 15 languages and in average performance.\n3 i.e., by swapping the i and j subscripts in the last term of the denominator.\nResults of models trained with cross-lingual data are presented in Table 3. Baseline models for comparison include LASER (Artetxe & Schwenk (2019), trained with parallel data) and multilingual USE ((Yang et al., 2019a), trained with cross-lingual NLI). Our model (S3) outperforms LASER in 13 out of 15 languages. Notably, finetuning with NLI in the cross-lingual way produces a significant improvement (S3 + NLI vs. S3), and it also outperforms mUSE by significant margins.4 Multitask learning with CMLM and BR can also be used to increase the performance of pretrained encoders, e.g. mBERT. mBERT trained with CMLM and BR (f-mBERT) improves significantly upon mBERT." }, { "heading": "4.5.2 AMAZON REVIEWS", "text": "We also conduct a zero-shot transfer learning evaluation on the Amazon reviews dataset (Prettenhofer & Stein, 2010). Following Chidambaram et al. (2019), the original dataset is converted to a classification benchmark by treating reviews with strictly more than 3 stars as positive and the rest as negative. We split the 6000 English reviews in the original training set into 90% for training and 10% for development. The two-way classifier, built upon the concatenation of [u, v, |u − v|, u ∗ v] (following previous works, e.g. Reimers & Gurevych (2019)), is trained on the English training set and then evaluated on the English, French, German, and Japanese test sets (each has 6000 examples). Note the same multilingual encoder and classifier are used for all the evaluations. We also experiment with freezing or not freezing the encoder weights during training. As presented in Table 4, CMLM alone already outperforms the baseline models. Training with BR and cross-lingual NLI finetuning further boosts the performance." }, { "heading": "4.6 TATOEBA: SEMANTIC SEARCH", "text": "To directly assess the ability of our models to capture semantics, we test on the Tatoeba dataset proposed in Artetxe & Schwenk (2019). The Tatoeba dataset includes up to 1,000 English-aligned sentence pairs for each evaluated language. The task is to find the nearest neighbor for the query sentence in the other language by cosine similarity. The experiments are conducted on the 36 languages selected as in the XTREME benchmark (Hu et al., 2020), and the evaluation metric is retrieval accuracy. We test our models with the configurations CMLM+BR and CMLM+BR+NLI. Baselines (results collected from Hu et al. (2020); Artetxe & Schwenk (2019)) include mBERT, LASER, XLM, and XLM-R.\n4 Note mUSE only supports 16 languages; the best CMLM model is still significantly better when considering only the mUSE-supported languages (underlining in Table 2 indicates the languages unsupported by mUSE).\nResults are presented in Table 5. Our model CMLM+BR outperforms all baseline models in 30 out of 36 languages and has the highest average performance. One interesting observation is that finetuning with NLI actually undermines the model performance on semantic search, in contrast to the significant improvements from CMLM+BR to CMLM+BR+NLI on XEVAL (Table 3). We speculate this is because, unlike semantic search, NLI inference is not a linear process.
Finetuning with cross-lingual NLI is not expected to help the linear retrieval by nearest neighbor search." }, { "heading": "5 ANALYSIS", "text": "" }, { "heading": "5.1 LANGUAGE AGNOSTIC PROPERTIES", "text": "Language agnosticism has been a property of great interest for multilingual representations. However, there has not been a quantitative measurement or a rigorous definition for this property. Here we propose that “language agnostic” refers to the property that sentence representations are neutral w.r.t. their language. For example, two sentences with similar semantics should be close in embedding space whether or not they are in the same language. Another case is that given one query sentence in language l1 and two candidate sentences with identical meanings (different from the query sentence) in languages l1 and l2, the query should not be biased towards the l1 candidate sentence. To capture this intuition, we convert the PAWS-X dataset (Yang et al., 2019c) to a retrieval task to measure the language agnostic property. Specifically, the PAWS-X dataset consists of English sentences and their translations into six other languages (x-axis labels in Fig. 2). Given a query, we inspect the language distribution of the retrieved sentences (by ranking cosine similarities). In Fig. 2, query sentences are in German, French, and Chinese for each row. Representations of mBERT (first row) have a strong self language bias, i.e. sentences in the language matching the query are dominant. In contrast, the bias is much weaker in our model trained with CMLM and BR (the third column), probably due to the cross-lingual retrieval pretraining. We discover that removing the first principal component of each monolingual space from the sentence representations effectively eliminates the self language bias. As shown in the second and the fourth columns in Fig. 2, with principal component removal (PCR), the language distribution is much more uniform. We further explore PCR by experimenting on the Tatoeba dataset. Table 5 shows the retrieval accuracy of the multilingual models with and without PCR. PCR increases the overall retrieval performance for both models. This suggests the first principal component in each monolingual space primarily encodes language identification information.\nWe also visualize the sentence representations in the Tatoeba dataset in Fig. 3. Our model (the first row) shows both weak and strong semantic alignment (Roy et al., 2020). Representations are close to others with similar semantics regardless of their languages (strong alignment), especially for French and Russian, where representations form several distinct clusters. Also, representations from the same language tend to cluster (weak alignment). In contrast, representations from mBERT generally exhibit only weak alignment." }, { "heading": "5.2 ABLATION STUDY", "text": "In this section, we explore different configurations of CMLM, including the number of projection spaces N and the CMLM architecture. As shown in Table 7, projecting the sentence vector into N = 15 spaces produces the highest overall performance. We also try a modification to the CMLM architecture. Besides the concatenation with the token embeddings of s2 before input to the transformer encoder, the projected
For example, one way is to use the average of projected vectors, i.e. vs = 1N P i v (i) p . Recall v (i) p is the ith projection. This version is denoted as “proj” in Table 7. Sentence representations produced in this way still yield competitive performance, which further confirm the effectiveness of the projection." }, { "heading": "6 CONCLUSION", "text": "We present a novel sentence representation learning method Conditional Masked Language Modeling (CMLM) for training on large scale unlabeled corpus. CMLM outperforms the previous state-ofthe-art English sentence embeddings models, including those trained with (semi-)supervised signals. For multilingual representations learning, we discover that co-training CMLM with bitext retrieval and cross-lingual NLI finetuning achieves state-of-the-art performance. We also discover that multilingual representations have the same language bias and principal component removal (PCR) can eliminate the bias by separating language identity information from semantics." } ]
2020
null
SP:14a10829b5d4b5fcdf1c02720b767e6af2733a48
[ "This paper studies the transferability in multi-task learning. They propose a metric, transference, to evaluate how tasks affect each other during multi-task training, and a method called IT-MTL which utilizes this metric to compute and improve lookahead loss changes. Although the proposed metric and method are interesting from a scientific point of view, there are a few key downsides (as the author themselves summarized in the conclusion) that require further investigation/improvements.", "[Summary] This paper studies the problem of task relationship/transference in multi-task learning, by introducing a quantifiable measurement based on relative loss updates. A (nonsymmetric) task transference between task $i$ and task $j$ then can be computed by measuring the relative change of training loss of task $j$, with the updated shared parameters from the training loss of task $i$. By finding a subset of tasks that achieves maximal total transference over every single task, multi-task learning performance can be further improved." ]
Multi-task learning can leverage information learned by one task to benefit the training of other tasks. Despite this capacity, naïve formulations often degrade performance and, in particular, identifying the tasks that would benefit from co-training remains a challenging design question. In this paper, we analyze the dynamics of information transfer, or transference, across tasks throughout training. Specifically, we develop a similarity measure that can quantify transference among tasks and use this quantity to both better understand the optimization dynamics of multi-task learning as well as improve overall learning performance. In the latter case, we propose two methods to leverage our transference metric. The first operates at a macro-level by selecting which tasks should train together while the second functions at a micro-level by determining how to combine task gradients at each training step. We find these methods can lead to significant improvement over prior work on three supervised multi-task learning benchmarks and one multi-task reinforcement learning paradigm.
[ { "affiliations": [], "name": "HARNESSING TRANSFERENCE" } ]
[ { "authors": [ "Martı́n Abadi", "Paul Barham", "Jianmin Chen", "Zhifeng Chen", "Andy Davis", "Jeffrey Dean", "Matthieu Devin", "Sanjay Ghemawat", "Geoffrey Irving", "Michael Isard" ], "title": "Tensorflow: A system for largescale machine learning", "venue": "In 12th {USENIX} symposium on operating systems design and implementation ({OSDI}", "year": 2016 }, { "authors": [ "Alessandro Achille", "Michael Lam", "Rahul Tewari", "Avinash Ravichandran", "Subhransu Maji", "Charless C Fowlkes", "Stefano Soatto", "Pietro Perona" ], "title": "Task2vec: Task embedding for meta-learning", "venue": "In Proceedings of the IEEE International Conference on Computer Vision, pp. 6430–6439,", "year": 2019 }, { "authors": [ "Alessandro Achille", "Giovanni Paolini", "Glen Mbeng", "Stefano Soatto" ], "title": "The information complexity of learning tasks, their structure and their distance", "venue": "arXiv preprint arXiv:1904.03292,", "year": 2019 }, { "authors": [ "Jonathan Baxter" ], "title": "A model of inductive bias learning", "venue": "Journal of artificial intelligence research,", "year": 2000 }, { "authors": [ "Shai Ben-David", "Reba Schuller" ], "title": "Exploiting task relatedness for multiple task learning", "venue": "In Learning Theory and Kernel Machines,", "year": 2003 }, { "authors": [ "Joachim Bingel", "Anders Søgaard" ], "title": "Identifying beneficial task relations for multi-task learning in deep neural networks", "venue": "arXiv preprint arXiv:1702.08303,", "year": 2017 }, { "authors": [ "Lukas Brinkmeyer", "Rafael Rego Drumond", "Randolf Scholz", "Josif Grabocka", "Lars SchmidtThieme" ], "title": "Chameleon: Learning model initializations across tasks with different schemas", "venue": null, "year": 1909 }, { "authors": [ "Rich Caruana" ], "title": "Multitask learning", "venue": "Machine Learning,", "year": 1997 }, { "authors": [ "Richard Caruana" ], "title": "Multitask learning: A knowledge-based source of inductive bias", "venue": "In Proceedings of the Tenth International Conference on Machine Learning,", "year": 1993 }, { "authors": [ "Zhao Chen", "Vijay Badrinarayanan", "Chen-Yu Lee", "Andrew Rabinovich" ], "title": "Gradnorm: Gradient normalization for adaptive loss balancing in deep multitask networks", "venue": "arXiv preprint arXiv:1711.02257,", "year": 2017 }, { "authors": [ "Zhao Chen", "Jiquan Ngiam", "Yanping Huang", "Thang Luong", "Henrik Kretzschmar", "Yuning Chai", "Dragomir Anguelov" ], "title": "Just pick a sign: Optimizing deep multitask models with gradient sign dropout", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "François Chollet" ], "title": "Keras: The python deep learning library", "venue": "ascl, pp. 
ascl–1806,", "year": 2018 }, { "authors": [ "Long Duong", "Trevor Cohn", "Steven Bird", "Paul Cook" ], "title": "Low resource dependency parsing: Crosslingual parameter sharing in a neural network parser", "venue": "In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume", "year": 2015 }, { "authors": [ "Kshitij Dwivedi", "Gemma Roig" ], "title": "Representation similarity analysis for efficient task taxonomy & transfer learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In Proceedings of the 34th International Conference on Machine LearningVolume", "year": 2017 }, { "authors": [ "Erin Grant", "Chelsea Finn", "Sergey Levine", "Trevor Darrell", "Thomas Griffiths" ], "title": "Recasting gradientbased meta-learning as hierarchical bayes, 2018", "venue": null, "year": 2018 }, { "authors": [ "Michelle Guo", "Albert Haque", "De-An Huang", "Serena Yeung", "Li Fei-Fei" ], "title": "Dynamic task prioritization for multitask learning", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Pengsheng Guo", "Chen-Yu Lee", "Daniel Ulbricht" ], "title": "Learning to branch for multi-task learning", "venue": "arXiv preprint arXiv:2006.01895,", "year": 2020 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Pieter Abbeel", "Sergey Levine" ], "title": "Soft actor-critic: Offpolicy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "arXiv preprint arXiv:1801.01290,", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Geoffrey E Hinton", "David C Plaut" ], "title": "Using fast weights to deblur old memories", "venue": "In Proceedings of the ninth annual conference of the Cognitive Science Society,", "year": 1987 }, { "authors": [ "Pavel Izmailov", "Dmitrii Podoprikhin", "Timur Garipov", "Dmitry Vetrov", "Andrew Gordon Wilson" ], "title": "Averaging weights leads to wider optima and better generalization", "venue": "arXiv preprint arXiv:1803.05407,", "year": 2018 }, { "authors": [ "Rie Johnson", "Tong Zhang" ], "title": "Accelerating stochastic gradient descent using predictive variance reduction", "venue": "In Advances in neural information processing systems,", "year": 2013 }, { "authors": [ "Alex Kendall", "Yarin Gal", "Roberto Cipolla" ], "title": "Multi-task learning using uncertainty to weigh losses for scene geometry and semantics", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Taesup Kim", "Jaesik Yoon", "Ousmane Dia", "Sungwoong Kim", "Yoshua Bengio", "Sungjin Ahn" ], "title": "Bayesian model-agnostic meta-learning", "venue": "arXiv preprint arXiv:1806.03836,", "year": 2018 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document 
recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Xi Lin", "Hui-Ling Zhen", "Zhenhua Li", "Qing-Fu Zhang", "Sam Kwong" ], "title": "Pareto multi-task learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Shikun Liu", "Edward Johns", "Andrew J Davison" ], "title": "End-to-end multi-task learning with attention", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Ziwei Liu", "Ping Luo", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Deep learning face attributes in the wild", "venue": "IEEE International Conference on Computer Vision (ICCV),", "year": 2015 }, { "authors": [ "Yongxi Lu", "Abhishek Kumar", "Shuangfei Zhai", "Yu Cheng", "Tara Javidi", "Rogerio Feris" ], "title": "Fullyadaptive feature sharing in multi-task networks with applications in person attribute classification", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Ishan Misra", "Abhinav Shrivastava", "Abhinav Gupta", "Martial Hebert" ], "title": "Cross-stitch networks for multi-task learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 1983 }, { "authors": [ "Alex Nichol", "Joshua Achiam", "John Schulman" ], "title": "On first-order meta-learning algorithms", "venue": "arXiv preprint arXiv:1803.02999,", "year": 2018 }, { "authors": [ "Sebastian Ruder" ], "title": "An overview of multi-task learning in deep neural networks", "venue": "arXiv preprint arXiv:1706.05098,", "year": 2017 }, { "authors": [ "Ozan Sener", "Vladlen Koltun" ], "title": "Multi-task learning as multi-objective optimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Nathan Silberman", "Derek Hoiem", "Pushmeet Kohli", "Rob Fergus" ], "title": "Indoor segmentation and support inference from rgbd images", "venue": "In European conference on computer vision,", "year": 2012 }, { "authors": [ "Trevor Standley", "Amir R Zamir", "Dawn Chen", "Leonidas Guibas", "Jitendra Malik", "Silvio Savarese" ], "title": "Which tasks should be learned together in multi-task learning", "venue": null, "year": 1905 }, { "authors": [ "Ximeng Sun", "Rameswar Panda", "Rogerio Feris" ], "title": "Adashare: Learning what to share for efficient deep multi-task learning", "venue": "arXiv preprint arXiv:1911.12423,", "year": 2019 }, { "authors": [ "Mihai Suteu", "Yike Guo" ], "title": "Regularizing deep multi-task networks using orthogonal gradients", "venue": "arXiv preprint arXiv:1912.06844,", "year": 2019 }, { "authors": [ "Grzegorz Swirszcz", "Aurelie C Lozano" ], "title": "Multi-level lasso for sparse multi-task regression", "venue": "In Proceedings of the 29th International Conference on Machine Learning", "year": 2012 }, { "authors": [ "Simon Vandenhende", "Stamatios Georgoulis", "Bert De Brabandere", "Luc Van Gool" ], "title": "Branched multi-task networks: deciding what layers to share", "venue": null, "year": 1904 }, { "authors": [ "Zirui Wang", "Zachary C Lipton", "Yulia Tsvetkov" ], "title": "On negative interference in multilingual models: Findings and a meta-learning treatment", "venue": "arXiv preprint arXiv:2010.03017,", "year": 2020 }, { "authors": [ "Zirui Wang", "Yulia Tsvetkov", "Orhan Firat", "Yuan Cao" ], "title": "Gradient vaccine: Investigating and improving multi-task optimization in massively multilingual models", "venue": "arXiv 
preprint arXiv:2010.05874,", "year": 2020 }, { "authors": [ "Sen Wu", "Hongyang R Zhang", "Christopher Ré" ], "title": "Understanding and improving information transfer in multi-task learning", "venue": "arXiv preprint arXiv:2005.00944,", "year": 2020 }, { "authors": [ "Han Xiao", "Kashif Rasul", "Roland Vollgraf" ], "title": "Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms", "venue": "arXiv preprint arXiv:1708.07747,", "year": 2017 }, { "authors": [ "Yongxin Yang", "Timothy M. Hospedales" ], "title": "Trace norm regularised deep multi-task learning", "venue": "arXiv preprint arXiv:1606.04038,", "year": 2016 }, { "authors": [ "Tianhe Yu", "Deirdre Quillen", "Zhanpeng He", "Ryan Julian", "Karol Hausman", "Chelsea Finn", "Sergey Levine" ], "title": "Meta-world: A benchmark and evaluation for multi-task and meta reinforcement learning, 2019", "venue": null, "year": 2019 }, { "authors": [ "Tianhe Yu", "Saurabh Kumar", "Abhishek Gupta", "Sergey Levine", "Karol Hausman", "Chelsea Finn" ], "title": "Gradient surgery for multi-task learning", "venue": "arXiv preprint arXiv:2001.06782,", "year": 2020 }, { "authors": [ "Amir R. Zamir", "Alexander Sax", "William Shen", "Leonidas Guibas", "Jitendra Malik", "Silvio Savarese" ], "title": "Taskonomy: Disentangling task transfer learning", "venue": "IEEE/CVF", "year": 2018 }, { "authors": [ "Michael Zhang", "James Lucas", "Jimmy Ba", "Geoffrey E Hinton" ], "title": "Lookahead optimizer: k steps forward, 1 step back", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Yu Zhang", "Qiang Yang" ], "title": "A survey on multi-task learning", "venue": "arXiv preprint arXiv:1707.08114,", "year": 2017 }, { "authors": [ "Xiangyun Zhao", "Haoxiang Li", "Xiaohui Shen", "Xiaodan Liang", "Ying Wu" ], "title": "A modulation module for multi-task learning with applications in image retrieval", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Fuzhen Zhuang", "Zhiyuan Qi", "Keyu Duan", "Dongbo Xi", "Yongchun Zhu", "Hengshu Zhu", "Hui Xiong", "Qing He" ], "title": "A comprehensive survey on transfer learning", "venue": "Proceedings of the IEEE,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deciding if two or more objectives should be trained together in a multi-task model, as well as choosing how that model’s parameters should be shared, is an inherently complex issue often left to human experts (Zhang & Yang, 2017). However, a human’s understanding of similarity is motivated by their intuition and experience rather than a prescient knowledge of the underlying structures learned by a neural network. To further complicate matters, the benefit or detriment induced from co-training relies on many non-trivial decisions including, but not limited to, dataset characteristics, model architecture, hyperparameters, capacity, and convergence (Wu et al., 2020; Vandenhende et al., 2019; Standley et al., 2019; Sun et al., 2019). As a result, a quantifiable measure which conveys the effect of information transfer in a neural network would be valuable to practitioners and researchers alike to better construct or understand multi-task learning paradigms (Baxter, 2000; Ben-David & Schuller, 2003).
The training dynamics specific to multitask neural networks, namely cross-task interactions at the shared parameters (Zhao et al., 2018), are difficult to predict and only fully manifest at the completion of training. Given the cost, both with regards to time and resources, of fully training a deep neural network, an exhaustive search over the $2^m - 1$ possible combinations of $m$ tasks to determine ideal task groupings can be infeasible. This search is only complicated by the irreproducibility inherent in traversing a loss landscape with high curvature; an effect which appears especially pronounced in multi-task learning paradigms (Yu et al., 2020; Standley et al., 2019).
In this paper, we aim to take a step towards quantifying transference, or the dynamics of information transfer, and understanding its effect on multi-task training efficiency. As both the input data and state of model convergence are fundamental to transference (Wu et al., 2020), we develop a parameter-free approach to measure this effect at a per-minibatch level of granularity. Moreover, our quantity makes no assumptions regarding model architecture, and is applicable to any paradigm in which shared parameters are updated with respect to multiple task losses.
By analyzing multi-task training dynamics through the lens of transference, we present the following observations. First, information transfer is highly dependent on model convergence and varies significantly throughout training. Second, and perhaps surprisingly, excluding certain task gradients from the multi-task gradient update for select minibatches can improve learning efficiency. Our analysis suggests this is due to large variation in loss landscapes for different tasks as illustrated in Figure 4.
Figure 1: Transference in (a) CelebA for a subset of 9 attributes; (b) Meta-World for “push”, “reach”, “press button top”, and “open window”. To determine task groupings, we compute the transference of each task $i$ on all other tasks $j$, i.e. $\mathcal{Z}^t_{\{i\} \to j}$, and average over time. For the purpose of illustration, we normalize the transference along each axis. 
Notice the majority of the tasks in (a) concentrate around a single value for each attribute. Tasks which exhibit transference above this value are considered to have relatively high transference. For instance, A3 exhibits higher-than-average transference on A0, A4, and A5. A similar effect is observed in (b), with “close window” manifesting high transference on “push” and “reach”.
Building on these observations, we propose two methods to utilize transference in multitask learning algorithms: to choose which tasks to train together, and to determine which gradients to apply at each minibatch. Our experiments indicate the former can identify promising task groupings, while the latter can improve learning performance over prior methods.
In summary, our main contributions are three-fold: we (1) propose the first measure (to our knowledge) which quantifies information transfer among tasks in multi-task learning; (2) demonstrate how transference can be used as a heuristic to select task groupings; (3) present a method which leverages minibatch-level transference to augment network performance." }, { "heading": "2 RELATED WORK", "text": "Multi-Task Formulation. The most prevalent formulation of MTL is hard parameter sharing of hidden layers (Ruder, 2017; Caruana, 1993). In this design, a subset of the hidden layers is typically shared among all tasks, and task-specific layers are stacked on top of the shared base to output a prediction value. Each task is assigned a weight, and the loss of the entire model is a linear combination of each task’s loss multiplied by its respective loss weight. This particular design enables parameter efficiency by sharing hidden layers across tasks, reduces overfitting, and can facilitate transfer learning effects among tasks (Ruder, 2017; Baxter, 2000; Zhang & Yang, 2017).
Information Sharing. Prevailing wisdom suggests tasks which are similar or share a similar underlying structure may benefit from co-training in a multi-task system (Caruana, 1993; 1997). A plethora of multi-task methods addressing what to share have been developed, such as Neural Architecture Search (Guo et al., 2020; Sun et al., 2019; Vandenhende et al., 2019; Rusu et al., 2016; Huang et al., 2018; Lu et al., 2017) and Soft-Parameter Sharing (Misra et al., 2016; Duong et al., 2015; Yang & Hospedales, 2016), to improve multi-task performance. Though our measure of transference is complementary with these methods, we direct our focus towards which tasks should be trained together rather than architecture modifications to maximize the benefits of co-training.
While deciding which tasks to train together has traditionally been addressed with costly cross-validation techniques or high-variance human intuition, recent advances have developed increasingly efficient algorithms to assess co-training performance. Swirszcz & Lozano (2012) and Bingel & Søgaard (2017) approximate multi-task performance by analyzing single-task learning characteristics. An altogether different approach may leverage recent advances in transfer learning focused on understanding task relationships (Zamir et al., 2018; Achille et al., 2019b; Dwivedi & Roig, 2019; Zhuang et al., 2020; Achille et al., 2019a); however, Standley et al.
(2019) show that transfer learning algorithms which determine task similarity do not carry over to the multi-task learning domain, and instead propose a multi-task specific framework.
Training Dynamics. Significant effort has also been invested to improve the training dynamics of MTL systems. In particular, dynamic loss reweighing has achieved performance superior to using fixed loss weights found with extensive hyperparameter search (Kendall et al., 2018; Guo et al., 2018; Liu et al., 2019; Chen et al., 2017; Sener & Koltun, 2018; Lin et al., 2019). Another set of methods seeks to mitigate the optimization challenges in multi-task learning by manipulating the task gradients in a number of ways, such as (1) modifying the direction of task gradients with the underlying assumption that directional inconsistency of gradients on the shared parameters is detrimental to model convergence and performance (Zhao et al., 2018; Suteu & Guo, 2019), and (2) altering both the direction and the magnitude of the task gradients (Yu et al., 2020; Chen et al., 2020; Wang et al., 2020b). Instead of directly modifying the task gradients during optimization, our work builds upon these approaches by quantifying how a gradient update to the shared parameters would affect training loss and choosing the combination of gradients which maximizes positive information transfer.
Looking into the Future. Looking at what could happen to determine what should happen has been used extensively in both the meta-learning (Finn et al., 2017; Nichol et al., 2018; Brinkmeyer et al., 2019; Grant et al., 2018; Kim et al., 2018) and optimization domains (Nesterov, 1983; Hinton & Plaut, 1987; Zhang et al., 2019; Izmailov et al., 2018; Johnson & Zhang, 2013). Lookahead meta-learning algorithms focusing on validation loss have also been used to improve generalization in multi-task learning systems (Wang et al., 2020a), and our work further adapts this central concept to multi-task learning to both quantify and improve information transfer." }, { "heading": "3 TRANSFERENCE IN MULTI-TASK LEARNING", "text": "Within the context of a hard-parameter sharing paradigm, tasks collaborate to build a shared feature representation which is then specialized by individual task-specific heads to output a prediction. Accordingly, tasks implicitly transfer information to each other by updating this shared feature representation with successive gradient updates. We can then view transference, or information transfer in multi-task learning, as the effect of a task’s gradient update to the shared parameters on another task’s loss during training.
While the shared parameter update using one task’s gradient often, but not always, increases the losses of the other tasks in the network, we find the extent to which these losses change to be highly task specific. This indicates certain tasks interact more constructively than others. Further, we notice this effect to be reproducible and nearly unchanged across successive training runs with varying parameter initializations. Motivated by these observations, we derive a quantitative measure of transference, describe how it can be used to determine which tasks should be trained together, and provide empirical analysis of these claims. Later, we will build upon these ideas to derive a new multi-task learning algorithm."
}, { "heading": "3.1 MEASURING TRANSFERENCE", "text": "Consider an $m$-multitask loss function parameterized by $\{\theta_s\} \cup \{\theta_i \mid i \in [m]\}$, where $\theta_s$ represents the shared parameters and $\theta_i$ represents the parameters specific to task $i \in [m]$. Let
$$\mathcal{L}_{total}(\mathcal{X}, \theta_s, \{\theta_i\}) = \sum_{i \in [m]} \mathcal{L}_i(\mathcal{X}, \theta_s, \theta_i)$$
denote the total loss, where $\mathcal{L}_i$ represents the non-negative loss of task $i$. For simplicity of notation, we set the loss weight of each task to be equal to 1, though our construction generalizes to arbitrary weightings.
For a given training batch $\mathcal{X}^t$ at time-step $t$, we can first update the task-specific parameters $\{\theta_i^{t+1}\}$ using standard gradient updates. We can now define the quantity $\theta_{s|\xi}^{t+1}$ to represent the updated shared parameters after a gradient step with respect to the tasks in the non-empty subset $\emptyset \subset \xi \subseteq [m]$. Assuming SGD for simplicity, we have
$$\theta_{s|\xi}^{t+1} := \theta_s^t - \eta \sum_{i \in \xi} \nabla_{\theta_s} \mathcal{L}_i(\mathcal{X}^t, \theta_s^t, \theta_i^t)$$
(note that the backward pass is computed only once and the gradients are calculated at $\{\theta_s^t\} \cup \{\theta_i^t \mid i \in [m]\}$). This quantity allows us to calculate a lookahead loss using the updated shared parameters while keeping the task-specific parameters as well as the input batch unchanged across different subsets of task gradients. That is, in order to assess the effect of the gradient update of tasks in $\xi$ on a given task $j$, we can compare the loss of task $j$ before and after applying the gradient update on the shared parameters with respect to $\xi$. In order to eliminate the scale discrepancy among different task losses, we consider the ratio of a task’s loss before and after the gradient step on the shared parameters as a scale-invariant measure of relative progress. We can then define an asymmetric measure for calculating the transference of the tasks in $\xi$ at a given time-step $t$ on a single task $j$ as
$$\mathcal{Z}^t_{\xi \to j} = 1 - \frac{\mathcal{L}_j(\mathcal{X}^t, \theta_{s|\xi}^{t+1}, \theta_j^{t+1})}{\mathcal{L}_j(\mathcal{X}^t, \theta_s^t, \theta_j^t)}. \quad (1)$$
Notice that a positive value of $\mathcal{Z}^t_{\xi \to j}$ indicates that the update on the shared parameters results in a lower loss on task $j$ than the original parameter values, while a negative value indicates that the shared parameter update is antagonistic for this task’s performance. Also, note that for $\xi = \{j\}$, our definition of transference encompasses a notion of self-transference, i.e. the effect of a task’s gradient update on its own loss. This quantity is particularly useful as a baseline to determine whether a subset of gradient updates can result in improved performance when compared with a task’s own self-transference. As we discuss in the next section, transference provides an effective guideline for choosing the subset of tasks to train together in a multi-task setting.
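To make Eq. (1) concrete, the following is a minimal sketch of computing per-minibatch transference for a toy two-task model. This sketch is our illustration rather than the paper’s released code; the names (`task_loss`, `theta_s`, `heads`) are assumptions, and for brevity the task-specific parameters are held fixed during the lookahead (the first-order variant in Section 4.1 makes the same simplification):
```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy hard-parameter-sharing model: one shared linear trunk, one linear head per task.
d_in, d_hid, n_tasks, n_cls = 16, 32, 2, 4
theta_s = torch.randn(d_hid, d_in, requires_grad=True)                # shared parameters
heads = [torch.randn(n_cls, d_hid, requires_grad=True) for _ in range(n_tasks)]

def task_loss(x, y, shared_w, head_w):
    """Cross-entropy loss of one task, given explicit parameter tensors."""
    z = torch.relu(x @ shared_w.t()) @ head_w.t()
    return F.cross_entropy(z, y)

def transference(x, ys, xi, j, lr=0.1):
    """Z^t_{xi -> j} of Eq. (1): relative progress of task j after a lookahead
    SGD step on the shared parameters w.r.t. the losses of the tasks in xi."""
    g_s = sum(torch.autograd.grad(task_loss(x, ys[i], theta_s, heads[i]),
                                  theta_s)[0] for i in xi)
    theta_lookahead = theta_s - lr * g_s                              # candidate update
    before = task_loss(x, ys[j], theta_s, heads[j])
    after = task_loss(x, ys[j], theta_lookahead, heads[j])
    return (1.0 - after / before).item()

x = torch.randn(8, d_in)                          # one shared minibatch
ys = [torch.randint(0, n_cls, (8,)) for _ in range(n_tasks)]
print(transference(x, ys, xi=[0], j=1))           # effect of task 0's update on task 1
print(transference(x, ys, xi=[1], j=1))           # self-transference of task 1
```
The same routine evaluated with $\xi = \{j\}$ yields the self-transference baseline discussed above.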
" }, { "heading": "3.2 TASK GROUPINGS BASED ON TRANSFERENCE", "text": "Before using transference to develop a multi-task training augmentation, we aim to evaluate if our measure of transference is meaningful in practice. To do this, we empirically test whether transference is predictive of whether a group of tasks should be trained together. We consider two multi-task learning problems based on the CelebA dataset (Liu et al., 2015) and the Meta-World benchmark (Yu et al., 2019). Compiling transference scores into a radar chart, we use Figure 1 to identify groupings of tasks which exhibit beneficial or antagonistic transference. We then evaluate if our heuristic led us to select ideal task groupings by comparing against all other possible task groupings. Unlike prior approaches, our method requires only a single training run and is minimally complex, only making additional forward and backward passes through the network, which can be done in parallel.
We first consider a multi-task classification problem by selecting 9 attributes from the CelebA dataset (to avoid possible biases or implications conveyed by the definition of the attributes, we omit their names) and computing their transference when trained together in a single model. Specifically, we compute the transference of each task $i$ on all other tasks $j$ in the network, i.e. $\mathcal{Z}^t_{\{i\} \to j}$. While transference is computed at a per-minibatch level, we can average the transference across minibatches to compute an epoch-level transference metric. Integrating across the number of steps in training then provides us with an overall (scalar) transference score. Figure 1(a) shows the transference score among the 9 attributes in the CelebA dataset. For purposes of illustration, we normalize the transference scores on each task by dividing the values by the task’s self-transference. Thus, self-transference becomes 1 for all tasks.
As illustrated in Figure 1(a), two clusters of strong mutual transference manifest: (1) {A0, A3, A4, A5} and (2) {A6, A7, A8}. We draw this grouping by choosing subsets of tasks which induce relatively high mutual transference. For instance, A3 demonstrates significantly higher transference on A0, A4, and A5, when compared with the transference of A1, A2, A6, A7, and A8 on these tasks. In Table 1, we construct Group 1 and Group 2 from interpreting Figure 1(a) and co-train both groups with all other attributes as shown in the left column of Table 1. We find the inclusion of A3 in Group 1 (A0, A4, and A5) results in the highest accuracy when compared to co-training with any other attribute. Similarly, A6 is the best attribute to co-train with A7 and A8 in Group 2.
We also consider a multi-task reinforcement learning (RL) problem using the Meta-World benchmark (Yu et al., 2019), which contains 50 qualitatively different robotic manipulation tasks. We select five tasks from the Meta-World task suite, namely “reach”, “push”, “press button top”, “open window” and “close window”. We train these five tasks together using the soft actor-critic (SAC) (Haarnoja et al., 2018) algorithm with the weights of the critic and the policy shared across all tasks. We compute the transference on the critic loss to produce Figure 1(b). We include more details on the multi-task RL experiment in Appendix A.2.
Figure 1(b) indicates that “open window” exhibits relatively low transference with all tasks while “close window” exhibits especially high transference with “push” and “reach”. Accordingly, we group “push” and “reach” together and then compute the efficacy of these tasks when co-training with “press button top”, “open window”, and “close window”. As shown in Figure 3 and as suggested by transference, the success rate of “reach” converges more quickly to a significantly higher value when it is co-trained with “close window”, and marginally faster when it is co-trained with “press button top”, as compared to co-training with “open window”. This effect is only magnified when we assess the performance of “push”. For “push”, its performance in terms of success rates and data efficiency is greatly increased when co-trained with either “close window” or “press button top” when compared to co-training with “open window”.
In summary, transference can be used as a heuristic to determine task groupings. A set of tasks which exhibits relatively high mutual transference tends to train effectively together, while tasks which exhibit relatively low transference with this set should be excluded.
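As a minimal sketch of the bookkeeping just described (our illustration; the array layout and routine name are assumptions, with per-step values logged e.g. by the `transference` sketch above):
```python
import numpy as np

# Z[t, i, j] holds Z^t_{{i} -> j} for each logged training step t; shape: (steps, m, m).
def grouping_scores(Z):
    """Average pairwise transference over training, then normalize each column
    j by task j's self-transference so that self-transference becomes 1."""
    S = Z.mean(axis=0)                       # overall (scalar) score per (i, j) pair
    return S / np.diag(S)[None, :]

Z = np.random.rand(1000, 9, 9)               # stand-in for logged CelebA values
scores = grouping_scores(Z)
j = 0
print(np.argsort(scores[:, j])[::-1])        # tasks ranked by transference onto task j
```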
Using this method, our empirical analysis suggests transference is capable of identifying beneficial and antagonistic task groupings in both supervised and reinforcement learning paradigms." }, { "heading": "4 INCREASED TRANSFER MTL", "text": "As shown in Section 3.2, the transference measure defined in Eq. (1) is an effective “macro-level” quantity to recognize the tasks that may benefit from co-training. In this section, we extend the utility of our transference measure beyond determining task groupings by incorporating it directly into the training dynamics of multi-task learning. In particular, we present a parameter-free algorithm which selects the combination of task gradients in each step of training that most increases transference among all tasks. Let us define total transference for the subset of tasks $\xi$ at time-step $t$ as
$$\mathcal{Z}^t_\xi := \sum_{j \in [m]} \mathcal{Z}^t_{\xi \to j} = \sum_{j \in [m]} \left( 1 - \frac{\mathcal{L}_j(\mathcal{X}, \theta_{s|\xi}^{t+1}, \theta_j^{t+1})}{\mathcal{L}_j(\mathcal{X}, \theta_s^t, \theta_j^t)} \right). \quad (2)$$
Total transference provides a cumulative measure of relative progress across all tasks as a result of applying a gradient update to the shared parameters with respect to a subset of tasks $\xi$. Perhaps surprisingly, $\xi = [m]$ is not always the set of tasks which most increases transference. Rather, we find that this particular update can often result in worse transference than a gradient update using a subset of tasks, an effect especially pervasive at the beginning of training.
With this motivation in mind, we present increased transfer multi-task learning (IT-MTL), a parameter-free augmentation to multi-task learning which chooses the gradient that most increases transference. Specifically, IT-MTL chooses the shared parameter update using the subset of tasks which induces the highest transference in a given minibatch. Formally, we define $\mathcal{J} \subseteq \mathcal{P}([m]) \setminus \{\emptyset\}$, where $\mathcal{P}(S)$ denotes the power set of set $S$ (in other words, for a non-empty subset $\xi \in \mathcal{J}$, a particular task $i$ either participates in the gradient update, i.e. $i \in \xi$, or does not, i.e. $i \notin \xi$). Although the possible number of task combinations is exponentially large in the number of tasks $m$, in practice and as found in our experiments in Section 5, a carefully chosen set of candidates of size $|\mathcal{J}| = O(m)$ provides compelling results. Specifically, choosing $\mathcal{J}$ as the set of $m$-many leave-one-out subsets, i.e. $[m] \setminus \{i\}$ for all $i \in [m]$, plus the set of all tasks $[m]$, works well in practice. IT-MTL proceeds by calculating the total transference defined in Eq. (2) for each subset of tasks $\xi \in \mathcal{J}$ and then applies the gradient update to the shared parameters using the subset that induces the highest total transference. Task-specific parameters are updated as normal. The full algorithm is provided in Algorithm 1.
To further illuminate the intuition behind IT-MTL, we present a deeper analysis into the loss landscape of MultiFashion (Lin et al., 2019) in Figure 4. This figure provides insight into several cases where a single-task gradient update on the shared parameters is more beneficial than the gradient using the full set of tasks. Figure 4(a) exemplifies the case where high curvature in the direction of the right task gradient significantly increases the total loss; as a result, the combined gradient only marginally decreases the total loss, while the left gradient decreases it significantly.
In a related instance, and as illustrated in Figure 4(b), high curvature in the combined gradient direction, but relatively low curvature in the direction of the left and right gradients, will also lead to a single-task gradient exhibiting higher transference than the combined gradient.
Algorithm 1: Increased Transfer Multi-Task Learning
1: Initialize network weights: $\{\theta_s\} \cup \{\theta_i \mid i \in [m]\}$
2: Set candidate subsets: $\mathcal{J} \subseteq \mathcal{P}([m]) \setminus \{\emptyset\}$
3: for $t = 0, \dots, T - 1$ do
4:   Compute per-task loss: $\mathcal{L}_i(\mathcal{X}^t, \theta_s^t, \theta_i^t), \ \forall i \in [m]$ // typical forward pass
5:   Update task-specific parameters: $\theta_i^{t+1} = \theta_i^t - \eta \nabla_{\theta_i} \mathcal{L}_i, \ \forall i \in [m]$
6:   for $\xi \in \mathcal{J}$ do
7:     $\theta_{s|\xi}^{t+1} = \theta_s^t - \eta \sum_{i \in \xi} \nabla_{\theta_s} \mathcal{L}_i(\mathcal{X}^t, \theta_s^t, \theta_i^t)$
8:     $\mathcal{Z}^t_\xi = \sum_{i \in [m]} \left( 1 - \mathcal{L}_i(\mathcal{X}^t, \theta_{s|\xi}^{t+1}, \theta_i^{t+1}) / \mathcal{L}_i(\mathcal{X}^t, \theta_s^t, \theta_i^t) \right)$
9:   end for
10:  Select max-transfer task combination: $\xi^\star = \arg\max_\xi \mathcal{Z}^t_\xi$
11:  Update shared parameters: $\theta_s^{t+1} = \theta_s^t - \eta \sum_{i \in \xi^\star} \nabla_{\theta_s} \mathcal{L}_i(\mathcal{X}^t, \theta_s^t, \theta_i^t)$
12: end for
While the first two cases of high curvature occur predominantly during the early rounds of training, a third case which occurs throughout training is shown in Figure 4(c). In this instance, the right task’s gradient marginally decreases its own loss but significantly increases the loss of the left task. This causes the combined gradient to only marginally decrease the total loss. On the other hand, the left gradient most increases transference. As a result, only using the left gradient significantly improves the total loss. Additional information regarding this analysis can be found in Appendix A.2.1." }, { "heading": "4.1 A FIRST ORDER APPROXIMATION OF THE INCREASED TRANSFER MTL METHOD", "text": "The IT-MTL procedure requires multiple forward-backward passes to calculate the lookahead losses of the tasks in $\mathcal{J}$. This may become computationally prohibitive for large models, especially as the number of candidate subsets in $\mathcal{J}$ grows. In this section, we derive a simple first-order approximation of IT-MTL which requires only a single forward-backward pass. Unlike Algorithm 1, the approximation does not update the task-specific parameters before computing the update to the shared parameters, effectively moving line 5 in Algorithm 1 to line 11. Ignoring the learning rate $\eta$ for simplicity, a first-order Taylor series expansion of the transference in Eq. (1) yields:
$$\mathcal{Z}^t_{\xi \to j} = 1 - \frac{\mathcal{L}_j(\mathcal{X}^t, \theta_{s|\xi}^{t+1}, \theta_j^{t+1})}{\mathcal{L}_j(\mathcal{X}^t, \theta_s^t, \theta_j^t)} \approx 1 - \frac{\mathcal{L}_j(\mathcal{X}^t, \theta_s^t, \theta_j^t) - \big\langle \nabla_{\theta_s} \mathcal{L}_j(\mathcal{X}^t, \theta_s^t, \theta_j^t), \sum_{i \in \xi} \nabla_{\theta_s} \mathcal{L}_i(\mathcal{X}^t, \theta_s^t, \theta_i^t) \big\rangle}{\mathcal{L}_j(\mathcal{X}^t, \theta_s^t, \theta_j^t)} = \frac{\big\langle \nabla_{\theta_s} \mathcal{L}_j(\mathcal{X}^t, \theta_s^t, \theta_j^t), \sum_{i \in \xi} \nabla_{\theta_s} \mathcal{L}_i(\mathcal{X}^t, \theta_s^t, \theta_i^t) \big\rangle}{\mathcal{L}_j(\mathcal{X}^t, \theta_s^t, \theta_j^t)},$$
where $\langle \cdot, \cdot \rangle$ denotes the inner product. Thus, the total transference defined in Eq. (2) can be written as
$$\mathcal{Z}^t_\xi = \sum_{j \in [m]} \mathcal{Z}^t_{\xi \to j} \approx \Big\langle \sum_{j \in [m]} \frac{\nabla_{\theta_s} \mathcal{L}_j(\mathcal{X}^t, \theta_s^t, \theta_j^t)}{\mathcal{L}_j(\mathcal{X}^t, \theta_s^t, \theta_j^t)}, \sum_{i \in \xi} \nabla_{\theta_s} \mathcal{L}_i(\mathcal{X}^t, \theta_s^t, \theta_i^t) \Big\rangle = \Big\langle \nabla_{\theta_s} \sum_{j \in [m]} \log \mathcal{L}_j(\mathcal{X}^t, \theta_s^t, \theta_j^t), \sum_{i \in \xi} \nabla_{\theta_s} \mathcal{L}_i(\mathcal{X}^t, \theta_s^t, \theta_i^t) \Big\rangle = \Big\langle \nabla_{\theta_s} \log \prod_{j \in [m]} \mathcal{L}_j(\mathcal{X}^t, \theta_s^t, \theta_j^t), \sum_{i \in \xi} \nabla_{\theta_s} \mathcal{L}_i(\mathcal{X}^t, \theta_s^t, \theta_i^t) \Big\rangle,$$
where the first quantity in the final inner product is the gradient of what we call the “log-product” loss. Our IT-MTL approximation computes the alignment between the gradients of the candidate tasks and the gradient of the log-product loss. The gradient of the subset of tasks with the strongest alignment to the gradient of the log-product loss is used to make the final update to the shared parameters.
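A minimal sketch of this selection rule, reusing the toy-model names (`task_loss`, `theta_s`, `heads`, `n_tasks`, `x`, `ys`) from the Section 3.1 sketch; the helper name and candidate construction are our own illustration:
```python
import torch

def select_subset_first_order(x, ys, candidates):
    """First-order IT-MTL: compute each task's shared-parameter gradient and
    the gradient of the log-product loss sum_j log L_j once, then pick the
    candidate subset whose summed gradient has the largest inner product with
    the log-product gradient."""
    losses = [task_loss(x, ys[i], theta_s, heads[i]) for i in range(n_tasks)]
    task_grads = [torch.autograd.grad(l, theta_s, retain_graph=True)[0]
                  for l in losses]
    log_prod = sum(torch.log(l) for l in losses)        # "log-product" loss
    g_ref = torch.autograd.grad(log_prod, theta_s)[0]
    score = lambda xi: sum(torch.sum(g_ref * task_grads[i]) for i in xi).item()
    return max(candidates, key=score)

# Leave-one-out subsets plus the full task set, as suggested above.
candidates = [[i for i in range(n_tasks) if i != k] for k in range(n_tasks)]
candidates.append(list(range(n_tasks)))
best_subset = select_subset_first_order(x, ys, candidates)
```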
Note that in the approximate procedure, the gradients are calculated once, and the approximation has computational complexity similar to that of gradient correction methods such as PCGrad (Yu et al., 2020) and GradNorm (Chen et al., 2017)." }, { "heading": "4.2 AFFINITY WITH GRADIENT PROJECTION METHODS", "text": "IT-MTL can be combined with related work which modifies gradient direction and/or magnitude. The modified gradient can be added to the set $\mathcal{J}$ in Algorithm 1 as an additional candidate gradient for the current minibatch. If the modified gradient increases the total transference more than the gradients of the candidate tasks in $\mathcal{J}$, it is used to update the shared parameters. We explore this idea in our experiments by composing $\mathcal{J}$ = {total loss, PCGrad(total loss)} to select between the typical multitask gradient and the PCGrad gradient." }, { "heading": "5 EXPERIMENTS", "text": "Motivated by our analysis in Section 4, we study the utility of transference in selecting the combination of gradients which most increases transference for each minibatch. Unlike our evaluation of transference in Section 3.2 on datasets with a large number of tasks, IT-MTL is most computationally efficient when the number of tasks is small. Accordingly, we focus our evaluation on datasets with either 2 or 3 tasks and perform our analysis on MultiMNIST, a multitask variant of the MNIST dataset (LeCun et al., 1998); MultiFashion, a multitask variant of the Fashion-MNIST dataset (Xiao et al., 2017); and NYUv2 (Silberman et al., 2012). Further, we found the rate at which a single-task gradient is chosen over the combined gradient to be higher during the beginning of training. With this observation as motivation, we evaluate how information transfer changes with respect to convergence." }, { "heading": "5.1 INCREASED TRANSFER MULTI-TASK LEARNING EVALUATION", "text": "To assess the efficacy of IT-MTL, we evaluate its performance on the MultiMNIST and MultiFashion datasets. To increase comparability, we run our experiments on the same datasets as in (Lin et al., 2019) with a multitask variant of LeNet (LeCun et al., 1998). Full experimental details are provided in Appendix A.2.1, and (Lin et al., 2019) can be referenced for dataset construction. The code used in generating experimental results is attached to the supplementary material part of our submission.
For both datasets, we compare against the corresponding single-task baselines, equal-weight multitask learning, PCGrad (Yu et al., 2020), MGDA-UB (Sener & Koltun, 2018), and Uncertainty Weighting (UW) (Kendall et al., 2018). Table 2 summarizes our experimental results. Notably, we find IT-MTL improves both left- and right-image accuracy over the equal-weight MTL baseline on both MultiFashion and MultiMNIST by choosing the gradient combination which most increases transference; aside from this augmentation, there is no difference between these two models. Moreover, the IT augmentation can be combined with prior approaches which dynamically reweigh the tasks or directly modify the gradient, again by choosing the gradient combination which most increases transference. In particular, our method, and its corresponding approximation described in Section 4.1, combined with uncertainty weights and PCGrad (IT-UW-PCGrad) achieves very strong performance on both datasets.
To further evaluate the robustness of IT-MTL, we assess its performance on the more challenging NYUv2 dataset with a Multi-Task Attention Network architecture (MTAN) (Liu et al., 2019).
The\ndataset is composed of RGB-D images of indoor scenes and supports modeling of 13-class semantic segmentation, true depth estimation, and surface normal prediction. We follow the procedure of (Liu et al., 2019) and directly utilize their framework to evaluate the performance of IT-MTL. For computational efficiency, we form J = {semantic + depth, semantic + normal, depth + normal, semantic + depth + normal} in IT-MTAN and J = {semantic + depth + normal, PCGrad gradient} in IT-PCGrad. Table 3 summarizes our experimental findings. We find IT-MTAN improves modeling performance across all measurements for segmentation, depth, and surface normal tasks as compared with the MTAN baseline. IT-PCGrad and the approximation IT-PCGrad‡ demonstrate similar improvements when compared with the PCGrad-MTAN baseline. This result indicates the benefit of IT-MTL can hold for complex neural network architectures on a challenging real world dataset." }, { "heading": "5.2 EFFECT OF CONVERGENCE ON TRANSFERENCE", "text": "In this section, we return our focus to CelebA to analyze the effects of model convergence on information transfer. As shown in Figure 2, transference is a highly dynamic process that is significantly affected by model convergence. In particular, we find the transference of A6 on A8 to be nearly identical to that of A8’s self-transference during the first two epochs of training. This result indicates the information encapsulated in the gradient update of A6 on the shared parameters is as effective at minimizing the loss of A8 as its own gradient update. However, this effect is dampened throughout training with positive transference only manifesting at the beginning of training.\nInterpreting our results in the context of CelebA, the model may learn the location of certain attributes in the beginning of training which are highly transferable to other related attributes. Once this fundamental structure is learned, gradients may encode increasingly task-specific information leading to lower positive information transfer among tasks. These observations lend weight to the development of flexible sharing architectures, in particular those which can quickly adapt to changing information transfer dynamics in the shared parameters throughout training." }, { "heading": "6 CONCLUSION", "text": "In this work, we take a first step towards quantifying information transfer in multi-task learning. We develop a measure to quantify transference and leverage this quantity to determine which tasks should be trained together as well as develop a method which improves multi-task learning efficiency and performance.\nNonetheless, the method is not without its shortfalls. Using transference to select task groupings does not account for regularization-like effects inherent in multi-task learning. As a result, although a specific set of task groupings may exhibit high transference, there will be cases when this grouping is sub-par. Moreover, the transference radar charts are open to interpretation. While the charts provide flexibility in determining task groupings or identifying tasks which detract from co-training, they do not unequivocally produce a final ranking. With regards to IT-MTL, training time scales linearly with respect to the number of tasks if the lookahead loss computation is not run in parallel.\nIn spite of these detriments, we hope our analysis of information transfer in multi-task learning encourages further analysis into its training dynamics. 
Future work on transference can incorporate this measure into a continuous-space learning algorithm, or guide the development of flexible architectures to further improve multi-task learning performance." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 EXPERIMENTAL DETAILS", "text": "In this section, we detail our experimental methodology with the goal of facilitating reproducibility. The code used to produce our experimental results can be found by accessing the Supplementary Material section of our OpenReview submission." }, { "heading": "A.1.1 CELEBA", "text": "Our experiments on CelebA are generated using a combination of Keras (Chollet et al., 2018) and TensorFlow (Abadi et al., 2016) and access the CelebA dataset publicly available on TensorFlow Datasets (https://www.tensorflow.org/datasets/catalog/celeb_a). We selected 9 attributes from the subset of 40 annotated attributes for our analysis.
The encoder architecture is based loosely on ResNet18 (He et al., 2016) with a shallow feed-forward network decoder. A learning rate of 0.001 is used for 40 epochs with the learning rate halved at 30 epochs. The model uses a momentum optimizer with momentum set to 0.9 and a batch size of 256. We maintain a 5-epoch moving average of the task accuracies and report the highest average 5-epoch moving accuracy achieved during training.
We found our model exhibits similar, if not slightly improved, performance over the ResNet18 variant used in (Sener & Koltun, 2018) that was trained for 100 epochs; however, given transference computes an update to the shared parameters, we adopted an architecture with less shared-parameter capacity and more task-specific capacity to improve training time without sacrificing performance." }, { "heading": "A.2 META-WORLD", "text": "We use the five tasks “reach”, “push”, “press button top”, “open window”, and “close window” from Meta-World (Yu et al., 2019). We use 6 fully-connected layers with 400 hidden units for both the policy and the critic, with weights shared across all tasks. For each iteration, we collect 600 data points for each environment and train the policy and the critic for 600 steps with a batch size of 128 per task. We use soft actor-critic (SAC) (Haarnoja et al., 2018) as our RL algorithm and adopt the default hyperparameters used in the public repository of SAC (https://github.com/rail-berkeley/softlearning at hash 59f9ad357b11b973b9c64dcfeb7cf7b856bbae91). We compute the transference on the critic loss
$$J_Q(\theta) = \mathbb{E}_{(s,a) \sim \mathcal{D}} \left[ \frac{1}{2} \big( Q(s,a) - \hat{Q}(s,a) \big)^2 \right],$$
where $s$ and $a$ denote the state and action, $\hat{Q}$ denotes the target Q network, $\mathcal{D}$ denotes the off-policy dataset collected by the agent, and $\theta$ denotes the parameters of the critic Q network." }, { "heading": "A.2.1 MULTIMNIST/FASHION", "text": "Our experimental results on Multi-MNIST and Multi-Fashion are generated using a combination of Keras and TensorFlow. We evaluate on the datasets released by Lin et al. (2019) but further split 1/6 of the training dataset into a validation set for final dataset splits of 100k/20k/20k train/valid/test.
The model architecture is loosely based on LeNet (LeCun et al., 1998) with a fully convolutional decoder and a shallow feed-forward neural net decoder. A visual depiction is presented in Figure 6. The model uses a momentum optimizer with momentum set to 0.9 and a batch size of 256. The lookahead loss is computed by simulating the full momentum update to the shared parameters rather than the SGD update described in Section 3.1.
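Concretely, the momentum-simulated lookahead can be sketched as follows; this is a minimal illustration assuming the standard dampening-free SGD-with-momentum rule and access to the optimizer’s momentum buffer (the function and argument names are ours):
```python
import torch

def lookahead_shared_params_momentum(theta_s, grad_subset, buf, lr=0.001, momentum=0.9):
    """Simulate the optimizer's full SGD-with-momentum step on the shared
    parameters before evaluating the lookahead losses, instead of the plain
    SGD step of Section 3.1. `buf` is the current momentum buffer."""
    buf_next = momentum * buf + grad_subset    # dampening-free momentum update
    return theta_s - lr * buf_next             # candidate shared parameters
```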
The learning rate of the MTL baseline was selected on a validation dataset over {1e-4, 5e-4, 5e-3, 1e-2, 5e-2}, using a schedule which halves the learning rate every 30 epochs. A coarse grid search of the task-specific weights, with left image weight = 1 − right image weight, yielded left weight = right weight = 0.5. IT-MTL, Uncertainty Weighting, and PCGrad used the same hyperparameters as the baseline. GradNorm was found to be much more sensitive to hyperparameters, and these were tuned via random search over [1e-6, 1e-2] for the learning rate and [1e-6, 5.0] for the spring constant. The parameters we used for each experiment are listed in Table 4.
Due to non-trivial inter-run variance, we ran each experiment to completion 6 times, dropped the worst performance, and averaged the remaining 5 runs to produce the results shown in Table 2. Moreover, we report the average accuracy of the final 5 epochs to further improve comparability." }, { "heading": "A.3 NYUV2", "text": "We clone the MTAN repository released by (Liu et al., 2019) (https://github.com/lorenmt/mtan at hash b6504093ea6d646dccf501bbbb525a4d85db96ba) to empirically test our method and IT-PCGrad, choosing between the combined gradient (i.e., the gradient with respect to the depth + semantic + normals loss) and the PCGrad gradient. The optimization uses Adam (Kingma & Ba, 2014), and the lookahead loss is computed by simulating the full Adam update to the shared parameters rather than the SGD update described in Section 3.1.
We run all MTAN experiments with default hyperparameters and settings, with the exception of reducing the number of steps in PCGrad and IT-PCGrad to 100, as we find significant overfitting begins after this stage. Results for Split, Wide; Split, Deep; Dense; and Cross-Stitch are taken from (Liu et al., 2019)." }, { "heading": "A.4 LOSS LANDSCAPE EXPANDED ANALYSIS", "text": "Figure 4 was created by halting training in a given epoch immediately after either the left or the right task gradient update manifests higher transference than the 1/2(left + right) (i.e., combined) gradient update. We then applied the parameter update to the shared parameters using SGD with momentum to create a linear interpolation between the current parameter values and the parameter values following an update. We extend this interpolation 3x past the typical update to measure the curvature along the direction of the update.
In Figure 7 (expanded analysis of MultiFashion 1-D loss landscapes), we compute the loss along a linear interpolation of the left gradient, the right gradient, and the combined gradient direction, with each column corresponding to the total loss plot presented in Figure 4. For instance, Column 1, Row 2 plots the left and right loss along the left gradient step for the leftmost plot in Figure 4, and Column 2, Row 2 plots the left and right loss along the right gradient step for the center plot in Figure 4.
In Figure 8, we plot the 2-dimensional loss landscape of the left and right loss as well as the combined loss for MultiFashion. To generate the plots, we first sample two random directions in the parameter space and then scale the norms of these directions to be equal to the norm of the parameters. Next, we interpolate the parameters along these two directions in the range [−0.1, +0.1] times the norm of the parameters.
The left image loss depicts a smooth landscape whereas the right image loss is highly non-smooth. Notice that the level sets of the combined (i.e., average) loss are higher than those of the left loss.
For this step, IT-MTL chooses the left gradient for the shared parameter update which aligns with the curvature discrepancy between the right image loss and the left image loss." } ]
2020
null
SP:e70a869dc8d81a0338d382ea6a761145ed8e59bd
[ "This paper first studies the tradeoffs between two forms of spatial robustness: robustness against Flow-based spatial attacks and against Rotation-Translation (RT) attacks. In particular, it proposes an approach to account for both local and global spatial transformations in an integrated framework. In addition, the paper investigates the relationship between sensitivity-based (lp-norm based) and spatial robustness, and proposes a training method called ‘Pareto Adversarial Training’ to find the optimal combination of natural accuracy, sensitivity-based robustness, and spatial robustness.", "This paper first provides explanations for the inherent tradeoff between rotation adversarial attacks and sensitivity attacks/spatial transform attacks, through their differences in saliency maps. Further, the authors propose to utilize Pareto training to find the best tradeoff among four dimensions: natural accuracy and robustness against sensitivity/rotation/spatial transformation attacks. Experimental results show the proposed Pareto adversarial training achieves a better tradeoff between clean accuracy and adversarial robustness averaged across three types of attacks." ]
Adversarial robustness, mainly including sensitivity-based robustness and spatial robustness, plays an integral part in robust generalization. In this paper, we endeavor to design strategies to achieve comprehensive adversarial robustness. To hit this target, we first investigate the less-studied spatial robustness and then integrate existing spatial robustness methods by incorporating both local and global spatial vulnerability into one spatial attack design. Based on this exploration, we further present a comprehensive relationship among natural accuracy, sensitivity-based robustness, and different kinds of spatial robustness, supported by strong evidence from the perspective of representation. More importantly, in order to balance the mutual impacts among these different kinds of robustness within one unified framework, we incorporate the Pareto criterion into the adversarial robustness analysis, yielding a novel strategy towards comprehensive robustness called Pareto Adversarial Training. The resulting Pareto front, the set of optimal solutions, provides the set of optimal balances among natural accuracy and the different kinds of adversarial robustness, shedding light on solutions towards comprehensive robustness in the future. To the best of our knowledge, we are the first to consider comprehensive robustness via multi-objective optimization.
[]
[ { "authors": [ "Martin Arjovsky", "Léon Bottou", "Ishaan Gulrajani", "David Lopez-Paz" ], "title": "Invariant risk minimization", "venue": "arXiv preprint arXiv:1907.02893,", "year": 2019 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": "In 2017 ieee symposium on security and privacy (sp),", "year": 2017 }, { "authors": [ "Gavin Weiguang Ding", "Yash Sharma", "Kry Yik Chau Lui", "Ruitong Huang" ], "title": "Max-margin adversarial (mma) training: Direct input space margin maximization through adversarial training", "venue": "International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Logan Engstrom", "Dimitris Tsipras", "Ludwig Schmidt", "Aleksander Madry" ], "title": "A rotation and a translation suffice: Fooling cnns with simple transformations", "venue": "arXiv preprint arXiv:1712.02779,", "year": 2017 }, { "authors": [ "Logan Engstrom", "Brandon Tran", "Dimitris Tsipras", "Ludwig Schmidt", "Aleksander Madry" ], "title": "Exploring the landscape of spatial robustness", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Yarin Gal" ], "title": "Uncertainty in deep learning", "venue": "University of Cambridge,", "year": 2016 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Dan Hendrycks", "Thomas Dietterich" ], "title": "Benchmarking neural network robustness to common corruptions and perturbations", "venue": "International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Dan Hendrycks", "Steven Basart", "Norman Mu", "Saurav Kadavath", "Frank Wang", "Evan Dorundo", "Rahul Desai", "Tyler Zhu", "Samyak Parajuli", "Mike Guo" ], "title": "The many faces of robustness: A critical analysis of out-of-distribution generalization", "venue": "arXiv preprint arXiv:2006.16241,", "year": 2020 }, { "authors": [ "Max Jaderberg", "Karen Simonyan", "Andrew Zisserman" ], "title": "Spatial transformer networks. In Advances in neural information processing", "venue": null, "year": 2017 }, { "authors": [ "Sandesh Kamath", "Amit Deshpande", "KV Subrahmanyam" ], "title": "Invariance vs. 
robustness of neural networks", "venue": "arXiv preprint arXiv:2002.11318,", "year": 2020 }, { "authors": [ "Il Yong Kim", "OL De Weck" ], "title": "Adaptive weighted sum method for multiobjective optimization: a new method for pareto front generation", "venue": "Structural and multidisciplinary optimization,", "year": 2006 }, { "authors": [ "Il Yong Kim", "Oliver L De Weck" ], "title": "Adaptive weighted-sum method for bi-objective optimization: Pareto front generation", "venue": "Structural and multidisciplinary optimization,", "year": 2005 }, { "authors": [ "David Krueger", "Ethan Caballero", "Joern-Henrik Jacobsen", "Amy Zhang", "Jonathan Binas", "Remi Le Priol", "Aaron Courville" ], "title": "Out-of-distribution generalization via risk extrapolation (rex)", "venue": "arXiv preprint arXiv:2003.00688,", "year": 2020 }, { "authors": [ "Hao Li", "Zheng Xu", "Gavin Taylor", "Christoph Studer", "Tom Goldstein" ], "title": "Visualizing the loss landscape of neural nets", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Xi Lin", "Hui-Ling Zhen", "Zhenhua Li", "Qing-Fu Zhang", "Sam Kwong" ], "title": "Pareto multi-task learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Hanxiao Liu", "Karen Simonyan", "Yiming Yang" ], "title": "Darts: Differentiable architecture search", "venue": "arXiv preprint arXiv:1806.09055,", "year": 2018 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Pratyush Maini", "Eric Wong", "J Zico Kolter" ], "title": "Adversarial robustness against the union of multiple perturbation models", "venue": "International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Radford M Neal" ], "title": "Bayesian learning for neural networks, volume 118", "venue": "Springer Science & Business Media,", "year": 2012 }, { "authors": [ "Aditi Raghunathan", "Sang Michael Xie", "Fanny Yang", "John Duchi", "Percy Liang" ], "title": "Understanding and mitigating the tradeoff between robustness and accuracy", "venue": "International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Mahmood Sharif", "Lujo Bauer", "Michael K Reiter" ], "title": "On the suitability of lp-norms for creating and preventing adversarial examples", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops,", "year": 2018 }, { "authors": [ "Baifeng Shi", "Dinghuai Zhang", "Qi Dai", "Zhanxing Zhu", "Yadong Mu", "Jingdong Wang" ], "title": "Informative dropout for robust representation learning: A shape-bias perspective", "venue": "International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Daniel Smilkov", "Nikhil Thorat", "Been Kim", "Fernanda Viégas", "Martin Wattenberg" ], "title": "Smoothgrad: removing noise by adding noise", "venue": "arXiv preprint arXiv:1706.03825,", "year": 2017 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "arXiv preprint arXiv:1312.6199,", "year": 2013 }, { "authors": [ "Richard Szeliski" ], "title": "Computer vision: algorithms and applications", "venue": "Springer Science & Business Media,", "year": 2010 }, { 
"authors": [ "Florian Tramèr", "Dan Boneh" ], "title": "Adversarial training and robustness for multiple perturbations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Florian Tramèr", "Jens Behrmann", "Nicholas Carlini", "Nicolas Papernot", "Jörn-Henrik Jacobsen" ], "title": "Fundamental tradeoffs between invariance and sensitivity to adversarial perturbations", "venue": null, "year": 2020 }, { "authors": [ "Dimitris Tsipras", "Shibani Santurkar", "Logan Engstrom", "Alexander Turner", "Aleksander Madry" ], "title": "Robustness may be at odds with accuracy", "venue": "International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Vladimir N Vapnik", "A Ya Chervonenkis" ], "title": "On the uniform convergence of relative frequencies of events to their probabilities", "venue": "In Measures of complexity,", "year": 2015 }, { "authors": [ "Chaowei Xiao", "Jun-Yan Zhu", "Bo Li", "Warren He", "Mingyan Liu", "Dawn Song" ], "title": "Spatially transformed adversarial examples", "venue": "International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Dong Yin", "Raphael Gontijo Lopes", "Jon Shlens", "Ekin Dogus Cubuk", "Justin Gilmer" ], "title": "A fourier perspective on model robustness in computer vision", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Milan Zeleny" ], "title": "Multiple criteria decision making Kyoto 1975, volume 123", "venue": "Springer Science & Business Media,", "year": 2012 }, { "authors": [ "Haichao Zhang", "Jianyu Wang" ], "title": "Joint adversarial training: Incorporating both spatial and pixel attacks", "venue": "arXiv preprint arXiv:1907.10737,", "year": 2019 }, { "authors": [ "Hongyang Zhang", "Yaodong Yu", "Jiantao Jiao", "Eric P Xing", "Laurent El Ghaoui", "Michael I Jordan" ], "title": "Theoretically principled trade-off between robustness and accuracy", "venue": "International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Tianyuan Zhang", "Zhanxing Zhu" ], "title": "Interpreting adversarially trained convolutional neural networks", "venue": "International Conference on Machine Learning,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Robust generalization can serve as an extension of traditional generalization, i.e., Empirical Risk Minimization in the case of i.i.d. data (Vapnik & Chervonenkis, 2015), to settings where the test environments might differ slightly or dramatically from the training environment (Krueger et al., 2020). Improving the robustness of deep neural networks has been one of the crucial research topics, with various threads of research, including adversarial robustness (Goodfellow et al., 2014; Szegedy et al., 2013), non-adversarial robustness (Hendrycks & Dietterich, 2019; Yin et al., 2019), Bayesian deep learning (Neal, 2012; Gal, 2016) and causality (Arjovsky et al., 2019). In this paper, we focus on adversarial robustness, where adversarial examples are carefully manipulated by humans to drastically fool machine learning models, e.g., deep neural networks, posing a serious threat especially to safety-critical applications. Currently, adversarial training (Goodfellow et al., 2014; Madry et al., 2017; Ding et al., 2018) is regarded as one promising and widely accepted strategy to address this issue.
However, similar to Out-of-Distribution (OoD) robustness, one crucial issue is that adversarial robustness also has many aspects (Hendrycks et al., 2020), mainly including sensitivity-based robustness (Tramèr et al., 2020), i.e., robustness against pixel-wise perturbations (normally within the constraints of an lp ball), and spatial robustness, i.e., robustness against multiple spatial transformations. Firstly, in the computer vision and graphics literature, there are two main factors that determine the appearance of a pictured object (Xiao et al., 2018; Szeliski, 2010): (1) the lighting and materials, and (2) the geometry. Most previous work on adversarial robustness focuses on factor (1) (Xiao et al., 2018) via pixel-wise perturbations, e.g., Projected Gradient Descent (PGD) attacks, assuming the underlying geometry stays the same after the adversarial perturbation. The other rising research branch tackles the second factor, with methods such as Flow-based (Xiao et al., 2018) and Rotation-Translation (RT)-based attacks (Engstrom et al., 2017; 2019). Secondly, by explicitly exploring human perception, Sharif et al. (2018) pointed out that sensitivity-based robustness, i.e., lp-distance-measured robustness, is not sufficient for adversarial robustness that maintains perceptual similarity. This is owing to the fact that although spatial attacks or geometric transformations also result in small perceptual differences, they yield large lp distances.
In order to head towards comprehensive adversarial robustness, we find that the crucial issue in investigating the aforementioned aspects of adversarial robustness is the set of relationships among accuracy, sensitivity-based robustness and spatial robustness. Prior to our work, a clear trade-off between sensitivity-based robustness and accuracy had been revealed by a series of works (Zhang et al., 2019; Tsipras et al., 2018; Raghunathan et al., 2020). Besides, recent work (Tramèr & Boneh, 2019; Kamath et al., 2020) exhibited that there seems to exist an obscure trade-off between Rotation-Translation and sensitivity-based robustness. However, this conclusion does not consider Flow-based attacks (Xiao et al., 2018; Zhang & Wang, 2019), another non-negligible part of the spatial robustness evaluation, making the previous conclusion less comprehensive or reliable.
As such, the comprehensive relationships among all the quantities mentioned above are still unclear and remain to be further explored. More importantly, a new robust strategy that can harmonize all the considered correlations is needed, in order to achieve an optimal balance within comprehensive robustness.
In this paper, in order to design a new approach towards comprehensive robustness, we firstly explore the two main branches of spatial robustness, i.e., the Flow-based spatial attack (Xiao et al., 2018) and the Rotation-Translation (RT) attack (Engstrom et al., 2019). By investigating the different impacts of these two attacks on spatial sensitivity, we propose an integrated differentiable spatial attack framework, considering both local and global spatial vulnerability. Based on that, we present a comprehensive relationship among accuracy, sensitivity-based robustness and the two branches of spatial robustness. In particular, we show that the trade-off between sensitivity-based and RT robustness is a fundamental trade-off, as opposed to the highly interwoven correlation between sensitivity-based and Flow-based spatial robustness. We further provide strong evidence based on their different saliency maps from the perspectives of shape bias and sparse versus dense representation. Lastly, to balance these different kinds of mutual impacts within a unified adversarial training framework, we introduce the Pareto criterion (Kim & De Weck, 2005; 2006; Zeleny, 2012) from multi-objective optimization, thus developing an optimal balance within the interplay of natural accuracy and different kinds of adversarial robustness. By additionally incorporating a two-moment term capturing the interaction between the losses of accuracy and different kinds of robustness, we finally propose a bi-level optimization framework called Pareto Adversarial Training. The resulting Pareto front provides the set of optimal solutions that optimally balance all the considered relationships, outperforming other existing strategies. Our contributions are summarized as follows:
• We propose an integrated spatial attack framework that incorporates both local and global spatial vulnerability based on Flow-based and RT attacks, paving the way towards comprehensive spatial robustness analysis in the future.
• We present comprehensive relationships among accuracy, sensitivity-based robustness and different kinds of spatial robustness, supported by strong evidence from the perspective of representation.
• We incorporate the Pareto criterion into adversarial robustness analysis, making the first attempt to consider multiple kinds of adversarial robustness via multi-objective optimization." }, { "heading": "2 TOWARDS COMPREHENSIVE SPATIAL ROBUSTNESS", "text": "" }, { "heading": "2.1 MOTIVATION", "text": "In order to better investigate the relationships between accuracy and different kinds of adversarial robustness, we first need to provide a fine-grained understanding of spatial robustness, which has been less studied than sensitivity-based robustness. We summarize two major branches among a flurry of related work about spatial robustness (Engstrom et al., 2017; 2019; Xiao et al., 2018; Zhang & Wang, 2019; Tramèr & Boneh, 2019; Kamath et al., 2020): (1) Flow-based Attacks, and (2) Rotation-Translation (RT) Attacks. Specifically, we find that the former mainly focuses on the local spatial vulnerability while the latter tends to capture the global spatial sensitivity.
Our motivation is to firstly shed light on the fundamental difference between these two approaches, and then propose an integrated spatial robustness evaluation metric." }, { "heading": "2.2 INTEGRATED SPATIAL ATTACK: COMBING LOCAL AND GLOBAL SPATIAL SENSITIVITY", "text": "Local Spatial Robustness: Flow-based Attacks The most representative Flow-based Attack is Spatial Transformed Attack (Xiao et al., 2018), in which a differentiable flow vector wF = (∆µ,∆v) is defined in the 2D coordinate (µ, v) to craft adversarial spatial transformation. The vanilla targeted Flow-based attack (Xiao et al., 2018) (κ = 0) follows the optimization manner:\nw∗F = arg min wF max i6=t f iθ(xwF )− f tθ(xwF ) + τLflow(wF ), (1)\nwhere fθ(x) = ( f1θ (x), . . . , f K θ (x) ) is the classifier in the K-classification problem. xwF is Flowbased adversarial example parameterized by flow vector wF . Lflow measures the local smoothness of spatial transformation further balanced by τ .\nInterestingly, in our empirical study shown in the left part of Figure 1, we find that Flow-based attack tends to yield local permutations among pixels in some specific regions irrespective of the option of τ , rather than the global spatial transformation based on their shapes. We analyze that this phenomenon is owing to two factors: 1) Local permutations, especially in regions where colors of pixels change dramatically, are already sufficiently sensitive to manipulate, demonstrated by our empirical results shown above. 2) The optimization manner does not incorporate any sort of shape transformation information, e.g., a parameter equation of rotation, as opposed to vanilla RotationTranslation attack, which we present in the following. Thus, Flow-based attacks tend to capture the local spatial vulnerability. Further, for the need to design the integrated spatial attack, we transform Eq 1 into its untargeted version under cross entropy loss with flow vector bounded by an F -ball:\nw∗F = arg max wF LCEθ (xwF , y) s.t. ‖wF ‖ ≤ F (2)\nwhere LCEθ (x, y) = log ∑ j exp ( f jθ (x) ) − fyθ (x). One difference compared with Eq. 1 is that we replace local smoothness term Lflow with our familiar lp constraint. Moreover, vanilla Flowbased attack (Xiao et al., 2018) follows the max operation suggested in (Carlini & Wagner, 2017). However we leverage cross entropy loss instead in pursuit of a uniform optimization form in our integrated spatial attack. Proposition 1 reveals the correlation between the two loss, indicating that the smooth approximation version ofmax operation in Eq. 1, denoted as LSθ , has a parallel updating direction with Cross Entropy loss regarding wF . Proof can be found in Appendix A.2. Proposition 1. For a fixed (xwF , y) and θ, consider LSθ (x, y) = log ∑ i 6=y exp ( f iθ(x) ) − fyθ (x), the smooth version loss of Eq. 1 without local smoothness term, then we have\n∇wFLCEθ (xwF , y) = r(xwF , y)∇wFLSθ (xwF , y),where r(xwF , y) = ∑ i6=y exp ( f iθ(xwF ) )∑ i exp ( f iθ(xwF )\n) . (3) Global Spatial Robustness: Rotation-Translation (RT)-based Attacks The original RotationTranslation attack (Engstrom et al., 2017; 2019) applies parameter equations constraint on the 2D coordinate, thus capturing the global spatial information:[\nu′ v′\n] = [ cos θ − sin θ sin θ cos θ ] · [ u v ] + [ δu δv ] . 
(4)\nTo design a generic spatial transformation matrix that can simultaneously consider rotation, translation, cropping and scaling, we re-parameterize the transform matrix as a generic 6-dimensional affine transformation matrix, inspired by (Jaderberg et al., 2015):[\nu′ v′\n] = ( [ 1 0 0 0 1 0 ] + [ w11RT w 12 RT w 13 RT\nw21RT w 22 RT w 23 RT\n] ) · [ u v 1 ] , (5)\nwhere we denote AwRT as the generic 6-dimensional affine transformation matrix, in which each wRT indicates the increment on different spatial aspects. For example, (w13RT , w 23 RT ) determines the translation. Finally, the optimization form of the resulting generic and differentiable RT-based attack bounded by RT -ball is exhibited as:\nw∗RT = arg max wRT LCEθ (xwRT , y) s.t. ‖wRT ‖ ≤ RT . (6)\nIntegrated Spatial Robustness The key to achieve this goal is to design an integrated parameterized sampling grid TwRT ,wF (G) that can warp the regular grid with both affine and flow transformation, where G is the generated grid. We show our integrated approach as follows:\nTwRT ,wF (G) = AwRT [ u v 1 ] + [ wF 1 ] ,\nxadv = TwRT ,wF (G) ◦ x.\n(7)\nThen we sample new xadv by TwRT ,wF (G) via differentiable bilinear interpolation (Jaderberg et al., 2015). Then the loss function of the differentiable integrated spatial attack can be presented as:\nw∗ = arg max w LCEθ (x+ ηw, y), s.t. ‖w‖ ≤ , (8)\nwhere w = [wF , wRT ]T and ηw is the crafted integrated spatial perturbation. Note that ηw itself does not necessarily satisfy the lp constraint directly. For the implementation, we follow the PGD procedure (Madry et al., 2017), a common practice in sensitivity-based attacks. We consider the infinity norm of w and different learning rates for the two sorts of spatial robustness:[\nwt+1F wt+1RT\n] = [ wtF wtRT ] + [ αF αRT ] clip (sign(∇wLCEθ (xtwt , y))),\nxt+1wt+1 = Twt+1 (G) ◦ x t wt ,\n(9)\nwhere we denote wt+1 = [wt+1F , w t+1 RT ] T and = [ F , RT ]T . From Figure 1, we can observe that our Integrated Spatial Attack can construct both local and global spatial transformations on images. Then, we visualize the loss surface under this Integrated Spatial Attack leveraging “filter normalization” (Li et al., 2018) as illustrated in Figure 2. It is worth noting that the highly nonconcave loss landscape with respect to only rotation and translation raised by (Engstrom et al., 2019) has been largely alleviated by considering both local and global spatial vulnerability, verifying the efficiency of our Integrated Spatial Attack.\nRemote-look of Initial Landscape Close-look of Initial Landscape Landscape around the maxima example" }, { "heading": "3 RELATIONSHIPS BETWEEN SENSITIVITY AND SPATIAL ROBUSTNESS", "text": "" }, { "heading": "3.1 RELATIONSHIPS", "text": "Based on the analysis above, next we focus on investigating the relationships between the strength of one specific robustness and other types of robustness. Firstly, we empirically explore these relationships through conducting thorough experiments on MNIST, CIFAR-10 and Caltech-256 datasets. By adversarially training multiple PGD (sensitivity-based) robust models with different iteration steps, we further test their Flow-based and RT-based spatial robustness via methods proposed above.\nAs shown in Figure 3, it turns out that Flow-based spatial robustness (red) measured by its robust test accuracy presents a steady ascending tendency across three datasets as the PGD robustness increases, while the trend of RT-based spatial robustness fluctuates conversely. 
It is worth noting that we test the accuracy on correctly classified test data for the considered model for a fair comparison. The trade-off between sensitivity-based and RT-based spatial robustness is consistent with previous conclusion (Kamath et al., 2020; Tramèr & Boneh, 2019), but it does not (even on the contrary) apply to Flow-based spatial robustness that delicately measures the local spatial sensitivity of an image. We provide the strong evidence from the perspective of representation in the next subsection." }, { "heading": "3.2 EXPLANATION FROM THE VIEWPOINT OF SHAPE-BIAS REPRESENTATION", "text": "We go first with our brief conclusion: the sensitivity-based robustness corresponds to the shape-bias representation (Shi et al., 2020; Zhang & Zhu, 2019), indicating that sensitivitybased robust models rely more on global shape when making decisions rather than local texture. By contrast, the spatial robustness is associated with different representation strategies, serving as a significant supplement toward the comprehensive robust representation. To demonstrate this conclusion, we visualize the saliency maps of naturally trained, PGD, Flowbased and RT adversarially trained models on some randomly selected images on Caltech-256 exhibited in Figure 4. Specifically, visualizing the saliency maps aims at assigning a sensitivity value, sometimes also called “attribution”, to show the sensitivity of the output to each pixel of an input image. Following (Shi et al., 2020;\nZhang & Zhu, 2019), we leverage SmoothGrad (Smilkov et al., 2017) to calculate saliency map S(x):\nS(x) = 1\nn n∑ i=1 ∂fyθ (xi) ∂xi . (10)\nFigure 4 manifests that PGD trained models tend to learn a scarce and shape-biased representation among all pixels of an image, while two types of spatially adversarially trained models suggest converse representation. In particular, the resulting representation from the Flow-based training model has the tendency towards a shape-biased one as it places extreme values on the pixels around the shape of objects, e.g., the edge between the horse and the background shown in Flow AT in Figure 4. On the contrary, RT-based models have less reliance on the shape of objects, and at the meantime, the saliency values tend to be dense, scattering around more pixels of an image. Quantitatively, we calculate the distance of saliency maps from different models across all test data on Caltech-256 dataset, and then compute their skewness shown in Figure 5.\nSpecifically, we compute the pixel-wise distance between the saliency map of a robust model and that from a considered model, and then we calculate the median of skewness of the saliency map difference among all test data. Note that if two saliency maps have no difference, then the difference values will be a normal distribution with skewness 0. A negative skewness indicates that the original saliency map (representation) tends to be sparse compared with a considered model. We plot the tendency of skewness as the strength of some specific robustness increases shown in Figure 5. Based on the observations in Figure 5, we summarize the following conclusions:\n1) Based on the first and forth sub-pictures, both PGD and Flow-based robust models tend to learn a sparse and shape-biased representation compared with the natural model, but the Flow-based trained model is less sparse or shape-biased in comparison with the PGD trained one. 
2) On the contrary, RT-based robust models have the trend to learn a dense representation, which is also intuitive as the RT trained model is expected to memorize broader pixel locations to cope with potential rotation and transformation in the test data. The fundamental representation discrepancy of RT-based and sensitivity-based robustness provides deep insights to explain why the trade-off of these two robustness occurs. In the Appendix A.4, we provide a sketch map that better illustrates their relationships." }, { "heading": "4 PARETO ADVERSARIAL ROBUSTNESS WITH PARETO FRONT", "text": "" }, { "heading": "4.1 MOTIVATION", "text": "Pareto Optimization. Based on the aforementioned analysis on the relationships between natural accuracy and different kinds of adversarial robustness, a natural question is how to design a training strategy that can perfectly balance their mutual impacts, which mainly sources from their different representation manners. In particular, in most cases their relationships reveal trade-off ones, except when the sensitivity robustness increases, Flow-based spatial robustness is enhanced. To better address these competing optimization objectives, we introduce Pareto optimization (Kim & De Weck, 2005; Lin et al., 2019), and the resulting Pareto front, the set of Pareto optimal solutions, can offer valuable trade-off information between objectives. We provide more background information about Pareto optimization in Appendix A.5.\nLimitation of Existing Strategies. Given perturbation sets Si, i = 1, ...,m, and its corresponding adversarial risk Radv(f ;Si) := E(x,y)∼D [maxr∈Si L(f(x+ r), y)], our goal is to find fθ that can achieve the uniform risk minimization across all Si as well as the minimal risk in the natural data. 1) Average adversarial training (Ave AT) (Tramèr & Boneh, 2019), i.e., Rave(f ;S) := E(x,y)∼D [ 1 m ∑m i=1 maxr∈Si L(f(x+ r), y) ] , regards each adversarial robustness as the equal status. It may yield unsatisfactory solutions when the strength of different attacks mixed in training are not balanced, which we demonstrate in our experiments. 2) Max adversarial training (Max AT) (Tramèr & Boneh, 2019; Maini et al., 2019), i.e., Rmax(f ;S) := E(x,y)∼D[maxi{maxr∈Si L(f(x + r), y)}] may overfit to specific type of adversarial robustness if its adversarial attack used for training is too strong. Figure 6 demonstrates that as the strength of PGD attack used in Max AT increases, the comprehensive robustness of Max AT degenerates to a single PGD adversarial training, owing to the fact that the PGD loss tends to dominate among\nall losses. Appendix A.6 provides more details about Figure 6 and also introduces Proposition 2 to illuminate that Max AT is also closely linked with specific weights of Ave AT." }, { "heading": "4.2 PARETO ADVERSARIAL ROBUSTNESS AND PARETO ADVERSARIAL TRAINING", "text": "The key to Pareto adversarial robustness is to find the optimal combination (trade-off in most cases) between natural accuracy, sensitivity-based and spatial robustness. Specifically, we hope to compute optimal α in the following formula:\nmin θ,α Lθ = α0Lnat + α1LPGD + α2LFlow + α3LRT, (11)\nwhere α = (α0, α1, α2, α3) and Lθ is the cross entropy loss based on the integrated framework we previously analyzed. Lnat,LPGD,LFlow and LRT represent the natural loss, the PGD adversarial loss, the Flow-based and the RT-based adversarial loss, respectively. Note that we additionally introduce natural loss to guarantee a high-level natural accuracy (Raghunathan et al., 2020). 
However, direct joint minimization over Eq. 11 will degenerate to the trivial solution and the introduction of validation dataset to tune α, e.g., DARTS (Liu et al., 2018), is also computationally expensive for the adversarial training with multiple iterations. To avoid these, our approach is to introduce the Pareto criteria to choose optimal α to balance the mutual impacts between different adversarial robustness. Specifically, based on Eq 11, we additionally introduce the two-moment term regarding all losses into a bi-level optimization framework, in order to compute the optimal combination α during the whole training process. We name this bi-level optimization approach as Pareto Adversarial Training, and the lower-level optimization regarding α can be formulated as follows:\nmin α 3∑ i=0 3∑ j=0 Ex(αiLi − αjLj)2 s.t. 3∑ i=1 αiEx(Li) = r, 3∑ i=0 αi = 1, αi ≥ 0,∀i = 0, 1, 2, 3,\n(12) where L0,L1,L2,L3 represent Lnat,LPGD,LFlow,LRT respectively for simplicity, sharing the same model parameter θ. r indicates the expectation of one-moment over all robust losses, i.e., spatial and sensitivity-based losses, which reflects the strength of comprehensive robustness we require after solving this quadratic optimization. In particular, given the fixed Ex(Li) following the updating of θ based on Eq. 11, the larger r we require will push the resulting αi, i = 1, 2, 3 larger as well, thus putting more weights on the robust losses while the whole process of Pareto Adversarial Training. For the understanding of the two-moment objective function, firstly we regard all losses as random variables with its stochasticity arising from the mini-batch sampling from data. The weighted quadratic difference is to measure the trade-off within natural accuracy and various robustness, and then the minimization is to alleviate this mutual trade-off under certain constraints. In addition, we leverage sliding windows technique to compute the expectation and CVXOPT tool to\nsolve this quadratic optimization within each mini-batch training. Overall, for the upper level optimization in our bi-level Pareto adversarial training method, we leverage our familiar SGD method to update θ in Eq. 11 with α calculated from the lower level problem. In the lower level procedure, we solve the quadratic optimization regarding α to obtain the optimal combination among natural loss, sensitivity-based and spatial adversarial loss. We provide the proof about the quadratic formulation in Eq. 12 and our algorithm description in Appendix A.7." }, { "heading": "4.3 PARETO FRONT IN EMPIRICAL STUDY", "text": "By adjusting the upper bound of expected adversarial robustness loss, i.e., r, we can evenly generate Pareto optimal solutions where the obtained models will have different levels of robustness under optimal combinations. The set of all Pareto optimal solutions then forms the Pareto front. Concretely, we train deep neural networks under different adversarial training strategies, and then evaluate their robustness by PGD, Flow-based and RT attacks, which we proposed previously, under different iteration steps. After equally averaging robust accuracy for each category of these attacks, we then compute the difference of robust accuracy between different training strategies and standard training, attaining Robustness Score to evaluate the comprehensive robustness of all adversarial training strategies. Finally, we plot the Robustness Score and sacrificed clean accuracy of all methods across three datasets in Figure 7. 
Experimental details can be found in Appendix A.8.\nThe Pareto criterion (Appendix A.5) exhibited in Figure 7 can be interpreted that Pareto Adversarial Training can achieve the best comprehensive robustness compared with other training strategies, given a certain level of sacrificed clean accuracy we can tolerate. By adjusting the different levels of expected comprehensive robustness r in Pareto Adversarial Training, we can develop the set of Pareto optimal solutions, i.e., the Pareto front. It manifests that all other methods are above our Pareto front, thus lacking effectiveness compared with our proposal. Overall, our proposed Pareto Adversarial Training develops an optimal (Pareto) criterion, by which we can maintain the optimal balance among the mutual impacts of natural accuracy and different robustness, based on the deep understanding of their relationships." }, { "heading": "5 DISCUSSION AND CONCLUSION", "text": "The essential purpose of our work is to design a novel approach towards comprehensive adversarial robustness. To achieve this goal, we firstly analyze the two main branches of spatial robustness and then integrate them into one framework. Based on that, we further investigate the thorough relationships between sensitivity-based and two distinct spatial robustness from the perspective of representation. More importantly, having understanding the mutual impacts of different kinds of adversarial robustness, we introduce Pareto criterion into adversarial training framework, yielding the Pareto Adversarial Training. The resulting Pareto front provides optimal performance under the Pareto Criterion over existing baselines. In the future, we hope to apply Pareto analysis into more general Out-of-Distribution generalization settings." } ]
null
null
SP:737ec0b9d0df72ef8c1db34a89773a627105b240
[ "The paper addresses the problem of vision-and-language navigation (Anderson et al., 2018). The idea of the paper is to use a generative policy where a distribution over all instruction tokens given the previous actions is computed. The agent takes the action that maximizes the probability of the current instruction. The paper reports the results on R2R and R4R datasets.", "The paper focuses on learning a navigation policy for a vision-and-language navigation problem. In this problem, the agent are given a language instruction and are asked to follow the instruction to navigation in a simulated 3D room. Unlike baselines which maximize the probability of selecting an action given an instruction, the authors proposed to apply the Bayes rule to maximize the probability of generating the instruction given an action. The authors claim that this gives better generalization in unseen environments." ]
Vision-and-language navigation (VLN) is a task in which an agent is embodied in a realistic 3D environment and follows an instruction to reach the goal node. While most of the previous studies have built and investigated a discriminative approach, we notice that there are in fact two possible approaches to building such a VLN agent: discriminative and generative. In this paper, we design and investigate a generative language-grounded policy which uses a language model to compute the distribution over all possible instructions i.e. all possible sequences of vocabulary tokens given action and the transition history. In experiments, we show that the proposed generative approach outperforms the discriminative approach in the Room-2-Room (R2R) and Room-4-Room (R4R) datasets, especially in the unseen environments. We further show that the combination of the generative and discriminative policies achieves close to the state-of-the art results in the R2R dataset, demonstrating that the generative and discriminative policies capture the different aspects of VLN.
[ { "affiliations": [], "name": "BAYES’ RULE" }, { "affiliations": [], "name": "Shuhei Kurita" }, { "affiliations": [], "name": "Kyunghyun Cho" } ]
[ { "authors": [ "Peter Anderson", "Angel X. Chang", "Devendra Singh Chaplot", "Alexey Dosovitskiy", "Saurabh Gupta", "Vladlen Koltun", "Jana Kosecka", "Jitendra Malik", "Roozbeh Mottaghi", "Manolis Savva", "Amir Roshan Zamir" ], "title": "On evaluation of embodied navigation agents", "venue": "ArXiv, abs/1807.06757,", "year": 2018 }, { "authors": [ "Peter Anderson", "Qi Wu", "Damien Teney", "Jake Bruce", "Mark Johnson", "Niko Sünderhauf", "Ian Reid", "Stephen Gould", "Anton van den Hengel" ], "title": "Vision-and-Language Navigation: Interpreting visually-grounded navigation instructions in real environments", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Peter Anderson", "Ayush Shrivastava", "Joanne Truong", "A. Majumdar", "D. Parikh", "Dhruv Batra", "Stefan Lee" ], "title": "Sim-to-real transfer for vision-and-language navigation", "venue": "In Conference on Robot Learning (CoRL),", "year": 2020 }, { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": null, "year": 2014 }, { "authors": [ "Ilya Sutskever", "Dario Amodei" ], "title": "Language models are few-shot learners", "venue": null, "year": 2020 }, { "authors": [ "Angel Chang", "Angela Dai", "Thomas Funkhouser", "Maciej Halber", "Matthias Niessner", "Manolis Savva", "Shuran Song", "Andy Zeng", "Yinda Zhang" ], "title": "Matterport3D: Learning from RGB-D data in indoor environments", "venue": "International Conference on 3D Vision", "year": 2017 }, { "authors": [ "Howard Chen", "Alane Suhr", "Dipendra Misra", "Noah Snavely", "Yoav Artzi" ], "title": "Touchdown: Natural language navigation and spatial reasoning in visual street environments", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Abhishek Das", "Samyak Datta", "Georgia Gkioxari", "Stefan Lee", "Devi Parikh", "Dhruv Batra" ], "title": "Embodied Question Answering", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", "year": 2018 }, { "authors": [ "Daniel Fried", "Ronghang Hu", "Volkan Cirik", "Anna Rohrbach", "Jacob Andreas", "Louis-Philippe Morency", "Taylor Berg-Kirkpatrick", "Kate Saenko", "Dan Klein", "Trevor Darrell" ], "title": "Speaker-follower models for vision-and-language navigation", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Tsu-Jui Fu", "Xin Eric Wang", "Matthew F. Peterson", "Scott T. Grafton", "Miguel P. 
Eckstein", "William Yang Wang" ], "title": "Counterfactual vision-and-language navigation via adversarial path sampler", "venue": "European Conference on Computer Vision (ECCV),", "year": 2020 }, { "authors": [ "Daniel Gordon", "Aniruddha Kembhavi", "Mohammad Rastegari", "Joseph Redmon", "Dieter Fox", "Ali Farhadi" ], "title": "IQA: Visual question answering in interactive environments", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition", "year": 2018 }, { "authors": [ "Weituo Hao", "Chunyuan Li", "Xiujun Li", "Lawrence Carin", "Jianfeng Gao" ], "title": "Towards learning a generic agent for vision-and-language navigation via pre-training", "venue": "URL https://arxiv.org/abs/2002.10638", "year": 2002 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Karl Moritz Hermann", "Felix Hill", "Simon Green", "Fumin Wang", "Ryan Faulkner", "Hubert Soyer", "David Szepesvari", "Wojciech Marian Czarnecki", "Max Jaderberg", "Denis Teplyashin", "Marcus Wainwright", "Chris Apps", "Demis Hassabis", "Phil Blunsom" ], "title": "Grounded language learning in a simulated 3d world", "venue": "CoRR, abs/1706.06551,", "year": 2017 }, { "authors": [ "Ronghang Hu", "Daniel Fried", "Anna Rohrbach", "Dan Klein", "Trevor Darrell", "Kate Saenko" ], "title": "Are you looking? grounding to multiple modalities in vision-and-language navigation", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Haoshuo Huang", "Vihan Jain", "Harsh Mehta", "Alexander Ku", "Gabriel Magalhães", "Jason Baldridge", "Eugene Ie" ], "title": "Transferable representation learning in vision-and-language navigation", "venue": "IEEE/CVF International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Gabriel Ilharco", "Vihan Jain", "Alexander Ku", "Eugene Ie", "Jason Baldridge" ], "title": "General evaluation for instruction conditioned navigation using dynamic time warping. In Visually Grounded Interaction and Language (ViGIL)", "venue": "NeurIPS", "year": 2019 }, { "authors": [ "Vihan Jain", "Gabriel Magalhaes", "Alexander Ku", "Ashish Vaswani", "Eugene Ie", "Jason Baldridge" ], "title": "Stay on the path: Instruction fidelity in vision-and-language navigation", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Liyiming Ke", "Xiujun Li", "Yonatan Bisk", "Ari Holtzman", "Zhe Gan", "Jingjing Liu", "Jianfeng Gao", "Yejin Choi", "Siddhartha Srinivasa" ], "title": "Tactical rewind: Self-correction via backtracking in vision-andlanguage navigation", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Jacob Krantz", "Erik Wijmans", "A. Majumdar", "Dhruv Batra", "Stefan Lee" ], "title": "Beyond the nav-graph: Vision-and-language navigation in continuous environments", "venue": "In European Conference on Computer Vision (ECCV),", "year": 2020 }, { "authors": [ "Xiujun Li", "Chunyuan Li", "Qiaolin Xia", "Yonatan Bisk", "Asli Celikyilmaz", "Jianfeng Gao", "Noah A. 
Smith", "Yejin Choi" ], "title": "Robust navigation with language pretraining and stochastic sampling", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),", "year": 2019 }, { "authors": [ "Chih-Yao Ma", "Jiasen Lu", "Zuxuan Wu", "Ghassan AlRegib", "Zsolt Kira", "Richard Socher", "Caiming Xiong" ], "title": "Self-monitoring navigation agent via auxiliary progress estimation", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "A. Majumdar", "Ayush Shrivastava", "Stefan Lee", "Peter Anderson", "D. Parikh", "Dhruv Batra" ], "title": "Improving vision-and-language navigation with image-text pairs from the web", "venue": "In European Conference on Computer Vision (ECCV),", "year": 2020 }, { "authors": [ "Khanh Nguyen", "Hal Daumé III" ], "title": "Help, anna! visual navigation with natural multimodal assistance via retrospective curiosity-encouraging imitation learning", "venue": null, "year": 2019 }, { "authors": [ "Khanh Nguyen", "Debadeepta Dey", "Chris Brockett", "Bill Dolan" ], "title": "Vision-based navigation with language-based assistance via imitation learning with indirect intervention", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)", "year": 2019 }, { "authors": [ "Yuankai Qi", "Zizheng Pan", "S. Zhang", "A.V.D. Hengel", "Qi Wu" ], "title": "Object-and-action aware model for visual language navigation", "venue": "In European Conference on Computer Vision (ECCV),", "year": 2020 }, { "authors": [ "Alec Radford", "Jeffrey Wu", "Rewon Child", "David Luan", "Dario Amodei", "Ilya Sutskever. Language models are unsupervised multitask learners." 
], "title": "URL https://d4mucfpksywv", "venue": "cloudfront.net/better-language-models/language-models.pdf.", "year": 2018 }, { "authors": [ "Manolis Savva", "Abhishek Kadian", "Oleksandr Maksymets", "Yili Zhao", "Erik Wijmans", "Bhavana Jain", "Julian Straub", "Jia Liu", "Vladlen Koltun", "Jitendra Malik", "Devi Parikh", "Dhruv Batra" ], "title": "Habitat: A Platform for Embodied AI Research", "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Mohit Shridhar", "Jesse Thomason", "Daniel Gordon", "Yonatan Bisk", "Winson Han", "Roozbeh Mottaghi", "Luke Zettlemoyer", "Dieter Fox" ], "title": "ALFRED: A Benchmark for Interpreting Grounded Instructions for Everyday Tasks", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Hao Tan", "Licheng Yu", "Mohit Bansal" ], "title": "Learning to navigate unseen environments: Back translation with environmental dropout", "venue": null, "year": 2019 }, { "authors": [ "Tadahiro Taniguchi", "Takayuki Nagai", "Tomoaki Nakamura", "Naoto Iwahashi", "Tetsuya Ogata", "Hideki Asoh" ], "title": "Symbol emergence in robotics: a survey", "venue": "Advanced Robotics,", "year": 2016 }, { "authors": [ "Jesse Thomason", "Daniel Gordon", "Yonatan Bisk" ], "title": "Shifting the baseline: Single modality performance on visual navigation & QA", "venue": null, "year": 2019 }, { "authors": [ "Jesse Thomason", "Michael Murray", "Maya Cakmak", "Luke Zettlemoyer" ], "title": "Vision-and-dialog navigation", "venue": "In Conference on Robot Learning", "year": 2019 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Ł ukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "H. Wang", "Qi Wu", "Chunhua Shen" ], "title": "Soft expert reward learning for vision-and-language navigation", "venue": "In European Conference on Computer Vision (ECCV),", "year": 2020 }, { "authors": [ "Hanqing Wang", "Wenguan Wang", "Tianmin Shu", "W. Liang", "J. 
Shen" ], "title": "Active visual information gathering for vision-language navigation", "venue": "In European Conference on Computer Vision (ECCV),", "year": 2020 }, { "authors": [ "Xin Wang", "Wenhan Xiong", "Hongmin Wang", "William Yang Wang" ], "title": "Look before you leap: Bridging model-free and model-based reinforcement learning for planned-ahead visionand-language navigation", "venue": "In The European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Xin Wang", "Qiuyuan Huang", "Asli Çelikyilmaz", "Jianfeng Gao", "Dinghan Shen", "Yuan-Fang Wang", "William Yang Wang", "Lei Zhang" ], "title": "Reinforced cross-modal matching and self-supervised imitation learning for vision-language navigation", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Luca Weihs", "Jordi Salvador", "Klemen Kotar", "Unnat Jain", "Kuo-Hao Zeng", "Roozbeh Mottaghi", "Aniruddha Kembhavi" ], "title": "Allenact: A framework for embodied AI research", "venue": "CoRR, abs/2008.12760,", "year": 2020 }, { "authors": [ "Erik Wijmans", "Samyak Datta", "Oleksandr Maksymets", "Abhishek Das", "Georgia Gkioxari", "Stefan Lee", "Irfan Essa", "Devi Parikh", "Dhruv Batra" ], "title": "Embodied Question Answering in Photorealistic Environments with Point Cloud Perception", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Yi Wu", "Yuxin Wu", "Georgia Gkioxari", "Yuandong Tian" ], "title": "Building generalizable agents with a realistic and rich 3d environment", "venue": "arXiv preprint arXiv:1801.02209,", "year": 2018 }, { "authors": [ "Wang Xin", "Jain Vihan", "Ie Eugene", "William Wang Yang", "Kozareva Zornitsa", "Ravi Sujith" ], "title": "Environment-agnostic multitask learning for natural language grounded navigation", "venue": "In European Conference on Computer Vision (ECCV),", "year": 2020 }, { "authors": [ "Victor Zhong", "Tim Rocktäschel", "Edward Grefenstette" ], "title": "Rtfm: Generalising to new environment dynamics via reading", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Fengda Zhu", "Yi Zhu", "Xiaojun Chang", "Xiaodan Liang" ], "title": "Vision-language navigation with selfsupervised auxiliary reasoning tasks", "venue": "In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Wang Zhu", "Hexiang Hu", "Jiacheng Chen", "Zhiwei Deng", "Vihan Jain", "Eugene Ie", "Fei Sha" ], "title": "Babywalk: Going farther in vision-and-language navigation by taking baby", "venue": "steps. CoRR,", "year": 2020 }, { "authors": [ "Taniguchi" ], "title": "2016) might be one clue to close the gap between machine learning-based embodied agents, human interactions, and robotics. In this paper, we propose the first approach to directly utilize the vision-conditioned language modeling for navigation. Our generative policy applies the vision-conditioned language modeling", "venue": null, "year": 2016 }, { "authors": [ "See Jain" ], "title": "R4R, the shortest path length from the start node to the goal node can become 0. To consider the fidelity-based path length, we need to re-define a new SPL’ based on the reference path length instead of the shortest path length for R4R", "venue": null, "year": 2019 }, { "authors": [ "Jain" ], "title": "nDTW An agent that with reinforcement learning with nDTW-based rewards (Ilharco et al., 2019). 
BabyWalk An agent that exploits the proposed BABY-STEPs to follow micro-instructions", "venue": "BABY-", "year": 2019 }, { "authors": [ "Jain" ], "title": "The CLS and SDTW graphs suggest that our generative policy closely follows the goal trajectory compared to the discriminative policy especially when the gold trajectories are long. In the right pane, the horizontal axis is the number of steps when agents stop their navigation in the trials", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Vision-and-language navigation (Anderson et al., 2018b) is a task in which a computational model follows an instruction and performs a sequence of actions to reach the final objective. An agent is embodied in a realistic 3D environment, such as that from the Matterport 3D Simulator (Chang et al., 2017) and asked to follow an instruction. The agent observes the surrounding environment and moves around. This embodied agent receives a textual instruction to follow before execution. The success of this task is measured by how accurately and quickly the agent could reach the destination specified in the instruction. VLN is a sequential decision making problem: the embodied agent makes a decision each step considering the current observation, transition history and the initial instruction.\nPrevious studies address this problem of VLN by building a language grounded policy which computes a distribution over all possible actions given the current state and the language instruction. In this paper, we notice there are two ways to formulate the relationship between the action and instruction. First, the action is assumed to be generated from the instruction, similarly to most of the existing approaches (Anderson et al., 2018b; Ma et al., 2019; Wang et al., 2019; Hu et al., 2019; Huang et al., 2019). This is often called a follower model (Fried et al., 2018). We call it a discriminative approach analogous to logistic regression in binary classification.\nOn the other hand, the action may be assumed to generate the instruction. In this case, we build a neural network to compute the distribution over all possible instructions given an action and the transition history. With this neural network, we use Bayes’ rule to build a language-grounded policy. We call this generative approach, similarly to naı̈ve Bayes in binary classification.\nThe generative language-grounded policy only considers what is available at each time step and chooses one of the potential actions to generate the instruction. We then apply Bayes’ rule to obtain the posterior distribution over actions given the instruction. Despite its similarity to the speaker\nmodel of Fried et al. (2018), there is a stark difference that the speaker model of Fried et al. (2018) cannot be used for navigation on its own due to its formulation, while our generative languagegrounded policy can be used for it by its own. The speaker model of Fried et al. (2018) takes as input the entire sequence of actions and predicts the entire instruction, which is not the case in ours.\nGiven these discriminative and generative parameterizations of the language-grounded policy, we hypothesize that the generative parameterization works better than discriminative parameterization does, because the former benefits from richer learning signal arising from scoring the entire instruction rather than predicting a single action. Such rich learning signal arises, because the generative policy must learn to associate all salient features of a language instruction with an intended action, in order to learn the distribution over the language instructions. This is unlike the discriminative policy which may rely on only a minimal subset of salient features of the language instruction in order to model the distribution over a much smaller set of actions. 
Furthermore, the generative policy enables us to more readily encode our prior about the action distribution when it deemed necessary.\nWe empirically show that indeed the proposed generative approach outperforms the discriminative approach in both the R2R and R4R datasets, especially in the unseen environments. Figure 1 illustrates the proposed generative approach on VLN. Furthermore, we show that the combination of the generative and discriminative policies results in near state-of-the art results in R2R and R4R, demonstrating that they capture two different aspects of VLN. We demonstrate that the proposed generative policy is more interpretable than the conventional discriminative policy, by introducing a token-level prediction entropy as a way to measure the influence of each token in the instruction on the policy’s decision. The source code is available at https://github.com/shuheikurita/glgp." }, { "heading": "2 DISCRIMINATIVE AND GENERATIVE PARAMETERIZATIONS OF LANGUAGE-GROUNDED POLICY", "text": "Vision-and-language navigation (VLN) is a sequential decision making task, where an agent performs a series of actions based on the initially-given instruction, visual features, and past actions. Given the instructionX , past and current observations s:t and past actions a:t−1, the agent computes the distribution p(at|X, s:t, a:t−1) at time t. For brevity, we write the current state that consists of the current and past scene observations, and past actions as ht = {s:t, a:t−1}, and the next action prediction as p(at|X,ht). The instruction X is a sequence of tokens X = (w0, w1, ..., wk, ...). The relationship between these notations are also presented in Appendix B.\nIn VLN, the goal is to model p(at|ht, X) so as to maximize the success rate of reaching the goal while faithfully following the instruction X . In doing so, there are two approaches: generative and discriminative, analogous to solving classification with either logistic regression or naive Bayes.\nIn the discriminative approach, we build a neural network to directly estimate p(at|ht, X). This neural network takes as input the current state ht and the language instruction X and outputs a distribution over the action set. Learning corresponds to\nmax θ N∑ n=1 Tn∑ t=1 log p(ant |hnt , Xn), (1)\nwhere N is the number of training trajectories.\nIn the generative approach, on the other hand, we first rewrite the action distribution as\np(at|ht, X) = p(X|at, ht)p′(at|ht)∑\na′t∈A p(X|a′t, ht)p′(a′t|ht)\n= p(X|at, ht)∑\na′t∈A p(X|a′t, ht)\n, (2)\nassuming p′(at|ht) = 1/|A|, where A is the action set. This assumption implies that the action is independent of the state without the language instruction, which is a reasonable assumption as the goal is specified using the instructionX . p(X|at, ht) = Πkp(wk|at, ht, w:k−1) is a language model conditioned on an action at and the current hidden state ht, and outputs the distribution over all possible sequences of vocabulary tokens.\nLearning is then equivalent to solving\nmax θ N∑ n=1 Tn∑ t=1 ( log p(Xn|ant , hnt )− log ∑ a′nt ∈A p(Xn|a′nt , hnt ) ) . (3)\nlog p(Xn|ant , hnt ) is the language model loss conditioned on the reference action ant , while the second term log ∑ a′t∈A\np(Xn|a′nt , hnt ) penalizes all the actions. Both terms of Eq. 3 are critical for learning the generative language-grounded policy. When we train the model only with the language model term log p(Xn|ant , hnt ) of Eq. 
3, the resulting neural network may not learn how to distinguish different actions rather than simply focusing on generating the instruction from the state observation.\nFor navigation, we use the model to capture the probability of the instruction conditioned on each action at ∈ A. The agent takes the action that maximizes the probability of generating the instruction: arg maxat p(X|at, ht). In other words, the language-conditional generative policy has a language model inside and navigates the environment by choosing an action that maximizes the probability of the entire instruction." }, { "heading": "3 RELATED WORK", "text": "While most of previous studies (Anderson et al., 2018b; Ma et al., 2019; Wang et al., 2019; Li et al., 2019; Hao et al., 2020) have relied on the discriminative approach p(at|X,ht), a few of previous studies (Fried et al., 2018; Tan et al., 2019; Ke et al., 2019) have proposed the so-called speaker model which scores the instruction against the entire trajectory. Such speaker models are mainly used for two purposes; (i) data augmentation with automatically generated trajectories (Fried et al., 2018; Tan et al., 2019) and (ii) reranking the complete trajectories in beam decoding (Fried et al., 2018; Tan et al., 2019; Ke et al., 2019). They however have not been used for selecting local actions directly in either training or decoding. To the best of our knowledge, this paper is the first work that propose a standalone generative language-grounded policy for vision-and-language-navigation, that does not need the full state-action sequence nor to look ahead into the next state, before taking the action at each step.\nSome of the previous studies (Thomason et al., 2019a; Hu et al., 2019) discuss the ablation studies from the multimodal baselines. These studies suggest there are some action biases in the environments. Although it is possible to model these action biases in the action prior of Eq. 2 from the training environment, we choose not to do so in order to avoid overfitting our policy to the training environments. If we know the target environment beforehand, the engineering on the action prior is possibly effective.\nInspired by the the success of the embodied navigation datasets (Wu et al., 2018; Chang et al., 2017; Chen et al., 2019), new experimental settings and navigation tasks in realistic 3D modeling have been proposed, such as dialog-based navigation tasks which include vision-and-dialog navigation (Thomason et al., 2019b), vision-based navigation withlanguage-based assistanc (Nguyen et al., 2019), and HANNA (Nguyen & Daumé III, 2019). Embodied question answering (Das et al., 2018; Wijmans et al., 2019), interactive visual question answering (Gordon et al., 2018) and ALFRED (Shridhar et al., 2020) for the navigation and object interaction are quite interesting task variants. The proposed generative language-grounded policy is applicable to these tasks where an agent solves a problem by following an instruction or having a conversation with another agent." }, { "heading": "4 EXPERIMENTAL SETTINGS", "text": "" }, { "heading": "4.1 DATASETS", "text": "We conduct our experiments on the R2R navigation task (Anderson et al., 2018b), which is widely used for evaluating language-grounded navigation models and R4R (Jain et al., 2019), which consists of longer and more complex paths when compared to R2R. R2R contains four splits of data: train, validation-seen, validation-unseen and test-unseen. 
From the 90 scenes of Matterport 3D modelings (Chang et al., 2017), 61 scenes are pooled together and used as seen environments in both the training and validation-seen sets. Among the remaining scenes, 11 scenes form the validationunseen set and 18 scenes the test-unseen set. This setup tests the agent’s ability to navigate in unseen environments in the test phase. Some of previous studies make use of augmented datasets (Fried et al., 2018; Ma et al., 2019; Tan et al., 2019; Ke et al., 2019) in R2R experiments. We use the same augmented dataset from Fried et al. (2018) which has been used in recent studies (Ma et al., 2019; Ke et al., 2019) for comparison.\nR4R was created based on R2R. In R4R, paths are composed of two paths drawn from R2R, implying that each reference path in R4R is not necessarily the shortest path between the starting point and the goal point. R4R is more suitable for evaluating how closely the agent follows a given instruction that corresponds to a long and complex path. R4R consists of train, validation-seen and validationunseen sets, but does not contain the test-unseen set, unlike R2R. We provide more detailed statistics of R2R and R4R in Appendix C." }, { "heading": "4.2 NEURAL NETWORK MODELS", "text": "We use the network architecture of the speaker from (Fried et al., 2018) to implement generative policies which include a language model p(X|at, ht). We also use the follower network architecture by Fried et al. (2018) for implementing discriminative policies. We follow Fried et al. (2018) and create the embedding of the next action by concatenating the 4-dimensional orientation feature [sinφ; cosφ; sin θ; cos θ] and the image feature extracted from a pretrained ResNet (He et al., 2016), where φ and θ are the heading and elevation angles, respectively. Both generative and discriminative models use the panoramic view and action embedding, following Fried et al. (2018). The generative policy scores an instruction based on the embedding of each of the next possible actions and the state representation which is also used by the discriminative policy.\nNavigation In all our experiments, a single agent navigates in an environment only once given a single instruction, for each task, because it is unrealistic to have multiple agents simultaneously navigating in an indoor, hosehold environment. This implies that we do not use beam search nor pre-exploration in unseen environments. See Anderson et al. (2018a) for more discussion on the condition and evaluation of the navigation task." }, { "heading": "4.3 TRAINING", "text": "R2R We first train a language model that predict an instruction from the entire trajectory in the same way as Fried et al. (2018) from the dataset. We finetune each policy using imitation learning, where we let the policy navigate the environment and give the action that leads to the shortest path at each time step as supervision, closely following Anderson et al. (2018b). Just like Fried et al. (2018), we start training a policy with both the augmented and original training sets, and then switches to using the original training set alone.\nR4R we first train a language model for the generative policy from the R4R dataset. Since there are more than 10 times more training instances in R4R than in R2R, we do not augment data. Unlike in R2R, we test both learning strategies; supervised learning and fidelity-oriented learning. 
In the case of supervised learning, we train both our generative and discriminative policies to maximize the log-probability of the correct action from the training set (Fried et al., 2018). On the other hand, fidelity-oriented learning is a particular instantiation of imitation learning, in which a set of heuristics are used to determine the correct next action based on the proximity of the current state to the reference trajectory at each time step. We describe fidelity-oriented learning in Appendix D.1." }, { "heading": "4.4 TRAINING DETAILS FOR BOTH R2R AND R4R DATASETS", "text": "We use the same neural network architecture with Fried et al. (2018). We use the minibatch-size of 25. We use a single NVIDIA V100 GPU for training. We use the validation-unseen dataset to select hyperparameters.1\nWe use the mixture of supervised learning and imitation learning (Tan et al., 2019; Li et al., 2019) for both the generative and discriminative policies, which are referred as teacher-forcing and studentforcing (Anderson et al., 2018b). In particular, during training between the reference action aT and a sampled action aS, we select the next action by\na = δaS + (1− δ)aT (4) where δ ∼ Bernoulli(η) following Li et al. (2019). We examine η ∈ [0, 1/5, 1/3, 1/2, 1] using the validation set and choose η = 1/3.\nAfter both generative and discriminative policies are trained separately, they are combined by\narg max at\n{ β log p(X|at, ht) + (1− β) log pf (at|X,ht) } ,\nto jointly navigate in the greedy experimental setting in the R2R dataset. Here β ∈ [0, 1] is a hyperparameter, although our generative model is able to navigate on itself unlike the speaker model by Fried et al. (2018). β is determined after the training of both generative and discriminative policies with the same manner. In our experiment, we report the score of β = 0.5.\nFAST (Ke et al., 2019) is a framework of back-tracking to visited nodes. For single-agent backtracking, FAST(short) adapts a simple heuristic to continue navigation from one of the previously visited nodes. This back-tracking is triggered when the agent choose to visit the same node for the second time. Simple heuristic scoring of the sum of the transition logits is used to choose the returning node. We use this mechanism of back-tracking in the validation and test phase. We use the negative inverse of the logits to determine the node to continue from each time back-tracking is triggered. All movements in back-tracking are counted in the agent trajectory and penalized in the evaluation." }, { "heading": "4.5 EVALUATION METRICS", "text": "We use the following four metrics that have been commonly used to assess a policy in R2R: path length (PL) of the entire trajectory, navigation error (NE), success rate (SR) and success weighted by path length (SPL). Among those evaluation metrics, we consider SR and SPL as primary ones for R2R because they are derived from the number of successes trials in the navigation. We report CLS (Jain et al., 2019), nDTW and SDTW (Ilharco et al., 2019) in addition to the four metrics above, as these additional metrics are better suited to longer and more complex paths. These three metrics are based on the distance between the policy’s trajectory and the reference path. Following Zhu et al. (2020b), we use CLS and SDTW as primary metrics for R4R. We avoid using SPL for the R4R evaluation because it is not suitable for R4R performance comparisons (Jain et al., 2019). 
See Appendix E for the detailed description of each metric.\n1For more details of training and evaluations, we closely follow the publicly available code https:// github.com/ronghanghu/speaker_follower of Fried et al. (2018)." }, { "heading": "5 RESULTS", "text": "We use R2R as the main task to investigate the efficacy of and analyze the behaviour of the proposed language-grounded generative policy and its relative performance against existing approaches, as R2R has been studied more extensively than R4R has. We thus present the result on R2R first, and then finish the section with the result on R4R." }, { "heading": "5.1 GENERATIVE VS. DISCRIMINATIVE POLICIES", "text": "Table 1 shows the performances of the generative language-grounded policy (Generative Policy) and discriminative policy (Discriminative Policy) in the R2R dataset. We show the result with and without data augmentation. All the policies were trained with a stochastic mixture of supervised learning and imitation learning, resulting in better performance than those reported by Fried et al. (2018), even in the case of the discriminative baseline.\nThe first observation we make is that data augmentation has a bigger effect on the validation-unseen split than on the validation-seen split. This suggests that the main effect of data augmentation is to make a policy more robust to the changes in environments by discouraging the policy from overfitting to environments that are seen during training. This effect is observed with both discriminative and generative policies. However, we consider the discriminative policies are easy to overfit to seen environments in the training time especially without the augmentated dataset. In the validationunseen split, the generative policy always performs better than the discriminative one in both SR and SPL.\nSecond, when data augmentation was used, the proposed generative policy outperforms the discriminative policy in both validation-seen and validation-unseen splits. This is particularly true with the primary metrics, SR and SPL. The path length (PL) is generally longer with the generative policy, but the difference is within 1 meter on average.\nFinally, the best performing approach is the combination of the discriminative and generative policies (both trained with data augmentation). This clearly indicates that these two approaches are capturing two different aspects of visual language navigation. Back-tracking further improves this hybrid policy in terms of SR, although the improvement in SPL is minimal, as back-tracking introduces extra transitions.\nIn CLS, nDTW and SDTW, the generative policy achieves higher performance than the discriminative policy does, which suggests that the proposed generative policy follows the reference path more closely compared to the discriminative one. We conjecture this is because the generative policy is sensitive to the language instructions by construction." }, { "heading": "5.2 COMPARISON AGAINST BASELINES", "text": "Table 2 lists the performances in the validation-seen, validation-unseen and test-unseen sets in R2R, collected from the public leaderboard and publications. We achieve near state-of-the-art result only\nwith the original training set and augmented dataset released by Fried et al. (2018). 
We compare our approach against the following previous baselines: Random (Anderson et al., 2018b), Seq2Seq (Anderson et al., 2018b), RPA (Wang et al., 2018), Follower (Fried et al., 2018), Self-Monitoring (Ma et al., 2019), RCM (Wang et al., 2019), EnvDrop (Tan et al., 2019), FAST (Ke et al., 2019) and PRESS (Li et al., 2019). They are described in detail in Appendix F. All of them, except for the random agent, follow the discriminative approach, unlike our proposal.\nIn terms of SR, our model “Gen.+Disc. Policy∗” performs comparably to FAST which uses the same neural network by Fried et al. (2018), while our model is better in SPL. In terms of SPL, our model is the second best only next to the EnvDrop model.2 Our policy however ends up with a better SR than EnvDrop does. Overall, the proposed approach is equivalent to or close to the existing state-of-the-art models in both SR and SPL.\nThe recently proposed PREVALENT model (Hao et al., 2020) benefits from large scale cross-modal attention-based pretraining. They apply extensive data augmentation to create 6,482K image-textaction triples for pretraining, unlike the other approaches in Table 2. Thanks to this extensive augmentation, they achieve SR of 0.54 and SPL of 0.51. On the other hand, we only use 178K augmented examples from Fried et al. (2018), widely used in previous studies (Ma et al., 2019; Ke et al., 2019), for more direct comparison with previous studies. Although we have nevertheless achieved the comparable SR with an order of magnitude smaller augmented data, we expect our approach would further improve with this more aggressive augmentation strategy in the future." }, { "heading": "5.3 ACTION PREDICTION ACCURACY", "text": "Figure 2 plots the precision of predicted actions over time on the validation-seen and validationunseen sets in R2R for both the generative policy and the discriminative language-ground policies. We use the discriminative and generative policies notated as A and B in Table 1 for this analysis. When the agents are presented with the gold trajectories, both policies predict actions more accurately than they would with their own trajectories. In real navigation, the action selection error accumulates, and prediction by both policies degrades over time. The generative policy, however, is more tolerant to such accumulated error than the discriminative policy is, achieving a higher precision in later steps. This is especially the case in unseen environments. Additional analyses for the difference of policies are in Appendix G." }, { "heading": "5.4 TOKEN-WISE PREDICTION ENTROPY", "text": "The proposed generative policy allows us to easily inspect how it uses the instruction. A few tokens in an instruction often have significant influence on the agent’s decision. For example, if an instruction ends with “...then stop at the kitchen” and the agent is between the kitchen and dinning room, the token “kitchen” decides when the agent predicts the “STOP” action. Since the generative language-grounded policy relies on token-wise scoring of the instruction given each action, we can directly measure how each token in the instruction affects the action prediction. We call this measure token-wise prediction entropy (TENT) and define it as\nS(wk) = − ∑ at∈A q(at, wk) log|A| q(at, wk), (5)\n2Our reported SPL is 0.4647 only marginally lower than 0.47 of EnvDrop.\nwhere S(wk) ∈ [0, 1], A is the action set, and\nq(at, wk) = p(wk|at, ht, w:k−1)∑\nat∈A p(wk|at, ht, w:k−1) . 
When some tokens are influential to the action prediction, the entropy of scoring these tokens will be low. Otherwise, when S(wk) is close to 1, the corresponding token wk is deemed less influential for the next action prediction. We visualize 1 − S(wk), to which we refer as 1-TENT, to identify highly influential tokens at each time step.\nFigure 3 visualizes how actions are related to each token at each time step with two sample navigation trajectories from the validation-seen and validation-unseen splits in R2R. We use the generative policy trained with data augmentation from Table 1. Both trials end successfully within five and seven time steps, and we plot five and seven curves of 1-TENT accordingly. In the early stage of the navigation (t < 3), initial tokens exhibit large 1-TENT, meaning that changing the action yields a great difference in those token predictions. This tendency is observed in both seen and unseen environments. We conjecture this is a natural strategy learned by the policy for when there is no navigation history.\nIn the seen navigation example, the agent is asked to navigate from the kitchen to the dining room table. In the initial steps, the agent tries to leave the kitchen, and phrases such as “right” and “walk out” have high 1-TENT. At t = 3, the agent is out of the kitchen and needs to turn left in the middle of the large room, with high 1-TENT on “left”. Finally, the agent finds the dining table and stops there with high 1-TENT for the tokens indicating the stop point.\nIn the unseen navigation instance, the agent is asked to navigate from the hallway, cross the large bedroom and stop outside the carpet. In the trial, the agent first moves toward the goal node based on the keywords “bedroom” and “fireplace”. It also exhibits high 1-TENT for “doorway”, which is a clue for identifying the goal node. This agent, however, passes the success node for the first time at t = 4. At t = 5, the agent has high 1-TENT for both “doorway” and “rag”, and then goes back to the place it visited at t = 4. Finally, it stops with high 1-TENT for “before” and slight 1-TENT for “rag” at t = 6. As we have seen here, the agent has different 1-TENT values depending on the context, even when it is in the same place.\nAlthough the result of the 1-TENT visualization is similar to attention maps (Bahdanau et al., 2014; Vaswani et al., 2017), 1-TENT is much more directly related to the action prediction. The attention map represents the internal state of the neural network, while 1-TENT is based on the output of the neural network. This property makes the proposed 1-TENT a powerful tool for investigating and understanding the generative language-grounded policy." }, { "heading": "5.5 R4R", "text": "Table 3 presents the results on R4R along with baseline model performance. Similarly to our earlier observation on R2R, the proposed generative policy works as well as or better than the discriminative policy and the other baselines in terms of the primary metrics, CLS and SDTW in this case, especially in the validation-unseen split. The generative policy trained with supervised learning outperforms all the baseline policies in CLS, while the generative policy trained with imitation learning is close to BabyWalk trained with reinforcement learning (IL+RL) and curriculum learning (IL+RL+Cur.) in SDTW. 
As both reinforcement learning and curriculum learning could also be applied to our approach, we expect this gap to close completely in the future.\nAs we have observed on R2R, without data augmentation the discriminative policy works as well as or often better than the generative approach does on R4R. The generative policy is, however, significantly better than the discriminative one in the validation-unseen split, confirming our conjecture that the discriminative policy tends to overfit to environments that were seen during training." }, { "heading": "6 CONCLUSION", "text": "We have investigated two approaches, discriminative and generative, for the vision-and-language navigation task, and presented the generative language-grounded policy, which we empirically observed to perform better than the more widely used discriminative approach. We were able to combine the generative and discriminative policies and achieve (near) state-of-the-art results on the Room-2-Room and Room-4-Room navigation datasets, despite the simplicity of both parameterization and learning relative to the existing baselines. Finally, we have demonstrated that the proposed generative approach is more interpretable than discriminative ones by designing the token-wise prediction entropy.\nThe proposed generative parameterization, including the 1-TENT visualization, is directly applicable to language-grounded reinforcement learning, such as Zhong et al. (2020); Hermann et al. (2017), which should be investigated in the future. The proposed generative parameterization further enables natural integration of large-scale language model pretraining, such as Radford et al. (2018); Brown et al. (2020), for various language-conditioned tasks. It is, however, important to investigate efficient ways to approximate the posterior distribution in order to cope with large action sets, for instance via importance sampling and amortized inference, so that the proposed generative parameterization becomes more broadly applicable in the future." }, { "heading": "ACKNOWLEDGMENTS", "text": "SK was supported by ACT-I, JST JPMJPR17U8 and PRESTO, JST JPMJPR20C. KC was supported by NSF Award 1922658 NRT-HDR: FUTURE Foundations, Translation. This work was done when SK visited New York University." }, { "heading": "A EMBODIED AI AND VISION-AND-LANGUAGE NAVIGATION", "text": "In the R2R dataset (Anderson et al., 2018b), an agent moves on a graph that was constructed from one of the realistic 3D models of houses and buildings in the Matterport3D dataset (Chang et al., 2017). At the beginning of each trial, the agent is given a textual instruction, is placed at the start node, and attempts to reach the goal node by moving along the edges. At each node of the graph, the agent observes the visual features of the surrounding environment and decides which neighbour node it will move to next. When the agent determines that the current node is sufficiently close to the destination node, it outputs “STOP”, and the navigation trial ends. The agent is evaluated in terms of the accuracy of its final location and the trajectory length (Anderson et al., 2018b;a).\nThe difficulties in VLN mainly arise from the diversity of textual instructions. R2R provides multiple instructions for each trajectory. These instructions are created via crowd-sourcing, and their granularity and specificity vary greatly (Li et al., 2019). The agent furthermore needs to generalize to unseen environments. 
Previous studies have reported that models with rich visual and textual features often overfit to the seen environments (Hu et al., 2019).\nRecently, VLN has become a central part of embodied navigation studies (Zhu et al., 2020a; Wang et al., 2020a; Xin et al., 2020; Qi et al., 2020; Wang et al., 2020b; Krantz et al., 2020; Fu et al., 2020; Majumdar et al., 2020). VLN is applicable to real-world robotic navigation, even without a preset map, when it is combined with external SLAM and autonomous mobility modules (Anderson et al., 2020). Recent embodied AI environments (Savva et al., 2019; Weihs et al., 2020) are also applicable to real-world robotic navigation. The symbol emergence framework of Taniguchi et al. (2016) may offer one clue for closing the gap between machine learning-based embodied agents, human interaction, and robotics.\nIn this paper, we propose the first approach that directly utilizes vision-conditioned language modeling for navigation. Our generative policy applies vision-conditioned language modeling to score the instructions, and therefore our policy implicitly maps visual information to language information, as visualized in Sec. 5.3. This implicit mapping from visual to language information through language modeling may contribute to the performance of our generative policy, especially when navigating unseen environments." }, { "heading": "B RELATIONSHIPS OF NOTATIONS", "text": "In the formalism of the generative language-grounded policy, we denote the instruction as X, past and current observations as s:t, and past actions as a:t−1 at time step t. Figure 4 illustrates the relationship between these notations. ht includes the current and past observations and the past actions." }, { "heading": "C DETAILS OF R2R AND R4R DATASETS", "text": "R2R has in total 21,567 instructions, which are 29 words long on average. The training set has 14,025 instructions, while the validation-seen and validation-unseen datasets have 1,020 and 2,349 instructions, respectively. Each trajectory in the R2R dataset has three or four instructions. We use the released augmentation dataset in R2R. This augmentation dataset includes 178.3K trajectories with a single instruction each. In the R4R dataset, the training, validation-seen and validation-unseen sets contain 233k, 1k and 45k instructions, respectively.3 We do not use augmented datasets during R4R training." }, { "heading": "D TRAINING DETAILS OF LANGUAGE-GROUNDED POLICIES", "text": "D.1 FIDELITY-BASED TRAINING FOR R4R DATASETS\nTo let the agent follow the instruction even if the agent is off the reference path during student-forcing learning (Anderson et al., 2018b), we introduce a simple heuristic to determine the reference actions.\n3We follow https://github.com/google-research/google-research/tree/master/r4r and generate the R4R dataset from R2R.\nGiven the reference path $R = (r_1, \dots, r_i, \dots, r_{|R|})$ as the sequence of reference places $r_i$, and the current agent trajectory $P_t = (p_1, \dots, p_{t'}, \dots, p_t)$ as the sequence of visited places $p_{t'}$ at time step $t$: (1) if the current place $p_t$ satisfies $p_t \in R$, the reference action is the action that follows the reference path, including the stop action.4 (2) If the agent is off the reference path at time step $t$ and was last on the reference path at time step $t'$, we choose the temporal goal place from the reference path as $\arg\min_{r \in R'} \mathrm{PL}(p_t, r)$, where $R' = (r_i, r_{i+1}, \dots, r_{i+t-t'})$ and $\mathrm{PL}(x, y)$ is the shortest path length between the places $x$ and $y$. 
Here $r_i$ is the place at which the agent was last on $R$, at the $t'$-th step. $r_i$ is also inferred in the same way as in the footnote when $R$ contains a place multiple times. The reference action in this case is the action that leads the agent to the temporal goal place along the shortest path.\nThe key idea of this heuristic is that when the agent is off the reference path, we choose a temporal goal place from the reference path. However, we do not allow the agent to choose a temporal goal place that is far from the instruction and the visiting order of the reference path." }, { "heading": "E DETAILS OF EVALUATION METRICS", "text": "We use the following four metrics that are commonly used in evaluation for R2R navigation:\nTrajectory Length (TL) is the length of the agent trajectory in meters.\nNavigation Error (NE) is the shortest path distance in meters from the point where the agent stops to the goal point.\nSuccess Rate (SR) is the proportion of successes among all the trials. The task is successful when the agent stops within 3m of the goal point (Anderson et al., 2018b).\nSPL is short for Success weighted by (normalized inverse) Path Length, introduced in Anderson et al. (2018a). SPL is a variation of SR that is penalized by the trajectory length.\nWe analyze how well the trajectories followed by the proposed approach agree with the instructions using CLS (Jain et al., 2019), nDTW and SDTW (Ilharco et al., 2019). These three metrics are defined as:\nCLS Coverage weighted by Length Score is the product of the path coverage of the reference path and the length score, which penalizes trajectories that are longer or shorter than the reference path.\nnDTW Normalized Dynamic Time Warping computes the fidelity of the trajectory given the reference path.\nSDTW Success weighted by normalized Dynamic Time Warping is equal to nDTW for task success cases and 0 otherwise.\n4We also consider the case where $R$ contains $p_t$ multiple times in the R4R training set. When an agent visits the place $p_t$ for the $n$-th time and $R$ includes $p_t$ $m$ times, we assume this is the $m'$-th visit of $p_t$ on $R$, with $m' = n$ if $n < m$ and $m' = m$ otherwise. We assume that the agent is at the $m'$-th step of the reference path ($p_t = r_{m'}$).\nIn the R2R dataset, each instruction is based on the shortest path (Anderson et al., 2018b). The trajectory paths are specified only in the instructions, and therefore these metrics evaluate how closely the models follow the instructions. Suppose that there are two completely different routes in the navigation: the shortest path associated with the instruction and a different path that results in a slightly longer path length. When an agent ignores the instruction and reaches the goal by the different route, SPL will be close to 1 because of the similar path length. However, CLS and SDTW are penalized due to the completely different trajectory.\nWe do not use SPL as an R4R evaluation metric because SPL depends on the shortest path length from the start node to the goal node (Anderson et al., 2018a), and the reference path in R4R is not the shortest path in general. In R4R, the shortest path length from the start node to the goal node can even be 0. To account for fidelity, we would need to redefine a new SPL' based on the reference path length instead of the shortest path length for R4R. However, such an SPL' would be incompatible with the SPL reported in previous papers. Therefore we do not use SPL for R4R performance comparisons. See Jain et al. (2019) for further discussion of SPL on R4R.
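For concreteness, a minimal sketch of nDTW and SDTW following these definitions is given below; the 3m threshold follows Anderson et al. (2018b), the exponential normalization follows Ilharco et al. (2019), and the graph-distance function is assumed to be provided by the environment, so this is illustrative rather than the official evaluation code.

```python
import numpy as np

def ndtw(pred, ref, dist, d_th=3.0):
    """Normalized Dynamic Time Warping between a predicted trajectory `pred`
    and a reference path `ref`; `dist(x, y)` is assumed to return the
    shortest-path distance between two nodes on the navigation graph."""
    n, m = len(pred), len(ref)
    dtw = np.full((n + 1, m + 1), np.inf)
    dtw[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(pred[i - 1], ref[j - 1])
            dtw[i, j] = cost + min(dtw[i - 1, j], dtw[i, j - 1], dtw[i - 1, j - 1])
    # Normalize so that a perfect match gives 1 and large deviations decay to 0.
    return float(np.exp(-dtw[n, m] / (m * d_th)))

def sdtw(pred, ref, dist, d_th=3.0):
    """Success weighted by nDTW: nDTW when the trial ends within d_th of the
    goal, and 0 otherwise."""
    return ndtw(pred, ref, dist, d_th) if dist(pred[-1], ref[-1]) <= d_th else 0.0
```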
}, { "heading": "F DETAILS OF BASELINE MODELS", "text": "F.1 R2R BASELINES\nWe compare our approach against the following previous baselines in R2R. All of these, except for the random agent, follow the discriminative approach.\nRandom An agent that moves to one random direction for five steps (Anderson et al., 2018b). Seq2Seq An LSTM-based sequence-to-sequence model (Anderson et al., 2018b). RPA Combination of model-free and model-based reinforcement learning with a look-ahead mod-\nule (Wang et al., 2018).\nFollower An agent with panoramic view and trained with data augmentation (Fried et al., 2018). Self-Monitoring An agent that integrates visual and textual matching trained with progress monitor\nregularizer (Ma et al., 2019).\nRCM An agent that enforces cross-modal grounding of language and vision features (Wang et al., 2019).\nEnvDrop An agent trained with combination of imitation learning and reinforcement learning after pretraining using environmental dropout and back translation for environmental data augmentation (Tan et al., 2019).\nFAST An agent that exploits the fusion score of the local action selector and the progress monitor. This agent is able to back-track to visited nodes (Ke et al., 2019).\nPRESS An agent with the pretrained language encoder of BERT and the capability to incorporate multiple introductions for one trajectory (Li et al., 2019). We compare our model against their model trained with a single instruction.\nF.2 R4R BASELINES\nWe compare our policies with models that are trained without augmented data if they are available.\nRCM An RCM agent that enforces cross-modal grounding of language and vision features reported in Jain et al. (2019).\nnDTW An agent that with reinforcement learning with nDTW-based rewards (Ilharco et al., 2019). BabyWalk An agent that exploits the proposed BABY-STEPs to follow micro-instructions. BABY-\nSTEPs are shorter navigation tasks and trained with other learning regimes such as imitation learning (IL), reinforcement learning (RL) and curriculum reinforcement learning (Cur.).\nR4R is a dataset to measure the agent fidelity to the given instruction. Therefore we choose the fidelity-oriented agents in comparison and we develop our policies with supervised learning or fidelity-oriented training." }, { "heading": "G AGREEMENT OF GENERATIVE AND DISCRIMINATIVE POLICIES", "text": "We present the agreement rate of action prediction between generative and discriminative policies at the bottom of Figure 6. The agreement drops over time, which implies that these policies behaves differently from each other, capturing different aspects of VLN. The agreement become lower in later time steps and we hence consider the combination of Gen and Disc can work better than either model." }, { "heading": "H PERFORMANCE COMPARISON ON LONGER TRAJECTORIES", "text": "We present further comparisons of the discriminative and generative policies on longer trajectories with the R4R validation unseen set. Figure 6 presents SR, CLS, and SDTW for navigation trials with the total steps in the gold trajectories and the navigation trajectories. In the left pane, the horizontal axis is the number of steps to reach the goal node following the reference trajectory. As seen in Figure 3 of Jain et al. (2019), most of the R4R goal trajectories have 9 to 15 steps. The CLS and SDTW graphs suggest that our generative policy closely follows the goal trajectory compared to the discriminative policy especially when the gold trajectories are long. 
In the right pane, the horizontal axis is the number of steps at which the agents stop their navigation in the trials. Although the difference between the generative policy and the discriminative policy is not large in success rates, the generative policy achieves better CLS and SDTW on long navigation trajectories. This suggests that the discriminative policy does not follow the given instruction even though it achieves success rates similar to the generative policy." } ]
2021
null
SP:6cad092c66273cdb0065834ee4459f1b76f8929d
[ "The authors propose LiSP, a model-based planning method that performs model-predictive control using learned skills rather than actions. The skills are learned using DADS, with a modified reward function that additionally encourages all skills to stay within the support of training data to avoid sink states. The experiment results show stable learning progress on reset-free and ever-changing targets, compared to other baselines.", "This paper presents a lifelong reinforcement learning framework in a non-stationary environment with non-episodic interactions. The proposed approach is to 1) learn \"skills\" - a world model - to maximize the intrinsic rewards using both online and offline data, and to 2) make best plans based on the learned world model. This approach is evaluated with Hopper and Ant tasks." ]
The objective of lifelong reinforcement learning (RL) is to optimize agents which can continuously adapt and interact in changing environments. However, current RL approaches fail drastically when environments are non-stationary and interactions are non-episodic. We propose Lifelong Skill Planning (LiSP), an algorithmic framework for non-episodic lifelong RL based on planning in an abstract space of higher-order skills. We learn the skills in an unsupervised manner using intrinsic rewards and plan over the learned skills using a learned dynamics model. Moreover, our framework permits skill discovery even from offline data, thereby reducing the need for excessive real-world interactions. We demonstrate empirically that LiSP successfully enables long-horizon planning and learns agents that can avoid catastrophic failures even in challenging non-stationary and non-episodic environments derived from gridworld and MuJoCo benchmarks1.
[ { "affiliations": [], "name": "SKILL-SPACE PLANNING" }, { "affiliations": [], "name": "Kevin Lu" }, { "affiliations": [], "name": "Aditya Grover" } ]
[ { "authors": [ "Joshua Achiam", "Harrison Edwards", "Dario Amodei", "Pieter Abbeel" ], "title": "Variational option discovery algorithms", "venue": "arXiv preprint arXiv:1807.10299,", "year": 2018 }, { "authors": [ "Rishabh Agarwal", "Dale Schuurmans", "Mohammad Norouzi" ], "title": "An optimistic perspective on offline reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Arthur Argenson", "Gabriel Dulac-Arnold" ], "title": "Model-based offline planning", "venue": "arXiv preprint arXiv:2008.05556,", "year": 2020 }, { "authors": [ "Pierre-Luc Bacon", "Jean Harb", "Doina Precup" ], "title": "The option-critic architecture", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2017 }, { "authors": [ "Vı́ctor Campos", "Alexander Trott", "Caiming Xiong", "Richard Socher", "Xavier Giro-i Nieto", "Jordi Torres" ], "title": "Explore, discover and learn: Unsupervised discovery of state-covering skills", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Kurtland Chua", "Roberto Calandra", "Rowan McAllister", "Sergey Levine" ], "title": "Deep reinforcement learning in a handful of trials using probabilistic dynamics models", "venue": "arXiv preprint arXiv:1805.12114,", "year": 2018 }, { "authors": [ "John D Co-Reyes", "Suvansh Sanjeev", "Glen Berseth", "Abhishek Gupta", "Sergey Levine" ], "title": "Ecological reinforcement learning", "venue": "arXiv preprint arXiv:2006.12478,", "year": 2020 }, { "authors": [ "Eyal Even-Dar", "Sham M Kakade", "Yishay Mansour" ], "title": "Reinforcement learning in pomdps without resets", "venue": null, "year": 2005 }, { "authors": [ "Benjamin Eysenbach", "Shixiang Gu", "Julian Ibarz", "Sergey Levine" ], "title": "Leave no trace: Learning to reset for safe and autonomous reinforcement learning", "venue": "arXiv preprint arXiv:1711.06782,", "year": 2017 }, { "authors": [ "Benjamin Eysenbach", "Abhishek Gupta", "Julian Ibarz", "Sergey Levine" ], "title": "Diversity is all you need: Learning skills without a reward function", "venue": "arXiv preprint arXiv:1802.06070,", "year": 2018 }, { "authors": [ "Chelsea Finn", "Aravind Rajeswaran", "Sham Kakade", "Sergey Levine" ], "title": "Online meta-learning", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Justin Fu", "Aviral Kumar", "Ofir Nachum", "George Tucker", "Sergey Levine" ], "title": "D4rl: Datasets for deep data-driven reinforcement learning", "venue": "arXiv preprint arXiv:2004.07219,", "year": 2020 }, { "authors": [ "Karol Gregor", "Danilo Jimenez Rezende", "Daan Wierstra" ], "title": "Variational intrinsic control", "venue": "arXiv preprint arXiv:1611.07507,", "year": 2016 }, { "authors": [ "William H Guss", "Cayden Codel", "Katja Hofmann", "Brandon Houghton", "Noboru Kuno", "Stephanie Milani", "Sharada Mohanty", "Diego Perez Liebana", "Ruslan Salakhutdinov", "Nicholay Topin" ], "title": "The minerl competition on sample efficient reinforcement learning using human priors", "venue": "arXiv preprint arXiv:1904.10079,", "year": 2019 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Pieter Abbeel", "Sergey Levine" ], "title": "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Danijar Hafner", "Timothy Lillicrap", "Ian Fischer", "Ruben Villegas", "David Ha", "Honglak Lee", "James Davidson" 
], "title": "Learning latent dynamics for planning from pixels", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Weiqiao Han", "Sergey Levine", "Pieter Abbeel" ], "title": "Learning compound multi-step controllers under unknown dynamics", "venue": "In 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS),", "year": 2015 }, { "authors": [ "Steven Hansen", "Will Dabney", "Andre Barreto", "Tom Van de Wiele", "David Warde-Farley", "Volodymyr Mnih" ], "title": "Fast task inference with variational intrinsic successor features", "venue": null, "year": 1906 }, { "authors": [ "Mikael Henaff", "Alfredo Canziani", "Yann LeCun" ], "title": "Model-predictive policy learning with uncertainty regularization for driving in dense traffic", "venue": null, "year": 1901 }, { "authors": [ "Michael Janner", "Justin Fu", "Marvin Zhang", "Sergey Levine" ], "title": "When to trust your model: Modelbased policy optimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Gregory Kahn", "Adam Villaflor", "Vitchyr Pong", "Pieter Abbeel", "Sergey Levine" ], "title": "Uncertainty-aware reinforcement learning for collision avoidance", "venue": "arXiv preprint arXiv:1702.01182,", "year": 2017 }, { "authors": [ "Maximilian Karl", "Maximilian Soelch", "Philip Becker-Ehmck", "Djalel Benbouzid", "Patrick van der Smagt", "Justin Bayer" ], "title": "Unsupervised real-time control through variational empowerment", "venue": "arXiv preprint arXiv:1710.05101,", "year": 2017 }, { "authors": [ "Khimya Khetarpal", "Martin Klissarov", "Maxime Chevalier-Boisvert", "Pierre-Luc Bacon", "Doina Precup" ], "title": "Options of interest: Temporal abstraction with interest functions", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Rahul Kidambi", "Aravind Rajeswaran", "Praneeth Netrapalli", "Thorsten Joachims" ], "title": "Morel: Modelbased offline reinforcement learning", "venue": "arXiv preprint arXiv:2005.05951,", "year": 2020 }, { "authors": [ "James Kirkpatrick", "Razvan Pascanu", "Neil Rabinowitz", "Joel Veness", "Guillaume Desjardins", "Andrei A Rusu", "Kieran Milan", "John Quan", "Tiago Ramalho", "Agnieszka Grabska-Barwinska" ], "title": "Overcoming catastrophic forgetting in neural networks", "venue": "Proceedings of the national academy of sciences,", "year": 2017 }, { "authors": [ "Alexander S Klyubin", "Daniel Polani", "Chrystopher L Nehaniv" ], "title": "Empowerment: A universal agentcentric measure of control", "venue": "IEEE Congress on Evolutionary Computation,", "year": 2005 }, { "authors": [ "Aviral Kumar", "Aurick Zhou", "George Tucker", "Sergey Levine" ], "title": "Conservative q-learning for offline reinforcement learning", "venue": "arXiv preprint arXiv:2006.04779,", "year": 2020 }, { "authors": [ "Thanard Kurutach", "Ignasi Clavera", "Yan Duan", "Aviv Tamar", "Pieter Abbeel" ], "title": "Model-ensemble trust-region policy optimization", "venue": "arXiv preprint arXiv:1802.10592,", "year": 2018 }, { "authors": [ "Erwan Lecarpentier", "Emmanuel Rachelson" ], "title": "Non-stationary markov decision processes, a worstcase approach using model-based reinforcement learning, extended version", "venue": null, "year": 1904 }, { "authors": [ "Sergey Levine", "Aviral Kumar", "George Tucker", "Justin Fu" ], "title": "Offline reinforcement learning: Tutorial, review, and perspectives on open problems", "venue": "arXiv preprint arXiv:2005.01643,", 
"year": 2020 }, { "authors": [ "Kendall Lowrey", "Aravind Rajeswaran", "Sham Kakade", "Emanuel Todorov", "Igor Mordatch" ], "title": "Plan online, learn offline: Efficient learning and exploration via model-based control", "venue": "arXiv preprint arXiv:1811.01848,", "year": 2018 }, { "authors": [ "Kevin Lu", "Igor Mordatch", "Pieter Abbeel" ], "title": "Adaptive online planning for continual lifelong learning", "venue": "arXiv preprint arXiv:1912.01188,", "year": 2019 }, { "authors": [ "Aleksandr Mikhailovich Lyapunov" ], "title": "The general problem of the stability of motion", "venue": null, "year": 1992 }, { "authors": [ "Nikhil Mishra", "Pieter Abbeel", "Igor Mordatch" ], "title": "Prediction and control with temporal segment models", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Ofir Nachum", "Shixiang Gu", "Honglak Lee", "Sergey Levine" ], "title": "Data-efficient hierarchical reinforcement learning", "venue": "arXiv preprint arXiv:1805.08296,", "year": 2018 }, { "authors": [ "Ofir Nachum", "Haoran Tang", "Xingyu Lu", "Shixiang Gu", "Honglak Lee", "Sergey Levine" ], "title": "Why does hierarchy (sometimes) work so well in reinforcement learning", "venue": null, "year": 1909 }, { "authors": [ "Anusha Nagabandi", "Chelsea Finn", "Sergey Levine" ], "title": "Deep online learning via meta-learning: Continual adaptation for model-based rl", "venue": "arXiv preprint arXiv:1812.07671,", "year": 2018 }, { "authors": [ "Anusha Nagabandi", "Gregory Kahn", "Ronald S Fearing", "Sergey Levine" ], "title": "Neural network dynamics for model-based deep reinforcement learning with model-free fine-tuning", "venue": "IEEE International Conference on Robotics and Automation (ICRA),", "year": 2018 }, { "authors": [ "Anusha Nagabandi", "Kurt Konolige", "Sergey Levine", "Vikash Kumar" ], "title": "Deep dynamics models for learning dexterous manipulation", "venue": "In Conference on Robot Learning,", "year": 2020 }, { "authors": [ "Alex Ray", "Joshua Achiam", "Dario Amodei" ], "title": "Benchmarking safe exploration in deep reinforcement learning", "venue": "arXiv preprint arXiv:1910.01708,", "year": 2019 }, { "authors": [ "David Rolnick", "Arun Ahuja", "Jonathan Schwarz", "Timothy P Lillicrap", "Greg Wayne" ], "title": "Experience replay for continual learning", "venue": "arXiv preprint arXiv:1811.11682,", "year": 2018 }, { "authors": [ "Christoph Salge", "Cornelius Glackin", "Daniel Polani" ], "title": "Empowerment–an introduction", "venue": "In Guided Self-Organization: Inception,", "year": 2014 }, { "authors": [ "Jonathan Schwarz", "Jelena Luketina", "Wojciech M. 
Czarnecki", "Agnieszka Grabska-Barwinska", "Yee Whye Teh", "Razvan Pascanu", "Raia Hadsell" ], "title": "Progress & compress: A scalable framework for continual learning, 2018", "venue": null, "year": 2018 }, { "authors": [ "Archit Sharma", "Shixiang Gu", "Sergey Levine", "Vikash Kumar", "Karol Hausman" ], "title": "Dynamics-aware unsupervised discovery of skills", "venue": "arXiv preprint arXiv:1907.01657,", "year": 2019 }, { "authors": [ "Archit Sharma", "Michael Ahn", "Sergey Levine", "Vikash Kumar", "Karol Hausman", "Shixiang Gu" ], "title": "Emergent real-world robotic skills via unsupervised off-policy reinforcement learning", "venue": "arXiv preprint arXiv:2004.12974,", "year": 2020 }, { "authors": [ "Harshit Sikchi", "Wenxuan Zhou", "David Held" ], "title": "Learning off-policy with online planning", "venue": "arXiv preprint arXiv:2008.10066,", "year": 2020 }, { "authors": [ "Richard Sutton", "Doina Precup", "Satinder Singh" ], "title": "Between mdps and semi-mdps: A framework for temporal abstraction in reinforcement", "venue": null, "year": 1999 }, { "authors": [ "Richard S. Sutton", "Andrew G. Barto" ], "title": "Reinforcement Learning: An Introduction", "venue": null, "year": 2018 }, { "authors": [ "Arun Venkatraman", "Martial Hebert", "J Bagnell" ], "title": "Improving multi-step prediction of learned time series models", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2015 }, { "authors": [ "Tingwu Wang", "Jimmy Ba" ], "title": "Exploring model-based planning with policy networks", "venue": "arXiv preprint arXiv:1906.08649,", "year": 2019 }, { "authors": [ "David Warde-Farley", "Tom Van de Wiele", "Tejas Kulkarni", "Catalin Ionescu", "Steven Hansen", "Volodymyr Mnih" ], "title": "Unsupervised control through non-parametric discriminative rewards", "venue": "arXiv preprint arXiv:1811.11359,", "year": 2018 }, { "authors": [ "Grady Williams", "Andrew Aldrich", "Evangelos Theodorou" ], "title": "Model predictive path integral control using covariance variable importance sampling", "venue": "arXiv preprint arXiv:1509.01149,", "year": 2015 }, { "authors": [ "Yifan Wu", "George Tucker", "Ofir Nachum" ], "title": "Behavior regularized offline reinforcement learning", "venue": "arXiv preprint arXiv:1911.11361,", "year": 2019 }, { "authors": [ "Annie Xie", "James Harrison", "Chelsea Finn" ], "title": "Deep reinforcement learning amidst lifelong nonstationarity", "venue": "arXiv preprint arXiv:2006.10701,", "year": 2020 }, { "authors": [ "Tianhe Yu", "Garrett Thomas", "Lantao Yu", "Stefano Ermon", "James Zou", "Sergey Levine", "Chelsea Finn", "Tengyu Ma" ], "title": "Mopo: Model-based offline policy optimization", "venue": "arXiv preprint arXiv:2005.13239,", "year": 2020 }, { "authors": [ "Jesse Zhang", "Brian Cheung", "Chelsea Finn", "Sergey Levine", "Dinesh Jayaraman" ], "title": "Cautious adaptation for reinforcement learning in safety-critical settings", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Ruihan Zhao", "Kevin Lu", "Pieter Abbeel", "Stas Tiomkin" ], "title": "Efficient empowerment estimation for unsupervised stabilization", "venue": "In International Conference on Learning Representations,", "year": 2021 }, { "authors": [ "Henry Zhu", "Justin Yu", "Abhishek Gupta", "Dhruv Shah", "Kristian Hartikainen", "Avi Singh", "Vikash Kumar", "Sergey Levine" ], "title": "The ingredients of real-world robotic reinforcement learning", "venue": null, "year": 2004 }, { "authors": [ "Nagabandi" ], "title": 
"2019) improves upon some of the weaknesses of Chua et al. (2018) by planning in the parameter space of policies, solving some MuJoCo tasks asymptotically, but is still not accurate enough to plan over horizons long enough for environments that require a long sequence of coordinated actions", "venue": "Nagabandi et al", "year": 2020 }, { "authors": [ "Hafner" ], "title": "2019) try to improve planning accuracy with multi-step prediction losses, which is complementary to our work and could improve performance. Furthermore, even though value functions for short horizon planning is not particularly promising, we note that terminal value functions still have other benefits that can improve performance when combined with long-horizon planning, such as improved exploration or long-term reasoning (Lowrey et", "venue": null, "year": 2019 }, { "authors": [ "Hierarchical RL" ], "title": "Our work is somewhat similar to hierarchical option-critic architectures that “plan", "venue": "MPC (Sutton et al.,", "year": 1999 }, { "authors": [ "Sharma" ], "title": "the intrinsic reward is calculated under the model, which changes and has inaccuracies, high intrinsic reward under the model is not always the best indicator, whereas it is a more reliable metric when learned from real world transitions as in Sharma et al. (2020). Also, the intrinsic reward is sampled according to an expectation given by the skill-practice", "venue": null, "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Intelligent agents, such as humans, continuously interact with the real world and make decisions to maximize their utility over the course of their lifetime. This is broadly the goal of lifelong reinforcement learning (RL), which seeks to automatically learn artificial agents that can mimic the continuous learning capabilities of real-world agents. This goal is challenging for current RL algorithms as real-world environments can be non-stationary, requiring the agents to continuously adapt to changing goals and dynamics in robust fashions. In contrast to much of prior work in lifelong RL, our focus is on developing RL algorithms that can operate in non-episodic or “reset-free” settings and learn from both online and offline interactions. This setup approximates real-world learning where we might have plentiful logs of offline data but resets to a fixed start distribution are not viable and our goals and environment change. Performing well in this setting is key for developing autonomous agents that can learn without laborious human supervision in non-stationary, high-stakes scenarios.\nHowever, the performance of standard RL algorithms drops significantly in non-episodic settings. To illustrate this issue, we first pre-train agents to convergence in the episodic Hopper environment (Brockman et al., 2016) with state-of-the-art model-free and model-based RL algorithms: Soft Actor Critic (SAC) (Haarnoja et al., 2018) and Model-Based Policy Optimization (MBPO) (Janner et al., 2019), respectively. These agents are then trained further in a reset-free setting, representing a real-world scenario where agents seek to improve generalization via continuing to adapt at a test time where resets are more expensive. The learning curves are shown in Figure 1. In spite of near-perfect initialization, all agents proceed to fail catastrophically, suggesting that current gradientbased RL methods are inherently unstable in non-episodic settings.\n1Project website and materials: https://sites.google.com/berkeley.edu/reset-free-lifelong-learning\nThis illustrative experiment complements prior work highlighting other failures of RL algorithms in non-stationary and non-episodic environments: Co-Reyes et al. (2020) find current RL algorithms fail to learn in a simple gridworld environment without resets and Lu et al. (2019) find modelfree RL algorithms struggle to learn and adapt to nonstationarity even with access to the ground truth dynamics model. We can attribute these failures to RL algorithms succumbing to sink states. Intuitively, these are states from which agents struggle to escape, have low rewards, and suggest a catastrophic halting of learning progress (Lyapunov, 1992). For example, an upright walking agent may fall over and fail to return to a standing position, possibly because of underactuated joints. A less obvious notion of sink state we use is that the agent simply fails to escape from it due to low learning signal, which is almost equally undesirable. A lifelong agent must seek to avoid such disabling sink states, especially in the absence of resets.\nWe introduce Lifelong Skill Planning (LiSP), an algorithmic framework for reset-free, lifelong RL that uses long-horizon, decision-time planning in an abstract space of skills to overcome the above challenges. LiSP employs a synergistic combination of model-free policy networks and model-based planning, wherein we use a policy to execute certain skills, planning directly in the skill space. 
This combination offers two benefits: (1) skills constrain the search space to aid the planner in finding solutions to long-horizon problems, and (2) skills mitigate errors in the dynamics model by constraining the distribution of behaviors. We demonstrate that agents learned via LiSP can effectively plan over longer horizons than prior work, enabling better long-term reasoning and adaptation.\nAnother key component of the LiSP framework is the flexibility to learn skills from both online and offline interactions. For online learning, we extend Dynamics-Aware Discovery of Skills (DADS), an algorithm for unsupervised skill discovery (Sharma et al., 2019), with a skill-practice proposal distribution and a primitive dynamics model for generating rollouts for training. We demonstrate that the use of this proposal distribution significantly amplifies the signal for learning skills in reset-free settings. For offline learning from logged interactions, we employ a similar approach as above, but with a modification of the reward function to reflect the extent of disagreement amongst the models in a probabilistic ensemble (Kidambi et al., 2020).\nOur key contributions can be summarized as follows:\n• We identify skills as a key ingredient for overcoming the challenges to achieving effective lifelong RL in reset-free environments.\n• We propose Lifelong Skill Planning (LiSP), an algorithmic framework for reset-free lifelong RL with two novel components: (a) a skill learning module that can learn from both online and offline interactions, and (b) a long-horizon, skill-space planning algorithm.\n• We propose new challenging benchmarks for reset-free, lifelong RL by extending gridworld and MuJoCo OpenAI Gym benchmarks (Brockman et al., 2016). We demonstrate the effectiveness of LiSP over prior approaches on these benchmarks in a variety of nonstationary, multi-task settings, involving both online and offline interactions." }, { "heading": "2 BACKGROUND", "text": "Problem Setup. We represent the lifelong environment as a sequence of Markov decision processes (MDPs). The lifelong MDP M is the concatenation of several MDPs (Mi, Ti), where Ti denotes the length of time for which the dynamics of Mi are activated. Without loss of generality, we assume the sum of the Ti (i.e., the total environment time) is greater than the agent's lifetime. The properties of the MDP Mi are defined by the tuple (S, A, Pi, ri, γ), where S is the state space, A is the action space, Pi : S × A × S → R are the transition dynamics, ri : S × A → R is the reward function, and γ ∈ [0, 1) is the discount factor. Consistent with prior work, we assume ri, which specifies the task, is always known to the agent; it is also easy to learn in settings where it is not known. We use P and r as shorthand for the dynamics and reward of the current Mi with respect to the agent. The agent is represented by a policy π : S → A and seeks to maximize its expected return starting from the current state s0: $\arg\max_\pi \mathbb{E}_{s_{t+1} \sim \mathcal{P},\, a_t \sim \pi}\left[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t)\right]$. The policy π may be implemented as a parameterized function or an action-generating procedure. We expect the agent to optimize for the current Mi, rather than trying to predict the future dynamics; e.g., a robot may be moved to an arbitrary new MDP and be expected to perform well, without anticipating this change in advance.\nSkill Discovery. Traditional single-task RL learns a single parametrized policy πθ(·|s). 
For an agent to succeed at multiple tasks, we can increase the flexibility of the agent by introducing a set of latent skills z ∈ [−1, 1]^dim(z) and learning a skill-conditioned policy πθ(·|s, z). As in standard latent variable modeling, we assume a fixed, simple prior over the skills p(z), e.g., uniform. The learning objective of the skill policy is to maximize some intrinsic notion of reward. We denote the intrinsic reward as r̃(s, z, s′) to distinguish it from the task-specific reward defined previously. One such intrinsic reward, proposed in DADS (Sharma et al., 2019), can be derived from a variational approximation to the mutual information between the skills and next states I(s′; z|s) as:\n$$\tilde{r}(s, z, s') = \log \frac{q_\nu(s' \mid s, z)}{\frac{1}{L} \sum_{i=1}^{L} q_\nu(s' \mid s, z_i)} \quad \text{where } z_i \sim p(z). \quad (1)$$\nHere qν(s′|s, z) is a tractable variational approximation of the intractable posterior p(s′|s, z). Intuitively, this r̃ encourages predictable (via the numerator) and distinguishable (via the denominator) skills. Due to the mutual information objective, DADS also learns skills with high empowerment, which is useful for constraining the space of options for planning; we discuss this in Appendix C.\nModel-Based Planning. Whereas RL methods act in the environment according to a parameterized policy, model-based planning methods learn a dynamics model fφ(st+1|st, at) to approximate P and use Model Predictive Control (MPC) to generate an action via search over the model (Nagabandi et al., 2018b; Chua et al., 2018). At every timestep, MPC selects the policy π that maximizes the predicted H-horizon expected return from the current state s0 for a specified reward function r:\n$$\pi_{\mathrm{MPC}} = \arg\max_\pi \; \mathbb{E}_{a_t \sim \pi,\, s_{t+1} \sim f_\phi} \left[ \sum_{t=0}^{H-1} \gamma^t r(s_t, a_t) \right]. \quad (2)$$\nWe use Model Predictive Path Integral (MPPI) control (Williams et al., 2015) as our optimizer. MPPI is a gradient-free optimization method that: (1) samples policies according to Gaussian noise on the optimization parameters, (2) estimates the policy returns, and (3) reweighs policies according to a Boltzmann distribution on the predicted returns. For more details, see Nagabandi et al. (2020).\nFor all dynamics models used in this work, we use a probabilistic ensemble of $N$ models $\{f_{\phi_i}\}_{i=0}^{N-1}$, where each model predicts the mean and variance of the next state. For MPC planning, the returns are estimated via trajectory sampling (Chua et al., 2018), where each policy is evaluated on each individual model for the entire H-length rollout and the returns are averaged. For policy optimization, each transition is generated by sampling from a member of the ensemble uniformly at random." }, { "heading": "3 LIFELONG SKILL-SPACE PLANNING", "text": "In this section, we present Lifelong Skill Planning (LiSP), our proposed approach for reset-free lifelong RL. We provide an outline of LiSP in Algorithm 1. The agent initially learns a dynamics model fφ and a skill policy πθ from any available offline data. Thereafter, the agent continuously updates the model and policy based on online interactions in the reset-free lifelong environment. The agent uses skill-space planning to act in the environment and avoid sink states. In particular, LiSP learns the following distributions as neural networks:\n• A primitive dynamics model fφ used for both planning and policy optimization\n• A low-level skill policy πθ trained from generated model rollouts on intrinsic rewards\n• A discriminator qν for learning the intrinsic reward of Equation 1 (Sharma et al., 2019); a sketch of this computation follows the list\n• A skill-practice distribution pψ to generate a curriculum for training skills
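As a concrete reference for the discriminator component, the following is a minimal sketch of how the intrinsic reward of Equation 1 can be estimated from a learned log-density log qν(s′|s, z); the interface names are assumptions for illustration, not the exact DADS implementation.

```python
import math
import torch

def dads_intrinsic_reward(log_q, s, z, s_next, L=100):
    """Monte Carlo estimate of the intrinsic reward in Equation 1.

    log_q(s_next, s, z) is assumed to return log q_nu(s'|s, z) with shape (B,);
    s, s_next have shape (B, ds) and z has shape (B, dz).
    """
    log_num = log_q(s_next, s, z)
    # Sample L skills from the uniform prior p(z) on [-1, 1]^dz.
    zs = torch.rand(L, *z.shape, device=z.device) * 2.0 - 1.0
    log_probs = torch.stack([log_q(s_next, s, z_i) for z_i in zs])  # (L, B)
    # Log of the averaged density in the denominator: logsumexp - log L.
    log_denom = torch.logsumexp(log_probs, dim=0) - math.log(L)
    return log_num - log_denom  # high for predictable, distinguishable skills
```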
" }, { "heading": "3.1 MODEL-BASED SKILL DISCOVERY", "text": "Our goal is to learn a skill-conditioned policy πθ(a|s, z). In order to minimize interactions with the environment, we first learn a model fφ to generate synthetic rollouts for policy training. Since there is no start state distribution, the initialization of these rollouts is an important design choice.\nAlgorithm 1: Lifelong Skill Planning (LiSP)\nInitialize true replay buffer D, generated replay buffer D̂, dynamics model ensemble $\{f_{\phi_i}\}_{i=0}^{N-1}$, policy πθ, discriminator qν, and skill-practice distribution pψ\nif performing offline pretraining then Learn dynamics model fφ and train policy πθ with UpdatePolicy until convergence\nwhile agent is alive at current state s do Update dynamics model to maximize the log probability of transitions of D; Update policy models with UpdatePolicy(D, D̂, fφ, πθ, qν, pψ); Execute action from GetAction(s, fφ, πθ) and add environment transition to D\nWe propose to learn a skill-practice distribution pψ(z|s) to define which skills to use at a particular state. pψ acts as a “teacher” for πθ, automatically generating a curriculum for skill learning. We include visualizations of the learned skills on 2D gridworld environments in Appendix E.2.\nTo actually learn the policy, we use the model to generate short rollouts, optimizing πθ with SAC, similar to model-based policy learning works that find long rollouts destabilize learning due to compounding model errors (Janner et al., 2019). To initialize the rollout, we sample a state from the replay buffer D and a skill to practice via pψ. The next state is predicted by the dynamics model fφ, where the model used to make the prediction is uniformly sampled from the ensemble. The transition is added to a generated replay buffer D̂; gradients do not propagate through fφ. Given minibatches sampled from D̂, both πθ and pψ are independently trained using SAC to optimize the intrinsic reward r̃adjusted. Intuitively, pψ only selects the skills which are most useful from the current state, instead of arbitrary skills. This is summarized in Algorithm 2.\nAlgorithm 2: Learning Latent Skills\nHyperparameters: number of rollouts M, disagreement threshold αthres\nFunction UpdatePolicy(replay buffer D, generated replay buffer D̂, dynamics model fφ, policy πθ, discriminator qν, skill-practice distribution pψ):\nfor i = 1 to M do Sample si0 uniformly from D and latent zi from skill-practice pψ(·|si0); Generate si1 := fφ(·|si0, πθ(·|si0, zi)) and add transition (si0, ai, zi, si1) to D̂\nUpdate discriminator qν on {si0, ai, zi, si1}Mi=1 to maximize log qν(s1|s0, z)\nCalculate intrinsic rewards r̃adjusted for D̂ with qν, αthres using Equations 1 and 3\nUpdate πθ, qν, pψ using SAC with minibatches from D̂" }, { "heading": "3.1.1 OFFLINE SKILL LEARNING", "text": "A key problem in offline RL is avoiding value overestimation, typically tackled by constraining actions to the support of the dataset. We can use the same algorithm to learn skills offline with a simple adjustment to r̃ based on the model disagreement (Kidambi et al., 2020). For hyperparameters κ, αthres ∈ R+, we replace the intrinsic reward r̃ with r̃adjusted, penalizing r̃ with −κ if the model disagreement exceeds αthres. This penalty encourages the policy to stay within the support of the dataset by optimizing against an MDP which underestimates the value function. We approximate the expected ℓ2 disagreement in the mean prediction of the next state, denoted µφi for model i, with a sample. This penalty captures the epistemic uncertainty and is shown in Equation 3:\n$$\tilde{r}_{\mathrm{adjusted}} = \begin{cases} \tilde{r} & \mathrm{dis}(s, a) = \mathbb{E}_{i \neq j}\left[ \left\| \mu_{\phi_i}(s, a) - \mu_{\phi_j}(s, a) \right\|_2^2 \right] \leq \alpha_{\mathrm{thres}} \\ -\kappa & \mathrm{dis}(s, a) > \alpha_{\mathrm{thres}} \end{cases} \quad (3)$$
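A minimal sketch of Equation 3 follows, assuming the ensemble exposes its mean next-state predictions; for clarity, the sampled-pair estimator described above is replaced here by an exact average over distinct pairs, and the shapes and names are illustrative.

```python
import torch

def adjusted_reward(r_intrinsic, means, kappa, alpha_thres):
    """Disagreement-penalized reward of Equation 3.

    means: (N, B, ds) mean next-state predictions of the N ensemble members
    for a batch of (s, a) pairs; r_intrinsic: (B,) intrinsic rewards.
    """
    N = means.shape[0]
    # Squared l2 distance between the mean predictions of every pair of
    # members; the i == j terms are zero, so dividing by N(N-1) averages
    # over distinct pairs (the paper samples a pair instead of enumerating).
    diffs = means.unsqueeze(0) - means.unsqueeze(1)              # (N, N, B, ds)
    dis = (diffs ** 2).sum(-1).sum(dim=(0, 1)) / (N * (N - 1))   # (B,)
    penalty = -kappa * torch.ones_like(r_intrinsic)
    return torch.where(dis <= alpha_thres, r_intrinsic, penalty)
```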
" }, { "heading": "3.2 PLANNING IN SKILL SPACE FOR RESET-FREE ACTING", "text": "As described in Section 1, we argue that the failures of RL in the lifelong setting arise chiefly from the naive application of model-free RL. In particular, it is imperative that the model not only be used for sample-efficient policy optimization (as in MBPO), but also that the model be used to act safely in the environment. The manner in which acting is performed is important, serving both to exploit what the agent has learned and, crucially, to maintain the agent's safety in reset-free settings.\nIn this work, we propose to use model-based planning via MPC. We differ from prior MPC works in that we plan with the set of skills from Section 3.1, which allows us to utilize the broad capabilities of model-free RL while still enabling the benefits of model-based planning, namely exploration and long-horizon reasoning. The skills act as a prior over interesting options that the planner can utilize, seeking meaningful changes in the state. Also, the skills constrain the planner to a subset of the action space in which the policy is confident. The model-free policy learns to act in a reliable manner and is consequently more predictable and robust than naive actions. As a result, we are able to perform accurate planning over longer horizons than before (Chua et al., 2018; Nagabandi et al., 2020). Note that the complexity of our proposed approach is the same as prior MPC works, i.e. it is linear in the length of the horizon. We summarize this subroutine in Algorithm 3.\nAlgorithm 3: Skill-Space Planning\nHyperparameters: population size S, planning horizon H, planning iterations P, discount γ\nFunction GetAction(current state s0, dynamics model fφ, policy πθ):\nfor P planning iterations do Sample skills $\{z^i\}_{i=1}^{S} \sim [-1, 1]^{\dim(z) \times H}$ based on the distribution of the previous iteration; Estimate returns $R = \{\sum_{t=0}^{H-1} \gamma^t r(s_t^i, \pi_\theta(\cdot \mid s_t^i, z_t^i), s_{t+1}^i)\}_{i=1}^{S}$ using trajectory sampling, with states $s^i$ sampled from fφ for skills $z^i$; Use the MPPI update rule on R and z to generate a new distribution of skills $\{z_t\}_{t=0}^{H-1}$\nreturn a ∼ πθ(·|s, z0)\nWe summarize the LiSP subroutines in Figure 2. Skills are first learned via Algorithm 2, wherein the skill discriminator generates the intrinsic rewards and the skill-practice distribution generates a skill curriculum. We then plan using the skill policy and the dynamics model as per Algorithm 3.
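For concreteness, a minimal sketch of the GetAction loop with an MPPI-style update follows; the rollout and reward interfaces are illustrative assumptions, and the real implementation estimates returns via trajectory sampling across the ensemble as described in Section 2.

```python
import numpy as np

def plan_skills(s0, rollout, reward, dim_z, H=180, S=512, P=10, gamma=0.99, temp=1.0):
    """Gradient-free skill-space planning in the spirit of Algorithm 3.

    rollout(s0, plan) is assumed to return the H states visited under the
    learned model f_phi when the skill policy executes `plan`; reward(states)
    is assumed to return the H per-step rewards.
    """
    mu = np.zeros((H, dim_z))
    sigma = np.ones((H, dim_z))
    discounts = gamma ** np.arange(H)
    for _ in range(P):
        # Sample S candidate skill sequences around the current plan distribution.
        plans = np.clip(np.random.normal(mu, sigma, size=(S, H, dim_z)), -1.0, 1.0)
        returns = np.array([(discounts * reward(rollout(s0, p))).sum() for p in plans])
        # MPPI: reweight candidates with a Boltzmann distribution over returns.
        w = np.exp((returns - returns.max()) / temp)
        w /= w.sum()
        mu = (w[:, None, None] * plans).sum(axis=0)
        sigma = np.sqrt((w[:, None, None] * (plans - mu) ** 2).sum(axis=0)) + 1e-6
    return mu[0]  # first skill of the plan, executed by the low-level policy
```

As in standard MPC, only the first skill of the optimized plan is executed before replanning at the next decision point.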
" }, { "heading": "4 EXPERIMENTAL EVALUATIONS", "text": "We wish to investigate the following questions with regard to the design and performance of LiSP:\n• Does LiSP learn effectively in lifelong benchmarks with sink states and nonstationarity?\n• What are the advantages of long-horizon skill-space planning over action-space planning?\n• Why is the skill-practice distribution important for learning in this setting?\n• Does LiSP learn a suitable set of skills fully offline for future use?\nLifelong learning benchmarks. We extend several previously studied environments to the lifelong RL setting and distinguish these benchmarks with the prefix Lifelong. The key differences are that these environments have an online non-episodic phase where the agent must learn without resets and (optionally) an offline pretraining starting phase. The agent's replay buffer D is initialized with a starting dataset of transitions, allowing the agent to have meaningful behavior from the outset. More details are in Appendix E, and we open-source the implementations for wider use. Unless stated otherwise, we run 3 seeds for each experiment, and our plots show the mean and standard deviation.\nComparisons. We compare against select state-of-the-art algorithms to help illustrate the value of different design choices. The closest baseline to ours is action-space MPC, also known as PETS (Chua et al., 2018), which directly ablates the benefit of using skills for planning. We also compare to SAC (Haarnoja et al., 2018), representing the performance of model-free RL on these tasks. Finally, we consider MOReL (Kidambi et al., 2020), the offline model-based algorithm which proposed the disagreement penalty discussed in Section 3.1.1. Note that the latter two are principally single-task algorithms, while MPC is more suited to multiple reward functions. We believe these represent a diverse set of algorithms that could be considered for the reset-free setting." }, { "heading": "4.1 EVALUATION ON LIFELONG BENCHMARKS", "text": "We begin by evaluating the overall LiSP framework (Algorithm 1) in non-episodic RL settings, particularly how LiSP interacts with sink states and how it performs in nonstationary reset-free settings.\nNonstationary MuJoCo locomotion tasks. We evaluate LiSP on Hopper and Ant tasks from Gym (Brockman et al., 2016); we call these Lifelong Hopper and Lifelong Ant. The agents seek to achieve a target forward x-velocity, which changes over the agent's lifetime. Learning curves are shown in Figure 3. Most of the LiSP seeds remain stable despite sink states and adapt instantly to the current tasks. As predicted by Figure 1, the SAC and MOReL agents are unstable and do poorly, fundamentally because their gradient updates are not stable. The long-horizon planning capability of LiSP is also crucial, enabling LiSP to outperform SAC and MOReL, which lack the ability to plan. Furthermore, planning over the space of skills is necessary to achieve improvement over MPC, which is not capable of accurate planning on its own. LiSP outperforms all of the baselines tested.\nMinimizing resets with permanent sink states. We evaluate each method in a lifelong 2D volcano gridworld environment where the agent navigates to reach goals while avoiding pitfalls which permanently trap the agent absent intervention. Every 100 timesteps, the pitfalls and goals rearrange, which allows the agent to get untrapped; we consider this a “reset” if the agent was stuck, as it required this intervention. We use limited pretraining for this environment, i.e. we train the model but not the policies, reflecting the agent's ability to act safely while not fully trained. Figure 4 shows the number of times the agent got stuck during training. We find all the model-based methods, including LiSP, are easily capable of avoiding resets, suggesting model-based algorithms are naturally suited for these settings, incorporating data to avoid failures.
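To make the nonstationary locomotion setup above concrete, the following is an illustrative sketch of a reset-free, changing-target-velocity wrapper in the spirit of Lifelong Hopper and Lifelong Ant; the target schedule, reward shape, and the x_velocity info key are assumptions rather than the released benchmark code.

```python
import gym

class LifelongVelocityWrapper(gym.Wrapper):
    """Reset-free wrapper whose reward tracks a target x-velocity that
    changes every `period` steps (illustrative sketch, not the benchmark)."""

    def __init__(self, env, period=10_000, targets=(1.0, -1.0, 2.0)):
        super().__init__(env)
        self.period, self.targets, self.t = period, targets, 0

    def step(self, action):
        obs, _, _, info = self.env.step(action)
        # The task (target x-velocity) changes over the agent's lifetime.
        target = self.targets[(self.t // self.period) % len(self.targets)]
        reward = -abs(info.get("x_velocity", 0.0) - target)
        self.t += 1
        # Reset-free: never signal termination; the agent must recover on its own.
        return obs, reward, False, info
```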
}, { "heading": "4.2 ADVANTAGES OF LONG HORIZON SKILL-SPACE PLANNING", "text": "Next, we seek to clearly show the advantages of planning in the skill space, demonstrating the benefits of using skills in Algorithm 3 and arguing for the use of skills more broadly in lifelong RL.\nConstraining model error. We can interpret skill-space planning as constraining the MPC optimization to a more accurate subset of the action space which the policy operates in; this added accuracy is crucial for the improvement of LiSP vs MPC in the MuJoCo tasks. In Figure 5, we consider Lifelong Hopper and look at the one-step dynamics model error when evaluated on actions generated by the skill policy vs uniformly from the action space. Note this requires privileged access to the simulator so we cannot use these signals directly for planning. While most samples from both\nhave similar model error, the variance in the residual error for random actions is high, exhibiting a long tail. Even if the model is accurate for most samples, MPC can fatally overestimate on this tail.\nConstraining planning search space. This advantage in the one-step model error extends to longhorizon planning. In the Lifelong Hopper environment, accurate long-horizon planning is critical in order to correctly execute a hopping motion, which has traditionally made MPC difficult in this environment. For good performance, we use a long horizon of 180 for planning. In Figure 6, we ablate LiSP by planning with actions instead of skills (i.e. MPC). Accurate planning with actions is completely infeasible due to the accumulated unconstrained model errors.\nModel updates are extremely stable. Furthermore, as described in Figure 1, we found that SAC and MBPO fail in reset-free settings when denied access to resets if they continue gradient updates, which we attributed to gradient-based instability. Here we try to understand the stability of RL algorithms. In particular, since we found that policy/critic updates are unstable, we can instead consider how model updates affect planning. To do so, we consider SAC as before, as well as action-space MPC over a short 5-step horizon trying to optimize the same SAC critic (tied to the policy); we note this is similar to Sikchi et al. (2020). The results are shown in Figure 7. If only the model is updated, planning is very stable, showing the resilience of planning to model updates; this best resembles LiSP, which does not rely on a policy critic for planning. Alternatively, value function updates make planning unstable, supporting the idea that long-horizon planning improves stability vs relying on a value function. However, even the short 5-step planner avoids catastrophic failures, as the additional model stability quickly neutralizes value instability." }, { "heading": "4.3 ONLINE SKILL LEARNING WITH A MODEL", "text": "In this section, we show the importance of the skill-practice distribution from Algorithm 2 for learning setups that require hierarchical skills and involve short model-based rollouts. The skill-practice distribution is crucial for improving learning signal, generating useful skills for planning.\nHierarchical skill learning. We consider a 2D Minecraft environment where the agent must learn hierarchical skills to build tools, which in turn are used to create higher level tools. We ablate against minimizing r̃adjusted and a baseline of not using a practice distribution; the former acts as a sanity check whereas the latter represents the previous convention in skill discovery literature. 
Note that we do not compare against MOReL here since the asymptotic performance will be similar to SAC, as they both rely on Q-learning policy updates. Similar to the volcano environment, we only do limited pretraining on a small set of starting data. The learning curves are in Figure 8. Directed practice sampling is necessary for learning useful skills. Without the skill-practice curriculum, the asymptotic performance of LiSP is significantly limited. Additionally, the fact that MPC performs significantly worse than LiSP suggests that planning in the skill space – namely, planning in a space of expressive skills – greatly aids the search process and exploration.\nLearning with short rollouts. Algorithm 2 learns skills using one-step rollouts from arbitrary states. These short rollouts significantly reduce learning signal relative to sampling a skill from a fixed start state distribution and running the skill for an entire episode: rather than being able to “associate” all the states along an episode with a skill, the agent will be forced to associate all states with all skills. Intuitively, the skill-practice distribution can alleviate this, associating states with certain skills which are important, making the skill learning process easier. We investigate this by analyzing DADS-Off (Sharma et al., 2020) – an off-policy improved version of DADS – and resampling skills every K timesteps. This simulates K-step rollouts from arbitrary starting states, i.e. an oracle version of LiSP with data generated from the real environment, without a skill-practice distribution. We show the learned skills with various K for Lifelong Ant in Figure 9. Despite DADS-Off using off-policy data, resampling skills hurts performance, leading to a loss of diversity in skills. When generating transitions with a model as in LiSP, it is unstable to run long model rollouts due to compounding errors, making the skill-practice distribution critical." }, { "heading": "4.4 LEARNING SKILLS ENTIRELY FROM OFFLINE DATA", "text": "Finally, we evaluate the entire LiSP framework (Algorithm 1) in a fully offline setting without test-time updates, considering LiSP solely as an offline RL algorithm. Again, the most direct baseline/ablation to LiSP is action-space MPC, also known as PETS (Chua et al., 2018), which can learn offline for multiple test-time tasks. We consider two versions: one that uses a long horizon for planning (H = 180, MPC-Long) and one with a short horizon (H = 25, MPC-Short). The former is a direct ablation, while the latter is closer to prior work and the PETS paper.\nLearning skills offline for multiple tasks. We consider three Lifelong Hopper tasks, where tasks define target velocities. We use the same dataset as Section 4.1, collected from a forward hop task. Our results are in Table 1. LiSP successfully learns a set of skills offline for planning for all tasks." }, { "heading": "5 DISCUSSION AND RELATED WORK", "text": "In this section, we provide an overview of other works related to LiSP in fields adjacent to lifelong RL. An in-depth discussion of the related works is in Appendix A.\nNon-Episodic RL. The traditional approach to non-episodic RL is to explicitly learn a policy to reset the environment (Even-Dar et al., 2005; Han et al., 2015; Eysenbach et al., 2017). This requires somewhat hard-to-define notions of what a reset should accomplish and still uses manual resets.
We do not learn a reset policy, instead focusing on naturally safe acting via learning from offline data and effective planning. Zhu et al. (2020) and Co-Reyes et al. (2020) propose solutions to the lack of learning signal in reset-free settings but only consider environments without sink states; the latter requires control over the environment to form a training curriculum. Lu et al. (2019) and Lowrey et al. (2018) highlight the benefits of both planning and model-free RL for lifelong agents but require highly accurate world models. Offline RL learns policies exclusively from offline data, which naturally lacks resets (Levine et al., 2020; Agarwal et al., 2020; Wu et al., 2019; Yu et al., 2020; Kumar et al., 2020), but most work is restricted to single-task, stationary settings.\nModel-Based Planning. Existing works in model-based planning are restricted to short horizons (Chua et al., 2018; Wang & Ba, 2019; Nagabandi et al., 2020). Similarly to LiSP, some works (Kahn et al., 2017; Henaff et al., 2019) try to reduce model error for planning by penalizing deviations outside the training set. We found that embedding uncertainty into the cost function causes poor cost shaping; these works also still have relatively short horizons. Mishra et al. (2017) learns action priors used to generate actions for MPC that embed past actions in a latent space for constrained planning, similar in spirit to our skill constraint, but only considers a fairly simple manipulation task. Some works (Lowrey et al., 2018; Lu et al., 2019; Sikchi et al., 2020), including in offline RL (Argenson & Dulac-Arnold, 2020), use terminal value functions to aid planning, allowing for successful short-horizon planning; as discussed previously, this does not directly translate to good performance in the lifelong setting due to the instability and difficulty of learning this function.\nSkill Discovery. The key difference from prior skill discovery work is the lack of episodes. The most relevant work to ours is DADS (Sharma et al., 2019). Unlike DADS, we plan over the model with action predictions, which allows for accurate planning even if the discriminator is not accurate. Most other works (Achiam et al., 2018; Warde-Farley et al., 2018; Hansen et al., 2019; Campos et al., 2020) are complementary methods that can help learn a set of skills. Some seek to use skills as pretraining for episodic learning, as opposed to our focus on safe reset-free acting." }, { "heading": "6 CONCLUSION", "text": "We presented LiSP, an algorithm for lifelong learning based on skill discovery and long-horizon skill-space planning. To encourage future work in this space, we proposed new benchmarks that capture the key difficulties of lifelong RL in reset-free, nonstationary settings. Our experiments showed that LiSP effectively learns in non-episodic settings with sink states, vastly improving over prior RL methods, and analyzed ablations to show the benefits of each proposed design choice." }, { "heading": "ACKNOWLEDGEMENTS", "text": "We would like to thank Archit Sharma for advice on implementing DADS." }, { "heading": "A FURTHER DISCUSSION AND RELATED WORK", "text": "Model-Based Planning. Chua et al. (2018) propose to use an ensemble of probabilistic dynamics models for planning, which we use, but which by itself struggles to scale to higher-dimensional MuJoCo tasks due to limited model accuracy; the use of model ensembling to mitigate errors has also been previously explored by Nagabandi et al. (2018b) and Kurutach et al. (2018).
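As a reference for this ensemble discussion, here is a minimal sketch of a probabilistic dynamics ensemble with a MOReL-style disagreement penalty. The layer sizes (three hidden layers of 256 with tanh), threshold α_thres = 0.05, and penalty κ = 30 follow Appendix F, but the interface and penalty form are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ProbabilisticEnsemble(nn.Module):
    """Ensemble of Gaussian dynamics models in the spirit of PETS; the
    disagreement penalty mirrors MOReL (Kidambi et al., 2020)."""
    def __init__(self, obs_dim, act_dim, n_models=4, hidden=256):
        super().__init__()
        self.members = nn.ModuleList([
            nn.Sequential(nn.Linear(obs_dim + act_dim, hidden), nn.Tanh(),
                          nn.Linear(hidden, hidden), nn.Tanh(),
                          nn.Linear(hidden, hidden), nn.Tanh(),
                          nn.Linear(hidden, 2 * obs_dim))  # predicts mean and log-std
            for _ in range(n_models)])

    def forward(self, s, a, kappa=30.0, alpha_thres=0.05):
        x = torch.cat([s, a], dim=-1)
        means = torch.stack([m(x).chunk(2, dim=-1)[0] for m in self.members])
        disagreement = means.std(dim=0).max(dim=-1).values  # max per-dim ensemble spread
        # Penalize (or truncate) model rollouts whose ensemble disagreement is large,
        # so planning avoids regions the data does not support.
        penalty = kappa * (disagreement > alpha_thres).float()
        return means.mean(dim=0), penalty
```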
Wang & Ba (2019) improves upon some of the weaknesses of Chua et al. (2018) by planning in the parameter space of policies, solving some MuJoCo tasks asymptotically, but is still not accurate enough to plan over horizons long enough for environments that require a long sequence of coordinated actions. Nagabandi et al. (2020) shows that an improved optimizer can improve MPC performance for shorthorizon tasks. We note that all of these methods struggle to plan for long horizons due to compounding model errors, and none have been able to perform MPC in the Hopper environment without a terminal value function (as in Sikchi et al. (2020) and Argenson & Dulac-Arnold (2020)), which is a novelty of our work.\nWang & Ba (2019), Lu et al. (2019), Argenson & Dulac-Arnold (2020), and Sikchi et al. (2020) initialize the actions for MPC with a prior policy, but this is not the same as constraining the space. Consequently, they do not have as strong long-horizon benefits, instead using other methods for attaining strong performance. Sometimes early termination and low Gaussian noise for random shooting is used to attempt to approximate a constraint (Sikchi et al., 2020), but this is not very effective in higher dimensional environments. Some works (Venkatraman et al., 2015; Mishra et al., 2017; Hafner et al., 2019) try to improve planning accuracy with multi-step prediction losses, which is complementary to our work and could improve performance. Furthermore, even though value functions for short horizon planning is not particularly promising, we note that terminal value functions still have other benefits that can improve performance when combined with long-horizon planning, such as improved exploration or long-term reasoning (Lowrey et al., 2018; Lu et al., 2019).\nHierarchical RL. Our work is somewhat similar to hierarchical option-critic architectures that “plan” without MPC (Sutton et al., 1999; Bacon et al., 2017; Nachum et al., 2018; Khetarpal et al., 2020), which is sometimes referred to as “background-time planning” (Sutton & Barto, 2018). However, our skills are learned with an unsupervised reward and composed via MPC, which has the promise of more explicit long-term hierarchical benefits, whereas there is some evidence (Nachum et al., 2019) that policy-based hierarchy only aids in exploration – not long-term reasoning – and is replaceable with monolithic exploration methods. In contrast planning adds more explicit and interpretable “decision-time planning”, which may have better scaling properties in more complex environments, as long as planning can be done accurately.\nOur skill-practice distribution somewhat resembles the interest function of Khetarpal et al. (2020); both are designed to associate certain skills with certain states, forming a type of curriculum. In particular, they note that using a small number of skills improves learning early but can be asymptotically limiting, which has some similarity to curriculums generated from the skill-practice distribution. We show the entropy of the skills generated for the 2D Minecraft environment in Figure 10; the skill-practice distribution automatically generates a curriculum where it purposefully practices less skills (low entropy) early in training, and more skills (high entropy) later. Unlike Khetarpal et al. (2020), our skill-practice distribution is not part of the policy update and is only used to generate\na curriculum. 
We also note that while this means that transitions generated from the model are not drawn from the prior distribution in the objective, the update is still correct without importance sampling, since the critic Q(s, z, a) bootstraps its next-state target with the same z, i.e. from Q(s′, z, a′) with a′ ∼ π(·|s′, z).\nSafety. Zhang et al. (2020) perform risk-averse planning to embed safety constraints, but require a handcrafted safety function. This is similar to arguments from the Safety Gym work (Ray et al., 2019), which argues that the most meaningful way to formulate safe RL is with a constrained optimization problem. However, the safety in our work deals more with stability and control, rather than avoiding a cost function, so our notion of safety is generally different from these ideas. More concretely, the types of uncertainty and risk that are crucial to our setting are more epistemic, rather than aleatoric, so these types of works and distributional RL are not as promising. Consequently, we found long-horizon planning and accurate value estimation at greater scale to be more effective than constraining against a cost function or using explicit risk aversion. We note that some “empowerment”-style methods (Gregor et al., 2016; Karl et al., 2017; Eysenbach et al., 2018; Sharma et al., 2019) optimize an objective that aims to resemble a notion of stability; however, this metric is hard to estimate accurately in general, so we do not explicitly use it for any form of constrained planning. Zhao et al. (2021) learn a more accurate estimator of empowerment, which could be useful for future work in safety.\nNonstationarity. Much work in nonstationarity focuses on catastrophic forgetting (Rusu et al., 2016; Kirkpatrick et al., 2017; Schwarz et al., 2018), which is not a goal of our work, as we are primarily concerned with acting competently in the current MDP without explicitly trying to remember all previous or any arbitrary MDPs. Our approach to nonstationary lifelong learning follows Lu et al. (2019), but we do not require access to the ground-truth dynamics P at world changes. Lecarpentier & Rachelson (2019) tackle nonstationarity zero-shot for simple discrete MDPs by using risk-averse planning; our experiments generally suggested that accurate and scalable planning was more important than explicit risk aversion for our settings. Other works (Nagabandi et al., 2018a; Finn et al., 2019; Rolnick et al., 2018) are typically nonstationary at the abstraction of episodes, which generally does not require competencies such as online safe adaptation, and instead try to adapt quickly, minimize regret, or avoid catastrophic forgetting as previously discussed. Xie et al. (2020) is closer to our setting, as they consider nonstationarity within an episode, but they still require manual resets except in a 2D environment which does not contain sink states." }, { "heading": "B FURTHER PLOTS FOR LEARNING DYNAMICS", "text": "In this section, we provide additional plots, seeking to give more insight into the learning of LiSP in the MuJoCo experiments from Section 4.1.\nIn particular, we can consider the intrinsic reward, which is a proxy for the diversity of skills. Since the intrinsic reward is calculated under the model, which changes and has inaccuracies, high intrinsic reward under the model is not always the best indicator, whereas it is a more reliable metric when learned from real-world transitions as in Sharma et al. (2020).
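For reference when reading these plots, here is a minimal sketch of how a DADS-style intrinsic reward can be evaluated. Only the use of 16 prior samples in the denominator comes from Appendix F; the discriminator and prior interfaces are illustrative assumptions.

```python
import math
import torch

def intrinsic_reward(discriminator, s, z, s_next, prior, n_prior=16):
    """DADS-style reward: log q(s'|s,z) - log (1/L) sum_i q(s'|s,z_i), z_i ~ p(z).

    `discriminator(s, z)` is assumed to return a torch.distributions object
    over next states; `prior` is the skill prior p(z). s, z are batched.
    """
    log_q = discriminator(s, z).log_prob(s_next).sum(-1)
    prior_skills = prior.sample((n_prior,))                       # [L, skill_dim]
    log_q_prior = torch.stack([
        discriminator(s, zp.expand_as(z)).log_prob(s_next).sum(-1)
        for zp in prior_skills])                                  # [L, batch]
    denominator = torch.logsumexp(log_q_prior, dim=0) - math.log(n_prior)
    return log_q - denominator
```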
Also, the intrinsic reward is computed in expectation over skills sampled from the skill-practice distribution, so these numbers cannot be directly compared with those given by the Sharma et al. (2020) paper.\nThe intrinsic reward during the offline phases of the MuJoCo tasks is plotted in Figure 11." }, { "heading": "C EPISODIC EVALUATION OF EMPOWERMENT", "text": "In this section, we consider LiSP as an episodic learning algorithm (i.e. in the standard skill discovery setting). In particular, to measure performance here we consider how LiSP maximizes empowerment (Klyubin et al., 2005; Salge et al., 2014; Gregor et al., 2016), which roughly corresponds to how stable the agent is, or how much “potential” it has. Empowerment is mathematically defined by the mutual information between the current state and future states given actions, and has motivated many skill discovery objectives, including the one we studied in this paper. In fact, this can give benefits to reset-free learning as well: if all the skills maintain high empowerment, then this ensures that the agent only executes stable skills. Although we find we are not able to solely use this type of constraint for safe acting – instead having to rely on long horizons – these types of skills help to reduce the search space for planning. For example, learning good skills to reduce the search space for planning was very beneficial in the 2D Minecraft environment, so this is another experiment yielding insight into this property.\nWe compare LiSP to DADS (Sharma et al., 2019); note that the main differences will be due to using model-generated rollouts for training the skill policy and the skill-practice distribution. We perform training in the standard episodic Hopper setting (without demos) for skill discovery algorithms, and approximate the empowerment by measuring the average z-coordinate of the Hopper, corresponding to the agent’s height, as proposed in Zhao et al. (2021). To make this evaluation, we remove the planning component of LiSP, simply running the skill policy as in DADS, and maintain the standard LiSP algorithm otherwise, where both the skill policy and discriminator train exclusively from model rollouts. The changes to our hyperparameters are:\n• Every 20 epochs, sample 2000 environment steps\n• Generated replay buffer size of 20000\n• Generate 20000 model rollouts per epoch\n• Take 128 discriminator gradient steps per epoch\n• Take 256 skill policy gradient steps per epoch\n• Take 32 skill-practice distribution gradient steps per epoch\n• We do not use a disagreement penalty\nOur results are shown in Figure 12. LiSP acts as a strong model-based skill learning algorithm, achieving a ≈ 5× sample efficiency increase over DADS in this metric. The sample efficiency of LiSP is competitive with model-free algorithms on the Hopper environment, demonstrating that it is possible to gain significant learning signal from few samples by utilizing model rollouts, remaining competitive with state-of-the-art algorithms that focus on learning only one task." }, { "heading": "D OFFLINE LEARNING WITH LIMITED DATA", "text": "In this section, we repeat the offline RL experiment from Section 4.4, where we perform offline training of LiSP on a dataset and then use LiSP at test time for multiple tasks without gradient updates, but on a smaller dataset. As described in Appendix E, the original experiment used one million timesteps for training, representing a dataset size similar to the D4RL suite of tasks (Fu et al., 2020).
Here, we evaluate on the same dataset but using only the first 200000 timesteps, which represents a small dataset for this task (the replay buffer at the beginning of training), and roughly corresponds to “medium-replay” in D4RL.\nThe results are shown in Table 2. We include the original results for comparison. Despite the reduced dataset size, LiSP still stays stable, although it has somewhat worse task performance. Since the size and composition of the dataset were not a focus of our work, we don’t include more extensive results, but believe this experiment indicates that LiSP would be promising for varying datasets." }, { "heading": "E ENVIRONMENT DETAILS", "text": "In this section, we provide additional details about our lifelong experiments and the environment setup, including the modifications to the reward functions, the datasets used for pretraining, and the metrics used for performance.\nE.1 LIFELONG MUJOCO ENVIRONMENTS\nOur environment settings were designed such that the optimal behaviors are similar to those in the Gym version of these environments, and so that the difficulty of learning was similar (although the learning dynamics are different). This is generally accomplished by devoting part of the reward to stability, in addition to the standard reward for accomplishing the task, so that the health of the agent is reflected in the reward itself; standard benchmarks lack this because the signal comes from a termination function. While this extra shaping of the reward gives more signal than normal, these settings are still hard for standard RL algorithms as highlighted above, and even episodic RL without early termination using the standard rewards tends to fail when episodes are long relative to time-to-failure.\nIn this work, we considered two MuJoCo tasks: Lifelong Hopper and Lifelong Ant. The reward functions we use for these environments are as follows, where $z$ is the height of the agent, $x_{vel}$ is the x-velocity, and $\hat{x}_{vel}$ is a target x-velocity:\n• Lifelong Hopper: $-5(z - 1.8)^2 - |x_{vel} - \hat{x}_{vel}| + |\hat{x}_{vel}|$\n• Lifelong Ant: $-(z - 0.7)^2 - |x_{vel} - \hat{x}_{vel}| + |\hat{x}_{vel}|$\nFor the experiments in Section 4.1, we change the target velocity every 1000 timesteps according to the following ordering (chosen arbitrarily):\n• Lifelong Hopper: [0, 1, −1, 2, −1]\n• Lifelong Ant: [1, −1, 1]\nThe dataset we used for both tasks was the replay buffer generated from a SAC agent trained to convergence, which we set as one million timesteps per environment. This is the standard dataset size used in offline RL (Fu et al., 2020), although of higher quality. Despite this, our setting is still difficult and highly nontrivial due to sink states and gradient instability. We hope future works will continue to explore non-episodic settings with varying datasets.\nE.1.1 PERFORMANCE METRICS\nTo summarize performance easily in our experiments, we use a performance metric such that an optimal performance of 1 is achievable, and a representative “worst-case” performance is set to 0. This allows for easy comparison between tasks and reflects the difficulty of harder tasks. The weighting of the performance is similar to the standard reward functions, prioritizing stability with the height term and rewarding competence with a task-specific term, and represents the overall gap to a relatively perfect policy/desired behavior with a single number.\nFor Lifelong Hopper, we take the average height $z_{avg}$ over the evaluation duration and the average x-velocity $x_{vel}$.
Note that it is impossible to make a similar normalization to 1 with returns, since it is impossible to attain the optimal height and velocity at every timestep, but the averages can be optimal. For some target velocity $\hat{x}_{vel}$, the performance we use is given by:\n$1 - 0.8 \cdot \frac{1}{1.3^2}(z_{avg} - 1.3)^2 - 0.2 \cdot \frac{1}{4}|x_{vel} - \hat{x}_{vel}|$\nThe target height is derived from approximately the height of an optimal SAC agent on the standard Hopper benchmark, representing the behavior we aim to achieve.\nFor Lifelong Ant, we use a similar metric:\n$1 - 0.5 \cdot \frac{1}{0.5^2}(z_{avg} - 0.7)^2 - 0.5 \cdot \frac{1}{4}|x_{vel} - \hat{x}_{vel}|$\nE.2 2D GRIDWORLD ENVIRONMENTS AND SKILL VISUALIZATIONS\nIn this work, we considered two continuous 2D gridworld tasks: volcano and 2D Minecraft. The volcano environment allows us to consider the safety of agents, and Minecraft is a well-known video game setting that is attractive for hierarchical RL due to the nature of tools (Guss et al., 2019); our version is a minimal 2D variant that is faster to train. We visualize both of these environments and the skills learned on them below:\nE.2.1 VOLCANO ENVIRONMENT\nIn the volcano environment, there is lava (shown in dark red), which the agent is penalized for standing in but which does not inhibit its movement. The green tile, located on the right side in the picture, denotes the target goal. The reward is given by a negative L2 distance to this goal. The white squares denote free tiles. The primary square of interest is the brown tile, representing a pitfall, where the agent will be trapped if it falls in. The state is the agent’s x-y position, the position of the pitfall, as well as the position of the goal. We can see that the learned skills correspond to avoiding the tile, representing how the skills themselves can embed safety, whereas planning in action space would not achieve the same effect. This behavior follows from an empowerment objective (see Appendix C); in particular, LiSP is capable of offline evaluation of empowerment. We initialize the dataset with 100k demonstrations of a trained SAC agent with noise applied to actions.\nE.2.2 2D MINECRAFT ENVIRONMENT\nIn the 2D Minecraft environment, the agent maintains an inventory of up to one of each item, and receives a reward whenever it obtains a new item (either through mining or crafting), with increasing magnitudes depending on the hierarchy of the item (how many items must be obtained beforehand to obtain the item). The state is the position of the agent, the position of the blocks it can interact with, as well as its inventory, represented by a binary vector indicating ownership of each item. There are four tiles of interest:\n• A crafting table (bottom right), where the agent will craft all possible tools using the current materials in its inventory\n• A wood block (bottom left), where the agent will obtain a wood for visiting\n• A stone block (top left), where the agent will obtain a stone for visiting if it has a wooden pickaxe, consuming the pickaxe\n• An iron block (top right), where the agent will obtain an iron for visiting if it has a stone pickaxe, consuming the pickaxe and yielding the maximum reward in the environment\nThe skill progression is represented by the following ordering:\n1. Walk to wood to mine\n2. Bring wood to craft table to craft a stick\n3. Craft wooden pickaxe with both a wood and a stick\n4. Walk to stone to mine using a wooden pickaxe\n5. Craft stone pickaxe with both a stone and a stick\n6.
Walk to iron to mine using a stone pickaxe\nThere is skill reuse in that lower-level skills must be executed multiple times in order to accomplish higher-level skills, forming a hierarchy of skills in the environment. The dataset consists of 2000 steps of manually provided demonstrations, which is a very small amount of demonstration data." }, { "heading": "F HYPERPARAMETERS", "text": "Our code can be found at: https://github.com/kzl/lifelong_rl.\nFor all algorithms (as applicable) and environments, we use the following hyperparameters, roughly based on common values for the parameters in other works (note we classify the skill-practice distribution as a policy/critic here):\n• Discount factor γ equal to 0.99\n• Replay buffer D size of $10^6$\n• Dynamics model with three hidden layers of size 256 using tanh activations\n• Dynamics model ensemble size of 4 and learning rate $10^{-3}$, training every 250 timesteps\n• Policy and critics with two hidden layers of size 256 using ReLU activations\n• Discriminator with two hidden layers of size 512 using ReLU activations\n• Policy, critic, and discriminator learning rates of $3 \times 10^{-4}$, training every timestep\n• Automatic entropy tuning for SAC\n• Batch size of 256 for gradient updates\n• MPC population size S set to 400\n• MPC planning iterations P set to 10\n• MPC number of particles for expectation calculation set to 20\n• MPC temperature of 0.01\n• MPC noise standard deviation of 1\n• For PETS, we use a planning horizon of either 25 or 180, as mentioned\n• For DADS, we find it helpful to multiply the intrinsic reward by a factor of 5 for learning (for both DADS and LiSP usage)\nFor LiSP specifically, our hyperparameters are:\n• Planning horizon of 180\n• Repeat a skill for three consecutive timesteps for planning (not for environment interaction)\n• Replan at every timestep\n• Number of rollouts per iteration M set to 400\n• Generated replay buffer D̂ size set to 5000\n• Number of prior samples set to 16 in the denominator of the intrinsic reward\n• Number of discriminator updates per iteration set to 4\n• Number of policy updates per iteration set to 8\n• Number of skill-practice updates per iteration set to 4\n• Disagreement threshold α_thres set to 0.05 for Hopper, 0.1 for Ant\n• Disagreement penalty κ set to 30" } ]
2021
null
SP:a17218a21d8f69f2848a248c8658df81c8a68924
[ "The work applies and adjusts contrastive learning in the subject area of pre-training language models. The work first identifies the challenges with the current landscape of Masked Language Models with limits to learning sentence-level representations and semantic alignments in sentences of different languages. To take care of these gaps, the authors propose using HCTL as an approach that can learn more universal representations for sentences across different languages. The work builds on top of the BERT models, with the adjusted contrastive learning objective goal.", "The paper proposes a pre-trained language model variant which extends XLM-R (multilingual masked model) with two new objectives. The main difference to most other models is that the new losses are contrastive losses (however, as pointed out by the authors, other contrastive losses had been used before in e.g. ELECTRA). The first additional loss is a sentence-level one - where a [CLS] token is trained to be close to the positive sample, the paired sentence, with other sentences as negative samples. The same is done at word level, where the bag of words constructed from two sentences becomes the set of positive samples and other vocabulary words are negative samples. " ]
Recent studies have demonstrated the overwhelming advantage of cross-lingual pre-trained models (PTMs), such as multilingual BERT and XLM, on cross-lingual NLP tasks. However, existing approaches essentially capture the co-occurrence among tokens through involving the masked language model (MLM) objective with token-level cross entropy. In this work, we extend these approaches to learn sentence-level representations and show the effectiveness on cross-lingual understanding and generation. Specifically, we propose a Hierarchical Contrastive Learning (HICTL) method to (1) learn universal representations for parallel sentences distributed in one or multiple languages and (2) distinguish the semantically-related words from a shared cross-lingual vocabulary for each sentence. We conduct evaluations on two challenging cross-lingual tasks, XTREME and machine translation. Experimental results show that HICTL outperforms the state-of-the-art XLM-R by an absolute gain of 4.2% accuracy on the XTREME benchmark and achieves substantial improvements on both high-resource and low-resource English→X translation tasks over strong baselines.
[ { "affiliations": [], "name": "Xiangpeng Wei" }, { "affiliations": [], "name": "Rongxiang Weng" }, { "affiliations": [], "name": "Yue Hu" }, { "affiliations": [], "name": "Luxi Xing" }, { "affiliations": [], "name": "Heng Yu" }, { "affiliations": [], "name": "Weihua Luo" } ]
[ { "authors": [ "Mikel Artetxe", "Holger Schwenk" ], "title": "Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond", "venue": "Transactions of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Mikel Artetxe", "Sebastian Ruder", "Dani Yogatama" ], "title": "On the cross-lingual transferability of monolingual representations", "venue": "arXiv preprint arXiv:1910.11856,", "year": 2019 }, { "authors": [ "Samuel R. Bowman", "Gabor Angeli", "Christopher Potts", "Christopher D. Manning" ], "title": "A large annotated corpus for learning natural language inference", "venue": "In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing,", "year": 2015 }, { "authors": [ "Samuel R. Bowman", "Luke Vilnis", "Oriol Vinyals", "Andrew M. Dai", "Rafal Józefowicz", "Samy Bengio" ], "title": "Generating sentences from a continuous space", "venue": "In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning,", "year": 2016 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "In Proceedings of Machine Learning and Systems", "year": 2020 }, { "authors": [ "Zewen Chi", "Li Dong", "Furu Wei", "Nan Yang", "Saksham Singhal", "Wenhui Wang", "Xia song", "Xian-Ling Mao", "Heyan Huang", "Ming Zhou" ], "title": "Infoxlm: An information-theoretic framework for crosslingual language model pre-training", "venue": "URL https://arxiv", "year": 2007 }, { "authors": [ "Jonathan H Clark", "Eunsol Choi", "Michael Collins", "Dan Garrette", "Tom Kwiatkowski", "Vitaly Nikolaev", "Jennimaria Palomaki" ], "title": "Tydi qa: A benchmark for information-seeking question answering in typologically diverse languages", "venue": "arXiv preprint arXiv:2003.05002,", "year": 2020 }, { "authors": [ "Kevin Clark", "Minh-Thang Luong", "Quoc V. Le", "Christopher D. Manning" ], "title": "ELECTRA: pretraining text encoders as discriminators rather than generators", "venue": "In 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Alexis Conneau", "Guillaume Lample" ], "title": "Cross-lingual language model pretraining", "venue": "In Proc. 
of NIPS 2019,", "year": 2019 }, { "authors": [ "Alexis Conneau", "Ruty Rinott", "Guillaume Lample", "Adina Williams", "Samuel Bowman", "Holger Schwenk", "Veselin Stoyanov" ], "title": "XNLI: Evaluating cross-lingual sentence representations", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Alexis Conneau", "Kartikay Khandelwal", "Naman Goyal", "Vishrav Chaudhary", "Guillaume Wenzek", "Francisco Guzmán", "Edouard Grave", "Myle Ott", "Luke Zettlemoyer", "Veselin Stoyanov" ], "title": "Unsupervised cross-lingual representation learning at scale", "venue": "In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics,", "year": 2020 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "venue": null, "year": 2019 }, { "authors": [ "Li Dong", "Nan Yang", "Wenhui Wang", "Furu Wei", "Xiaodong Liu", "Yu Wang", "Jianfeng Gao", "Ming Zhou", "Hsiao-Wuen Hon" ], "title": "Unified language model pre-training for natural language understanding and generation", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Sergey Edunov", "Alexei Baevski", "Michael Auli" ], "title": "Pre-trained language model representations for language generation", "venue": null, "year": 2019 }, { "authors": [ "Andreas Eisele", "Chen Yu" ], "title": "Multiun: A multilingual corpus from united nation documents", "venue": "In International Conference on Language Resources & Evaluation,", "year": 2010 }, { "authors": [ "Yuwei Fang", "Shuohang Wang", "Zhe Gan", "Siqi Sun", "Jingjing Liu" ], "title": "FILTER: an enhanced fusion method for cross-lingual language understanding", "venue": "CoRR, abs/2009.05166,", "year": 2020 }, { "authors": [ "Junliang Guo", "Zhirui Zhang", "Linli Xu", "Hao-Ran Wei", "Boxing Chen", "Enhong Chen" ], "title": "Incorporating bert into parallel sequence decoding with adapters", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Raia Hadsell", "Sumit Chopra", "Yann LeCun" ], "title": "Dimensionality reduction by learning an invariant mapping", "venue": "IEEE Computer Society Conference on Computer Vision and Pattern Recognition", "year": 2006 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Kaiming He", "Haoqi Fan", "Yuxin Wu", "Saining Xie", "Ross B. 
Girshick" ], "title": "Momentum contrast for unsupervised visual representation learning", "venue": null, "year": 1911 }, { "authors": [ "Junjie Hu", "Sebastian Ruder", "Aditya Siddhant", "Graham Neubig", "Orhan Firat", "Melvin Johnson" ], "title": "XTREME: A massively multilingual multi-task benchmark for evaluating cross-lingual generalization", "venue": "CoRR, abs/2003.11080,", "year": 2020 }, { "authors": [ "Haoyang Huang", "Yaobo Liang", "Nan Duan", "Ming Gong", "Linjun Shou", "Daxin Jiang", "Ming Zhou" ], "title": "Unicoder: A universal language encoder by pre-training with multiple cross-lingual tasks", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),", "year": 2019 }, { "authors": [ "Dan Iter", "Kelvin Guu", "Larry Lansing", "Dan Jurafsky" ], "title": "Pretraining with contrastive sentence objectives improves discourse performance of language models", "venue": "In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 4859–4870,", "year": 2020 }, { "authors": [ "Mandar Joshi", "Danqi Chen", "Yinhan Liu", "Daniel S. Weld", "Luke Zettlemoyer", "Omer Levy" ], "title": "Spanbert: Improving pre-training by representing and predicting spans", "venue": "Trans. Assoc. Comput. Linguistics,", "year": 2020 }, { "authors": [ "Lingpeng Kong", "Cyprien de Masson d’Autume", "Lei Yu", "Wang Ling", "Zihang Dai", "Dani Yogatama" ], "title": "A mutual information maximization perspective of language representation learning", "venue": "In 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Anoop Kunchukuttan", "Pratik Mehta", "Pushpak Bhattacharyya" ], "title": "The IIT bombay english-hindi parallel corpus", "venue": "In Proceedings of the Eleventh International Conference on Language Resources and Evaluation,", "year": 2018 }, { "authors": [ "Anoop Kunchukuttan", "Pratik Mehta", "Pushpak Bhattacharyya" ], "title": "The IIT Bombay English-Hindi parallel corpus", "venue": "In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018),", "year": 2018 }, { "authors": [ "Zhenzhong Lan", "Mingda Chen", "Sebastian Goodman", "Kevin Gimpel", "Piyush Sharma", "Radu Soricut" ], "title": "ALBERT: A lite BERT for self-supervised learning of language representations", "venue": "In 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Mike Lewis", "Yinhan Liu", "Naman Goyal", "Marjan Ghazvininejad", "Abdelrahman Mohamed", "Omer Levy", "Veselin Stoyanov", "Luke Zettlemoyer" ], "title": "BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension", "venue": "In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics,", "year": 2020 }, { "authors": [ "Patrick Lewis", "Barlas Oğuz", "Ruty Rinott", "Sebastian Riedel", "Holger Schwenk" ], "title": "Mlqa: Evaluating cross-lingual extractive question answering", "venue": "arXiv preprint arXiv:1910.07475,", "year": 2019 }, { "authors": [ "Xiaodong Liu", "Kevin Duh", "Liyuan Liu", "Jianfeng Gao" ], "title": "Very deep transformers for neural machine translation", "venue": "arXiv preprint arXiv:2008.07772,", "year": 2020 }, { "authors": [ "Yinhan Liu", "Myle Ott", "Naman Goyal", "Jingfei Du", "Mandar Joshi", "Danqi Chen", "Omer Levy", "Mike Lewis", "Luke Zettlemoyer", "Veselin Stoyanov" ], "title": 
"Roberta: A robustly optimized BERT pretraining approach", "venue": "URL http://arxiv.org/abs/1907.11692", "year": 1907 }, { "authors": [ "Yinhan Liu", "Jiatao Gu", "Naman Goyal", "Xian Li", "Sergey Edunov", "Marjan Ghazvininejad", "Mike Lewis", "Luke Zettlemoyer" ], "title": "Multilingual denoising pre-training for neural machine translation", "venue": "CoRR, abs/2001.08210,", "year": 2020 }, { "authors": [ "Fuli Luo", "Wei Wang", "Jiahao Liu", "Yijia Liu", "Bin Bi", "Songfang Huang", "Fei Huang", "Luo Si" ], "title": "VECO: variable encoder-decoder pre-training for cross-lingual understanding and generation", "venue": "CoRR, abs/2010.16046,", "year": 2020 }, { "authors": [ "Tomas Mikolov", "Ilya Sutskever", "Kai Chen", "Gregory S. Corrado", "Jeffrey Dean" ], "title": "Distributed representations of words and phrases and their compositionality", "venue": "In Advances in Neural Information Processing Systems", "year": 2013 }, { "authors": [ "Andriy Mnih", "Koray Kavukcuoglu" ], "title": "Learning word embeddings efficiently with noise-contrastive estimation", "venue": "In Advances in Neural Information Processing Systems", "year": 2013 }, { "authors": [ "Joakim Nivre", "Mitchell Abrams", "Zeljko Agic", "Lars Ahrenberg", "Lene Antonsen" ], "title": "Universal Dependencies 2.2, 2018. URL https://hal.archives-ouvertes.fr/ hal-01930733. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics (ÚFAL)", "venue": "Faculty of Mathematics and Physics, Charles University", "year": 1930 }, { "authors": [ "Aäron van den Oord", "Yazhe Li", "Oriol Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": "CoRR, abs/1807.03748,", "year": 2018 }, { "authors": [ "Xiaoman Pan", "Boliang Zhang", "Jonathan May", "Joel Nothman", "Kevin Knight", "Heng Ji" ], "title": "Crosslingual name tagging and linking for 282 languages. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", "venue": null, "year": 2017 }, { "authors": [ "Matthew Peters", "Mark Neumann", "Mohit Iyyer", "Matt Gardner", "Christopher Clark", "Kenton Lee", "Luke Zettlemoyer" ], "title": "Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)", "venue": null, "year": 2018 }, { "authors": [ "Alec Radford", "Karthik Narasimhan", "Tim Salimans", "Ilya Sutskever" ], "title": "Improving language understanding by generative pre-training", "venue": "URL https://s3-us-west-2. amazonaws. com/openaiassets/researchcovers/languageunsupervised/language understanding paper", "year": 2018 }, { "authors": [ "Colin Raffel", "Noam Shazeer", "Adam Roberts", "Katherine Lee", "Sharan Narang", "Michael Matena", "Yanqi Zhou", "Wei Li", "Peter J Liu" ], "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "venue": "arXiv preprint arXiv:1910.10683,", "year": 2019 }, { "authors": [ "Nikunj Saunshi", "Orestis Plevrakis", "Sanjeev Arora", "Mikhail Khodak", "Hrishikesh Khandeparkar" ], "title": "A theoretical analysis of contrastive unsupervised representation learning", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Richard Socher", "Alex Perelygin", "Jean Wu", "Jason Chuang", "Christopher D. 
Manning", "Andrew Ng", "Christopher Potts" ], "title": "Recursive deep models for semantic compositionality over a sentiment treebank", "venue": "In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing,", "year": 2013 }, { "authors": [ "Kaitao Song", "Xu Tan", "Tao Qin", "Jianfeng Lu", "Tie-Yan Liu" ], "title": "MASS: masked sequence to sequence pre-training for language generation", "venue": "Proceedings of the 36th International Conference on Machine Learning, ICML 2019,", "year": 2019 }, { "authors": [ "Yonglong Tian", "Dilip Krishnan", "Phillip Isola" ], "title": "Contrastive multiview coding", "venue": "Computer Vision - ECCV 2020 16th European Conference,", "year": 2020 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Ł ukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Xiangpeng Wei", "Heng Yu", "Yue Hu", "Yue Zhang", "Rongxiang Weng", "Weihua Luo" ], "title": "Multiscale collaborative deep models for neural machine translation", "venue": "In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics,", "year": 2020 }, { "authors": [ "Rongxiang Weng", "Heng Yu", "Shujian Huang", "Shanbo Cheng", "Weihua Luo" ], "title": "Acquiring knowledge from pre-trained model to neural machine translation", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Guillaume Wenzek", "Marie-Anne Lachaux", "Alexis Conneau", "Vishrav Chaudhary", "Francisco Guzman", "Armand Joulin", "Edouard Grave" ], "title": "Ccnet: Extracting high quality monolingual datasets from web crawl data", "venue": null, "year": 1911 }, { "authors": [ "Adina Williams", "Nikita Nangia", "Samuel Bowman" ], "title": "A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)", "venue": "Association", "year": 2018 }, { "authors": [ "Zhirong Wu", "Yuanjun Xiong", "Stella X. Yu", "Dahua Lin" ], "title": "Unsupervised feature learning via nonparametric instance discrimination", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Jiacheng Yang", "Mingxuan Wang", "Hao Zhou", "Chengqi Zhao", "Weinan Zhang", "Yong Yu", "Lei Li" ], "title": "Towards making the most of BERT in neural machine translation", "venue": "In The Thirty-Fourth AAAI Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Zhilin Yang", "Zihang Dai", "Yiming Yang", "Jaime Carbonell", "Russ R Salakhutdinov", "Quoc V Le" ], "title": "Xlnet: Generalized autoregressive pretraining for language understanding", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Mang Ye", "Xu Zhang", "Pong C. 
Yuen", "Shih-Fu Chang" ], "title": "Unsupervised embedding learning via invariant and spreading instance feature", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Xingxing Zhang", "Furu Wei", "Ming Zhou" ], "title": "HIBERT: Document level pre-training of hierarchical bidirectional transformers for document summarization", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Yuan Zhang", "Jason Baldridge", "Luheng He" ], "title": "PAWS: Paraphrase adversaries from word scrambling", "venue": null, "year": 2019 }, { "authors": [ "Han Zhao", "Junjie Hu", "Andrej Risteski" ], "title": "On learning language-invariant representations for universal machine translation", "venue": "Proceedings of the 37th International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Wenzhao Zheng", "Zhaodong Chen", "Jiwen Lu", "Zhou Jie" ], "title": "Hardness-aware deep metric learning", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Ming Zhong", "Pengfei Liu", "Yiran Chen", "Danqing Wang", "Xipeng Qiu", "Xuanjing Huang" ], "title": "Extractive summarization as text matching", "venue": "In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics,", "year": 2020 }, { "authors": [ "Jinhua Zhu", "Yingce Xia", "Lijun Wu", "Di He", "Tao Qin", "Wengang Zhou", "Houqiang Li", "Tie-Yan Liu" ], "title": "Incorporating BERT into neural machine translation", "venue": "In 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Pierre Zweigenbaum", "Serge Sharoff", "Reinhard Rapp" ], "title": "Overview of the second BUCC shared task: Spotting parallel sentences in comparable corpora", "venue": "In Proceedings of the 10th Workshop on Building and Using Comparable Corpora,", "year": 2017 }, { "authors": [ "Conneau" ], "title": "2020) to build a Common-Crawl Corpus using the CCNet (Wenzek et al., 2019) tool5 for monolingual texts. Table 7 reports the language codes", "venue": "During pre-training,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Pre-trained models (PTMs) like ELMo (Peters et al., 2018), GPT (Radford et al., 2018) and BERT (Devlin et al., 2019) have shown remarkable success of effectively transferring knowledge learned from large-scale unlabeled data to downstream NLP tasks, such as text classification (Socher et al., 2013) and natural language inference (Bowman et al., 2015; Williams et al., 2018), with limited or no training data. To extend such pretraining-finetuning paradigm to multiple languages, some endeavors such as multilingual BERT (Devlin et al., 2019) and XLM (Conneau & Lample, 2019) have been made for learning cross-lingual representation. More recently, Conneau et al. (2020) present XLM-R to study the effects of training unsupervised cross-lingual representations at a huge scale and demonstrate promising progress on cross-lingual tasks.\nHowever, all of these studies only perform a masked language model (MLM) with token-level (i.e., subword) cross entropy, which limits PTMs to capture the co-occurrence among tokens and consequently fail to understand the whole sentence. It leads to two major shortcomings for current cross-lingual PTMs, i.e., the acquisition of sentence-level representations and semantic alignments among parallel sentences in different languages. Considering the former, Devlin et al. (2019) introduced the next sentence prediction (NSP) task to distinguish whether two input sentences are continuous segments from the training corpus. However, this simple binary classification task is not enough to model sentence-level representations (Joshi et al., 2020; Yang et al., 2019; Liu et al., 2019; Lan et al., 2020; Conneau et al., 2020). For the latter, (Huang et al., 2019) defined the cross-lingual paraphrase classification task, which concatenates two sentences from different languages as input\n∗Work done at Alibaba Group. Yue Hu and Heng Yu are the co-corresponding authors. We also made an official submission to XTREME (https://sites.research.google/xtreme), with several improved techniques used in (Fang et al., 2020; Luo et al., 2020).\nand classifies whether they are with the same meaning. This task learns patterns of sentence-pairs well but fails to distinguish the exact meaning of each sentence.\nIn response to these problems, we propose to strengthen PTMs through learning universal representations among semantically-equivalent sentences distributed in different languages. We introduce a novel Hierarchical Contrastive Learning (HICTL) framework to learn language invariant sentence representations via self-supervised non-parametric instance discrimination. Specifically, we use a BERT-style model to encode two sentences separately, and the representation of the first token (e.g., [CLS] in BERT) will be treated as the sentence representation. Then, we conduct instance-wise comparison at both sentence-level and word-level, which are complementary to each other. At the sentence level, we maximize the similarity between two parallel sentences while minimizing which among non-parallel ones. At the word-level, we maintain a bag-of-words for each sentence-pair, each word in which is considered as a positive sample while the rest words in vocabulary are negative ones. To reduce the space of negative samples, we conduct negative sampling for word-level contrastive learning. 
With the HICTL framework, the PTMs are encouraged to learn language-agnostic representations, thereby bridging the semantic discrepancy among cross-lingual sentences.\nThe HICTL is conducted on the basis of XLM-R (Conneau et al., 2020) and experiments are performed on several challenging cross-lingual tasks: language understanding tasks (e.g., XNLI, XQuAD, and MLQA) in the XTREME (Hu et al., 2020) benchmark, and machine translation in the IWSLT and WMT benchmarks. Extensive empirical evidence demonstrates that our approach can achieve consistent improvements over baselines on various tasks of both cross-lingual language understanding and generation. In more detail, our HICTL obtains absolute gains of 4.2% (up to 6.0% on zero-shot sentence retrieval tasks, e.g. BUCC and Tatoeba) accuracy on XTREME over XLM-R. For machine translation, our HICTL achieves substantial improvements over baselines on both low-resource (IWSLT English→X) and high-resource (WMT English→X) translation tasks." }, { "heading": "2 RELATED WORK", "text": "Pre-trained Language Models. Recently, substantial work has shown that models pre-trained on large corpora (Peters et al., 2018; Radford et al., 2018; Devlin et al., 2019) are beneficial for downstream NLP tasks. The application scheme is to fine-tune the pre-trained model using the limited labeled data of specific target tasks. For cross-lingual pre-training, both Devlin et al. (2019) and Conneau & Lample (2019) trained a transformer-based model on multilingual Wikipedia, which covers various languages, while XLM-R (Conneau et al., 2020) studied the effects of training unsupervised cross-lingual representations on a very large scale.\nFor sequence-to-sequence pre-training, UniLM (Dong et al., 2019) fine-tuned BERT with an ensemble of masks, which employs a shared Transformer network and utilizes specific self-attention masks to control what context the prediction conditions on. Song et al. (2019) extended BERT-style models by jointly training the encoder-decoder framework. XLNet (Yang et al., 2019) is trained by predicting masked tokens auto-regressively in a permuted order, which allows predictions to condition on both left and right context. Raffel et al. (2019) unified every NLP problem as a text-to-text problem and pre-trained a denoising sequence-to-sequence model at scale. Concurrently, BART (Lewis et al., 2020) pre-trained a denoising sequence-to-sequence model, in which spans are masked from the input but the complete output is auto-regressively predicted.\nPrevious works have explored using pre-trained models to improve text generation, such as pre-training both the encoder and decoder on several languages (Song et al., 2019; Conneau & Lample, 2019; Raffel et al., 2019) or using pre-trained models to initialize encoders (Edunov et al., 2019; Zhang et al., 2019a; Guo et al., 2020). Zhu et al. (2020) and Weng et al. (2020) proposed a BERT-fused NMT model, in which the representations from BERT are treated as context and fed into all layers of both the encoder and decoder. Zhong et al. (2020) formulated the extractive summarization task as a semantic text matching problem and proposed a Siamese-BERT architecture to compute the similarity between the source document and the candidate summary, which leverages the pre-trained BERT in a Siamese network structure. Our approach also belongs to contextual pre-training, so it can be applied to various downstream NLU and NLG tasks.\nContrastive Learning. 
Contrastive learning (CTL) (Saunshi et al., 2019) aims at maximizing the similarity between the encoded query $q$ and its matched key $k^+$ while keeping randomly sampled keys $\{k_0^-, k_1^-, k_2^-, ...\}$ far away from it. With similarity measured by a score function $s(q, k)$, a form of contrastive loss function, called InfoNCE (Oord et al., 2018), is considered in this paper:\n$\mathcal{L}_{ctl} = -\log \frac{\exp(s(q, k^+))}{\exp(s(q, k^+)) + \sum_i \exp(s(q, k_i^-))}$, (1)\nwhere the score function $s(q, k)$ is essentially implemented as the cosine similarity $\frac{q^\top k}{\|q\| \cdot \|k\|}$. $q$ and $k$ are often encoded by a learnable neural encoder, such as BERT (Devlin et al., 2019) or ResNet (He et al., 2016). $k^+$ and $k^-$ are typically called positive and negative samples. In addition to the form illustrated in Eq. (1), contrastive losses can also be based on other forms, such as margin-based losses (Hadsell et al., 2006) and variants of NCE losses (Mnih & Kavukcuoglu, 2013).\nContrastive learning is at the core of several recent works on unsupervised or self-supervised learning, from computer vision (Wu et al., 2018; Oord et al., 2018; Ye et al., 2019; He et al., 2019; Chen et al., 2020; Tian et al., 2020) to natural language processing (Mikolov et al., 2013; Mnih & Kavukcuoglu, 2013; Devlin et al., 2019; Clark et al., 2020b; Feng et al., 2020; Chi et al., 2020). Kong et al. (2020) improved language representation learning by maximizing the mutual information between a masked sentence representation and local n-gram spans. Clark et al. (2020b) utilized a discriminator to predict whether a token is replaced by a generator given its surrounding context. Iter et al. (2020) proposed to pre-train language models with contrastive sentence objectives that predict the surrounding sentences given an anchor sentence. In this paper, we propose HICTL to encourage parallel cross-lingual sentences to have identical semantic representations and to distinguish whether a word is contained in them as well, which can naturally improve the capability of cross-lingual understanding and generation for PTMs." }, { "heading": "3 METHODOLOGY", "text": "" }, { "heading": "3.1 HIERARCHICAL CONTRASTIVE LEARNING", "text": "We propose hierarchical contrastive learning (HICTL), a novel contrastive learning framework that unifies cross-lingual sentences as well as related words. HICTL can learn from both non-parallel and parallel multilingual data, and the overall architecture of HICTL is illustrated in Figure 1. We represent a training batch of the original sentences as $x = \{x_1, x_2, ..., x_n\}$ and its aligned counterpart is denoted as $y = \{y_1, y_2, ..., y_n\}$, where $n$ is the batch size. For each pair $\langle x_i, y_i \rangle$, $y_i$ is either the translation of $x_i$ in the other language when using parallel data, or the perturbation obtained by reordering the tokens in $x_i$ when only monolingual data is available. $x_{\setminus i}$ denotes a modified version of $x$ where the $i$-th instance is removed.\nSentence-Level CTL. As illustrated in Figure 1a, we apply XLM-R as the encoder to map sentences into hidden representations. The first token of every sequence is always a special token (e.g., [CLS]), and the final hidden state corresponding to this token is used as the aggregate sentence representation for pre-training, that is, $r_x = f \circ g(\mathcal{M}(x))$, where $g(\cdot)$ is the aggregate function, $f(\cdot)$ is a linear projection, and $\circ$ denotes the composition of operations.
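As a concrete rendering of Eq. (1), the following is a minimal PyTorch sketch of the InfoNCE loss with cosine-similarity scores; the tensor shapes are illustrative.

```python
import torch
import torch.nn.functional as F

def info_nce(q, k_pos, k_negs):
    """InfoNCE of Eq. (1): q [d], k_pos [d], k_negs [m, d], cosine scores s(., .)."""
    s_pos = F.cosine_similarity(q, k_pos, dim=-1).unsqueeze(0)   # [1]
    s_neg = F.cosine_similarity(q.unsqueeze(0), k_negs, dim=-1)  # [m]
    logits = torch.cat([s_pos, s_neg])                           # [1 + m]
    # -log softmax at index 0 equals -log(exp(s+) / (exp(s+) + sum_i exp(s_i^-)))
    return -F.log_softmax(logits, dim=0)[0]
```

In the sentence-level case, q is a sentence's [CLS] projection, k_pos is its parallel counterpart, and k_negs are the other 2n−2 in-batch representations.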
To obtain universal representations among semantically-equivalent sentences, we encourage r_{x_i} (the query, denoted as q) to be as similar as possible to r_{y_i} (the positive sample, denoted as k^+) but dissimilar to all other instances (i.e., y\\i ∪ x\\i, considered as a series of negative samples, denoted as {k_1^-, k_2^-, ..., k_{2n-2}^-}) in a training batch. Formally, the sentence-level contrastive loss for x_i is defined as\n$\mathcal{L}_{sctl}(x_i) = -\log \frac{\exp(s(q, k^+))}{\exp(s(q, k^+)) + \sum_{j=1}^{|y_{\backslash i} \cup x_{\backslash i}|} \exp(s(q, k_j^-))}. \quad (2)$\nSymmetrically, we also expect r_{y_i} (the query, denoted as $\tilde{q}$) to be as similar as possible to r_{x_i} (the positive sample, denoted as $\tilde{k}^+$) but dissimilar to all other instances in the same training batch; thus,\n$\mathcal{L}_{sctl}(y_i) = -\log \frac{\exp(s(\tilde{q}, \tilde{k}^+))}{\exp(s(\tilde{q}, \tilde{k}^+)) + \sum_{j=1}^{|y_{\backslash i} \cup x_{\backslash i}|} \exp(s(\tilde{q}, \tilde{k}_j^-))}. \quad (3)$\nThe sentence-level contrastive loss over the training batch can be formulated as\n$\mathcal{L}_S = \frac{1}{2n} \sum_{i=1}^{n} \left\{ \mathcal{L}_{sctl}(x_i) + \mathcal{L}_{sctl}(y_i) \right\}. \quad (4)$\nFor sentence-level contrastive learning, we treat the other instances contained in the training batch as negative samples for the current instance. However, such randomly selected negative samples are often uninformative, which poses the challenge of distinguishing very similar but nonequivalent samples. To address this issue, we employ smoothed linear interpolation (Bowman et al., 2016; Zheng et al., 2019) between sentences in the embedding space to alleviate the lack of informative samples for pre-training, as shown in Figure 2. Given a training batch $\{\langle x_i, y_i \rangle\}_{i=1}^{n}$, where n is the batch size, and having obtained the embeddings of a triplet — an anchor q, a positive k^+, and a negative k^- (supposing q, k^+ and k^- are representations of sentences x_i, y_i and y_i^- ∈ x\\i ∪ y\\i, respectively) — we construct a harder negative sample $\hat{k}^-$ to replace k_j^-:\n$\hat{k}^- = \begin{cases} q + \lambda(k^- - q), \ \lambda \in \left(\frac{d^+}{d^-}, 1\right] & \text{if } d^- > d^+; \\ k^- & \text{if } d^- \le d^+, \end{cases} \quad (5)$\nwhere $d^+ = \|k^+ - q\|_2$ and $d^- = \|k^- - q\|_2$. For the first condition, the hardness of $\hat{k}^-$ increases as λ becomes smaller. To this end, we intuitively set λ as\n$\lambda = \left(\frac{d^+}{d^-}\right)^{\zeta \cdot p^+_{avg}}, \quad \zeta \in (0, 1), \quad (6)$\nwhere $p^+_{avg} = \frac{1}{100} \sum_{\ell \in [-100, -1]} e^{-\mathcal{L}^{(\ell)}_S}$ is the average probability of distinguishing positives over the last 100 training batches, and L_S, formulated in Eq. (4), is the sentence-level contrastive loss of one training batch. During pre-training, when the model tends to distinguish positive samples easily, the current negative samples are no longer informative. At this time, $p^+_{avg}$ ↑ and $\frac{d^+}{d^-}$ ↓, which leads to λ ↓, so harder negative samples are adaptively synthesized in the following training steps, and vice versa. As hard negative samples usually result in significant changes of the model parameters, we introduce the slack coefficient ζ to prevent the model from being trained in the wrong direction when it accidentally switches from random negative samples to very hard ones. In practice, we empirically set ζ = 0.9.
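The hard-negative synthesis of Eqs. (5)-(6) can be sketched as follows; the running statistic $p^+_{avg}$ over the last 100 batches is mocked as a plain scalar argument here, and the embeddings are arbitrary.

# A NumPy sketch of the hard-negative synthesis in Eqs. (5)-(6).
import numpy as np

def synthesize_hard_negative(q, k_neg, k_pos, p_avg, zeta=0.9):
    d_pos = np.linalg.norm(k_pos - q)           # d+ = ||k+ - q||_2
    d_neg = np.linalg.norm(k_neg - q)           # d- = ||k- - q||_2
    if d_neg <= d_pos:
        return k_neg                            # second case of Eq. (5): already hard
    lam = (d_pos / d_neg) ** (zeta * p_avg)     # Eq. (6): lambda in (d+/d-, 1]
    return q + lam * (k_neg - q)                # Eq. (5): move k- toward the anchor q

As the model gets better at distinguishing positives (p_avg close to 1), the exponent grows, lambda shrinks toward d+/d-, and the synthesized negative moves closer to the anchor, which matches the adaptive behavior described above.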
Word-Level CTL. Intuitively, predicting the related words in other languages for each sentence can bridge the representations of words in different languages. As shown in Figure 1b, we concatenate the sentence pair ⟨x_i, y_i⟩ as x_i ∘ y_i: [CLS] x_i [SEP] y_i [SEP], whose bag-of-words is denoted as B. For word-level contrastive learning, the final state of the first token is treated as the query ($\bar{q}$), each word w_t ∈ B is considered a positive sample, and all the other words (V\\B, i.e., the words in V that are not in B, where V indicates the overall vocabulary of all languages) are negative samples. As the vocabulary is usually very large, we propose to use only a subset S ⊂ V\\B sampled according to the normalized similarities between $\bar{q}$ and the embeddings of the words. As a result, the subset S naturally contains hard negative samples, which are beneficial for learning high-quality representations (Ye et al., 2019). Specifically, the word-level contrastive loss for ⟨x_i, y_i⟩ is defined as\n$\mathcal{L}_{wctl}(x_i, y_i) = -\frac{1}{|B|} \sum_{t=1}^{|B|} \log \frac{\exp(s(\bar{q}, e(w_t)))}{\exp(s(\bar{q}, e(w_t))) + \sum_{w_j \in S} \exp(s(\bar{q}, e(w_j)))}, \quad (7)$\nwhere e(·) is the embedding lookup function and |B| is the number of unique words in the concatenated sequence x_i ∘ y_i. The overall word-level contrastive loss can be formulated as\n$\mathcal{L}_W = \frac{1}{n} \sum_{i=1}^{n} \mathcal{L}_{wctl}(x_i, y_i). \quad (8)$\nMulti-Task Pre-training. Both MLM and the translation language model (TLM) are combined with HICTL by default, as prior work (Conneau & Lample, 2019) has verified their effectiveness in XLM. In summary, the model can be optimized by minimizing the entire training loss:\n$\mathcal{L} = \mathcal{L}_{LM} + \mathcal{L}_S + \mathcal{L}_W, \quad (9)$\nwhere L_LM is implemented as either the TLM when using parallel data or the MLM when only monolingual data is available, to recover the original words of masked positions given the contexts." }, { "heading": "3.2 CROSS-LINGUAL FINE-TUNING", "text": "Language Understanding. The representations produced by HICTL can be used in several ways for language understanding tasks, whether they involve single texts or text pairs. Concretely, (i) the [CLS] representation of a single sentence in sentiment analysis, or of sentence pairs in paraphrasing and entailment, is fed into an extra output layer for classification. (ii) The pre-trained encoder can be used to assign POS tags to each word or to locate and classify all the named entities in a sentence for structured prediction, as well as (iii) to extract answer spans for question answering.\nLanguage Generation. We also explore using HICTL to improve machine translation. In previous work, Conneau & Lample (2019) showed that pre-trained encoders can provide a better initialization of both supervised and unsupervised NMT systems. Liu et al. (2020b) showed that NMT models can be improved by incorporating pre-trained sequence-to-sequence models on various language pairs, except the highest-resource settings. As illustrated in Figure 3, we use the model pre-trained by HICTL as the encoder and add a new set of decoder parameters that are learned from scratch. To prevent pre-trained weights from being washed out by supervised training, we train the encoder-decoder model in two steps. In the first step, we freeze the pre-trained encoder and only update the decoder. In the second step, we train all parameters for a relatively small number of iterations. In both cases, we compute the similarities between the [CLS] representation of the encoder and all target words in advance. Then we aggregate them with the logits before the softmax of each decoder step through an element-wise additive operation. The encoder-decoder model is optimized by maximizing the log-likelihood of the bitext at both steps.
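A minimal sketch of this additive logit fusion is given below; the shapes and the absence of any scaling factor are assumptions of the example, since only the element-wise additive aggregation itself is specified above.

# A NumPy sketch of the decoder logit fusion used during NMT fine-tuning.
import numpy as np

def fused_step_logits(decoder_logits, cls_vec, tgt_embeddings):
    # decoder_logits: (vocab,) raw scores of one decoding step, before softmax
    # cls_vec: (d,) sentence-level semantic embedding from the encoder [CLS]
    # tgt_embeddings: (vocab, d) embedding table of the target vocabulary
    sims = tgt_embeddings @ cls_vec   # similarities, computed once and reused per step
    return decoder_logits + sims      # element-wise additive fusion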
}, { "heading": "4.1 DATA AND MODEL", "text": "During pre-training, we follow Conneau et al. (2020) to build a Common-Crawl Corpus using the CCNet (Wenzek et al., 2019) tool1 for monolingual texts. Table 7 (see appendix A) reports the language codes and data size in our work. For parallel data, we use the same (English-to-X) MT dataset as (Conneau & Lample, 2019), which are collected from MultiUN (Eisele & Yu, 2010) for French, Spanish, Arabic and Chinese, the IIT Bombay corpus (Kunchukuttan et al., 2018a) for Hindi, the OpenSubtitles 2018 for Turkish, Vietnamese and Thai, the EUbookshop corpus for German, Greek and Bulgarian, Tanzil for both Urdu and Swahili, and GlobalVoices for Swahili. Table 8 (see appendix A) shows the statistics of the parallel data.\nWe adopt the Transformer-Encoder (Vaswani et al., 2017) as the backbone with 12 layers and 768 hidden units for HICTLBase, and 24 layers and 1024 hidden units for HICTL. We initialize the parameters of HICTL with XLM-R (Conneau et al., 2020). Hyperparameters for pre-training and fine-tuning are shown in Table 9 (see appendix B). We run the pre-training experiments on 8 V100 GPUs, batch size 1024. The number of negative samplesm=512 for word-level contrastive learning." }, { "heading": "4.2 EXPERIMENTAL EVALUATION", "text": "Cross-lingual Language Understanding (XTREME) There are nine tasks in XTREME that can be grouped into four categories: (i) sentence classification consists of Cross-lingual Natural Language Inference (XNLI) (Conneau et al., 2018) and Cross-lingual Paraphrase Adversaries from\n1https://github.com/facebookresearch/cc_net\nWord Scrambling (PAWS-X) (Zhang et al., 2019b). (ii) Structured prediction includes POS tagging and NER. We use POS tagging data from the Universal Dependencies v2.5 (Nivre et al., 2018) treebanks. Each word is assigned one of 17 universal POS tags. For NER, we use the Wikiann dataset (Pan et al., 2017). (iii) Question answering includes three tasks: Cross-lingual Question Answering (XQuAD) (Artetxe et al., 2019), Multilingual Question Answering (MLQA) (Lewis et al., 2019), and the gold passage version of the Typologically Diverse Question Answering dataset (TyDiQA-GoldP) (Clark et al., 2020a). (iv) Sentence retrieval includes two tasks: BUCC (Zweigenbaum et al., 2017) and Tatoeba (Artetxe & Schwenk, 2019), which aims to extract parallel sentences between the English corpus and target languages. As XTREME provides no training data, thus we directly evaluate pre-trained models on test sets.\nTable 1 provides detailed results on four categories in XTREME. First, compared to the state of the art XLM-R baseline, HICTL further achieves significant gains of 1.43% and 2.80% on average on nine tasks with cross-lingual zero-shot transfer and translate-train-all settings, respectively. Second, mining hard negative samples via smoothed linear interpolation play an important role in contrastive learning, which significantly improves accuracy by 1.6 points on average. Third, HICTL with hardness aware augmentation delivers large improvements on zero-shot sentence retrieval tasks (scores 5.8 and 6.0 points higher on BUCC and Tatoeba, respectively). Following (Hu et al., 2020), we directly evaluate pre-trained models on test sets without any extra labeled data or fine-tuning techniques used in (Fang et al., 2020; Luo et al., 2020). These results demonstrate the capacity of HICTL on learning cross-lingual representations. 
We also compare our best model with two existing models: FILTER (Fang et al., 2020) and VECO (Luo et al., 2020). The results demonstrate that HICTL achieves the best performance on most tasks with less monolingual data.\nAblation experiments are presented in Table 3. Compared to the full model, we can draw several conclusions: (1) removing the sentence-level CTL objective hurts performance consistently and significantly, (2) removing the word-level CTL objective causes the smallest drop compared to the others, and (3) the parallel (MT) data has a large impact on zero-shot multilingual sentence retrieval tasks. Moreover, Table 2 provides comparisons between HICTL and existing methods.\nMachine Translation. The main idea of HICTL is to summarize cross-lingual parallel sentences into a shared representation that we term the semantic embedding, using which semantically related words can be distinguished from others. It is thus natural to apply this global embedding to text generation. We fine-tune the pre-trained HICTL with the base setting on machine translation tasks in both low-resource and high-resource settings. For the low-resource scenario, we choose IWSLT'14 English↔German (En↔De; we split 7k sentence pairs from the training dataset for validation and concatenate dev2010, dev2012, tst2010, tst2011 and tst2012 as the test set), IWSLT'14 English→Spanish (En→Es), WMT'16 Romanian→English (Ro→En), and IWSLT'17 English→French (En→Fr) and English→Chinese (En→Zh) translation (https://wit3.fbk.eu/mt.php?release=2017-01-ted-test). There are 160k, 183k, 236k, 235k and 0.6M bilingual sentence pairs for the En↔De, En→Es, En→Fr, En→Zh and Ro→En tasks, respectively. For the rich-resource scenario, we work on WMT'14 En→{De, Fr}, whose corpus sizes are 4.5M and 36M, respectively. We concatenate newstest 2012 and newstest 2013 as the validation set and use newstest 2014 as the test set.\nDuring fine-tuning, we use the pre-trained model to initialize the encoder and introduce a randomly initialized decoder. We develop a shallower decoder with 4 identical layers to reduce the computation overhead. At the first fine-tuning step, we concatenate the datasets of all language pairs in either the low-resource or the high-resource setting to optimize the decoder only, until convergence. (Zhao et al. (2020) conducted a theoretical investigation on learning universal representations for the task of multilingual MT, while we directly use a shared encoder and decoder across languages for simplicity.) Then we tune the whole encoder-decoder model using a per-language corpus at the second step. The initial learning rate is 2e-5 and the inverse sqrt learning rate scheduler (Vaswani et al., 2017) is adopted. For WMT'14 En→De, we use beam search with width 4 and length penalty 0.6 for inference. For the other tasks, we use width 5 and a length penalty of 1.0. We use multi-bleu.perl to evaluate IWSLT'14 En↔De and the WMT tasks, but sacreBLEU for the remaining tasks, for fair comparison with previous work.\nResults on the high-resource and low-resource tasks are reported in Table 4 and Table 5, respectively. We implemented the standard Transformer (applying the base and big settings for the IWSLT and WMT tasks, respectively) as the baseline. The proposed HICTL improves the BLEU scores of the eight tasks by 3.34, 2.95, 3.24, 3.45, 2.8, 6.37, 4.4, and 3.4. In addition, our approach also outperforms the BERT-fused model (Yang et al., 2020), a method that treats BERT as an extra context and fuses the representations extracted from BERT with each encoder and decoder layer. 
Note that we achieve new state-of-the-art results on the IWSLT'14 En→De and IWSLT'17 En→{Fr, Zh} translations. These improvements show that mapping different languages into a universal representation space is beneficial for both low-resource and high-resource translation.\nWe also evaluate our model on tasks where no bi-text is available for the target language pair. Following mBART (Liu et al., 2020b), we adopt the language-transfer setting. That is, no bi-text for the target pair is available, but there is bi-text for translating from some other language into the target language. For example, suppose there is no parallel data for the target language pair Italian→English (It→En); we can then transfer knowledge learned from Czech→English (Cs→En, a high-resource language pair) to It→En. We consider X→En translation, covering Indic languages (Ne, Hi, Si, Gu) and European languages (Ro, It, Cs, Nl). For the European languages, we fine-tune on Cs→En translation, whose parallel data is from WMT'19 and contains 11M sentence pairs, and test on {Cs, Ro, It, Nl}→En, where the test sets are from previous WMT (Cs, Ro) or IWSLT (It, Nl) competitions. For the Indic languages, we fine-tune on Hi→En translation (1.56M sentence pairs from IITB (Kunchukuttan et al., 2018b)) and test on {Ne, Hi, Si, Gu}→En translations. Results are shown in Table 6. We consistently obtain reasonable transfer scores on low-resource pairs across the different fine-tuned models, whereas in our experience randomly initialized models without pre-training always achieve near-0 BLEU. The underlying reason is that multilingual pre-training produces universal representations across languages, so that once the model learns to translate one language, it learns to translate all languages with similar representations. The failure on Gu→En translation, we conjecture, is because we only use 0.3GB of monolingual data for pre-training, which makes it difficult to learn informative representations for Gujarati." }, { "heading": "5 CONCLUSION", "text": "We have demonstrated that pre-trained language models (PTMs), trained to learn commonsense knowledge from large-scale unlabeled data, benefit greatly from hierarchical contrastive learning (HICTL), both in terms of cross-lingual understanding and generation. Learning universal representations at both the word level and the sentence level bridges the semantic discrepancy across languages. As a result, our HICTL sets a new level of performance among cross-lingual PTMs, improving on the state of the art by a large margin." }, { "heading": "ACKNOWLEDGMENTS", "text": "We would like to thank the anonymous reviewers for the helpful comments. We also thank Jing Yu for the instructive suggestions. This work is supported by the National Key R&D Program of China under Grant No. 2017YFB0803301 and No. 2018YFB1403202." }, { "heading": "A PRE-TRAINING DATA", "text": "During pre-training, we follow Conneau et al. (2020) to build a Common-Crawl corpus for monolingual texts using the CCNet tool (Wenzek et al., 2019; https://github.com/facebookresearch/cc_net). Table 7 reports the language codes and data sizes in our work. For parallel data, we use the same (English-to-X) MT dataset as (Conneau & Lample, 2019), which is collected from MultiUN (Eisele & Yu, 2010) for French, Spanish, Arabic and Chinese, the IIT Bombay corpus (Kunchukuttan et al., 2018a) for Hindi, OpenSubtitles 2018 for Turkish, Vietnamese and Thai, the EUbookshop corpus for German, Greek and Bulgarian, Tanzil for both Urdu and Swahili, and GlobalVoices for Swahili. 
Table 8 shows the statistics of the parallel data." }, { "heading": "B HYPERPARAMETERS FOR PRE-TRAINING AND FINE-TUNING", "text": "Table 9 presents the hyperparameters for pre-training HICTL. We use the same vocabulary and sentence-piece model as XLM-R (Conneau et al., 2020). During fine-tuning on XTREME, we search the learning rate over {5e-6, 1e-5, 1.5e-5, 2e-5, 2.5e-5, 3e-5} and the batch size over {16, 32} for BASE-size models, and we select the best LARGE-size model by searching the learning rate over {3e-6, 5e-6, 1e-5} and the batch size over {32, 64}." }, { "heading": "C RESULTS FOR EACH DATASET AND LANGUAGE", "text": "Below, we provide detailed results for each dataset and language on XTREME, as shown in Tables 10-14. Results of XLM-R are from our implementation. Two of the per-language result tables are reproduced below; each is split into two column blocks.\n\nModel | af ar bg de el en es et eu fa fi fr he hi hu id it\nTranslate-train-all\nXLM-R | 90.6 67.4 89.1 89.9 86.8 96.3 89.6 87.1 74.0 70.8 86.0 87.7 68.6 77.4 82.8 72.6 91.1\nHICTL, Wiki-15 + MT | 91.0 69.3 89.1 89.4 87.8 97.6 88.2 88.2 74.8 72.0 86.7 87.9 70.2 79.0 84.2 74.3 90.8\nHICTL, CCNet-100 + MT | 91.8 70.2 90.7 90.8 89.0 98.3 89.7 90.1 76.2 73.0 88.5 90.2 70.7 80.0 86.4 74.5 92.0\n+HARD NEGATIVE SAMPLES | 92.2 71.0 91.5 91.3 90.0 97.7 91.0 89.4 75.7 73.5 88.8 90.1 71.1 79.7 85.4 75.1 91.7\n\nModel | ja kk ko mr nl pt ru ta te th tl tr ur vi yo zh avg\nTranslate-train-all\nXLM-R | 17.3 78.3 55.5 82.1 89.8 88.9 89.8 65.7 87.0 48.6 92.9 77.9 71.7 56.8 24.7 27.2 74.6\nHICTL, Wiki-15 + MT | 28.4 79.2 54.2 80.7 90.9 88.4 90.5 67.3 89.1 48.7 92.2 77.6 72.0 58.8 27.2 27.1 75.5\nHICTL, CCNet-100 + MT | 30.2 80.4 55.1 82.1 91.2 90.2 90.7 68.1 90.1 50.3 95.2 78.7 73.3 59.2 27.8 27.9 76.8\n+HARD NEGATIVE SAMPLES | 31.9 80.9 57.0 83.5 91.7 91.0 91.2 69.5 90.8 50.3 94.8 79.4 73.4 59.5 28.6 28.7 77.2\n\nModel | af ar bg bn de el es et eu fa fi fr he hi hu id it ja\nTranslate-train-all\nXLM-R | 59.7 50.5 72.2 45.4 89.5 61.3 77.6 51.7 38.6 71.7 72.8 76.9 66.3 73.1 65.1 77.5 68.5 63.1\nHICTL, Wiki-15 + MT | 61.5 51.4 76.1 47.9 92.1 63.4 80.5 55.9 37.8 74.6 76.7 78.0 68.4 74.5 68.8 80.4 70.2 63.9\nHICTL, CCNet-100 + MT | 63.0 50.9 76.8 47.0 94.6 68.8 80.9 59.3 41.5 77.3 78.2 80.3 70.2 77.9 72.1 81.3 73.7 66.2\n+HARD NEGATIVE SAMPLES | 68.9 57.7 83.2 55.4 98.2 74.5 88.5 62.4 47.7 80.2 82.9 85.5 79.1 85.0 76.8 90.3 80.8 72.7\n\nModel | jv ka kk ko ml mr nl pt ru sw ta te th tl tr ur vi zh\nXLM-R | 15.8 53.3 51.2 63.1 66.2 59.0 81.0 84.4 76.9 19.8 28.3 37.8 28.9 36.7 68.9 26.6 77.9 69.8\nHICTL, Wiki-15 + MT | 18.7 55.8 51.0 65.5 67.3 61.2 82.9 84.4 78.3 22.2 28.6 41.4 33.5 41.6 71.2 26.7 80.2 73.6\nHICTL, CCNet-100 + MT | 19.6 57.3 54.6 68.0 71.8 62.0 88.1 88.9 77.7 26.1 32.9 39.5 32.9 43.2 71.2 27.8 79.9 74.7\n+HARD NEGATIVE SAMPLES | 27.2 63.0 61.5 72.6 75.3 67.8 92.8 92.8 85.4 32.0 36.7 47.8 41.5 49.8 77.0 34.3 84.3 81.3" }, { "heading": "D VISUALIZATION OF SENTENCE EMBEDDINGS", "text": "We collect 10 sets of samples from WMT'14-19, each of which contains 100 parallel sentences distributed over 5 languages. As shown in the t-SNE visualization in Figure 4, a set of sentences with the same meaning is clustered more densely for HICTL than for XLM-R, which reveals the strong capability of HICTL in learning universal representations across different languages. Note that the t-SNE visualization of HICTL still shows some noise, which we attribute to the lack of hard negative examples for sentence-level contrastive learning; we leave this to future work." } ]
2021
ON LEARNING UNIVERSAL REPRESENTATIONS ACROSS LANGUAGES
SP:cc6c0eb769a3da3f0e311fe6a4b96286f1f98d01
[ "The paper proposes DAC, an actor-critic method exploiting the replay buffer to do policy entropy regularisation. The main idea of DAC is to use the data from the replay buffer to induce a distribution $q(\\cdot, s_t)$ and replace the entropy part of the Soft Actor-Critic objective with a convex combination of $q$ and $\\pi$. This results positively on exploration properties and leads to sample-efficiency gains on some of the considered MuJoCo benchmarks.", "This paper considers the exploration efficiency issues in off-policy deep reinforcement learning (DRL). The authors identify a sample efficiency limitation in the classical entropy regularization, which does not take into account the existing samples in the replay buffer. To avoid repeated sampling of previously seen scenarios/actions, the authors propose to replace the current policy in the entropy term with a mixture of the empirical policy estimation from the replay buffer and the current policy, and term this approach as sample-aware entropy regularization. The authors then propose a theoretical algorithm called sample-aware entropy regularized policy iteration, which is a generalization of the soft policy iteration (SPI) algorithm, and show that it converges assuming that the empirical policy estimation is fixed. A practical algorithm based on the sample-aware entropy regularized policy iteration, called Diversity Actor-Critic (DAC), is then proposed. This algorithm is a generalization of the well-known soft actor-critic (SAC) algorithm. Finally, numerical experiments show that DAC outperforms SAC and other SOTA RL algorithms, and some ablation studies are also provided to demonstrate the effect of hyper-parameter choices in DAC." ]
Policy entropy regularization is commonly used for better exploration in deep reinforcement learning (RL). However, policy entropy regularization is sample-inefficient in off-policy learning since it does not take the distribution of previous samples stored in the replay buffer into account. In order to take advantage of the previous sample distribution from the replay buffer for sample-efficient exploration, we propose sample-aware entropy regularization, which maximizes the entropy of the weighted sum of the policy action distribution and the sample action distribution from the replay buffer. We formulate the problem of sample-aware entropy regularized policy iteration, prove its convergence, and provide a practical algorithm named diversity actor-critic (DAC), which is a generalization of soft actor-critic (SAC). Numerical results show that DAC significantly outperforms SAC baselines and other state-of-the-art RL algorithms.
[]
[ { "authors": [ "Joshua Achiam", "Shankar Sastry" ], "title": "Surprise-based intrinsic motivation for deep reinforcement learning", "venue": "arXiv preprint arXiv:1703.01732,", "year": 2017 }, { "authors": [ "Philip Agre", "Stanley J Rosenschein" ], "title": "Computational theories of interaction and agency", "venue": null, "year": 1996 }, { "authors": [ "Zafarali Ahmed", "Nicolas Le Roux", "Mohammad Norouzi", "Dale Schuurmans" ], "title": "Understanding the impact of entropy on policy optimization", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Gianluca Baldassarre", "Marco Mirolli" ], "title": "Intrinsically motivated learning in natural and artificial systems", "venue": null, "year": 2013 }, { "authors": [ "Marc Bellemare", "Sriram Srinivasan", "Georg Ostrovski", "Tom Schaul", "David Saxton", "Remi Munos" ], "title": "Unifying count-based exploration and intrinsic motivation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Yuri Burda", "Harrison Edwards", "Amos Storkey", "Oleg Klimov" ], "title": "Exploration by random network distillation", "venue": "arXiv preprint arXiv:1810.12894,", "year": 2018 }, { "authors": [ "Nuttapong Chentanez", "Andrew G Barto", "Satinder P Singh" ], "title": "Intrinsically motivated reinforcement learning", "venue": "In Advances in neural information processing systems,", "year": 2005 }, { "authors": [ "Thomas Degris", "Martha White", "Richard S Sutton" ], "title": "Off-policy actor-critic", "venue": "arXiv preprint arXiv:1205.4839,", "year": 2012 }, { "authors": [ "Roy Fox", "Ari Pakman", "Naftali Tishby" ], "title": "Taming the noise in reinforcement learning via soft updates", "venue": "arXiv preprint arXiv:1512.08562,", "year": 2015 }, { "authors": [ "Scott Fujimoto", "Herke van Hoof", "Dave Meger" ], "title": "Addressing function approximation error in actor-critic methods", "venue": "arXiv preprint arXiv:1802.09477,", "year": 2018 }, { "authors": [ "Tanmay Gangwani", "Qiang Liu", "Jian Peng" ], "title": "Learning self-imitating diverse policies", "venue": "arXiv preprint arXiv:1805.10309,", "year": 2018 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Shixiang Gu", "Timothy Lillicrap", "Zoubin Ghahramani", "Richard E Turner", "Sergey Levine" ], "title": "Q-prop: Sample-efficient policy gradient with an off-policy critic", "venue": "arXiv preprint arXiv:1611.02247,", "year": 2016 }, { "authors": [ "Shixiang Shane Gu", "Timothy Lillicrap", "Richard E Turner", "Zoubin Ghahramani", "Bernhard Schölkopf", "Sergey Levine" ], "title": "Interpolated policy gradient: Merging on-policy and off-policy gradient estimation for deep reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Yijie Guo", "Junhyuk Oh", "Satinder Singh", "Honglak Lee" ], "title": "Generative adversarial self-imitation learning", "venue": "arXiv preprint arXiv:1812.00950,", "year": 2018 }, { "authors": [ "Tuomas Haarnoja", "Haoran Tang", "Pieter Abbeel", "Sergey Levine" ], "title": "Reinforcement learning with deep energy-based policies", "venue": "arXiv preprint arXiv:1702.08165,", "year": 2017 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Pieter Abbeel", "Sergey Levine" ], 
"title": "Soft actor-critic: Offpolicy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "arXiv preprint arXiv:1801.01290,", "year": 2018 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Kristian Hartikainen", "George Tucker", "Sehoon Ha", "Jie Tan", "Vikash Kumar", "Henry Zhu", "Abhishek Gupta", "Pieter Abbeel" ], "title": "Soft actor-critic algorithms and applications", "venue": "arXiv preprint arXiv:1812.05905,", "year": 2018 }, { "authors": [ "Seungyul Han", "Youngchul Sung" ], "title": "Dimension-wise importance sampling weight clipping for sample-efficient reinforcement learning", "venue": "International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Elad Hazan", "Sham Kakade", "Karan Singh", "Abby Van Soest" ], "title": "Provably efficient maximum entropy exploration", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Zhang-Wei Hong", "Tzu-Yun Shann", "Shih-Yang Su", "Yi-Hsiang Chang", "Tsu-Jui Fu", "Chun-Yi Lee" ], "title": "Diversity-driven exploration strategy for deep reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Rein Houthooft", "Xi Chen", "Yan Duan", "John Schulman", "Filip De Turck", "Pieter Abbeel" ], "title": "Vime: Variational information maximizing exploration", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Kyungjae Lee", "Sungyub Kim", "Sungbin Lim", "Sungjoon Choi", "Songhwai Oh" ], "title": "Tsallis reinforcement learning: A unified framework for maximum entropy reinforcement learning", "venue": null, "year": 1902 }, { "authors": [ "Timothy P Lillicrap", "Jonathan J Hunt", "Alexander Pritzel", "Nicolas Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "arXiv preprint arXiv:1509.02971,", "year": 2015 }, { "authors": [ "Manuel Lopes", "Tobias Lang", "Marc Toussaint", "Pierre-Yves Oudeyer" ], "title": "Exploration in modelbased reinforcement learning by empirically estimating learning progress", "venue": "In Advances in neural information processing systems,", "year": 2012 }, { "authors": [ "Jarryd Martin", "Suraj Narayanan Sasikumar", "Tom Everitt", "Marcus Hutter" ], "title": "Count-based exploration in feature space for reinforcement learning", "venue": "arXiv preprint arXiv:1706.08090,", "year": 2017 }, { "authors": [ "Bogdan Mazoure", "Thang Doan", "Audrey Durand", "R Devon Hjelm", "Joelle Pineau" ], "title": "Leveraging exploration in off-policy algorithms via normalizing flows", "venue": null, "year": 1905 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": "Nature, 518(7540):529–533,", "year": 2015 }, { "authors": [ "Ofir Nachum", "Mohammad Norouzi", "Kelvin Xu", "Dale Schuurmans" ], "title": "Bridging the gap between value and policy based reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Ofir Nachum", "Mohammad Norouzi", "Kelvin Xu", "Dale Schuurmans" ], "title": "Trust-pcl: An off-policy trust 
region method for continuous control", "venue": "arXiv preprint arXiv:1707.01891,", "year": 2017 }, { "authors": [ "Frank Nielsen" ], "title": "On the Jensen–Shannon symmetrization of distances relying on abstract means", "venue": "Entropy,", "year": 2019 }, { "authors": [ "Brendan O'Donoghue", "Remi Munos", "Koray Kavukcuoglu", "Volodymyr Mnih" ], "title": "Combining policy gradient and q-learning", "venue": "arXiv preprint arXiv:1611.01626,", "year": 2016 }, { "authors": [ "Deepak Pathak", "Pulkit Agrawal", "Alexei A Efros", "Trevor Darrell" ], "title": "Curiosity-driven exploration by self-supervised prediction", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops,", "year": 2017 }, { "authors": [ "Konrad Rawlik", "Marc Toussaint", "Sethu Vijayakumar" ], "title": "On stochastic optimal control and reinforcement learning by approximate inference", "venue": "In Twenty-Third International Joint Conference on Artificial Intelligence,", "year": 2013 }, { "authors": [ "John Schulman", "Xi Chen", "Pieter Abbeel" ], "title": "Equivalence between policy gradients and soft q-learning", "venue": "arXiv preprint arXiv:1704.06440,", "year": 2017 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "Alexander L Strehl", "Michael L Littman" ], "title": "An analysis of model-based interval estimation for markov decision processes", "venue": "Journal of Computer and System Sciences,", "year": 2008 }, { "authors": [ "R.S. Sutton", "A.G. Barto" ], "title": "Reinforcement learning: An introduction", "venue": null, "year": 1998 }, { "authors": [ "Haoran Tang", "Rein Houthooft", "Davis Foote", "Adam Stooke", "OpenAI Xi Chen", "Yan Duan", "John Schulman", "Filip DeTurck", "Pieter Abbeel" ], "title": "#Exploration: A study of count-based exploration for deep reinforcement learning", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Emanuel Todorov" ], "title": "General duality between optimal control and estimation", "venue": "IEEE Conference on Decision and Control,", "year": 2008 }, { "authors": [ "Emanuel Todorov", "Tom Erez", "Yuval Tassa" ], "title": "Mujoco: A physics engine for model-based control", "venue": "In Intelligent Robots and Systems (IROS),", "year": 2012 }, { "authors": [ "Marc Toussaint" ], "title": "Robot trajectory optimization using approximate inference", "venue": "In Proceedings of the 26th annual international conference on machine learning,", "year": 2009 }, { "authors": [ "Ziyu Wang", "Victor Bapst", "Nicolas Heess", "Volodymyr Mnih", "Remi Munos", "Koray Kavukcuoglu", "Nando de Freitas" ], "title": "Sample efficient actor-critic with experience replay", "venue": "
arXiv preprint arXiv:1611.01224,", "year": 2016 }, { "authors": [ "Yuhuai Wu", "Elman Mansimov", "Roger B Grosse", "Shun Liao", "Jimmy Ba" ], "title": "Scalable trust-region method for deep reinforcement learning using kronecker-factored approximation", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Zeyu Zheng", "Junhyuk Oh", "Satinder Singh" ], "title": "On learning intrinsic rewards for policy gradient methods", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Brian D Ziebart" ], "title": "Modeling purposeful adaptive behavior with the principle of maximum causal entropy", "venue": "PhD thesis, figshare,", "year": 2010 }, { "authors": [ "Brian D Ziebart", "Andrew Maas", "J Andrew Bagnell", "Anind K Dey" ], "title": "Maximum entropy inverse reinforcement learning", "venue": null, "year": 2008 } ]
[ { "heading": null, "text": "Policy entropy regularization is commonly used for better exploration in deep reinforcement learning (RL). However, policy entropy regularization is sampleinefficient in off-policy learning since it does not take the distribution of previous samples stored in the replay buffer into account. In order to take advantage of the previous sample distribution from the replay buffer for sample-efficient exploration, we propose sample-aware entropy regularization which maximizes the entropy of weighted sum of the policy action distribution and the sample action distribution from the replay buffer. We formulate the problem of sample-aware entropy regularized policy iteration, prove its convergence, and provide a practical algorithm named diversity actor-critic (DAC) which is a generalization of soft actor-critic (SAC). Numerical results show that DAC significantly outperforms SAC baselines and other state-of-the-art RL algorithms." }, { "heading": "1 INTRODUCTION", "text": "Reinforcement learning (RL) aims to maximize the expectation of the discounted reward sum under Markov decision process (MDP) environments (Sutton & Barto, 1998). When the given task is complex, i.e. the environment has high action-dimensions or sparse rewards, it is important to well explore state-action pairs for high performance (Agre & Rosenschein, 1996). For better exploration, recent RL considers various methods: maximizing the policy entropy to take actions more uniformly (Ziebart et al., 2008; Fox et al., 2015; Haarnoja et al., 2017), maximizing diversity gain that yields intrinsic rewards to explore rare states by counting the number of visiting states (Strehl & Littman, 2008; Lopes et al., 2012), maximizing information gain (Houthooft et al., 2016; Hong et al., 2018), maximizing model prediction error (Achiam & Sastry, 2017; Pathak et al., 2017), and so on. In particular, based on policy iteration for soft Q-learning, (Haarnoja et al., 2018a) considered an offpolicy actor-critic framework for maximum entropy RL and proposed the soft actor-critic (SAC) algorithm, which has competitive performance for challenging continuous control tasks.\nIn this paper, we reconsider the problem of policy entropy regularization in off-policy learning and propose a generalized approach to policy entropy regularization. In off-policy learning, we store and reuse old samples to update the current policy (Mnih et al., 2015), and it is preferable that the old sample distribution in the replay buffer is uniformly distributed for better performance. However, the simple policy entropy regularization tries to maximize the entropy of the current policy irrespective of the distribution of previous samples. Since the uniform distribution has maximum entropy, the current policy will choose previously less-sampled actions and more-sampled actions with the same probability and hence the simple policy entropy regularization is sample-unaware and sample-inefficient. In order to overcome this drawback, we propose sample-aware entropy regularization, which tries to maximize the weighted sum of the current policy action distribution and the sample action distribution from the replay buffer. We will show that the proposed sampleaware entropy regularization reduces to maximizing the sum of the policy entropy and the α-skewed Jensen-Shannon divergence (Nielsen, 2019) between the policy distribution and the buffer sample action distribution, and hence it generalizes SAC. 
We will also show that properly exploiting the sample action distribution in addition to the policy entropy over the learning phases yields far better performance." }, { "heading": "2 RELATED WORKS", "text": "Entropy regularization: Entropy regularization maximizes the sum of the expected return and the policy action entropy. It encourages the agent to visit the action space uniformly for each given state, and the regularized policy is robust to modeling error (Ziebart, 2010). Entropy regularization is considered in various domains for better optimization: inverse reinforcement learning (Ziebart et al., 2008), stochastic optimal control problems (Todorov, 2008; Toussaint, 2009; Rawlik et al., 2013), and off-policy reinforcement learning (Fox et al., 2015; Haarnoja et al., 2017). (Lee et al., 2019) shows that Tsallis entropy regularization, which generalizes the usual Shannon-entropy regularization, is helpful. (Nachum et al., 2017a) shows that there exists a connection between value-based and policy-based RL under entropy regularization. (O'Donoghue et al., 2016) proposed an algorithm combining them, and it is proven that they are equivalent (Schulman et al., 2017a). The entropy of the state mixture distribution is better for pure exploration than a simple random policy (Hazan et al., 2019).\nDiversity gain: Diversity gain is used to provide guidance for exploration to the agent. To achieve diversity gain, many intrinsically-motivated approaches and intrinsic reward design methods have been considered, e.g., intrinsic rewards based on curiosity (Chentanez et al., 2005; Baldassarre & Mirolli, 2013), model prediction error (Achiam & Sastry, 2017; Pathak et al., 2017; Burda et al., 2018), divergence/information gain (Houthooft et al., 2016; Hong et al., 2018), counting (Strehl & Littman, 2008; Lopes et al., 2012; Tang et al., 2017; Martin et al., 2017), and unification of them (Bellemare et al., 2016). For self-imitation learning, (Gangwani et al., 2018) considered Stein variational gradient descent with the Jensen-Shannon kernel.\nOff-policy learning: Off-policy learning can reuse any samples generated from behaviour policies for the policy update (Sutton & Barto, 1998; Degris et al., 2012), so it is sample-efficient compared to on-policy learning. In order to reuse old samples, a replay buffer that stores trajectories generated by previous policies is used for Q-learning (Mnih et al., 2015; Lillicrap et al., 2015; Fujimoto et al., 2018; Haarnoja et al., 2018a). To enhance both stability and sample-efficiency, several methods have been considered, e.g., combining on-policy and off-policy learning (Wang et al., 2016; Gu et al., 2016; 2017) and generalization from on-policy to off-policy learning (Nachum et al., 2017b; Han & Sung, 2019).\nIn order to guarantee the convergence of Q-learning, there is a key assumption: each state-action pair must be visited infinitely often (Watkins & Dayan, 1992). If the policy does not visit diverse state-action pairs sufficiently often, it converges to local optima. Therefore, exploration that visits different state-action pairs is important for RL, and the original policy entropy regularization encourages such exploration (Ahmed et al., 2019). However, we found that simple policy entropy regularization can be sample-inefficient in off-policy RL, so we aim to propose a new entropy regularization method that significantly enhances the sample-efficiency of exploration by considering the previous sample distribution in the buffer.
}, { "heading": "3 BACKGROUND", "text": "In this section, we briefly introduce the basic setup and the soft actor-critic (SAC) algorithm." }, { "heading": "3.1 SETUP", "text": "We assume a basic RL setup composed of an environment and an agent. The environment follows an infinite horizon Markov decision process (S,A, P, γ, r), where S is the state space, A is the action space, P is the transition probability, γ is the discount factor, and r : S × A → R is the reward function. In this paper, we consider a continuous state-action space. The agent has a policy distribution π : S × A → [0,∞) which selects an action at for a given state st at each time step t, and the agent interacts with the environment and receives reward rt := r(st, at) from the environment. Standard RL aims to maximize the discounted return Es0∼p0,τ0∼π[ ∑∞ t=0 γ\ntrt], where τt = (st, at, st+1, at+1 · · · ) is an episode trajectory." }, { "heading": "3.2 SOFT ACTOR-CRITIC", "text": "Soft actor-critic (SAC) (Haarnoja et al., 2018a) includes a policy entropy regularization term in the objective function for better exploration by visiting the action space uniformly for each given state.\nThe entropy-augmented policy objective function of SAC is given by\nJSAC(π) = Eτ0∼π [ ∞∑ t=0 γt(rt + βH(π(·|st))) ] , (1)\nwhere H is the entropy function and β ∈ (0,∞) is the entropy coefficient. SAC is a practical offpolicy actor-critic based on soft policy iteration (SPI) that alternates soft policy evaluation to estimate the true soft Q-function and soft policy improvement to find the optimal policy that maximizes (1). In addition, SPI theoretically guarantees convergence to the optimal policy that maximizes (1)." }, { "heading": "4 THE DIVERSITY ACTOR-CRITIC ALGORITHM", "text": "" }, { "heading": "4.1 MOTIVATION OF THE SAMPLE-AWARE ENTROPY", "text": "As explained in Section 2, the policy should visit diverse samples to learn the policy without converging to the local optima. In off-policy learning, we can reuse previous samples stored in the replay buffer to learn the policy, so it is efficient to draw diverse samples while avoiding frequently selected samples before. The policy entropy maximization enhances exploration to yield better performance, but it is sample-inefficient for off-policy RL because it does not take advantage of the previous sample action distribution obtainable from the replay buffer: If we assume bounded action space, the simple policy entropy maximization will choose all actions with the equal probability without considering the previous action samples because maxπH(π) = minπ DKL(π||U) is achieved when π = U , where U is a uniform distribution and DKL is the Kullback-Leibler (KL) divergence. In order to overcome the limitation of the simple policy entropy maximization, we consider maximizing a sample-aware entropy defined as the entropy of a mixture distribution of the policy distribution π and the current sample action distribution q in the replay buffer. Here, q is defined as\nq(·|s) := ∑ a∈DN(s, a)δa(·)∑ a′∈DN(s, a ′) , (2)\nwhere D is the replay buffer that stores previous samples (st, at, rt, st+1) at each time t, δa(·) is the Dirac measure at a ∈ A, and N(s, a) is the number of state-action pair (s, a) in D. Then, we define a target distribution qπ,αtarget as the mixture distribution of π and q, which is expressed as qπ,αtarget := απ + (1 − α)q, where α ∈ [0, 1] is the weighting factor. 
Note that we draw samples from policy π and store them in the replay buffer, so the target distribution can be viewed as the updated sample action distribution in the future replay buffer. Then, maximizing the sample-aware entropy $\mathcal{H}(q^{\pi,\alpha}_{target})$ can encourage sample-efficient exploration, because π will choose actions rare in the buffer with high probability and actions stored many times in the buffer with low probability in order to make the target distribution uniform. We provide a simple example below:\nLet us consider a simple 1-step MDP in which s_0 is the unique initial state, there exist N_a actions (A = {A_0, ..., A_{N_a-1}}), s_1 is the terminal state, and r is a deterministic reward function. Then, there exist N_a state-action pairs in total, and let us assume that we already have N_a − 1 state-action samples in the replay buffer as R = {(s_0, A_0, r(s_0, A_0)), ..., (s_0, A_{N_a-2}, r(s_0, A_{N_a-2}))}. In order to estimate the Q-function for all state-action pairs, the policy should sample the last action A_{N_a-1} (after that, we can reuse all samples infinitely to estimate Q). Here, we compare two exploration methods.\n1) First, if we consider simple entropy maximization, the policy that maximizes its entropy will choose all actions with equal probability 1/N_a (uniformly). Then, N_a samples should be taken on average by the policy to visit the action A_{N_a-1}.\n2) Consider the sample-aware entropy maximization. Here, the sample action distribution q in the buffer becomes q(a_0|s_0) = 1/(N_a − 1) for a_0 ∈ {A_0, ..., A_{N_a-2}} and q(A_{N_a-1}|s_0) = 0, the target distribution becomes $q^{\pi,\alpha}_{target} = \alpha\pi + (1 - \alpha)q$, and we set α = 1/N_a. Then, the policy that maximizes the sample-aware entropy becomes π(A_{N_a-1}|s_0) = 1, which makes $q^{\pi,\alpha}_{target}$ uniform, because $\max_\pi \mathcal{H}(q^{\pi,\alpha}_{target}) = \min_\pi D_{KL}(q^{\pi,\alpha}_{target} \| U)$. In this case, we only need one sample to visit the action A_{N_a-1}. In this way, simple entropy maximization is sample-inefficient for off-policy RL, and the proposed sample-aware entropy maximization can enhance the sample-efficiency of exploration by using the previous sample distribution and choosing a proper α. With this motivation, we propose the sample-aware entropy regularization for off-policy RL and the corresponding α-adaptation method." }, { "heading": "4.2 SAMPLE-AWARE ENTROPY REGULARIZATION", "text": "Our approach is to maximize the return while maximizing the sample-aware entropy. Under this approach, previously frequently sampled actions are given low probabilities and previously rarely taken actions are given high probabilities by the current policy π for sample-efficient exploration, as shown in Section 4.1. Hence, we set the objective function for the proposed sample-aware entropy regularization as\n$J(\pi) = E_{\tau_0 \sim \pi}\left[\sum_{t=0}^{\infty} \gamma^t \left(r_t + \beta \mathcal{H}(q^{\pi,\alpha}_{target}(\cdot|s_t))\right)\right]. \quad (3)$\nHere, the sample-aware entropy $\mathcal{H}(q^{\pi,\alpha}_{target})$ for a given s_t can be decomposed as\n$\mathcal{H}(q^{\pi,\alpha}_{target}) = -\int_{a \in \mathcal{A}} (\alpha\pi + (1-\alpha)q) \log(\alpha\pi + (1-\alpha)q) = \alpha\mathcal{H}(\pi) + D^{\alpha}_{JS}(\pi \| q) + (1-\alpha)\mathcal{H}(q), \quad (4)$\nwhere $D^{\alpha}_{JS}(\pi \| q) := \alpha D_{KL}(\pi \| q^{\pi,\alpha}_{target}) + (1-\alpha) D_{KL}(q \| q^{\pi,\alpha}_{target})$ is the α skew-symmetric Jensen-Shannon (JS) divergence (Nielsen, 2019). Note that $D^{\alpha}_{JS}$ reduces to the standard JS divergence for α = 1/2 and to zero for α = 0 or 1. Hence, for α = 1, (4) reduces to the simple entropy, but for α ≠ 1, it is a generalization incorporating the distribution q. Thus, our objective function aims to maximize the return while simultaneously maximizing the discounted sum of the policy entropy, the sample entropy, and the divergence between π and q. In this way, the policy will choose more diverse actions that are far from the samples stored in the replay buffer while maintaining its entropy for better exploration.
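The decomposition (4) can be checked numerically on a toy discrete action space; the distributions below are arbitrary examples chosen only for this check.

# A NumPy sketch verifying Eq. (4):
# H(alpha*pi + (1-alpha)*q) = alpha*H(pi) + D_JS^alpha(pi||q) + (1-alpha)*H(q).
import numpy as np

def H(p):
    return -np.sum(p * np.log(p))

def KL(p, m):
    return np.sum(p * np.log(p / m))

pi = np.array([0.7, 0.2, 0.1])
q = np.array([0.1, 0.3, 0.6])
alpha = 0.5
m = alpha * pi + (1 - alpha) * q                     # the target distribution
d_js = alpha * KL(pi, m) + (1 - alpha) * KL(q, m)    # alpha-skewed JS divergence
lhs = H(m)
rhs = alpha * H(pi) + d_js + (1 - alpha) * H(q)
print(np.isclose(lhs, rhs))                          # True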
" }, { "heading": "4.3 DIVERSE POLICY ITERATION WITH THE PROPOSED OBJECTIVE", "text": "In this section, we derive the diverse policy evaluation and diverse policy improvement to maximize the objective function with the sample-aware entropy regularization (3). Note that the sample action distribution q is updated as the iteration goes on. However, it changes very slowly since the buffer size is much larger than the number of time steps of one iteration. Hence, for the purpose of proof, we regard the action distribution q as a fixed distribution in this section.\nFirst, we define the true diverse Q-function Q^π as\n$Q^{\pi}(s_t, a_t) := \frac{1}{\beta} r_t + E_{\tau_{t+1} \sim \pi}\left[\sum_{l=t+1}^{\infty} \gamma^{l-t-1}\left(\frac{1}{\beta} r_l + \alpha\mathcal{H}(\pi(\cdot|s_l)) + D^{\alpha}_{JS}(\pi(\cdot|s_l) \| q(\cdot|s_l)) + (1-\alpha)\mathcal{H}(q(\cdot|s_l))\right)\right].$\nWe defined the sample distribution q in equation (2), but we do not want to compute the actual q, which requires a method such as discretization and counting for continuous samples. Even if q is obtained by counting, a generalization of q to arbitrary state-action pairs is again needed to estimate Q^π. We circumvent this difficulty by defining the ratio $R^{\pi,\alpha}$ of απ to $q^{\pi,\alpha}_{target}$ as\n$R^{\pi,\alpha}(s_t, a_t) = \frac{\alpha\pi(a_t|s_t)}{\alpha\pi(a_t|s_t) + (1-\alpha)q(a_t|s_t)}, \quad (5)$\nand we show in Appendix B that all objective (or loss) functions for practical implementation can be represented using the ratio only, without using the explicit q.
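The following sketch illustrates this point on a toy discrete example: the divergence of Eq. (6) below is estimated from samples of π and q through the ratio R alone. Here R is computed from known densities purely for checking, whereas DAC learns it with the network R_η of Section 4.4.

# A Monte-Carlo sketch: estimating D_JS^alpha from samples using only R of Eq. (5).
import numpy as np

rng = np.random.default_rng(0)
pi = np.array([0.7, 0.2, 0.1])
q = np.array([0.1, 0.3, 0.6])
alpha = 0.5

def R(a):
    # Eq. (5) on a 3-action toy example; in DAC, this is a learned network
    return alpha * pi[a] / (alpha * pi[a] + (1 - alpha) * q[a])

a_pi = rng.choice(3, size=100000, p=pi)     # actions drawn from the policy
a_q = rng.choice(3, size=100000, p=q)       # actions drawn from the buffer
H_alpha = -alpha * np.log(alpha) - (1 - alpha) * np.log(1 - alpha)
d_js = (alpha * np.mean(np.log(R(a_pi)))
        + (1 - alpha) * np.mean(np.log(1 - R(a_q))) + H_alpha)
print(d_js)   # approximates the exact alpha-skewed JS divergence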
Thus, we circumvent this difficulty by defining a practical objective function Jπold(π) given by\nJ̃πold(π(·|st)) := βEat∼π[Qπold(st, at) + α logRπold,α(st, at)− α log π(at|st)], (9)\nRegarding the practically computable objective function J̃πold(π), we have the following result:\nTheorem 2 Suppose that the policy is parameterized with parameter θ. For parameterized policy πθ, two objective functions Jπθold (πθ(·|st)) and J̃πθold (πθ(·|st)) have the same gradient direction for θ at θ = θold for all st ∈ S.\nProof. See Appendix A.2.\nBy Theorem 2, we can replace the objective function Jπold(π) of policy improvement with the practically computable objective function J̃πold(π) for parameterized policy without loss of optimality." }, { "heading": "4.4 DIVERSITY ACTOR CRITIC IMPLEMENTATION", "text": "We first define Rα as an estimate for the ratio function Rπold,α. For implementation, we parameterize π, Rα, Q, and V by neural network parameters θ, η, φ, and ψ, respectively. Then, we setup the practical objective (or loss) functions Ĵπ(θ), ĴRα(η), L̂Q(φ), and L̂V (ψ) for the parameter update. Detailed DAC implementation based on Section 4 is provided in Appendix B. The proposed DAC algorithm is summarized in Appendix C. Note that DAC becomes SAC when α = 1, and becomes standard off-policy RL without entropy regularization when α = 0.\n5 α-ADAPTATION\nIn the proposed sample-aware entropy regularization, the weighting factor α plays an important role in controlling the ratio between the policy distribution π and the sample action distribution q. However, it is difficult to estimate optimal α directly. Hence, we further propose an adaptation method for α based on max-min principle widely considered in game theory, robust learning, and decision making problems (Chinchuluun et al., 2008). Since we do not know optimal α, an alternative formulation is that we maximize the return while maximizing the worst-case sample-aware entropy, i.e., minαH(qπ,αtarget). Then, the max-min approach can be formulated as follows:\nmax π Eτ0∼π [∑ t γt(rt + βmin α [H(qπ,αtarget)− αc]) ] (10)\n1Note that if we replace πold with π and view every state st as an initial state, then (8) reduces to J(π).\nwhere c is a control hyperparameter for α adaptation. We learn α to minimize H(qπ,αtarget) − αc, so the role of c is to maintain the target entropy at a certain level to explore the state-action well. Detailed implementation for α-adaptation is given in Appendix B.1." }, { "heading": "6 EXPERIMENTS", "text": "In this section, we evaluate the proposed DAC algorithm on various continuous-action control tasks and provide ablation study. In order to see the superiority of the sample-aware entropy regularization, we here focus on comparison with two SAC baselines: SAC and SAC-Div. SAC-Div is SAC combined with the method in (Hong et al., 2018) that diversifies policies from buffer distribution by simply maximizing J(π) + αdD(π||q) for J(π) in (1) and some divergence D. Note that the key difference between SAC-Div and DAC is that SAC-Div simply adds the single divergence term to the policy objective function J(π), whereas DAC considers the discounted sum of target entropy terms as seen in (3). For SAC-Div, we consider KL divergence (MSE if the policy is Gaussian) and adaptive scale αd with δd = 0.2 for the divergence term as suggested in (Hong et al., 2018). 
" }, { "heading": "6 EXPERIMENTS", "text": "In this section, we evaluate the proposed DAC algorithm on various continuous-action control tasks and provide an ablation study. In order to see the superiority of the sample-aware entropy regularization, here we focus on the comparison with two SAC baselines: SAC and SAC-Div. SAC-Div is SAC combined with the method in (Hong et al., 2018), which diversifies the policy from the buffer distribution by simply maximizing $J(\pi) + \alpha_d D(\pi \| q)$ for J(π) in (1) and some divergence D. Note that the key difference between SAC-Div and DAC is that SAC-Div simply adds a single divergence term to the policy objective function J(π), whereas DAC considers the discounted sum of target entropy terms, as seen in (3). For SAC-Div, we consider the KL divergence (MSE if the policy is Gaussian) and the adaptive scale α_d with δ_d = 0.2 for the divergence term, as suggested in (Hong et al., 2018). In order to rule out the influence of factors other than exploration, we use a common simulation setup for DAC and the SAC baselines except for the parts concerning entropy or divergence.\nIn addition, we provide a comparison of DAC to random network distillation (RND) (Burda et al., 2018) and MaxEnt (Hazan et al., 2019), which are recent exploration methods based on finding rare states, in Appendix F.2, and to other recent RL algorithms in Appendix F.3. The results show that DAC yields the best performance among the recent RL algorithms for all considered tasks. We also provide the source code of the DAC implementation, which requires Python and TensorFlow. The detailed simulation setup for the experiments is summarized in Appendix E." }, { "heading": "6.1 PURE EXPLORATION COMPARISON", "text": "In order to see the exploration performance of DAC (α = 0.5) as compared to the SAC baselines, we compare state visitation on a 100 × 100 continuous 4-room maze task. The maze environment is made by modifying a continuous grid map available at https://github.com/huyaoyu/GridMap, and it is shown in Fig. 1(a). The state is the (x, y) position in the maze, the action is (dx, dy) bounded by [−1, 1], and the agent location after the action becomes (x + dx, y + dy). The agent starts from the lower-left corner (0.5, 0.5) and explores the maze without any reward, and Fig. 1(b) shows the mean number of new state visitations over 30 seeds, where the number of state visitations is obtained for each integer interval. As seen in Fig. 1(b), DAC visited many more states than SAC/SAC-Div, which means that the exploration performance of DAC is superior to that of the SAC baselines. In addition, Fig. 1(c) shows the corresponding state visit histogram over all seeds. Here, the brighter the color of a state, the more times the state is visited. Note that SAC/SAC-Div rarely visit the upper-right room even at 500k time steps for all seeds, but DAC starts visiting the upper-right room at 5k time steps and frequently visits it at 500k time steps. Thus, Fig. 1(c) clearly shows that DAC has better sample-efficiency for exploration than SAC/SAC-Div.
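A minimal sketch of the maze dynamics described above is given below; the actual 4-room wall layout from the GridMap repository is replaced here by a placeholder collision check, which is an assumption of this example.

# A minimal sketch of the continuous 4-room maze dynamics used in Section 6.1.
import numpy as np

class ContinuousMaze:
    def __init__(self, size=100.0):
        self.size = size
        self.pos = np.array([0.5, 0.5])      # start at the lower-left corner

    def blocked(self, pos):
        return False                         # placeholder for the 4-room wall layout

    def step(self, action):
        action = np.clip(action, -1.0, 1.0)  # (dx, dy) bounded by [-1, 1]
        nxt = np.clip(self.pos + action, 0.0, self.size)
        if not self.blocked(nxt):
            self.pos = nxt                   # new location is (x + dx, y + dy)
        return self.pos.copy(), 0.0          # no reward: pure exploration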
SparseMujoco is a sparse version of Mujoco and the reward is 1 if the agent exceeds the x-axis threshold, otherwise 0 (Hong et al., 2018; Mazoure et al., 2019).\nThe performance results averaged over 10 random seeds are shown in Fig. 2. As seen in Fig. 2, DAC has significant performance gain for most tasks as compared to SAC. On the other hand, SAC-Div also enhances the convergence speed compared to SAC for some tasks, but it fails to enhance the final performance. Fig. 3 shows the α-skewed JS divergence curve (α = 0.5) of DAC and SAC/SAC-Div for sparse Mujoco tasks and we provide Fig. F.1 in Appendix F.1 that shows the corresponding mean number of discretized state visitation curve on sparse Mujoco tasks. For SAC/SAC-Div, the ratio function R is estimated separately from (B.2) in Appendix B and the divergence is computed from R. The performance table for all tasks is given by Table F.1 in Appendix F.1. As seen in Fig. 3, the divergence of DAC is much higher than that of SAC/SAC-Div throughout the learning time. It means that the policy of DAC choose more diverse actions from the distribution far away from the sample action distribution q, then DAC visits more diverse states than the SAC baselines as seen in Fig. F.1. Thus, DAC encourages better exploration and it yields better performance. Thus, we can conclude that the proposed sample-aware entropy regularization is superior to the simple policy entropy regularization of SAC and single divergence regularization of SAC-Div in terms of exploration and the convergence.\nAdaptive α case: Now, we compare the performance of DAC with α = 0.5, 0.8, α-adaptation, and the SAC baselines to see the need of α-adaptation. To maintain controllability and prevent saturation\nof Rαη , we used regularization for α learning and restricted the range of α as 0.5 ≤ α ≤ 0.99 for α adaptation so that a certain level of entropy regularization is enforced. Here, we consider more complicated tasks: HumanoidStandup and delayed Mujoco tasks (DelayedHalfCheetah, DelayedHopper, DelayedWalker2d, and DelayedAnt). HumanoidStandup is one of high-action dimensional Mujoco tasks. Delayed Mujoco tasks suggested by (Zheng et al., 2018; Guo et al., 2018) have the same state-action spaces with original Mujoco tasks but reward is sparsified. That is, rewards for D time steps are accumulated and the accumulated sum is delivered to the agent once every D time steps, so the agent receives no reward during the accumulation time. The performance results averaged over 5 random seeds are shown in Fig. 4. The result of the max average return of these Mujoco tasks for DAC and SAC/SAC-Div is provided in Table F.2 in Appendix F.1. As seen in Fig. 4, all versions of DAC outperform SAC. Here, SAC-Div also outperforms SAC for several tasks, but the performance gain by DAC is much higher. In addition, it is seen that the best α depends on the tasks in the fixed α case. For example, α = 0.8 is the best for DelayedHalfCheetah, but α = 0.5 is the best for DelayedAnt. Thus, we need to adapt α for each task. Finally, DAC with α-adaptation has the top-level performance for most tasks and the best performance for HumanoidStandup and DelayedHopper tasks. Further consideration for α is provided in Section 6.3." }, { "heading": "6.3 ABLATION STUDY", "text": "In this section, we provide ablation study for important parameters in the sample-aware entropy regularization on the DelayedHalfCheetah task. 
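As a concrete reference for the reward sparsification just described, the delayed Mujoco scheme (accumulate rewards for D steps, then release the sum) can be written as a small environment wrapper. The sketch below is illustrative: the class name DelayedReward is ours, and it assumes the older Gym step API that returns a 4-tuple.

```python
import gym

class DelayedReward(gym.Wrapper):
    """Accumulate rewards for `delay` steps and emit the sum once every
    `delay` steps (or at episode end); emit 0 in between."""

    def __init__(self, env, delay):
        super().__init__(env)
        self.delay = delay
        self.acc = 0.0
        self.count = 0

    def reset(self, **kwargs):
        self.acc, self.count = 0.0, 0
        return self.env.reset(**kwargs)

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        self.acc += reward
        self.count += 1
        if self.count % self.delay == 0 or done:
            delayed, self.acc = self.acc, 0.0   # release the accumulated sum
        else:
            delayed = 0.0                       # no reward during accumulation
        return obs, delayed, done, info
```

For example, env = DelayedReward(gym.make("HalfCheetah-v1"), delay=D) reproduces the DelayedHalfCheetah setting under these assumptions.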
Ablation studies on the other delayed Mujoco tasks are provided in Appendix G.
Weighting factor α: As seen in Section 6.2, α-adaptation is necessary because no single value of α is best for all environments. Although the proposed α-adaptation in Section 5 is suboptimal, it shows good performance across all the considered tasks. Thus, we further study the proposed α-adaptation and the sensible behavior of sample-awareness in entropy regularization.
Fig. 5(a) shows the averaged learning curves of α, the α-skewed JS divergence DJS(π||q), and the policy entropy H(π) for DAC with the proposed α-adaptation method on DelayedHalfCheetah. Here, we fix the control coefficient c as −2.0 · dim(A). As seen in (3), the return, the policy entropy, and the JS divergence are intertwined in the cost function, so their learning curves are also intertwined over time steps. Here, the learned policy entropy decreases and the learned α increases to one as the time step goes on. Then, the initially nonzero JS divergence term DJS(π||q) diminishes to zero, which means that the sample action distribution is exploited for roughly the initial 2.5M time steps, after which DAC operates like SAC. This adaptive exploitation of the sample-aware entropy leads to better overall performance across time steps, as seen in Fig. 4, so DAC with α-adaptation seems to properly exploit both the policy entropy and the sample action distribution depending on the learning stage.
Control coefficient c: In the proposed α-adaptation (10), the control coefficient c affects the learning behavior of α. Since H(π) and DαJS are proportional to the action dimension, we tried a few values such as 0, −0.5d, −1.0d, and −2.0d, where d = dim(A). Fig. 5(b) shows the corresponding performance of DAC with α-adaptation on DelayedHalfCheetah. As seen in Fig. 5(b), the performance depends on c as expected, and c = −2.0 · dim(A) performs well. We observed that −2.0d performed well for all considered tasks, so we set c = −2.0d in (B.8).
Entropy coefficient β: As mentioned in (Haarnoja et al., 2018a), the performance of SAC depends on β, and it is expected that the performance of DAC depends on β too. Fig. 5(c) shows the performance of DAC with fixed α = 0.5 for three values of β: 0.1, 0.2, and 0.4 on DelayedHalfCheetah. The performance of DAC indeed depends on β; however, although there is a performance difference for DAC depending on β, DAC is much better than SAC over a wide range of β. One thing to note is that the coefficient of the pure policy entropy regularization term for DAC is αβ, as seen in (3). Thus, DAC with α = 0.5 and β = 0.4 has the same amount of pure policy entropy regularization as SAC with β = 0.2. However, DAC with α = 0.5 and β = 0.4 achieves much higher performance than SAC with β = 0.2, as seen in Fig. 5(c). So, we can see that the performance improvement of DAC comes from the joint use of the policy entropy H(π) and the sample action distribution from the replay buffer via DαJS(π||q).
The effect of JS divergence: In order to see the effect of the JS divergence itself on the performance, we provide an additional ablation study in which we add a single JS divergence term to SAC-Div by using the ratio function of Section 4.3. Fig. 5(d) shows the performance comparison of SAC, SAC-Div(KL), SAC-Div(JS), and DAC. For SAC-Div(JS), we used δd = 0.5 for the adaptive scaling in (Hong et al., 2018). As a result, there was no significant difference in performance between SAC-Div with JS divergence and SAC-Div with KL divergence.
On the other hand, the DAC still shows a greater performance increase than both SAC-Div(KL) and SAC-Div(JS), and this means that the DAC has more advantages than simply using JS divergence." }, { "heading": "7 CONCLUSION AND FUTURE WORKS", "text": "In this paper, we have proposed a sample-aware entropy framework for off-policy RL to overcome the limitation of simple policy entropy for sample-efficient exploration. With the sample-aware entropy regularization, we can achieve diversity gain by exploiting sample history in the replay buffer in addition to policy entropy. For practical implementation of sample-aware entropy regularized policy optimization, we have proposed the DAC algorithm with convergence proof. We have also provided an adaptation method for DAC to control the ratio of the sample action distribution to the policy action entropy. DAC is an actor-critic algorithm for sample-aware regularized policy optimization and generalizes SAC. Numerical results show that DAC significantly outperforms SAC baselines in Maze exploration and various Mujoco tasks.\nFor further study, we consider a generalization of our method in order to deal with the entropy of the state-action distribution. Currently, many recent papers only consider one of the entropy of state distribution dπ(s) or that of action distribution π(a|s) only since they have much different properties (e.g. the state-based entropy is non-convex on π and the action-based entropy is convex on π). However, both entropies can be handled simultaneously as one fused entropy that deals with the entropy of the state-action distribution, factorized as log dπ(s, a) = log dπ(s) + log π(a|s). Then, the generalization of our method for the fused entropy may be able to further enhance the exploration performance by considering the exploration on the entire state-action space." }, { "heading": "A PROOFS", "text": "A.1 PROOF OF THEOREM 1\nFor a fixed policy π, Qπ can be estimated by repeating the Bellman backup operator by Lemma 1. Lemma 1 is based on usual policy evaluation but has a new ingredient of the ratio condition in the sample-aware case.\nLemma 1 (Diverse Policy Evaluation) Define a sequence of diverse Q-functions as Qk+1 = T πQk, k ≥ 0, where π is a fixed policy and Q0 is a real-valued initial Q. Assume that the action space is bounded, and Rπ,α(st, at) ∈ (0, 1) for all (st, at) ∈ S ×A. Then, the sequence {Qk} converges to the true diverse state-action value Qπ .\nProof. Let rπ,t := 1β rt+γEst+1∼P [Eat+1∼π[α logR π,α(st+1, at+1)−α logαπ(at+1|st+1)]+(1− α)Eat+1∼q[log(1 − Rπ,α(st+1, at+1)) − log(1 − α)q(at+1|st+1)]]. Then, we can formulate the standard Bellman equation form for the true Qπ as\nT πQ(st, at) = rπ,t + γEs+1∼P, at+1∼π [Q(st+1, at+1)] (A.1)\nUnder the assumption of a bounded action space and Rπ,α ∈ (0, 1), the reward rπ,t is bounded and the convergence is guaranteed as the usual policy evaluation (Sutton & Barto, 1998; Haarnoja et al., 2018a).\nNow, we prove diverse policy improvement in Lemma 2 and diverse policy iteration in Theorem 1 by using Jπold(π) in a similar way to usual RL or SAC.\nLemma 2 (Diverse Policy Improvement) Let πnew be the updated policy obtained by solving πnew = arg max\nπ∈Π Jπold(π). Then, Q πnew(st, at) ≥ Qπold(st, at), ∀ (st, at) ∈ S ×A.\nProof. We update the policy to maximize Jπold(π), so Jπold(πnew) ≥ Jπold(πold). 
Hence,\nEat∼πnew [Qπold(st, at) + α logRπnew,α(st, at)− α logαπnew(at|st)] + (1− α)Eat∼q[log(1−Rπnew,α(st, at))− log(1− α)q(at|st)]\n≥Eat∼πold [Qπold(st, at) + α logRπold,α(st, at)− α logαπold(at|st)] + (1− α)Eat∼q[log(1−Rπold,α(st, at))− log(1− α)q(at|st)] =V πold(st) (A.2)\nBy repeating the Bellman equation (7) and (A.2) at Qπold ,\nQπold(st, at) = 1\nβ rt + γEst+1∼P [V πold(st+1)]\n≤ 1 β rt + γEst+1∼P [Eat+1∼πnew [Qπold(st+1, at+1) + α logRπnew,α(st+1, at+1)\n− α logαπnew(at+1|st+1)] + (1− α)Eat+1∼q[log(1−Rπnew,α(st+1, at+1)) − log(1− α)q(at+1|st+1)]]\n... ≤ Qπnew(st, at), (A.3)\nfor each (st, at) ∈ S ×A.\nTheorem 1 (Diverse Policy Iteration) By repeating iteration of the diverse policy evaluation and the diverse policy improvement, any initial policy converges to the optimal policy π∗ s.t. Qπ ∗ (st, at) ≥ Qπ ′ (st, at), ∀ π′ ∈ Π, ∀ (st, at) ∈ S × A. Also, such π∗ achieves maximum J , i.e., Jπ∗(π∗) ≥ Jπ(π) for any π ∈ Π.\nProof. Let {πi : i ≥ 0, πi ∈ Π} be a sequence of policies s.t. πi+1 = arg maxπ∈Π Jπi(π). For arbitrary state action pairs (s, a) ∈ S×A, {Qπi(s, a)}monotonically increases by Lemma 2 and each Qπi(s, a) is bounded. Also, πi+1 is obtained by the policy improvement that maximizes Jπi(π(·|s)), so Jπi(πi+1(·|s)) ≥ Jπi(πi(·|s)) as stated in the proof of Lemma 2. From the definition of Jπold(π) in (8), all terms are the same for Jπi+1(πi+1(·|s)) and Jπi(πi+1(·|s)) except βEa∼πi+1 [Qπi+1(s, a)] in Jπi+1(πi+1(·|s)) and βEa∼πi+1 [Qπi(s, a)] in Jπi(πi+1(·|s)). Since {Qπi(s, a)} monotonically increases, Jπi+1(πi+1(·|s)) ≥ Jπi(πi+1(·|s)). Finally, Jπi+1(πi+1(·|s)) ≥ Jπi(πi+1(·|s)) ≥ Jπi(πi(·|s)) for any state s ∈ S, so the sequence {Jπi(πi(·|s)} also monotonically increases, and each Jπi(πi(·|s)) is bounded because Q-function and the target entropy are bounded. By the monotone convergence theorem, {Qπi} and {Jπi(πi)} pointwisely converge to their optimal functions Q∗ : S ×A → R and J∗ : S → R, respectively. Here, note that J∗(s) ≥ Jπi(πi(·|s)) for any i because the sequence {Jπi(πi)} is monotonically increasing. From the definition of convergent sequence, for arbitrary > 0, there is a large N ≥ 0 s.t. Jπi(πi(·|s)) ≥ J∗(s)− (1−γ) γ satisfies for all i ≥ N and any s ∈ S.\nNow, we can easily show that Jπk(πk(·|s)) ≥ Jπk(π(·|s))− (1−γ) γ for any k > N , any policy π ∈ Π, and any s ∈ S. (If not, Jπk(πk+1) = maxπ′ Jπk(π′) ≥ Jπk(π), and then Jπk+1(πk+1(·|s′)) ≥ Jπk(πk+1(·|s′)) ≥ Jπk(π(·|s′)) > Jπk(πk(·|s′)) + (1−γ) γ ≥ J\n∗(s′) for some s′ ∈ S . Clearly, it contradicts the monotone increase of the sequence {Jπi(πi)}.) Then, by the similar way with (A.3),\nQπk(st, at) = 1\nβ rt + γEst+1∼P [V πk(st+1)] =\n1 β rt + γEst+1∼P [Jπk(πk(·|st+1))]\n≥ 1 β rt + γEst+1∼P\n[ Jπk(π(·|st+1))−\n(1− γ) γ ] = 1\nβ rt + γEst+1∼P [Eat+1∼π[Qπk(st+1, at+1) + α logRπ,α(st+1, at+1)− α logαπ(at+1|st+1)]\n+ (1− α)Eat+1∼q[log(1−Rπ,α(st+1, at+1))− log(1− α)q(at+1|st+1)]]− (1− γ) ... ≥ Qπ(st, at)− . (A.4)\nNote that the state action pair (s, a), the policy π, and > 0 were arbitrary, so we can conclude that Qπ∞(s, a) ≥ Qπ(s, a) for any π ∈ Π and (s, a) ∈ S×A. In addition, we show that Jπk(πk(·|s)) ≥ Jπk(π(·|s)) − (1−γ) γ , so Jπ∞(π∞(·|s)) ≥ Jπ(π(·|s)) for any π ∈ Π and any s ∈ S. Thus, π∞ is the optimal policy π∗, and we can conclude that {πi} converges to the optimal policy π∗.\nA.2 PROOF OF THEOREM 2\nTheorem 2 Suppose that the policy is parameterized with parameter θ. 
Then, for parameterized policy πθ, two objective functions Jπθold (πθ(·|st)) and J̃πθold (πθ(·|st)) have the same gradient direction for θ at θ = θold for all st ∈ S.\nProof. Under the parameterization of πθ, the two objective functions become\nJπθold (πθ(·|st)) = β(Eat∼πθ [Q πθold (st, at) + α logR πθ,α(st, at)− α log πθ(at|st)] + (1− α)Eat∼q[log(1−Rπθ,α(st, at))− log q(at|st)]) +H(α)\nJ̃πθold (πθ(·|st)) = βEat∼πθ [Q πold(st, at) + α logR πold,α(st, at)− α log πθ(at|st)].\nWe can ignore the common Q-function and log πθ terms, and the constant terms w.r.t. θ that leads zero gradient in both objective functions. Thus, we only need to show\n∇θ[αEat∼πθ [logRπθ,α] + (1− α)Eat∼q[log(1−Rπθ,α)]] = ∇θEat∼πθ [α logRπθold ,α] (A.5)\nat θ = θold. Now, the gradient of the left term in (A.5) at θ = θold can be expressed as\n∇θ[αEat∼πθ [logRπθ,α] + (1− α)Eat∼q[log(1−Rπθ,α)]] = αEat∼πθ [logRπθ,α · ∇θ log πθ]\n+ αEat∼πθ [∇θ logRπθ,α] + (1− α)Eat∼q[∇θ log(1−Rπθ,α)] = ∇θαEat∼πθ [α logRπθold ,α]\n+ αEat∼πθ [∇θ logRπθ,α] + (1− α)Eat∼q[∇θ log(1−Rπθ,α)]. (A.6)\nHere, the gradient of the last two terms in (A.6) becomes zero as shown below:\nαEat∼πθ [∇θ logRπθ,α] + (1− α)Eat∼q[∇θ log(1−Rπθ,α)] = αEat∼πθ [∇θRπθ,α/Rπθ,α] + (1− α)Eat∼q[∇θ(1−Rπθ,α)/(1−Rπθ,α)] = αEat∼πθ [∇θRπθ,α/Rπθ,α]− (1− α)Eat∼q[∇θRπθ,α/(1−Rπθ,α)]\n= αEat∼πθ [∇θRπθ,α/Rπθ,α]− (1− α)Eat∼q [ απθ + (1− α)q\n(1− α)q · ∇θRπθ,α ] (1) = αEat∼πθ [∇θRπθ,α/Rπθ,α]− αEat∼πθ [ απθ + (1− α)q\nαπθ · ∇θRπθ,α ] = αEat∼πθ [∇θRπθ,α/Rπθ,α]− αEat∼πθ [∇θRπθ,α/Rπθ,α] = 0, (A.7)\nwhere we used an importance sampling technique at Eat∼q[f(st, at)] = Eat∼πθ [ q(at|st) πθ(at|st)f(st, at) ] for Step (1). By (A.6) and (A.7), Jπθold (πθ(·|st)) and Jπθold (πθ(·|st)) have the same gradient at θ = θold." }, { "heading": "B DETAILED DAC IMPLEMENTATION", "text": "To compute the final objective function (9), we need to estimate Qπold and Rπold,α. Qπold can be estimated by diverse policy evaluation. For estimation of Rπold,α, we use function Rα. If we set the objective function of the ratio function as J(Rα(st, ·)) = αEat∼π[logRα(st, at)] + (1 − α)Eat∼q[log(1−Rα(st, at))]. In the α = 0.5 case, Generative Adversarial Network (GAN) (Goodfellow et al., 2014) has shown that the ratio function for α = 0.5 can be estimated by maximizing J(R0.5). By a similar way, we can easily show that maximizing J(Rα) can estimate our ratio function as below:\nFor given s, J(Rα(s, ·)) = ∫ a απ(a|s) logRα(s, a) + (1 − α)q(a|s) log(1 − Rα(s, a))da. The integrand is in the form of y → a log y + b log(1 − y) with a = απ and b = (1 − α)q. For any (a, b) ∈ R2\\(0, 0), the function y → a log y+ b log(1− y) has its maximum at a/(a+ b). Thus, the optimal R∗,α maximizing J(Rα(s, ·)) is R∗,α(s, a) = απ/(απ + (1− α)q) = Rπ,α(st, at). Here, note that J(Rα) becomes just an α-skewed Jensen-Shannon (JS) divergence except some constant terms if Rα = Rπ,α.\nFor implementation we use deep neural networks to approximate the policy π, the diverse value functions Q, V , and the ratio function Rα, and their network parameters are given by θ, φ, ψ, and η, respectively. Based on Section 4.3 and we provide the practical objective (or loss) functions for parameter update as Ĵπ(θ), ĴRα(η), L̂Q(φ), and L̂V (ψ). The objective functions for the policy π and the ratio function Rα are respectively given by\nĴπ(θ) = Est∼D, at∼πθ [Qφ(st, at) + α logRαη (st, at)− α log πθ(at|st)], (B.1)\nĴRα(η) = Est∼D[αEat∼πθ [logRαη (st, at)] + (1− α)Eat∼D[log(1−Rαη (st, at))]]. 
(B.2) Furthermore, based on the Bellman operator, the loss functions for the value functions Q and V are given respectively given by\nL̂Q(φ) = E(st, at)∼D [ 1\n2 (Qφ(st, at)− Q̂(st, at))2\n] , (B.3)\nL̂V (ψ) = Est∼D [ 1\n2 (Vψ(st)− V̂ (st))2\n] , (B.4)\nwhere the target values are defined as\nQ̂(st, at) = 1\nβ rt + γEst+1∼P [Vψ̄(st+1)] (B.5)\nV̂ (st) = Eat∼πθ [Qφ(st, at) + α logRαη (st, at)− α logαπθ(at|st)] + (1− α)Eat∼D[log(1−Rαη (st, at))− log(1− α)q(at|st)]. (B.6)\nBy using the property of ratio function that satisfies log(1−Rπ,α)− log(1−α)q = − log(απ+(1− α)q) = logRπ,α− logαπ, we can replace the last term in V̂ (st) as (1−α)Eat∼D[logRαη (st, at)− logαπ(at|st)]. However, the probability of π for actions sampled from D can have high variance, so we clip the term in the expectation over at ∼ D by action dimension for stable learning, then the final target value becomes\nV̂ (st) = Eat∼πθ [Qφ(st, at) + α logRαη (st, at)− α logαπθ(at|st)] + (1− α)Eat∼D[clip(logRαη (st, at)− logαπ(at|st),−d, d)], (B.7)\nwhere d = dim(A) is the action dimension. We will use (B.7) for implementation. Then, note that all objective (or loss) functions does not require the explicit q, and they can be represented by using the ratio function Rα only as explained in Section 4.3.\nIn addition, Rα ∈ (0, 1) should be guaranteed in the proof of Theorem 1, and Rα ∈ (0, 1) satisfies when π and q are non-zero for all state-action pairs. For practical implementation, we clipped the ratio function as ( , 1− ) for small > 0 since some q values can be close to zero before the replay buffer stores a sufficient amount of samples. π is always non-zero since we consider Gaussian policy.\nHere, ψ̄ is the network parameter of the target value Vψ̄ updated by exponential moving average (EMA) of ψ for stable learning (Mnih et al., 2015). Combining all up to now, we propose the\ndiversity actor-critic (DAC) algorithm summarized as Algorithm 1 in Appendix C. Note that DAC becomes SAC when α = 1, and becomes standard off-policy RL without entropy regularization when α = 0.\nTo compute the gradient of Ĵπ(θ), we use the reparameterization trick proposed by (Kingma & Welling, 2013; Haarnoja et al., 2018a). Note that the policy action at ∼ πθ is the output of the policy neural network with parameter θ. So, it can be viewed as at = fθ( t; st), where f is a function parameterized by θ and t is a noise vector sampled from spherical normal distribution N . Then, the gradient of Ĵπ(θ) is represented as∇θĴπ(θ) = Est∼D, t∼N [∇a(Qφ(st, a) +α logRαη (st, a)− α log πθ(a|st))|a=fθ( t;st)∇θfθ( t; st)− α(∇θ log πθ)(fθ( t; st)|st)]. For implementation, we use two Q-functions Qφi , i = 1, 2 to reduce overestimation bias as proposed in (Fujimoto et al., 2018), and each Q-function is updated to minimize their loss function L̂Q(φi). For the policy and the value function update, the minimum of two Q-functions is used (Haarnoja et al., 2018a).\nNote that one version of SAC (Haarnoja et al., 2018b) considers adaptation of the entropy control factor β by using the Lagrangian method with constraint H(π) ≥ c. In our case, this approach can also be generalized, but it is beyond the scope of the current paper and we only consider fixed β in this paper.\nB.1 DETAILED IMPLEMENTATION OF THE α-ADAPTATION\nIn order to learn α, we parameterize α as a function of st using parameter ξ, i.e., α = αξ(st), and implement αξ(st) with a neural network. 
Then, ξ is updated to minimize the following loss function deduced from (10):\nL̂α(ξ) = Est∼D[αξH(πθ) +D αξ JS(πθ||q) + (1− αξ)H(q)− αξc] (B.8)\nHere, all the updates for diverse policy iteration is the same except that α is replaced with αξ(st). Then, the gradient of L̂α(ξ) with respect to ξ can be estimated as below:\nThe loss function of α is defined as L̂α(ξ) = Est∼D[αξH(πθ)+D αξ JS(πθ||q)+(1−αξ)H(q)−αξc]. The gradient of L̂α(ξ) can be computed as\n∇ξL̂α(ξ) = ∇ξEst∼D[αξH(πθ) +D αξ JS(πθ||q) + (1− αξ)H(q)− αξc]\n=∇ξEst∼D[αξEat∼πθ [− log(αξπθ + (1− αξ)q)− c] + (1− αξ)Eat∼q[− log(αξπθ + (1− αξ)q)]] =Est∼D[(∇ξαξ)(Eat∼πθ [− log(αξπθ + (1− αξ)q)− c]− Eat∼q[− log(αξπθ + (1− αξ)q)])]\n+ Est∼D[αξEat∼πθ [−∇ξ log(αξπθ + (1− αξ)q)] + (1− αξ)Eat∼q[−∇ξ log(αξπθ + (1− αξ)q)]] =Est∼D[(∇ξαξ)(Eat∼πθ [− logαξπθ + logRπθ,αξ − c]− Eat∼q[log(1−Rπθ,αξ)− log(1− αξ)q])]\n+ Est∼D [∫\nat∈A (αξπθ + (1− αξ)q)[−∇ξ log(αξπθ + (1− αξ)q)︸ ︷︷ ︸\n=0\n]\n]\n=Est∼D[(∇ξαξ)(Eat∼πθ [− logαξπθ + logRπθ,αξ − c]− Eat∼q[logRπθ,αξ − logαξπθ])] (B.9)\nNote that Rπθ,αξ can be estimated by the ratio function Rαξη . Here, we use the same clipping technique as used in (B.7) for the last term of (B.9)." }, { "heading": "C ALGORITHM", "text": "Algorithm 1 Diversity Actor Critic Initialize parameter θ, η, ψ, ψ̄, ξ, φi, i = 1, 2 for each iteration do\nSample a trajectory τ of length N by using πθ Store the trajectory τ in the buffer D for each gradient step do\nSample random minibatch of size M from D Compute Ĵπ(θ), ĴRα(η), L̂Q(φi), L̂V (ψ) from the minibatch θ ← θ + δ∇θĴπ(θ) η ← η + δ∇ηĴRα(η) φi ← φi − δ∇φiL̂Q(φi), i = 1, 2 ψ ← ψ − δ∇ψL̂V (ψ) Update ψ̄ by EMA from ψ if α-Adpatation then\nCompute L̂α(ξ) from the minibatch ξ ← ξ − δ∇ξL̂α(ξ)\nend if end for\nend for" }, { "heading": "D HYPERPARAMETER SETUP AND ENVIRONMENT DESCRIPTION", "text": "In Table D.1, we provide the detailed hyperparameter setup for DAC and the SAC baselines: SAC, and SAC-Div. Table D.2 shows the environment description, the corresponding entropy control coefficient β, threshold for sparse Mujoco tasks, and reward delay D for delayed Mujoco tasks." }, { "heading": "E SIMULATION SETUP", "text": "We compared our DAC algorithm with the SAC baselines and other RL algorithms on various types of Mujoco tasks with continuous action spaces (Todorov et al., 2012) in OpenAI GYM (Brockman et al., 2016). For fairness, both SAC/SAC-Div and DAC used a common hyperparameter setup that basically follows the setup in (Haarnoja et al., 2018a). Detailed hyperparameter setup and environment description are provided in Appendix D, and the entropy coefficient β is selected based on the ablation study in Section 6.3. For the policy space Π we considered Gaussian policy set widely considered in usual continuous RL. For the performance plots in this section, we used deterministic evaluation which generated an episode by deterministic policy for each iteration, and the shaded region in the figure represents standard deviation (1σ) from the mean." }, { "heading": "F PERFORMANCE COMPARISONS", "text": "In this section, we provide more performance plots and tables. In Section F.1, Fig. F.1 shows the mean number of discretized state visitation curve of DAC and SAC/SAC-Div. For discretization, we simply consider 2 components of observations of Mujoco tasks, which indicate the position of the agent: x, z axis position for SparseHalfCheetah, SparseHopper, and SparseWalker, and x, y axis position for SparseAnt. We discretize the position by setting the grid spacing per axis to 0.01 in range (−10, 10). 
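For reference, the block counting behind this discretization can be sketched in a few lines of Python (names are illustrative):

```python
import numpy as np

def count_visited_blocks(positions, spacing=0.01, low=-10.0, high=10.0):
    """positions: array of shape (T, 2), e.g., the (x, z) coordinates above."""
    pos = np.clip(np.asarray(positions, dtype=np.float64), low, high - 1e-9)
    cells = np.floor((pos - low) / spacing).astype(np.int64)  # per-axis bin index
    return len({tuple(c) for c in cells})                     # distinct 2-D blocks
```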
Table F.1 shows the performance on sparse Mujoco tasks. Table F.2 shows the max average return for HumanoidStandup and the delayed Mujoco tasks. In Section F.3, Fig. F.3 and Table F.3 show the performance comparison to other RL algorithms on HumanoidStandup and the delayed Mujoco tasks.

F.1 PERFORMANCE COMPARISON WITH THE SAC BASELINES

[Figure F.1: The number of discretized state visitations on sparse Mujoco tasks. Panels: (a) SparseHalfCheetah-v1, (b) SparseHopper-v1, (c) SparseWalker2d-v1, (d) SparseAnt-v1; curves: SAC, SAC-Div, DAC; axes: time steps (1e6) vs. mean number of visited blocks]

                   DAC (α = 0.5)    SAC              SAC-Div
SparseHalfCheetah  915.90±50.71     386.90±404.70    394.70±405.53
SparseHopper       896.90±10.57     900.60±5.22      901.40±4.25
SparseWalker2d     573.10±404.96    301.30±408.15    373.10±433.13
SparseAnt          935.80±37.08     870.70±121.14    963.80±42.51
Table F.1: Max average return of the DAC algorithm and SAC baselines for the fixed α setup

                   DAC (α = 0.5)       DAC (α = 0.8)       DAC (α-adapt.)      SAC                SAC-Div
HumanoidS          202491.81±25222.77  170832.05±12344.71  197302.37±43055.31  167394.36±7291.99  165548.76±2005.85
Del. HalfCheetah   6071.93±1045.64     6552.06±1140.18     7594.70±1259.23     3742.33±3064.55    4080.67±3418.07
Del. Hopper        3283.77±112.04      2836.81±679.05      3428.18±69.08       2175.31±1358.39    2090.64±1383.83
Del. Walker2d      4360.43±507.58      3973.37±273.63      4067.11±257.81      3220.92±1107.91    4048.11±290.48
Del. Ant           4088.12±578.99      3535.72±1164.76     4243.19±795.49      3248.43±1454.48    3978.34±1370.23
Table F.2: Max average return of DAC algorithms and SAC baselines for the adaptive α setup

F.2 COMPARISON TO RND AND MAXENT
We first compared the pure exploration performance of DAC to random network distillation (RND) (Burda et al., 2018) and MaxEnt (Hazan et al., 2019), which are state-of-the-art exploration methods, on the continuous 4-room maze task described in Section 6.1. RND adds an intrinsic reward rint,t to the MDP extrinsic reward rt as rRND,t = rt + cint · rint,t, based on the model prediction error rint,t = ||f̂(st+1) − f(st+1)||² between a prediction network f̂ and a random target network f for the given state st+1. The parameters of the target network are fixed at a random initialization, and the prediction network learns to minimize the MSE between the two models; the agent is therefore driven toward rare states, since rare states have higher prediction errors. MaxEnt, on the other hand, maximizes the entropy of the state mixture distribution dπmix by setting the reward functional in (Hazan et al., 2019) to − log dπmix(s) + cM, where dπ is the state distribution of the trajectory generated from π and cM is a smoothing constant. MaxEnt mainly targets large or continuous state spaces, so the reward functional is computed with several projection/discretization methods. A minimal sketch of the RND intrinsic reward is given below.
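The sketch is PyTorch with illustrative names; the network sizes match the setup stated in the next paragraph, while the optimizer and learning rate are our own choices:

```python
import torch
import torch.nn as nn

def mlp(in_dim, out_dim=20, hidden=256):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))

state_dim = 2                          # (x, y) position in the 4-room maze
target = mlp(state_dim)                # f: frozen at its random initialization
for p in target.parameters():
    p.requires_grad_(False)
predictor = mlp(state_dim)             # f_hat: trained to match the target
opt = torch.optim.Adam(predictor.parameters(), lr=3e-4)

def intrinsic_reward(next_states):     # next_states: (B, state_dim) tensor
    err = (predictor(next_states) - target(next_states)).pow(2).sum(-1)
    opt.zero_grad()
    err.mean().backward()              # one predictor update on the same batch
    opt.step()
    return err.detach()                # r_int: higher for rarer states
```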
In (Hazan et al., 2019), MaxEnt was shown to explore the state space better than a simple random policy on various tasks with continuous state spaces.
For RND, we use an MLP with two ReLU hidden layers of size 256 for both the prediction network and the target network, where the input dimension equals the state dimension and the output dimension is 20, and we use cint = 1. For MaxEnt, we compute the reward functional at each iteration by kernel density estimation with bandwidth 0.1, as stated in (Hazan et al., 2019), over the previous 10000 states stored in the buffer, and we use cM = 0.01. For RND and MaxEnt, we replace the entropy term of SAC/DAC with the intrinsic reward and the reward functional term, respectively, and we use a Gaussian policy with fixed standard deviation σ = 0.1. Fig. F.2(a) shows the mean number of state visitations over 30 seeds on the 4-room maze task, and Fig. F.2(b) shows the corresponding state visit histogram over all seeds. As seen in Fig. F.2, DAC explores more states than RND and MaxEnt on the continuous 4-room maze task, so the exploration of DAC is more sample-efficient than that of RND/MaxEnt on this task.

[Figure F.2: Pure exploration comparison with RND/MaxEnt. (a) Mean number of visited blocks vs. time steps (1e5) for RND, MaxEnt, and DAC; (b) state visit histograms of RND, MaxEnt, and DAC at 5k (left), 50k (middle), and 500k (right) steps]

F.3 COMPARISON TO OTHER RL ALGORITHMS
We also compare the performance of DAC with α-adaptation to other state-of-the-art RL algorithms. Here, we consider the on-policy RL algorithms Proximal Policy Optimization (Schulman et al., 2017b) (PPO, a stable and popular on-policy algorithm) and Actor Critic using Kronecker-factored Trust Region (Wu et al., 2017) (ACKTR, an actor-critic that approximates the natural gradient with Kronecker-factored curvature), and the off-policy RL algorithms Twin Delayed Deep Deterministic Policy Gradient (Fujimoto et al., 2018) (TD3, which uses clipped double-Q learning to reduce overestimation) and Soft Q-Learning (Haarnoja et al., 2017) (SQL, energy-based policy optimization using Stein variational gradient descent). We used the implementations in OpenAI baselines (Dhariwal et al., 2017) for PPO and ACKTR, and the authors' GitHub implementations for the other algorithms. We provide the performance results in Fig. F.3 and Table F.3.

[Figure F.3: Performance comparison to other RL algorithms. Average episode reward sum of ACKTR, SQL, TD3, PPO, SAC, SAC-Div, and DAC(α-adapt.) on (a) HumanoidStandup-v1, (b) DelayedHalfCheetah-v1, (c) DelayedHopper-v1, (d) DelayedWalker2d-v1, (e) DelayedAnt-v1]

                   DAC                 PPO                ACKTR               SQL                 TD3                SAC
HumanoidS          197302.37±43055.31  160211.90±3268.37  109655.30±49166.15  138996.84±33903.03  58693.87±12269.93  167394.36±7291.99
Del. HalfCheetah   7594.70±1259.23     2247.92±640.69     3295.30±824.05      5673.34±1241.30     4639.85±1393.95    3742.33±3064.55
Del. Hopper        3428.18±69.08       2740.15±719.63     2864.81±1072.64     2720.32±127.71      2276.58±1471.66    2175.31±1358.39
Del. Walker2d      4067.11±257.81      2859.27±1938.50    1927.32±1647.49     3323.63±503.18      3736.72±1806.37    3220.92±1107.91
Del. Ant           4243.19±795.49      1224.33±521.62     2956.51±234.89      6.59±16.42          904.99±1811.78     3248.43±1454.48
Table F.3: Max average return of DAC and other RL algorithms" }, { "heading": "G ABLATION STUDIES", "text": "Here, we provide additional ablation studies for the remaining delayed Mujoco tasks. Fig. G.1 shows the averaged learning curves of α, DαJS, and H(π) for DAC with α-adaptation, where the control coefficient c is −2.0d and d = dim(A). Fig. G.2 shows the performance of DAC with α-adaptation for control coefficients c = 0, −0.5d, −1.0d, and −2.0d. Fig. G.3 shows the performance of DAC with α = 0.5 and β = 0.1, 0.2, 0.4. Fig. G.4 shows the performance of SAC, SAC-Div with KL divergence (SAC-Div(KL)), SAC-Div with JS divergence (SAC-Div(JS)), and DAC, to see the effect of the JS divergence on the performance, as explained in Section 6.3.
Other hyperparameters follow the default setup given in Table D.1.
Weighting factor α
[Figure G.1: Ablation study for α. Averaged learning curves of α, DαJS(π||q), and H(π)/dim(A) on (a) DelayedHopper-v1, (b) DelayedWalker2d-v1, (c) DelayedAnt-v1]
Control coefficient c
[Figure G.2: Ablation study for c. Average episode reward sum of SAC, SAC-Div, and DAC(α-adapt.) with c = 0.0d, −0.5d, −1.0d, −2.0d on (a) DelayedHopper-v1, (b) DelayedWalker2d-v1, (c) DelayedAnt-v1]
Entropy coefficient β
[Figure G.3: Ablation study for β. Average episode reward sum of SAC(β = 0.2) and DAC(α = 0.5) with β = 0.1, 0.2, 0.4 on (a) DelayedHopper-v1, (b) DelayedWalker2d-v1, (c) DelayedAnt-v1]
The Effect of JS divergence
[Figure G.4: Ablation study for JS divergence. Average episode reward sum of SAC, SAC-Div(KL), SAC-Div(JS), and DAC(α-adapt.) on (a) DelayedHopper-v1, (b) DelayedWalker2d-v1, (c) DelayedAnt-v1]" } ]
2020
null
SP:7286d578f6acf486a688b7631e16c483efb6a540
[ "The authors replace the divergence-based constraint in trust region policy optimization model with an alternate distance measure, which is added to the objective function with a multiplier (beta). In fact, the parameter beta plays a role that is similar to a Lagrange multiplier, if the new distance measure is introduced as a constraint. The authors explain the shortcomings of KL-divergence and the solutions obtained with other methods but they do not provide a sufficient discussion how and why their simple approach overcomes those concerns. For instance, why would the new measure encourage exploration and what is the effect of large beta value on this?", "This paper introduces POP3D, an on-policy policy gradient algorithm that is a variant of TRPO and PPO. While TRPO uses a particular penalty function to keep the policy from being updated too aggressively, POP3D uses an alternative objective function that lower bounds the square of the total variance divergence between two policy distributions. The authors argue that this alternative formulation results in an algorithm that is sample-efficient, like PPO, but that is more effective at keeping policy updates from overshooting. The authors also argue that this new formulation helps users to avoid the arguably challenging process of selecting penalty constants, as required (for instance) by TRPO." ]
As a recognized variant and improvement of Trust Region Policy Optimization (TRPO), proximal policy optimization (PPO) has been widely used with several advantages: efficient data utilization, easy implementation, and good parallelism. In this paper, a first-order gradient reinforcement learning algorithm called Policy Optimization with Penalized Point Probability Distance (POP3D), whose penalty term is a lower bound to the square of the total variation divergence, is proposed as another powerful variant. The penalty term has dual effects: it prohibits policy updates from overshooting and encourages more exploration. Through carefully controlled experiments on both discrete and continuous benchmarks, our approach proves highly competitive with PPO.
[ { "affiliations": [], "name": "A STRONG" }, { "affiliations": [], "name": "ON-POLICY COMPETITOR" } ]
[ { "authors": [ "Marc G Bellemare", "Will Dabney", "Rémi Munos" ], "title": "A distributional perspective on reinforcement learning", "venue": "arXiv preprint arXiv:1707.06887,", "year": 2017 }, { "authors": [ "Logan Engstrom", "Andrew Ilyas", "Shibani Santurkar", "Dimitris Tsipras", "Firdaus Janoos", "Larry Rudolph", "Aleksander Madry" ], "title": "Implementation matters in deep rl: A case study on ppo and trpo", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Lasse Espeholt", "Hubert Soyer", "Remi Munos", "Karen Simonyan", "Volodymir Mnih", "Tom Ward", "Yotam Doron", "Vlad Firoiu", "Tim Harley", "Iain Dunning" ], "title": "Impala: Scalable distributed deep-rl with importance weighted actor-learner architectures", "venue": "arXiv preprint arXiv:1802.01561,", "year": 2018 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Pieter Abbeel", "Sergey Levine" ], "title": "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Peter Henderson", "Riashat Islam", "Philip Bachman", "Joelle Pineau", "Doina Precup", "David Meger" ], "title": "Deep reinforcement learning that matters", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Matteo Hessel", "Joseph Modayil", "Hado Van Hasselt", "Tom Schaul", "Georg Ostrovski", "Will Dabney", "Dan Horgan", "Bilal Piot", "Mohammad Azar", "David Silver" ], "title": "Rainbow: Combining improvements in deep reinforcement learning", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Dan Horgan", "John Quan", "David Budden", "Gabriel Barth-Maron", "Matteo Hessel", "Hado van Hasselt", "David Silver" ], "title": "Distributed prioritized experience replay", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Timothy P Lillicrap", "Jonathan J Hunt", "Alexander Pritzel", "Nicolas Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "arXiv preprint arXiv:1509.02971,", "year": 2015 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "Volodymyr Mnih", "Adria Puigdomenech Badia", "Mehdi Mirza", "Alex Graves", "Timothy Lillicrap", "Tim Harley", "David Silver", "Koray Kavukcuoglu" ], "title": "Asynchronous methods for deep reinforcement learning", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "Kevin P Murphy" ], "title": "Machine learning: a probabilistic perspective", "venue": "MIT press,", "year": 2012 }, { "authors": [ "Hieu Pham", "Melody Guan", "Barret Zoph", "Quoc Le", "Jeff Dean" ], "title": "Efficient neural architecture search via parameters sharing", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Tom Schaul", "John Quan", "Ioannis Antonoglou", "David Silver" ], "title": "Prioritized experience replay", "venue": "arXiv 
preprint arXiv:1511.05952,", "year": 2015 }, { "authors": [ "John Schulman", "Sergey Levine", "Pieter Abbeel", "Michael Jordan", "Philipp Moritz" ], "title": "Trust region policy optimization", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "John Schulman", "Philipp Moritz", "Sergey Levine", "Michael Jordan", "Pieter Abbeel" ], "title": "High-dimensional continuous control using generalized advantage estimation", "venue": "In ICLR,", "year": 2016 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "David Silver", "Julian Schrittwieser", "Karen Simonyan", "Ioannis Antonoglou", "Aja Huang", "Arthur Guez", "Thomas Hubert", "Lucas Baker", "Matthew Lai", "Adrian Bolton" ], "title": "Mastering the game of go without human knowledge", "venue": null, "year": 2017 }, { "authors": [ "Richard S Sutton", "Andrew G Barto" ], "title": "Reinforcement learning: An introduction", "venue": "MIT press,", "year": 2018 }, { "authors": [ "Mingxing Tan", "Bo Chen", "Ruoming Pang", "Vijay Vasudevan", "Mark Sandler", "Andrew Howard", "Quoc V Le" ], "title": "Mnasnet: Platform-aware neural architecture search for mobile", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Hado Van Hasselt", "Arthur Guez", "David Silver" ], "title": "Deep reinforcement learning with double qlearning", "venue": "In Thirtieth AAAI conference on artificial intelligence,", "year": 2016 }, { "authors": [ "Ziyu Wang", "Tom Schaul", "Matteo Hessel", "Hado Hasselt", "Marc Lanctot", "Nando Freitas" ], "title": "Dueling network architectures for deep reinforcement learning", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "Yuhuai Wu", "Elman Mansimov", "Roger B Grosse", "Shun Liao", "Jimmy Ba" ], "title": "Scalable trust-region method for deep reinforcement learning using kronecker-factored approximation", "venue": "In Advances in neural information processing systems,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "With the development of deep reinforcement learning, lots of impressive results have been produced in a wide range of fields such as playing Atari game (Mnih et al., 2015; Hessel et al., 2018), controlling robotics (Lillicrap et al., 2015), Go (Silver et al., 2017), neural architecture search (Tan et al., 2019; Pham et al., 2018).\nThe basis of a reinforcement learning algorithm is generalized policy iteration (Sutton & Barto, 2018), which states two essential iterative steps: policy evaluation and improvement. Among various algorithms, policy gradient is an active branch of reinforcement learning whose foundations are Policy Gradient Theorem and the most classical algorithm REINFORCEMENT (Sutton & Barto, 2018). Since then, handfuls of policy gradient variants have been proposed, such as Deep Deterministic Policy Gradient (DDPG) (Lillicrap et al., 2015), Asynchronous Advantage Actor-Critic (A3C) (Mnih et al., 2016), Actor-Critic using Kronecker-factored Trust Region (ACKTR) (Wu et al., 2017), and Proximal Policy Optimization (PPO) (Schulman et al., 2017).\nImproving the strategy monotonically had been nontrivial until Schulman et al. (2015) proposed Trust Region Policy Optimization (TRPO), in which Fisher vector product is utilized to cut down the computing burden. Specifically, Kullback–Leibler divergence (KLD) acts as a hard constraint in place of objective, because its corresponding coefficient is difficult to set for different problems. However, TRPO still has several drawbacks: too complicated, inefficient data usage. Quite a lot of efforts have been devoted to improving TRPO since then and the most commonly used one is PPO.\nPPO can be regarded as a first-order variant of TRPO and have obvious improvements in several facets. In particular, a pessimistic clipped surrogate objective is proposed where TRPO’s hard constraint is replaced by the clipped action probability ratio. In such a way, it constructs an unconstrained optimization problem so that any first-order stochastic gradient optimizer can be directly applied. Besides, it’s easier to be implemented and more robust against various problems, achieving an impressive result on Atari games (Brockman et al., 2016). However, the cost of data sampling is not always cheap. Haarnoja et al. (2018) design an off-policy algorithm called Soft Actor-Critic and achieves the state of the art result by encouraging better exploration using maximum entropy.\nIn this paper, we focus on the on-policy improvement to improve PPO and answer the question: how to successfully leverage penalized optimization to solve the constrained one which is formulated by Schulman et al. (2015).\n1. It proposes a simple variant of TRPO called POP3D along with a new surrogate objective containing a point probability penalty item, which is symmetric lower bound to the square of the total variance divergence of policy distributions. Specifically, it helps to stabilize\nthe learning process and encourage exploration. Furthermore, it escapes from penalty item setting headache along with penalized version TRPO, where is arduous to select one fixed value for various environments.\n2. It achieves state-of-the-art results among on-policy algorithms with a clear margin on 49 Atari games within 40 million frame steps based on two shared metrics. Moreover, it also achieves competitive results compared with PPO in the continuous domain. 
It dives into the mechanism of PPO’s improvement over TRPO from the perspective of solution manifold, which also plays an important role in our method.\n3. It enjoys almost all PPO’s advantages such as easy implementation, fast learning ability.\nWe provide the code and training logs to make our work reproducible." }, { "heading": "2 PRELIMINARY KNOWLEDGE AND RELATED WORK", "text": "" }, { "heading": "2.1 POLICY GRADIENT", "text": "Agents interact with the environment and receive rewards which are used to adjust their policy in turn. At state st, one agent takes strategy π and transfers to a new state st+1, rewarded rt by the environment. Maximizing discounted return (accumulated rewards) Rt is its objective. In particular, given a policy π, Rt is defined as\nRt = ∞∑ n=0 (rt + γrt+1 + γ 2rt+2 + ...+ γ nrt+n). (1)\nγ is the discounted coefficient to control future rewards, which lies in the range (0, 1). Regarding a neural network with parameter θ, the policy πθ(a|s) can be learned by maximizing Equation 1 using the back-propagation algorithm. Particularly, given Q(s, a) which represents the agent’s return in state s after taking action a, the objective function can be written as\nmax θ\nEs,a log πθ(a|s)Q(s, a). (2)\nEquation 2 lays the foundation for handfuls of policy gradient based algorithms. Another variant can be deduced by using A(s, a) = Q(s, a)− V (s) (3) to replace Q(s, a) in Equation 2 equivalently, V (s) can be any function so long as V depends on s but not a. In most cases, state value function is used for V , which not only helps to reduce variations but has clear physical meaning. Formally, it can be written as\nmax θ\nEs,a log πθ(a|s)A(s, a). (4)" }, { "heading": "2.2 ADVANTAGE ESTIMATE", "text": "A commonly used method for advantage calculation is one-step estimation, which follows A(st, at) = Q(st, at)− V (st) = rt + γV (st+1)− V (st). (5)\nHowever, a more accurate method called generalized advantage estimation is proposed in Schulman et al. (2016), where all time steps of estimation are combined and summarized using λ-based weights,. The generalized advantage estimator ÂGAE(γ,λ)t is defined by Schulman et al. (2016) as\n GAE(γ,λ) t := (1− λ) ∗ ( (1) t + λ (2) t + λ 2 (3) t + . . .) = ∞∑ l=0 (γλ)lδVt+l\nδVt+l = rt+l + γV (st+l+1)− V (st+l).\n (k) t := k−1∑ l=0 γlδVt+l = −V (st) + rt + γrt+1 + · · ·+ γk−1rt+k−1 + γkV (st+k)\n(6)\nThe parameter λ meets 0 ≤ λ ≤ 1, which controls the trade-off between bias and variance. All methods in this paper utilize ÂGAE(γ,λ)t to estimate the advantage." }, { "heading": "2.3 TRUST REGION POLICY OPTIMIZATION", "text": "Schulman et al. (2015) propose TRPO to update the policy monotonically. In particular, its mathematical form is\nmax θ Et[ πθ(at|st) πθold(at|st) Ât]− CEt[KL[πθold(·|st), πθ(·|st)]]\n= max s Ea∼πθ(a|s)[Aπθold (s, a)])\n(7)\nwhere C is the penalty coefficient, C = 2 γ(1−γ)2 .\nIn practice, the policy update steps would be too small if C is valued as Equation 7. In fact, it’s intractable to calculate C beforehand since it requires traversing all states to reach the maximum. Moreover, inevitable bias and variance will be introduced by estimating the advantages of old policy while training. Instead, a surrogate objective is maximized based on the KLD constraint between the old and new policy, which can be written as below,\nmax θ Et[ πθ(at|st) πθold(at|st) Ât]\ns.t. Et[KL[πθold(·|st), πθ(·|st)]] ≤ δ (8)\nwhere δ is the KLD upper limitation. 
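As a concrete reference, the generalized advantage estimator of Equation 6, used by all methods in this paper, is usually computed with the standard backward recursion Ât = δVt + γλÂt+1. A minimal sketch (illustrative names):

```python
import numpy as np

def gae(rewards, values, gamma=0.99, lam=0.95):
    """rewards: shape (T,); values: shape (T+1,), with a bootstrap value appended."""
    T = len(rewards)
    advantages = np.zeros(T)
    last = 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]  # TD residual
        last = delta + gamma * lam * last
        advantages[t] = last
    return advantages
```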
In addition, the conjugate gradient algorithm is applied to solve Equation 8 more efficiently. Two major problems have yet to be addressed: one is its complexity even using the conjugate gradient approach, another is compatibility with architectures that involve noise or parameter sharing tricks (Schulman et al., 2017)." }, { "heading": "2.4 PROXIMAL POLICY OPTIMIZATION", "text": "To overcome the shortcomings of TRPO, PPO replaces the original constrained problem with a pessimistic clipped surrogate objective where KL constraint is implicitly imposed. The loss function can be written as\nLCLIP (θ) = Et[min(rt(θ)Ât, clip(rt(θ), 1− , 1 + )Ât)]\nrt(θ) = πθ(at|st) πθold(at|st) ,\n(9)\nwhere is a hyper-parameter to control the clipping ratio. Except for the clipped PPO version, KL penalty versions including fixed and adaptive KLD. Besides, their simulation results convince that clipped PPO performs best with an obvious margin across various domains." }, { "heading": "3 POLICY OPTIMIZATION WITH PENALIZED POINT PROBABILITY DISTANCE", "text": "Before diving into the details of POP3D, we review some drawbacks of several methods, which partly motivate us." }, { "heading": "3.1 DISADVANTAGES OF KULLBACK-LEIBLER DIVERGENCE", "text": "TRPO (Schulman et al., 2015) induced the following inequality1,\nη(πθ) ≤ Lπθold (πθ) + 2 γ (1− γ)2 α2\nα = DmaxTV (πθold , πθ)\nDmaxTV (πθold , πθ) = maxs DTV (πθold ||πθ)\n(10)\nTRPO replaces the square of total variation divergence DmaxTV (πθold , πθ) by D max KL (πθold , πθ) = maxsDKL(πθold ||πθ). 1Note that η means loss instead of return as the ICML version (Schulman et al., 2015).\nGiven a discrete distribution p and q, their total variation divergence DTV (p||q) is defined as\nDTV (p||q) := 1\n2 ∑ i |pi − qi| (11)\nin TRPO (Schulman et al., 2015). Obviously, DTV is symmetric by definition, while KLD is asymmetric. Formally, given state s, KLD of πθold(·|s) for πθ(·|s) can be written as\nDKL(πθold(·|s)||πθ(·|s)) := ∑ a πθold(a|s) ln πθold(a|s) πθ(a|s) . (12)\nSimilarly, KLD in the continuous domain can be defined simply by replacing summation with integration. The consequence of KLD’s asymmetry leads to a non-negligible difference of whether choose DKL(πθold ||πθ) or DKL(πθ||πθold). Sometimes, those two choices result in quite different solutions. Robert compared the forward and reverse KL on a distribution, one solution matches only one of the modes, and another covers both modes (Murphy, 2012). Therefore, KLD is not an ideal bound or approximation for the expected discounted cost." }, { "heading": "3.2 DISCUSSION ABOUT PESSIMISTIC PROXIMAL POLICY", "text": "In fact, PPO is called pessimistic proximal policy optimization2 in the meaning of its objective construction style. Without loss of generality, supposing At > 0 for given state st and action at, and the optimal choice is a?t . When at = a ? t , a good update policy is to increase the probability of action to a relatively high value a?t by adjusting θ. However, the clipped item clip(rt(θ), 1− , 1 + )Ât will fully contribute to the loss function by the minimum operation, which ignores further reward by zero gradients even though it’s the optimal action. Other situation with At < 0 can be analyzed in the same manner.\nHowever, if the pessimistic limitation is removed, PPO’s performance decreases dramatically (Schulman et al., 2017), which is again confirmed by our preliminary experiments. 
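To make the clipping behavior of Equation 9 concrete, here is a minimal sketch (PyTorch, illustrative names) of the clipped surrogate written as a loss to minimize. Once rt(θ) is pushed beyond [1 − ε, 1 + ε] on the favorable side, the min selects the clipped constant and the gradient with respect to θ vanishes, which is exactly the pessimistic behavior discussed above.

```python
import torch

def ppo_clip_loss(log_prob_new, log_prob_old, advantages, eps=0.2):
    ratio = torch.exp(log_prob_new - log_prob_old.detach())   # r_t(theta)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantages
    return -torch.min(unclipped, clipped).mean()
```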
In a word, the pessimistic mechanism plays a very critical role for PPO in that it has a relatively weak preference for a good action decision at a given state, which in turn affects its learning efficiency." }, { "heading": "3.3 RESTRICTED SOLUTION MANIFOLD FOR EXACT DISTRIBUTION MATCHING", "text": "To be simple, we don’t take the model identifiability issues along with deep neural network into account here because they don’t affect the following discussion much (LeCun et al., 2015). Suppose πθ? is the optimal solution for a given environment, in most cases, more than one parameter set for θ can generate the ideal policy, especially when πθ? is learned by a deep neural network. In other words, the relationship between θ and πθ? is many to one. On the other hand, when agents interact with the environment using policy represented by neural networks, they prefer to takes the action with the highest probability. Although some strategies of enhancing exploration are applied, they don’t affect the policy much in the meaning of expectation.\nRL methods can help agents learn useful policies after fully interacting with the environment. Take Atari-Pong game for example, when an agent sees a Pong ball coming close to the right (state s1), its optimal policy is moving the racket to the right position (for example, the \"RIGHT\" action) with a distribution ps1θ1 = [0.05, 0.05, 0.1, 0.7, 0.05, 0.05]\n3. The probability of selecting \"RIGHT\" is a relatively high value such as 0.7. It’s almost impossible to push it to be 1.0 exactly since it’s produced by a softmax operation on several discrete actions. In fact, we hardly obtain the optimal solution accurately. Instead, our goal is to find a good enough policy. In this case, the policy of pushing p(RIGHT|s1) above a threshold is sufficient to be a good one. In other words, paying attention to the most critical actions is sufficient, and we don’t care much the probability value of the other non-critical actions. For example, a good policy at s1 is [?,?, ≥ 0.7, ?,?,?]. Note that πθ(a|s) is represented by a neural network parameterized using θ and a good policy for the whole game means\n2The word “pessimistic” is used by the PPO paper. 3The action space is described as [‘NOOP’, ‘FIRE’, ‘RIGHT‘, ‘LEFT’, ‘RIGHTFIRE’, ‘LEFTFIRE’].\nthat the network can perform well across the whole state space. Focusing on those critical actions at each state4 and ignoring non-critical ones can help the network learn better and more easily.\nUsing a penalty such as KLD cannot utilize this good property, because it involves all of the actions’ probabilities. Moreover, it doesn’t stop penalizing unless two distributions become exactly indifferent or the advantage item is large enough to compensate for the KLD cost. Therefore, even if θ outputs θold the same high probability for the right action, the penalization still exists. Suppose that two parameters for θ1: θ2 and θ3, where ps1θ2 = [0.01, 0.15, 0.05, 0.7, 0.01, 0.08] and p s1 θ3\n= [0.01, 0.01, 0.01, 0.7, 0.26, 0.01]. When the agent already chooses RIGHT at S1, the loss item from a good penalized distance should be small. However, DKL(πθ1(·|s1)||πθ2(·|s1))=0.15 and DKL(πθ1(·|s1)||πθ3(·|s1))=0.39. However, it’s not necessary to require the distribution of other actions (‘NOOP’, ‘FIRE’, ‘LEFT’, ‘RIGHTFIRE’, ‘LEFTFIRE’) of ps1θ2 near to p s1 θ1\n. Instead, it’s better to relax this requirement to enlarge the freedom degree of the network and focus on learning important actions. 
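The divergence values quoted above can be verified directly; the short check below reproduces them and also shows that the squared point probability gap on the chosen action, the quantity penalized by Dpp in Section 3.5, is exactly zero in both cases:

```python
import numpy as np

p1 = np.array([0.05, 0.05, 0.10, 0.70, 0.05, 0.05])
p2 = np.array([0.01, 0.15, 0.05, 0.70, 0.01, 0.08])
p3 = np.array([0.01, 0.01, 0.01, 0.70, 0.26, 0.01])

kl = lambda p, q: float(np.sum(p * np.log(p / q)))
print(round(kl(p1, p2), 2), round(kl(p1, p3), 2))   # 0.15 0.39
a = 3   # index of the chosen high-probability action
print((p1[a] - p2[a]) ** 2, (p1[a] - p3[a]) ** 2)   # 0.0 0.0
```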
Doing this brings another advantage, the agent can explore more for non critical actions. From the perspective of the manifold, optimal parameters constitute a solution manifold. The KLD penalty will act until θ exactly locates in the solution if possible, akin to mapping a point onto a curve. Instead, if the agent concentrates only on critical actions like a human does, it’s much easier to approach the manifold in a higher dimension. This is comparable to expanding the solution manifold by at least one dimension, e.g. from curves to surfaces or from surfaces to spheres." }, { "heading": "3.4 EXPLORATION", "text": "One shared highlight in reinforcement learning is the balance between exploitation and exploration. For a policy-gradient algorithm, entropy is added in the total loss to encourage exploration in most cases. When included in the loss function, KLD penalizes the old and new policy probability mismatch for all possible actions as Equation 12 given a state s. This strict punishment for every action’s probability mismatch, which discourages exploration." }, { "heading": "3.5 POINT PROBABILITY DISTANCE", "text": "To overcome the above-mentioned shortcomings, we propose a surrogate objective with the point probability distance penalty, which is symmetric and more optimistic than PPO. In the discrete domain, when the agent takes action a, the point probability distance between πθold(·|s) and πθ(·|s) is defined by\nDapp(πθold(·|s), πθ(·|s)) = (πθold(a|s)− πθ(a|s))2. (13)\nAttention should be paid to the penalty definition item, the distance is measured by the point probability, which emphasizes its mismatch for the sampled actions for a state. Unless it would lead to confusion, we omit a for simplicity in the following sections. Undoubtedly, Dpp is symmetric by definition. Furthermore, it can be proved that Dpp is indeed a lower bound for the total variance divergence DTV . As a special case, it can be easily proved that for binary distribution, D2TV (p||q) = Dpp(p||q).\nTheorem 3.1. For two discrete probability distributions p and q with K values, then D2TV (p||q) ≥ Dapp(p||q) holds for any action a and EaDapp(p||q) is a lower bound for D2TV (p||q).\n4Note that some states don’t have critical action. Taking the Pong for example, when the ball is just shot back, the agent can choose any action.\nProof. Let pl = α, ql = β for the l-th action a, and suppose a ≥ b without loss of generalization. So,\nD2TV (p||q) = ( 1\n2 K∑ i=1 |pi − qi|)2 = ( 1 2 K∑ i=1,i6=l |pi − qi|+ 1 2 |pl − ql|)2\n≥ (1 2 | K∑ i=1,i6=l pi − qi|+ 1 2 (α− β))2 = (1 2 |1− α− (1− β)|+ 1 2 (α− β))2\n= ( 1 2 (α− β) + 1 2 (α− β))2 = Dapp(p||q)\nEaDapp(p||q) = ∑ a p(a)Dapp(p||q) ≤ ∑ a p(a)D2TV (p||q) = D2TV (p||q)\nSince 0 ≤ πθ(a|s) ≤ 1 holds for discrete action space, Dpp has a lower and upper boundary: 0 ≤ Dpp ≤ 1. Moreover, Dpp is less sensitive to action space dimension than KLD, which has a similar effect as PPO’s clipped ratio to increase robustness and enhance stability. Equation 13 stays unchanged for the continuous domain, and the only difference is πθ(a|s) represents point probability density instead of probability." }, { "heading": "3.6 POP3D", "text": "After we have defined the point probability distance, we use a new surrogate objective fθ for POP3D, which can be written as\nmax θ Et[ πθ(at|st) πθold(at|st) Ât − βDatpp(πθold(·|st), πθ(·|st))], (14)\nwhere β is the penalized coefficient. 
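For concreteness, Equation 14 can be sketched as a per-batch loss to minimize (PyTorch, illustrative names). Here β = 5.0 is the Atari value chosen in Section 4.3; for continuous actions the probabilities below are point densities, as noted in Section 3.5.

```python
import torch

def pop3d_loss(log_prob_new, log_prob_old, advantages, beta=5.0):
    prob_new = torch.exp(log_prob_new)
    prob_old = torch.exp(log_prob_old).detach()
    ratio = prob_new / prob_old                       # importance ratio
    penalty = (prob_new - prob_old).pow(2)            # D_pp on sampled actions
    return -(ratio * advantages - beta * penalty).mean()
```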
These combined advantages lead to considerable performance improvements and escape the dilemma of choosing a suitable penalty coefficient. Besides, we use generalized advantage estimation to calculate $\hat{A}_t$. Algorithm 1 shows the complete iteration process of POP3D; it has the same computational cost and data efficiency as PPO.

Algorithm 1 POP3D
1: Input: max iterations L, actors N, epochs K
2: for iteration = 1 to L do
3:   for actor = 1 to N do
4:     Run policy $\pi_{\theta_{old}}$ for T time steps
5:     Compute advantage estimates $\hat{A}_1, \ldots, \hat{A}_T$
6:   end for
7:   for epoch = 1 to K do
8:     Optimize the loss objective $f(\theta)$ w.r.t. $\theta$ with mini-batch size $M \le NT$, then update $\theta_{old} \leftarrow \theta$
9:   end for
10: end for" }, { "heading": "3.7 WORKING MECHANISM OF POP3D", "text": "For the toy example in Section 3.3, $D^{RIGHT}_{pp}(\pi_{\theta_1}(\cdot|s_1), \pi_{\theta_2}(\cdot|s_1)) = D^{RIGHT}_{pp}(\pi_{\theta_1}(\cdot|s_1), \pi_{\theta_3}(\cdot|s_1)) = 0$, so the penalty helps the agent focus on the important action. When updating $\theta$ from $\theta_{old}$ as in Equation 14, the gradient of $f(\theta)$ w.r.t. $\theta$ can be written as

$$
\begin{aligned}
\nabla_\theta f(\theta) &= \frac{\nabla_\theta \pi_\theta(a_t|s_t)}{\pi_{\theta_{old}}(a_t|s_t)}\hat{A}_t - 2\beta\big[\pi_\theta(a_t|s_t) - \pi_{\theta_{old}}(a_t|s_t)\big]\nabla_\theta \pi_\theta(a_t|s_t) \\
&= \nabla_\theta \pi_\theta(a_t|s_t)\Big[\frac{\hat{A}_t}{\pi_{\theta_{old}}(a_t|s_t)} - 2\beta\big(\pi_\theta(a_t|s_t) - \pi_{\theta_{old}}(a_t|s_t)\big)\Big] \\
&= \nabla_\theta \pi_\theta(a_t|s_t)\Big[\frac{\hat{A}_t}{\pi_{\theta_{old}}(a_t|s_t)} - 2\beta\,\delta(a_t|s_t)\Big],
\end{aligned} \quad (15)
$$

where $\delta(a_t|s_t) := \pi_\theta(a_t|s_t) - \pi_{\theta_{old}}(a_t|s_t)$. Suppose the agent selects $a_t$ at $s_t$ using $\pi_{\theta_{old}}$ and obtains a positive advantage $\hat{A}_t$. If $\pi_\theta(a_t|s_t)$ is larger than $\pi_{\theta_{old}}(a_t|s_t)$, then $2\beta\,\delta(a_t|s_t)$ plays a damping role that avoids a too greedy preference for $a_t$ (i.e., too large a probability), which in turn leaves more room for other actions to be explored. Other cases, such as a negative $\hat{A}_t$, can be analyzed similarly. The hyper-parameter $\beta$ controls the damping force.

In the early stage of learning, $\pi_{\theta_{old}}(a_t|s_t)$ is near $1/K$ (for $K$ discrete actions) and the magnitude of $\hat{A}_t$ is large, so the damping force is relatively weak and the agent learns fast. Later, $\beta$ exerts a relatively stronger force that avoids overshooting in action selection and encourages more exploration. In the final stage, the policy changes slowly because the learning rate is low; $\delta(a_t|s_t)$ is then small, and the process converges." }, { "heading": "3.8 RELATIONSHIP WITH PPO", "text": "To conclude this section, we examine why PPO works from the above viewpoints. Looking more closely at Equation 9, the ratio $r_t(\theta)$ only involves the probability of the chosen action $a$. In other words, all actions' probabilities except $a$ are not activated: they no longer contribute to back-propagation, and their mismatch is tolerated, which encourages exploration. This behaves similarly to POP3D and helps the network learn more easily. In summary, POP3D is designed to overcome the problems identified above; in the next section, experiments on commonly used benchmarks evaluate its performance." }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 CONTROLLED EXPERIMENTS SETUP", "text": "OpenAI Gym is a well-known simulation environment for testing and evaluating reinforcement learning algorithms, composed of both discrete (Atari) and continuous (Mujoco) domains (Brockman et al., 2016). Most recent deep reinforcement learning methods, such as DQN variants (Van Hasselt et al., 2016; Wang et al., 2016; Schaul et al., 2015; Bellemare et al., 2017; Hessel et al., 2018), A3C, ACKTR, and PPO, are evaluated using only one set of hyper-parameters5.
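To relate Algorithm 1 to this environment, a simplified data-collection sketch (line 4 of Algorithm 1, one actor) is given below. `policy.sample` is a placeholder of ours, the DeepMind preprocessing wrappers are omitted, and the snippet assumes the classic (pre-0.26) Gym step API:

```python
import gym

def collect_rollout(env, policy, T=128):
    """Run pi_theta_old for T time steps and record what the update needs."""
    states, actions, rewards, old_probs = [], [], [], []
    obs = env.reset()
    for _ in range(T):
        action, prob = policy.sample(obs)      # prob = pi_theta_old(a|s)
        next_obs, reward, done, _ = env.step(action)
        states.append(obs)
        actions.append(action)
        rewards.append(reward)                 # feeds the GAE estimates
        old_probs.append(prob)                 # feeds the ratio and D_pp
        obs = env.reset() if done else next_obs
    return states, actions, rewards, old_probs

env = gym.make("PongNoFrameskip-v4")           # Atari v4, discrete actions
```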
Following this convention, we evaluate POP3D on 49 Atari games (v4, discrete action space) and 7 Mujoco tasks (v2, continuous action space).

Since PPO is a distinguished RL algorithm that defeats various methods such as A3C, A2C, and ACKTR, we focus on a detailed quantitative comparison with fine-tuned PPO. We do not consider large-scale distributed algorithms such as Ape-X DQN (Horgan et al., 2018) and IMPALA (Espeholt et al., 2018), because we concentrate on a comparable and fair evaluation, whereas those methods are designed for large-scale parallelism. Nevertheless, some orthogonal improvements from those methods have the potential to improve our method further. Furthermore, we include TRPO as a baseline method. Engstrom et al. (2020) carefully study the underlying factors that help PPO outperform TRPO; to avoid unfair comparisons, we carefully control the settings. In addition, a quantitative comparison between the KLD and point probability penalties helps establish the critical role of the latter. The former strategy is called fixed KLD in Schulman et al. (2017) and serves as another good baseline in this context, named BASELINE below.

5DQN variants are evaluated in the Atari environment since they are designed for discrete action spaces, whereas policy-gradient-based algorithms can handle both continuous and discrete problems.

In particular, we retrained one agent for each game with fine-tuned hyper-parameters6. To avoid the reproduction problems of reinforcement learning algorithms discussed in Henderson et al. (2018), we take the following measures:

• Use the same number of training steps and the same amount of game frames (40M for Atari and 10M for Mujoco).

• Use the same neural network structures: a CNN model with one action head and one value head for Atari, and a fully-connected model with one value head and one action head producing the mean and standard deviation of a diagonal Gaussian distribution, as in PPO, for Mujoco.

• Initialize parameters using the same strategy as PPO.

• Keep the Gym wrappers from DeepMind, such as reward clipping and frame stacking, unchanged for the Atari domain, and enable 30 no-ops at the beginning of each episode.

• Use the Adam optimizer (Kingma & Ba, 2014) and decrease the learning-rate multiplier α linearly from 1 to 0 for the Atari domain, as in PPO.

To facilitate further comparisons with other approaches, we release the seeds and detailed results7 (across the entire training process for different trials). In addition, we randomly select three seeds from {0, 10, 100, 1000, 10000} for the two domains, {10, 100, 1000} for Atari and {0, 10, 100} for Mujoco, to decrease the unfavorable subjective bias discussed in Henderson et al. (2018)." }, { "heading": "4.2 EVALUATION METRICS", "text": "The PPO paper uses two score metrics to evaluate agents trained with various RL algorithms. One is the mean score of the last 100 episodes, Score100, which measures how high a strategy eventually scores. The other is the average score across all episodes, Scoreall, which evaluates how fast an agent learns. In this paper, we follow this routine and compute each metric by averaging over three seeds in the same way; a minimal sketch of the two metrics follows.
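The sketch below is our own reading of the two metrics; the function names are ours:

```python
import numpy as np

def score_metrics(episode_scores):
    """Score100: mean of the last 100 episodes (how high the agent gets).
    Scoreall:  mean over all episodes (how fast the agent learns)."""
    scores = np.asarray(episode_scores, dtype=float)
    return scores[-100:].mean(), scores.mean()

def seed_averaged(per_seed_scores):
    """Average both metrics over the seeds, as done for every game."""
    pairs = [score_metrics(s) for s in per_seed_scores.values()]
    return tuple(np.mean(pairs, axis=0))
```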
" }, { "heading": "4.3 DISCRETE DOMAIN COMPARISONS", "text": "Hyper-parameters We searched the penalty coefficient β four times on four Atari games while keeping all other hyper-parameters the same as PPO, and fixed β = 5.0 to train all Atari games. For BASELINE, we likewise searched the penalty coefficient β four times and chose β = 10.0. To save space, the detailed hyper-parameter settings can be found in Tables 6 and 7.

This process is not favorable to POP3D, since not all of its hyper-parameters are optimized. There are two reasons for this choice. On the one hand, it is, to our knowledge, the simplest way to build a relatively fair comparison group, e.g., keeping the same iterations and epochs within one loop. On the other hand, it imposes low search requirements in time and resources. That is to say, if our method performs better on the benchmarks under these conditions, we can conclude that it is at least competitive with PPO.

Comparisons The final score of each game is averaged over three different seeds, and the highest is in bold. As Table 1 shows, POP3D performs best on 32 of the 49 Atari games in terms of final score, followed by PPO with 11, BASELINE with 5, and TRPO with 1. Interestingly, on games where POP3D scores highest, BASELINE scores worse than PPO more often than the other way round, which indicates that POP3D is not just an approximate version of BASELINE.

On the other metric, POP3D wins 20 of the 49 Atari games, which matches PPO with 18, followed by BASELINE with 6 and TRPO with 5. If we measure the stability of an algorithm by the score variance across trials, POP3D scores high with good stability across seeds, whereas PPO behaves worse on Kangaroo and UpNDown. Interestingly, BASELINE shows a large variance across seeds on several games, such as BattleZone, Freeway, Pitfall, and Seaquest. Overall, POP3D shows a better capacity to score high and a comparably fast learning ability in this domain. The detailed metrics for each game are listed in Tables 3 and 4.

6We use OpenAI's PPO and TRPO code: https://github.com/openai/baselines.git 7https://drive.google.com/file/d/1c79TqWn74mHXhLjoTWaBKfKaQOsfD2hg/view" }, { "heading": "4.4 CONTINUOUS DOMAIN COMPARISONS", "text": "The number of Mujoco games (out of 7) won by each method on the two metrics is summarized below:

Metric     PPO   POP3D
Score100    1      6
Scoreall    4      3

Hyper-parameters For PPO, we use the same hyper-parameter configuration as Schulman et al. (2017). For POP3D, we searched three times on two games and selected 5.0 as the penalty coefficient. More details on the hyper-parameters for PPO and POP3D are listed in Table 8. Unlike the Atari domain, we use a constant learning rate, as in Schulman et al. (2017), instead of the linear-decrease strategy.

Comparison Results The scores are also averaged over three trials and summarized in Table 1. POP3D wins 6 out of 7 games on Score100. Both evaluation metrics across the different games are listed in Tables 2 and 5. In summary, both metrics indicate that POP3D is competitive with PPO in the continuous domain.

5 CONCLUSION

In this paper, we introduce a new reinforcement learning algorithm called POP3D (Policy Optimization with Penalized Point Probability Distance), a TRPO variant in the same spirit as PPO. Whereas KLD is an upper bound on the squared total variation divergence between two distributions, the penalized point probability distance is a symmetric lower bound. Moreover, it effectively expands the optimal solution manifold while encouraging exploration, a mechanism that PPO possesses implicitly.
The proposed method not only inherits several critical improvements from PPO but also outperforms it by a clear margin on the 49 Atari games in terms of final scores, while matching PPO in learning speed.

More interestingly, it suffers less from the headache of penalty-coefficient selection that accompanies TRPO, where it is arduous to select one fixed value for various environments, and it also outperforms the fixed-KLD baseline of Schulman et al. (2017). In summary, POP3D is a highly competitive alternative to PPO." }, { "heading": "A SCORE TABLES AND CURVES", "text": "Mean scores of the various methods in the Atari domain are listed in Tables 3 and 4." }, { "heading": "B EXPERIMENTS", "text": "B.1 HYPER-PARAMETERS

B.1.1 ATARI

PPO's and POP3D's hyper-parameters for Atari games are listed in Tables 6 and 7, respectively.

B.1.2 MUJOCO

PPO's and POP3D's hyper-parameters for Mujoco games are listed in Table 8." }
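For quick reference, the settings stated explicitly in the text can be collected as below. This is a partial sketch of ours; everything not listed (learning rates, batch sizes, GAE parameters, and so on) lives only in Tables 6-8 and is not repeated here:

```python
# Partial experiment configuration, restricted to values stated in the text.
CONFIG = {
    "atari": {
        "frames": 40_000_000,           # 40M frames, v4 environments
        "beta_pop3d": 5.0,              # penalty coefficient for POP3D
        "beta_baseline": 10.0,          # fixed-KLD BASELINE coefficient
        "lr_schedule": "linear_decay",  # alpha decreased linearly from 1 to 0
        "seeds": [10, 100, 1000],
    },
    "mujoco": {
        "frames": 10_000_000,           # 10M frames, v2 environments
        "beta_pop3d": 5.0,
        "lr_schedule": "constant",
        "seeds": [0, 10, 100],
    },
    "optimizer": "Adam",                # Kingma & Ba (2014)
}
```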
2020